Listening (Data and Machine Learning)

This project uses computer vision and machine learning as tools for creating music from color. A real-time camera image is digitized, and its colors are translated into data that control the computer-generated sound. A neural network regression algorithm is used to map the camera feed onto sounds.

1. Background

While studying Data and Machine Learning for Artistic Practice, I came to understand more deeply how to analyze and work with data. This knowledge changed the way I solve problems: once I understood the significance of processing data, I found it easier to analyze the data behind an issue with machine learning than to attack the issue directly.

The many examples of data and machine learning applications demonstrated by the instructor, Dr. Rebecca Fiebrink, changed the way traditional music is expressed and showed how data can be used to create incredible music. I knew nothing about music, yet I began a new attempt to create my own electronic music and to use data to build my own instrument.

2. Research Questions

The main question in this project is how one may compose and create new digital music using data and machine learning technology. I suggest looking more closely at the role of data and machine learning by asking these specific questions:

1. What is the relationship between data, human activity, and music? How can they be combined to create beautiful music using machine learning technology?

2. How can machine learning technology help musicians and artists create interactive art installations?

3. How might machine learning be used to develop artworks in future developments?

3. Creative Motivation

My project is named Listening; as the name suggests, it is a work that makes sounds using machine learning. This is my first time using machine learning to create sound.

Nowadays, music has become an indispensable part of people's daily lives, and this diverse society has produced many kinds of music, including great electronic music. As someone who can hardly go without headphones, I have a dream of standing on a stage like a star and performing my music for an audience; unfortunately, I knew nothing about creating music.

As a creative coder, I often see audiovisual works, but in many cases the creators alone control them and the audience cannot participate. This often prevents the audience from experiencing the process of creation. I therefore hoped to create a music machine that everyone can take part in.


4. Inspiration

Many artworks gave me inspiration for this creation, and I drew on a lot of resources along the way.


Quote 1: Annie Tadne: computer-vision-controlled audio sequencer: https://annietxdne.tumblr.com/post/173546992301/dust-a-physical-sound-sequencer-this-program


DUST - - - a physical sound sequencer


This piece gave me a great deal of inspiration. Combining it with the musical knowledge I gained from learning Max/MSP, I began planning my project around one question: could I create something through computer vision?


Quote 2: Material Artifact, Responding to Light, Emitting Tones:

https://vimeo.com/19980514


This artwork combines sensors and sound controllers in a new way and involves its creators directly. It left a deep impression on me of how music can be created more intuitively through interaction with people.


Quote 3: Matt Wright: Programming Max: Structuring Interactive Software for Digital Arts

https://www.kadenze.com/courses/programming-max-structuring-interactive-software-for-digital-arts-i/info


Max is a powerful platform that accommodates and connects a wide variety of tools for sound, graphics, music and interactivity using a flexible patching and programming environment. Max allows most computer users to write a simple, meaningful program within a few minutes, even with limited programming knowledge. But to do something more substantial it's necessary to approach Max as an actual programming language, by taking advantage of its various mechanisms for abstracting program elements into scalable, reusable components that can be combined in increasingly powerful ways.

Through learning Max, I first learned the elements of electronic music composition and was able, to a certain extent, to build the patch I needed.


Quote 4: Rebecca Fiebrink: Machine Learning for Musicians and Artists

https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/info


In this course, I studied fundamental machine learning techniques that can be used to make sense of human gesture, musical audio, and other real-time data, as well as how to use the machine learning software Wekinator.


5. Creative Work Design Research

Musicians have always accompanied music, and music is still used as a narrative tool. When I saw the colorfulness of the city, with passing pedestrians wearing headphones and listening to the voice of their city, I wondered whether I could create a city voice of my own: a tool that makes music based on color.

Every note is uncertain, and the music mirrors the uncertainty of the city. So I began a series of attempts, with a significant question in front of me: how can real-time color be converted into data? After a period of research, I found that I could use a webcam to capture images in real time and obtain the data by converting each image into a 10×10 color grid. From this grid I get 100 color values as inputs, train a model on them in Wekinator, and output 5 values to control the sound.
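To illustrate this step outside Max/MSP, here is a minimal Python sketch of the 10×10 reduction, assuming the opencv-python package; reducing each cell to a single grayscale value is my simplification for brevity, whereas the actual patch extracts color amounts per cell:

    # Minimal sketch: one webcam frame -> 10x10 grid -> 100 values in [0, 1].
    # Grayscale per cell is an illustrative simplification of the patch's
    # per-cell color extraction.
    import cv2

    cap = cv2.VideoCapture(0)                 # default webcam
    ret, frame = cap.read()                   # one BGR frame
    cap.release()
    if ret:
        grid = cv2.resize(frame, (10, 10), interpolation=cv2.INTER_AREA)
        gray = cv2.cvtColor(grid, cv2.COLOR_BGR2GRAY)
        inputs = (gray.flatten() / 255.0).tolist()
        print(len(inputs))                    # -> 100 model inputs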


The music part is made in Max/MSP. I use these 5 values to control the music: they separately control Beats, Scale, Tempo, Ch-Scale, and Scale-Sel-B. The first three elements are essential and make up the music; the last two are more like optional choices that only make the sounds differ. Within the Max/MSP patch, three filters (filters on the music input that produce lowpass, highpass, and bandpass outputs simultaneously, enabling individual control of different frequencies) can be adjusted manually, along with a key/base-note/tonic selector. The filtered frequencies are distorted through lookup~ and waveshaping: draw a wave shape and it is smoothed automatically; bang the smooth button for more smoothing, and bang reset to redraw. (Based on an example found in the dynamics tutorial about distortion in the Max/MSP reference.)
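For illustration, the receiving side of such a mapping can be sketched in Python, assuming Wekinator's default output settings (OSC port 12000, address /wek/outputs), the python-osc package, and illustrative parameter ranges; in the project itself this mapping lives inside the Max/MSP patch:

    # Sketch: receive Wekinator's 5 outputs and map them to musical parameters.
    # Port 12000 and "/wek/outputs" are Wekinator's defaults; the BPM range and
    # scale names below are illustrative assumptions, not the project's values.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    SCALES = ["major", "minor", "pentatonic", "chromatic"]

    def on_outputs(address, *values):
        beats, scale, tempo, ch_scale, scale_sel_b = values   # the 5 model outputs
        bpm = 60 + tempo * 120                                # map 0..1 -> 60..180 BPM
        name = SCALES[min(int(scale * len(SCALES)), len(SCALES) - 1)]
        print(f"beats={beats:.2f} scale={name} bpm={bpm:.0f}")

    dispatcher = Dispatcher()
    dispatcher.map("/wek/outputs", on_outputs)
    BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()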


For the machine learning part, Max/MSP captures the video feed from the camera at a resolution of 640×480 and resamples the incoming feed into a ten-by-ten grid. A jit.cellblock object extracts the amount of color from the cells created by jit.matrix; each input value is then transformed into a float and sent to Wekinator.
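Although the project implements this in Max/MSP with jit.matrix and jit.cellblock, the same data flow can be sketched end to end in Python, again assuming Wekinator's default input settings (OSC port 6448, address /wek/inputs) and the opencv-python and python-osc packages:

    # Sketch: capture 640x480 frames, downsample to 10x10, stream 100 floats
    # to Wekinator over OSC. Port 6448 and "/wek/inputs" are Wekinator's
    # documented defaults; grayscale per cell is again an assumption.
    import cv2
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 6448)
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)    # match the project's feed
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        grid = cv2.resize(frame, (10, 10), interpolation=cv2.INTER_AREA)
        gray = cv2.cvtColor(grid, cv2.COLOR_BGR2GRAY)
        client.send_message("/wek/inputs", (gray.flatten() / 255.0).tolist())
        # show the grid in a window so waitKey can catch a key press
        cv2.imshow("grid", cv2.resize(grid, (300, 300), interpolation=cv2.INTER_NEAREST))
        if cv2.waitKey(50) & 0xFF == ord("q"):   # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()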



6. Challenges and Future Changes

Throughout the process, I produced a piece of live, generated music that can change in any situation. However, at this stage I spend a lot of time on each training run, so my choice of machine learning algorithm may not be the best one. On the other hand, I chose only a simple 10×10 color grid for the color values, which means the values may not be very accurate and in many cases will be confused with one another. Also, when collecting data I did not use high-contrast colors to generate my music, which may have some impact on the integrity and gracefulness of the result.


In the future, given more time to adjust the project, I could optimize the training data and reduce training time, and add more music types so that the music changes more substantially with the colors; when collecting data, colors with large differences could be used as the input data. I also want to create more factors that influence and control the music, making its changes more obvious. In addition, given good conditions, I would choose more varied environments and try to create more kinds of music in different ways in each of them, to make the project richer.

7. Conclusion

This project, entitled Listening, uses data and machine learning to create music from computer vision and proposes a new form of music inspired by digital data. After completing the preliminary research, I obtained the corresponding results. Using digital software such as Max/MSP and Wekinator, I have attempted to redefine the relationship between environment, color, and human activity through data and machine learning.


This report consists of three parts. The first focused on a new viewpoint of machine learning while trying to further understand the relationship between the color environment and music. By analyzing it from theoretical, technical, and artistic perspectives, I can explain how each element can breathe new life into machine learning.


The second part focused on what a machine learning algorithm is, and how classification, regression, and segmentation can combine a digital sense of human actions with sensor data. Furthermore, by studying the software Max/MSP, I came to understand how to use digital data to create music, including sound synthesis, algorithmic composition, and interactive control (e.g., from the QWERTY keyboard, mouse, USB devices, and Open Sound Control).


The last part covered the design of the project itself: the process of creating a way to connect the machine learning algorithm with generative music.


In summary, this project focused on combining machine learning with real-time music. It also aimed to enlighten me about using data and machine learning when creating artworks. As a result of this study, I have concluded that machine learning methods can be used to create a new form of music.


References:

Programming Max: Structuring Interactive Software for Digital Arts | Kadenze [WWW Document], n.d. URL https://www.kadenze.com/courses/programming-max-structuring-interactive-software-for-digital-arts-i/info (accessed 5.9.19).


Machine Learning for Musicians and Artists | Kadenze [WWW Document], n.d. URL https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists-v/info (accessed 5.9.19).


MARtLET [WWW Document], n.d. URL http://michellenagai.com/Site/MARtLET.html (accessed 5.9.19).


OpenCV [WWW Document], n.d. URL https://opencv.org/ (accessed 5.9.19).


annietxdne — DUST - - - a physical sound sequencer This program... [WWW Document], n.d. URL https://annietxdne.tumblr.com/post/173546992301/dust-a-physical-sound-sequencer-this-program (accessed 5.9.19).

Some code is adapted from:

"Paroxysm" patch by Annie- table 

Dynamics tutorial about distortion, in Max/MSP reference. 


"HW4starter" 2016 by Matthew Weight. This programme is part of the Kadenze course " Programming Max: structuring Interactive Software for Digital Arts."


"Special topics In programming for Performance and Installation" patch by Balandino Di Donato Goldsmiths


Downloads

Source code: Listening (Data and Machine Learning) (zip)