Report: Data and Machine Learning

Pitch Experiment on


Ideology and outcome

In this project I explored music as a creative medium for experimentation, utilising game mechanics while disregarding conventional game ideologies surrounding set goals and win states. This is not a new concept within games, nor is it a new concept within music. According to Winston and Saywood (2019), in 2010 a subgenre of music developed which specifically focused on relaxing background sound. This genre, called Lo-fi, seems to have first appeared as a subdivision of hip-hop music.

“Lo-fi hip hop, named for its producers’ deliberate introduction of “low fidelity,” analog-style sonic imperfections into their digitally produced tracks.” (Winston and Saywood, 2019)

This subgenre was a direct influence on the development of the game, as it perfectly embodied the ideology of unpressured experimentation with an acceptance of imperfections. The final version of my project explores the visual and auditory functions of music using a piece I programmed and designed through a combination of open-source sounds and programmes such as eJay 4 and Unity.


Existing academic and artistic inspiration

Music has long been part of video game creation; it has a fantastic ability not only to aid concentration but also to set a particular mood. In addition to these aesthetic qualities, music also has the ability to provide a positive neurological impact. Within this project I explored the concept of digitally created virtual music, with the aim of considering ways in which video game software can be used to create more widely accessible musical opportunities for experimentation.

It has been proposed by a range of researchers that music is an innate part of human nature. I feel this is best summarised by an article from the University of Amsterdam (2016), which states that “A sense of rhythm is a uniquely human characteristic”. This concept was also demonstrated by Bobby McFerrin (2009) in a TED talk, in which he showed an example of people’s natural, pre-existing understanding of the pentatonic scale.

Lo-fi music is an interesting juxtaposition: it usually refers to Lo-fi hip-hop, and hip-hop as a genre is not widely known for its relaxing sound. Despite this, Lo-fi hip-hop, which is primarily available online, has become exceedingly popular as background music for work. It fits the prescriptions described by Petitpas (1998), as it has no lyrics and is well suited to repetitive tasks, incorporating the repetitive beat that is an intrinsic part of hip-hop music in general.

“Classical or instrumental music has been shown to enhance mental performance more than music with lyrics. For strong focus, music that has little variety and little to no lyrics are best.”(Petitpas, 1998)

Whether inside or outside of video games, music’s ability to aid productivity in repetitive tasks, which in turn can have positive neurological effects, has come a long way but is still an area under key development; there is, for example, currently very little academic research on Lo-fi music.

Music is also used in some fields to treat certain neurological declines. As explored by Valcarcel (2019), the human brain can be altered at a neurological level through repetitive activities; in many fields this has been referred to as neuroplasticity. Video games provide repetitiveness not just within their audio but also within their activities, and so lend themselves well to being used in ways that could be positively neurologically impactful.

“… musicians learn and repeatedly practice the association of motor actions with specific sound and visual patterns (musical notation) while receiving continuous multisensory feedback.” (Wan and Schlaug, 2010)

Wan and Schlaug (2010) explored how the range of skills involved in playing a musical instrument, where a musician is translating a symbolic language “…into sequential, bimanual motor activity dependent on multi-sensory feedback…”, contribute towards positive neurological development.

I decided to develop the project as a musical exploration as opposed to a conventional game because, despite the innate musical understanding possessed by the human brain, a level of skill and practice is still required to gain a better understanding of sound and to be able to manipulate it by ear.

“…many musician-advantages in the neural encoding of sound, auditory perception, and auditory-cognitive skills correlate with their extent of musical training”(Barrett, K., Ashley, R., Strait, D. and Kraus, N., 2010)

Alongside innate human musical ability, there are also opportunities to learn through practice and repetitive tasks, and I feel that freedom to experiment and practise creates a better and more likely opportunity for discovery. I decided to explore pitch in particular within my work, as I felt that it was an innate part of daily life that is widely relatable but also uniquely perceived by individuals.

Pitch is a universally integrated part of language and communication and has the ability to alter moods and perceptions. Patel (2010) in particular has explored music’s relationship with language and the brain, including how important pitch is to the communication of emotion. Patel gives the example of it being widely accepted that high pitches are usually associated with happiness, whereas low pitches are often associated with sadness and foreboding. In addition, Oliveira (2017) summarises how Ada Lovelace, often heralded as an integral pioneer in the development of computing, postulated in 1843 on the idea of scientific analysis being able to influence or in fact compose pieces of music based on the “harmony of science”. As a result I was inspired to focus on pitch as a feature: it has such an interesting background, unique emotive and transformative abilities, and a deep-rooted relationship with a wide range of potential users.


Applications and improvements

My performance for this project used a game that I designed with a combination of gaming software and some of the concepts, such as regression and classification, that we have been exploring within Data and Machine Learning.

[first version screenshot]

For the first version of my project I created a digital musical instrument that takes the location of an object attached to the cursor, relative to the centre of the screen, and alters both the pitch and speed of a sound simultaneously. This early version was quite simple and relied solely on one object and a single sound following the mouse, in order to test the idea of screen position altering pitch and sound.
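The behaviour of this first version can be sketched in Python (a hypothetical reconstruction, not the project’s actual Unity C#; the screen size, sensitivity value and function names are all assumptions for illustration):

```python
# Hypothetical sketch: the cursor's offset from the screen centre sets both
# the pitch and the playback speed of the single sound. All names and
# constants here are illustrative, not the project's real values.

def cursor_to_pitch_and_speed(x, y, width=1920, height=1080,
                              base=1.0, sensitivity=0.5):
    """Map a cursor position to (pitch, speed) multipliers.

    The centre of the screen maps to the unaltered base pitch/speed.
    """
    # Normalised offset from the centre, in the range [-1, 1] on each axis.
    dx = (x - width / 2) / (width / 2)
    dy = (y - height / 2) / (height / 2)
    pitch = base + sensitivity * dy   # vertical position alters pitch
    speed = base + sensitivity * dx   # horizontal position alters speed
    return pitch, speed

# At the exact centre of the screen nothing is altered.
print(cursor_to_pitch_and_speed(960, 540))  # (1.0, 1.0)
```

Splitting the two axes between pitch and speed keeps each control independently audible while still driving both from a single mouse input.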

[second version screenshot]

In later developments I added to the original idea in order to increase its complexity: I added multiple inputs, with different audio attached to each, and included a visual representation of the music source.

[final version screenshot]

Within the final stages of development I added a background sound and incorporated the positions of the other objects into a calculation that alters its pitch based on their collective location, thus creating an adaptable learning system. In addition, the pitches of all the audio in the scene are constantly displayed in the bottom left-hand corner. The final version is available online on

Source statement

Defining the use and adaptation of third-party resources and the elements I created myself

The links to all external sources described below can be found within the references at the end of this document.

The pitch code was created by adapting code found on the online Unity forum in a discussion of how to adjust volume based on proximity; I found the response given by the user GutoThomas on 5th June 2012 particularly useful. The original code was for a 3D space, so I altered it to apply to a 2D space. Instead of altering volume I rewrote the code to alter pitch and, instead of hard-coding a number, I used a calculation that reflects the changes of other objects within the environment.

I created the code that displays the pitch value as it changes for each audio source myself. In order to refine my code I used a Unity forum thread on converting a float to an integer, which showed me how to limit the number of decimal places displayed. The most useful post for my purposes was one by the user Corrosius on the 19th of November 2013.

The sprite movement code was adapted from the online Unity tutorial on clicking and dragging 2D sprites which was made and published by Nade on their own YouTube channel.

The reset section of the button code was a fairly simple script found online on a Unity forum which discussed how to reset a scene using a keyboard input; I found the answer by doublemax on the 9th of October 2016 the most useful. I altered it to be triggered by a button instead of a keyboard input; it works by restarting the scene, which undoes any changes to the shape locations.

The sounds for the final version of the game came from the open-source DJ programme Dance eJay 4 (1997), but within the experimental development I incorporated some sounds from the open-source website Sound Bible.

I drew and coloured all of the game sprites and general artwork myself, specifically for this project.

As I am more familiar with Unity’s version of C#, I chose to use the Unity games engine and Microsoft Visual Studio. As I am fairly new to programming, I found it challenging to achieve the same result using other methods due to my lack of knowledge of other programming languages.


Methods, calculations and software application

When I first conceived the idea of a location-based audio response, I thought classification would be an appropriate model for the outcome I wanted, but through testing I found this method quite rigid: it lacked the fluidity I desired within the music and sounds. This inspired me to use regression as a starting point for my model. It was a far more successful approach, as not only did it provide a smooth transition between states, it also gave me a simple linear regression model which made the controls easier to understand through experimentation.

Within my development I did consider a polynomial regression model but decided against it for both aesthetic and functional reasons. I felt, firstly, that it slightly increased the complexity of the controls, which I did not want, and, secondly, that a linear regression model better reflected the pentatonic scale in reference to the pitch values, which in Unity are measured in semitones. Another advantage of the linear model was that different shapes could produce the same semitone value without having to overlap on the screen.

The individual shapes utilise a simple linear regression calculation of the form Y = ax + b, where Y is the dependent feature, the pitch of the audio; the input x is given by the numerical values of the X and Y axis positions of the shape; and a and b are the model’s coefficients. In contrast, the background audio code uses a multiple linear regression calculation of the form Y = ax1 + bx2 + cx3 + d, in which the dependent feature is the pitch of the background audio and the independent variables x1, x2 and x3 are the positions of the shapes at any given moment while the programme is running. Following Murphy (2012), the x values in both calculations are the inputs from which the model produces its output.
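As a hedged illustration of the two regression forms above (written in Python rather than the project’s Unity C#; the function names and coefficient values are placeholders, not the constants actually used in the game):

```python
def shape_pitch(x, a=0.5, b=1.0):
    """Simple linear regression Y = ax + b: one shape's axis position
    determines its pitch. a and b are illustrative coefficients only."""
    return a * x + b

def background_pitch(x1, x2, x3, a=0.25, b=0.25, c=0.25, d=1.0):
    """Multiple linear regression Y = a*x1 + b*x2 + c*x3 + d: the shapes'
    positions jointly determine the background audio's pitch."""
    return a * x1 + b * x2 + c * x3 + d

# A shape at position 0 plays at the intercept pitch b; moving it
# shifts the pitch linearly, matching the pentatonic-friendly behaviour
# described above.
print(shape_pitch(0))              # 1.0
print(background_pitch(1, 1, 1))   # 1.75
```

Because both forms are linear, equal moves along an axis always produce equal semitone changes, which is what makes the controls predictable through experimentation.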

[The effect of the X and Y axes on the pitch value in the game] — excerpt from Appendix 3

The table above gives example semitone values based on the X and Y positions of the shapes relative to the audio sources, which are visually represented as animated boomboxes. The pitch values are displayed on screen during the experience. If I were to develop the calculations further, I would like to experiment with combining linear and polynomial regression differently for each sound, controlling not just the pitch but also echo and surround sound.

[Screenshot of background music Unity code in Microsoft Visual Studio] — for full code see Appendix 2

In addition to the main part of the programme, there is a separate calculation that uses the positions of all the shapes combined to influence the pitch of the background music, in order to complement the piece. I divided the combined pitch values of the sounds used in the scene by 80; without this, the semitone values altered the pitch of the background audio too drastically and eventually crashed the entire system.
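A minimal sketch of that damping step, assuming Python stand-ins for the Unity code (only the divisor of 80 comes from the text; the names and example values are illustrative):

```python
def damped_background_pitch(shape_pitches):
    """Sum the scene's pitch values and scale the result down by 80,
    so the background pitch shifts gently instead of drastically."""
    return sum(shape_pitches) / 80

# With example per-shape pitch values of 2, 1, 1 and 0 semitones,
# the background pitch offset stays very small.
print(damped_background_pitch([2, 1, 1, 0]))  # 0.05
```

The large divisor keeps the background audio’s pitch within a stable range no matter how far the shapes are dragged, which is what prevented the crashes mentioned above.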


Reflection of creative aims and possible developments

I was pleased with the final version of my piece, as I felt it achieved the desired result: not just turning a simple input into a way to play a musical instrument, but also providing a visual system that gives feedback through audio, inspired by how musicians read and interpret sheet music. This was a challenging but integral part of the system.

Unfortunately I was unable to incorporate the range of sensors I had hoped to include, though the final piece still clearly explores and demonstrates the concept I wished to examine. If I were to take this project further, I would like to move from a digital screen to a virtual reality world where the inputs would be performed in an immersive 3D environment. I would also like to utilise the unique attributes of surround sound and echo that can be applied within a 3D environment.


  1. Bailey, H. (2012). Open Broadcaster Software. OBS Project.
  2. Barrett, K., Ashley, R., Strait, D. and Kraus, N., 2010. Art and science: how musical training shapes the brain. The US National Library of Medicine, National Institutes of Health, [online] Available at: <> [Accessed 24 April 2020].
  3. Ciortuz, L., n.d. Machine Learning.
  4. Crystal, D., 1993. The Cambridge Paperback Encyclopedia. 1st ed. Avon: Cambridge University Press.
  5. Corrosius, 2013. Converting float to integer [online] Available at: <> [Accessed 01 May 2020].
  6. doublemax, 2016. How To Restart The Game — Unity Answers. [online] Available at: <> [Accessed 28 April 2020].
  7. Drucker, J., 2011. [DOC] Humanities Approaches To Graphical Display. Digital Humanities Quarterly.<> [Accessed 23 April 2020].
  8. Dance eJay 4. (1997). eJay, Explosive Games, Microsoft.
  9. Fiebrink, R., Sonami, L. and Caramiaux, B., 2020. Machine Learning For Musicians And Artists.
  10. Fonts2u. 2020. Fonts2u.Com Download Fonts. [online] Available at: <> [Accessed 6 November 2019].
  11. GutoThomas, 2012. Adjust Audio Based On Proximity — Unity Answers. [online] Available at: <> [Accessed 8 April 2020].
  12. Levitin, D., 2008. This Is Your Brain On Music: Understanding A Human Obsession. [Unknown City]: Paw Prints.
  13. Murphy, K., 2012. Machine Learning: A Probabilistic Perspective. London: MIT PRESS.
  14. Nade, 2019. Unity Tutorial | Clicking And Dragging 2D Sprites. [video] Available at: <> [Accessed 8 April 2020].
  15. Oliveira, A., 2017. The Digital Mind How Science Is Redefining Humanity. London: The MIT Press.
  16. Patel, A., 2010. Music, Language, And The Brain. Unknown City: Oxford University Press.
  17. Patri0t_Gamer, 2016. How To Move In Game 2D Game Object With Mouse? — Unity Answers. [online] Available at: <> [Accessed 8 April 2020].
  18. Petitpas, A. (1998). Listen While You Work. Athletic Therapy Today, [online] 3(3), pp.10–11. Available at: [Accessed 1 Jan. 2020].
  19. Sound Bible. 2020. open source sounds. [online] Available at: <> [Accessed 9 December 2019].
  20. TED Talks, 2009. Bobby McFerrin: Watch Me Play … The Audience. [video] Available at: <> [Accessed 24 April 2020].
  21. Unity Technologies, 2020. Unity — Scripting API: Input.Mouseposition. [online] Available at: <> [Accessed 8 April 2020].
  22. University of Amsterdam. 2016 Brain picks up the beat of music automatically. ScienceDaily [online] Available at: <> [Accessed 24 April 2020].
  23. Valcarcel, C., 2019. Treating Dyslexia With Music & Video Games. Medium. [online] Available at: <> [Accessed 25 April 2020].
  24. Wan, C. and Schlaug, G., 2010. Music Making as a Tool for Promoting Brain Plasticity across the Life Span. The US National Library of Medicine, National Institutes of Health, [online] Available at: <> [Accessed 24 April 2020].
  25. Winston, E. and Saywood, L., 2019. Beats to Relax/Study To: Contradiction and Paradox in Lo-fi Hip Hop. IASPM Journal [online] Available at: <>



Eclectically skilled indie games dev & researcher interested in crafting in games, budding curator, & lingerie researcher {Neurodiverse — Dyslexic Team}

Shanique Thompson
