Project Presentation: Arduino code and photo documentation can be found here: https://docs.google.com/presentation/d/1Ig4Z9lAPDBJCiE4rniZuDi3mMZ-nqn-pAv8sS34mBl8/edit?usp=sharing

My project is a physical sound visualizer that is "graphing calculator sized" and uses LEDs to show the amplitude, frequency, and waveform of sound. There are 4 LEDs each for amplitude, frequency, and waveform that light up as each quality changes, plus an LCD that shows which color represents which sound aspect. A physical sound visualizer could be used at music events or in the emergency room for those who are hearing impaired, or by hunters trying to distinguish their prey.

The device works by having a user speak out loud; through a laptop microphone, p5.speech and p5.sound detect the amplitude, frequency, and waveform of what was said. The sketch then uses serial communication to send messages through the Arduino IDE to the LEDs and LCD (a simplified sketch of this pipeline follows the instructions below). Through this process, I learned how much time perfecting the final details and debugging can take. In the future, I want to make the device more stable and reliable and try it in different settings to see how it reacts in different room types.

Instructions:
Twist the potentiometer to the right to turn on the LCD display. Open the p5 code link and press play; when the screen loads, you will see three incoming numbers representing amplitude, frequency, and waveform. Adjust your voice to see the LEDs light up according to your voice's amplitude, frequency, and waveform.
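To give a sense of the pipeline, here is a minimal sketch (not the project's actual code) of how a p5.sound amplitude reading can be forwarded to an Arduino over serial. It assumes the p5.serialport library with its local serial server running; the port name and the 0-3 LED-index protocol are placeholders I made up for illustration.

```javascript
// Minimal sketch: read mic level with p5.sound, send an LED index over serial.
// Assumes the p5.serialport library; the port name below is a placeholder.
let mic, amplitude, serial;

function setup() {
  createCanvas(200, 100);
  mic = new p5.AudioIn();                 // laptop microphone
  mic.start();
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
  serial = new p5.SerialPort();           // needs the p5.serialport server running
  serial.open('/dev/tty.usbmodem1411');   // placeholder port name
}

function draw() {
  background(0);
  // getLevel() returns 0.0-1.0; bucket it into one of the 4 amplitude LEDs
  const level = amplitude.getLevel();
  const led = floor(constrain(map(level, 0, 0.3, 0, 4), 0, 3));
  serial.write(led + '\n');               // the Arduino side parses the index
  fill(255);
  text('amplitude LED: ' + led, 10, height / 2);
}
```

The frequency and waveform LEDs would follow the same pattern, with p5.FFT readings in place of the amplitude level.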
My dream idea is a device that puts intonation on Braille. The user would talk into the device, and the device would print what was said in Braille along with the amplitude, pitch, and resonance the user spoke with. The amplitude, pitch, and resonance would be 3D printed onto "receipt-like" paper (think 3D-printed mountains). The user would also be required to take a voice test before using the device to establish a baseline amplitude, pitch, and resonance.

My realistic take on this dream idea turns speech into text and displays the text, frequency, amplitude, and waveform of the speech.

Presentation: https://docs.google.com/presentation/d/1hmDc8wMB_AN_X-b-ZS3E5VNQY9bc8bZ539Bm3rzh1ks/edit?usp=sharing
Project: https://editor.p5js.org/[email protected]/present/y2kPBRRLh
Project w/ code: https://editor.p5js.org/[email protected]/sketches/y2kPBRRLh

Instructions:
Speak into your device and see the amplitude, frequency, and waveform of what you said! The words will print when you pause or finish a sentence.

Key:
Frequency (pitch) - blue and purple parts on the left and right
Amplitude (volume) - red circle in the center
Waveform (volume/time) - moving circles in the background
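Here is a simplified sketch of the analysis behind those three visuals, using only p5.sound (the p5.speech text printing is omitted, and the drawing is far plainer than the project's):

```javascript
// Simplified analysis sketch: frequency bars, an amplitude circle,
// and waveform dots, all driven by the laptop microphone.
let mic, fft, amplitude;

function setup() {
  createCanvas(400, 200);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
  amplitude = new p5.Amplitude();
  amplitude.setInput(mic);
}

function draw() {
  background(0);
  // Frequency (pitch): spectrum bars mirrored on the left and right
  const spectrum = fft.analyze();          // 1024 bands, values 0-255
  stroke(120, 100, 255);
  for (let i = 0; i < spectrum.length; i += 8) {
    const h = spectrum[i] / 2;
    line(i / 8, height, i / 8, height - h);
    line(width - i / 8, height, width - i / 8, height - h);
  }
  // Amplitude (volume): a red circle in the center
  noStroke();
  fill(255, 0, 0);
  ellipse(width / 2, height / 2, amplitude.getLevel() * 400);
  // Waveform (volume over time): faint circles across the background
  const wave = fft.waveform();             // samples in the range -1..1
  fill(255, 60);
  for (let i = 0; i < wave.length; i += 64) {
    ellipse(map(i, 0, wave.length, 0, width), height / 2 + wave[i] * 60, 8);
  }
}
```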
This week I made a piano-like synthesizer with do-re-mi notes that correspond with the '1'-'8' keys on a laptop. When any key from '1'-'8' is pressed, the synthesizer changes color and an ellipse moves to that key's corresponding spot on the synthesizer. This was a realistic take on our dream idea of an interactive footpath sequencer. The dream sequencer would be tiles placed on the sidewalk so users could work together (or alone) to create a melody by walking on the tiles; each tile would have a pressure sensor corresponding to a note.

Presentation: https://docs.google.com/presentation/d/1dDqSApRayEsu51S5JZgkTcrYMgE4A-BJvE9229_2n10/edit?usp=sharing
Code: https://editor.p5js.org/[email protected]/sketches/D8wsQI8ao

Instructions:
Press any key from '1'-'8' on a laptop or tablet for a different sound. Play around and make your own melody!
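Here is a sketch of the core key-to-note mapping. I am assuming a C major scale's frequencies and a much simpler layout than the project's; treat it as an illustration, not the actual code.

```javascript
// Eight keys, eight notes: '1'-'8' trigger do-re-mi-...-do.
// Frequencies are an assumed C major scale (C4 up to C5).
const scaleFreqs = [261.63, 293.66, 329.63, 349.23, 392.0, 440.0, 493.88, 523.25];
let osc;

function setup() {
  createCanvas(400, 100);
  background(30);
  osc = new p5.Oscillator(440, 'sine');
  osc.start();
  osc.amp(0);
}

function keyPressed() {
  const i = '12345678'.indexOf(key);
  if (i >= 0) {
    osc.freq(scaleFreqs[i]);                        // jump to that key's note
    osc.amp(0.5, 0.05);                             // quick fade in
    background(i * 30, 100, 200);                   // color changes per key
    fill(255);
    ellipse((i + 0.5) * width / 8, height / 2, 20); // marker at that key's spot
  }
}

function keyReleased() {
  osc.amp(0, 0.2);                                  // fade out on release
}
```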
This week I made a synthesizer with a drone that allows users to listen to and compare the sound differences between a major scale, a minor scale, a chord progression, and a fifth.

Project: https://editor.p5js.org/[email protected]/present/hViQVxxdS
Project w/ code: https://editor.p5js.org/[email protected]/sketches/hViQVxxdS

Instructions:
After the sketch loads, a black screen will appear, divided into 4 equal parts: major, minor, chords, and fifths. To trigger one, click anywhere on the black screen. When clicking in the major or minor quadrants, a random major or minor scale is generated, and the user must keep clicking to hear the next note in the scale. When clicking in the chord or fifth quadrants, a random chord or fifth is played.

Key:
Top left - major
Top right - chords
Bottom left - minor
Bottom right - fifths
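Here is a guess at how the quadrant logic might be wired up, using p5.sound's midiToFreq. The drone, the random roots, and the project's exact voicings are omitted or simplified; the root note and triad choice below are my assumptions.

```javascript
// Four quadrants: major scale (top left), chord (top right),
// minor scale (bottom left), fifth (bottom right).
const MAJOR = [0, 2, 4, 5, 7, 9, 11, 12];   // semitone steps of a major scale
const MINOR = [0, 2, 3, 5, 7, 8, 10, 12];   // natural minor
let oscs = [];
let root = 60;                               // MIDI 60 = middle C (assumed root)
let step = 0;

function setup() {
  createCanvas(400, 400);
  background(0);
  for (let i = 0; i < 3; i++) {              // three voices cover a triad
    const o = new p5.Oscillator(440, 'sine');
    o.start();
    o.amp(0);
    oscs.push(o);
  }
}

function mousePressed() {
  oscs.forEach((o) => o.amp(0, 0.05));       // silence all voices first
  if (mouseX < width / 2) {
    // left half: scales, one note per click
    const scale = mouseY < height / 2 ? MAJOR : MINOR;
    oscs[0].freq(midiToFreq(root + scale[step]));
    oscs[0].amp(0.5, 0.05);
    step = (step + 1) % scale.length;
  } else if (mouseY < height / 2) {
    // top right: a major triad (root, third, fifth) at once
    [0, 4, 7].forEach((s, i) => {
      oscs[i].freq(midiToFreq(root + s));
      oscs[i].amp(0.3, 0.05);
    });
  } else {
    // bottom right: a bare fifth (root plus 7 semitones)
    [0, 7].forEach((s, i) => {
      oscs[i].freq(midiToFreq(root + s));
      oscs[i].amp(0.4, 0.05);
    });
  }
}
```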
This week I made three different timbres with additive synthesis, two different timbres with subtractive synthesis, and two different timbres with FM synthesis, all in p5.js. The instructions for each synthesis are at the top of the code on the left.

Additive Synthesis:
https://editor.p5js.org/[email protected]/sketches/jhI7X2eHn
https://editor.p5js.org/[email protected]/sketches/_2I-3an4k
https://editor.p5js.org/[email protected]/sketches/3B-mFhvBE

Presentation: https://docs.google.com/presentation/d/1T07A1NwIuiP3rNpRuv31aNc26b9nC1xT0GpDxVtOWq4/edit?usp=sharing
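As a generic illustration of the additive approach (not one of my three timbres): summing sine oscillators at integer multiples of a fundamental builds a richer tone, and changing the relative amplitudes of the partials changes the timbre.

```javascript
// Additive synthesis in miniature: four sine partials at harmonics
// of a 220 Hz fundamental. The amplitude ratios here are arbitrary.
const fundamental = 220;                     // A3
const amps = [0.5, 0.25, 0.12, 0.06];        // each harmonic quieter than the last
let partials = [];

function setup() {
  createCanvas(200, 100);
  for (let i = 0; i < amps.length; i++) {
    const o = new p5.Oscillator(fundamental * (i + 1), 'sine');
    o.start();
    o.amp(0);
    partials.push(o);
  }
}

function mousePressed() {
  // turn all partials on together; their sum is the composite timbre
  partials.forEach((o, i) => o.amp(amps[i], 0.05));
}

function mouseReleased() {
  partials.forEach((o) => o.amp(0, 0.2));
}
```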
For this week, my dream idea is a device that can take any video and turn it into a harmonious melody. Each video would be split into multiple frames, and each frame would be reduced to how much red, blue, yellow, black, and white it contains. Depending on the percentage of each color in the frame, the device would output a sound, and the sounds would then be combined into a (hopefully) harmonious melody. One way to prevent dissonance would be an algorithm that only allows sounds to play when they would be consonant with other sounds (a third, fifth, or octave apart). But, for now, that's only a dream concept.

This week my project loads a live video capture and lets users generate sounds by dragging their mouse along the live video. The sound(s) played are determined by the percentage of red and blue in the area of the live video capture that the mouse is on. The color the mouse is on can be seen in the square in the top left corner of the preview window.

Present: https://editor.p5js.org/[email protected]/present/vXO1_0LBm
Edit: https://editor.p5js.org/[email protected]/sketches/vXO1_0LBm

Instructions for the links above:
How to turn your video capture into a melody: First, hit play and wait for the video capture to appear. Be sure to click once in the preview window to ensure the mouse is synced with the canvas. Then, drag your mouse along different colors in the video capture to generate sounds.
Tip: Drag the mouse along places with more color variation for different notes.
Note: The color the mouse is on can be seen in the square in the top left corner of the preview window.
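Here is a rough sketch of the color-to-sound idea. I am assuming one oscillator pitched by the red value and one by the blue, with frequency ranges I picked arbitrarily; the project's actual mapping may differ.

```javascript
// Drag the mouse over the live capture; the red and blue values under
// the cursor set the pitch of two oscillators.
let video, oscRed, oscBlue;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();                              // draw the capture on the canvas instead
  oscRed = new p5.Oscillator(440, 'sine');
  oscBlue = new p5.Oscillator(440, 'triangle');
  oscRed.start();
  oscRed.amp(0);
  oscBlue.start();
  oscBlue.amp(0);
}

function draw() {
  image(video, 0, 0, width, height);
  const c = video.get(mouseX, mouseY);       // [r, g, b, a] under the mouse
  fill(c);
  rect(0, 0, 30, 30);                        // color swatch in the top left corner
  if (mouseIsPressed) {
    // more red -> higher pitch on one voice; more blue on the other
    oscRed.freq(map(red(c), 0, 255, 200, 800));
    oscBlue.freq(map(blue(c), 0, 255, 100, 400));
    oscRed.amp(0.3, 0.05);
    oscBlue.amp(0.3, 0.05);
  } else {
    oscRed.amp(0, 0.2);
    oscBlue.amp(0, 0.2);
  }
}
```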
For this week, I chose Ableton's chord progression tool, Ableton's basslines tool, and Music Lab's arpeggios tool to explore harmony. After making progressions in each one, I feel that Ableton's basslines tool was the easiest to understand. It was so straightforward that even I, a beginner, didn't have any complaints. While basslines was the most user friendly to me, I felt that Ableton's chord progression tool was the most expressive. Maybe it's just my soft spot for piano-like sounds, but the double-octave range really allowed the emotion to pull through. Ableton's setup is simple but allows for many ways to be expressive: double octaves, playing two sounds at once, stretching notes, a bar showing what part of the melody is being played, bpm control, and key choices. However, I would have preferred different instrument or sound options for the chord progressions. While I like the piano sound, it would be useful to hear what I made on multiple different instruments.

Music Lab's arpeggios tool includes multiple arpeggio patterns that can be played in a range of keys. I find the tool's layout very clever and interesting: it packs in many functions while remaining mostly user friendly. Still, there are a few things I would change. The tool has two arrows that let the user switch between sound patterns; when I first used it, I didn't realize there were multiple pattern options, and I wouldn't have discovered them if this weren't an assignment. Because we had to play with each tool for at least 3 minutes, I was pushed to explore all the different functions. I also feel this tool lacks control, and I wish I could make my own pattern instead of being restricted to the presets.

Below are 2 videos I made in Ableton. I didn't record one for Music Lab because it didn't have (or my laptop couldn't load) an option to save the pattern I made.
https://vimeo.com/327301677
https://vimeo.com/327301715
https://editor.p5js.org/[email protected]/sketches/F1Oarh90w
For this melody sequencer, I have 3 octaves with 21 steps for a full range of sound. The sounds are compiled from 4 synth sounds I downloaded from freesounds.org. To use it, click the step(s) you want and then click play. When playing, the note currently being played is highlighted with a purple tint.

https://editor.p5js.org/[email protected]/sketches/Fw1uW_Ip2
For this melody sequencer, I wanted it to resemble traditionally written music. Here, I have 2 octaves and no vertical lines. Instead of rectangles, I opted for ellipses to resemble the look of a music note, and the sounds are traditional music sounds to be closer to traditional instruments. To use it, click the step(s) you want and then click play; the note currently being played is highlighted with a tint. In the future, I'd like to make each octave a treble clef and a bass clef.

https://editor.p5js.org/[email protected]/sketches/IFvE0ExC4
For this melody sequencer, I focused more on visuals. To use it, click the step(s) you want and then click play. After you click play, the background goes from black to a purple pattern dictated by the bars/columns of the sequencer. The sounds are compiled from 2 synth sounds I downloaded from freesounds.org and 2 traditional music sounds. I wanted to experiment and see if the sounds would get some funky meshed effect, but they always ended up separating: either 2 synth sounds at the beginning and 2 regular at the end, or vice versa. When playing, the note currently being played is highlighted with a purple tint.
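All three sequencers share the same core: click cells to toggle steps, then a playhead sweeps the grid and highlights the current column. Here is a bare-bones version of that loop, with assumptions: a single oscillator stands in for the downloaded samples, an 8x8 grid replaces the 21-step layouts, and playback runs continuously rather than behind a play button.

```javascript
// Click cells to toggle steps; the playhead sweeps left to right,
// tinting the current column purple and sounding its active note.
const STEPS = 8;
const NOTES = 8;
const cell = 40;
let grid, osc, col = 0;

function setup() {
  createCanvas(STEPS * cell, NOTES * cell);
  grid = Array.from({ length: STEPS }, () => new Array(NOTES).fill(false));
  osc = new p5.Oscillator(440, 'sine');
  osc.start();
  osc.amp(0);
  frameRate(4);                              // one column per frame sets the tempo
}

function draw() {
  background(0);
  for (let x = 0; x < STEPS; x++) {
    for (let y = 0; y < NOTES; y++) {
      if (x === col) {
        fill(grid[x][y] ? color(200, 120, 255) : 90); // purple tint on the playhead
      } else {
        fill(grid[x][y] ? 255 : 60);
      }
      rect(x * cell, y * cell, cell - 2, cell - 2);
    }
  }
  // sound the first active note in the current column, if any
  const y = grid[col].indexOf(true);
  if (y >= 0) {
    osc.freq(midiToFreq(72 - y));            // higher rows = higher notes
    osc.amp(0.4, 0.02);
  } else {
    osc.amp(0, 0.1);
  }
  col = (col + 1) % STEPS;
}

function mousePressed() {
  const x = floor(mouseX / cell);
  const y = floor(mouseY / cell);
  if (x >= 0 && x < STEPS && y >= 0 && y < NOTES) grid[x][y] = !grid[x][y];
}
```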
This week we used Music Lab's Melody Maker and Kandinsky, and Ableton's learning tools for Notes and Scales, Basslines, and Melodies. Out of the 5, I felt that Melody Maker and all 3 of Ableton's tools were very easy to use and understand. Although those 4 tools were all user friendly, I prefer Ableton because it had more options for making music, such as stretching a note. The layout of notes in the left column was very straightforward and felt like playing a piano. I played piano for 12 years when I was younger, so that may be why I feel biased toward piano-related layouts. While I feel that Kandinsky was very limiting in terms of making beats, I really like the idea of attaching corresponding sounds to a drawing. In terms of expressiveness, I feel Ableton's tools allowed me to be more expressive because they offered more variety, which is useful if someone already has a music background. However, Kandinsky may be more expressive for someone who has never played an instrument, because it lets the user translate an expression of art into music without having to read music. Kandinsky was also the only tool without an option to change bpm, which was limiting because speed heavily affects the tone of music, resulting in less expressiveness.

Thanks for the extension! I was sick.
In Chapter 1, Levitin focuses on fundamental music theory and its connection to the brain and auditory system. Chapter 2 focuses on the distinctions between rhythm, tempo, melody, and frequency in the brain. Rhythm is the duration of each note in a series of notes and how they group together to form units. Tempo refers to the pace of the piece, and frequency is the property of sound that most determines pitch. Together, rhythm and the change in frequency from one note to the next combine to form melody. Levitin then explains how the brain can tell that the same song is playing even if its rhythm, tempo, melody, or frequency is changed. Recognizing a piece played in different keys or pitches is called melodic transposition. Gestalt psychologists were interested in this and wondered how a piece could retain its identity even when all its parts are altered. Levitin relates this to the Mona Lisa and how we would not be able to recognize the painting if her eyes, nose, mouth, and hair were all changed. Auditory grouping is different from the other senses: our brains do not hear each note or harmony separately, but instead hear the instrument as a whole (piano, oboe, trumpet). Consonance and dissonance in the brain are also explored. We find sounds pleasing when they are consonant and displeasing when they are dissonant. This is processed in our brain stem and the dorsal cochlear nucleus, primitive structures found in all vertebrates, before reaching the higher-level thinking part of the human brain, the cortex.