====== Why We Did It ======
  
We were looking to try something new, and neither of us had any previous audio or MIDI experience, but we were interested in that subject area (think chiptunes). Extending a previous project would allow us to narrow our scope and focus on working with MIDI files.
  
====== How We Did It ======
**MIDI Files:**
  
We used a Python library for working with MIDI in order to break the files apart. MIDI files have an identifying header at the beginning, followed by a pattern containing tracks. Most importantly for us, the header contains the resolution of the track, which is the number of MIDI "ticks" per beat. All MIDI time is measured in ticks, so the resolution allows us to go from tempo (beats per unit time) to ticks per unit time. We can then convert that into clock cycles per tick.
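
To make the arithmetic concrete, here is a minimal sketch of that conversion in Python. The resolution and tempo values are example numbers for illustration, not taken from any particular song.

<code python>
# Sketch of the resolution/tempo-to-clock-cycles conversion described above.
# The resolution and tempo here are example values, not from a specific song.
FPGA_CLOCK_HZ = 50_000_000        # the FPGA's 50 MHz internal clock

def cycles_per_tick(resolution, tempo_bpm):
    """resolution: MIDI ticks per beat, tempo_bpm: beats per minute."""
    ticks_per_second = resolution * (tempo_bpm / 60.0)
    return round(FPGA_CLOCK_HZ / ticks_per_second)

# 480 ticks per beat at 120 BPM -> 960 ticks per second -> ~52083 cycles per tick
print(cycles_per_tick(480, 120))
</code>
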
Tracks begin simultaneously, and each has its own series of events. A track proceeds sequentially, so each event only contains the time delay since the previous event instead of the absolute time from the start. The format varies, but typically the first track contains the time signature and tempo changes for the song. Songs that have multiple notes playing at a time may have either one track that contains all notes or one track for each line (one for vocals, one for bass, one for guitar, etc.). Notes are specified with noteOn and noteOff events, which contain all the information about the note, such as instrument, channel, and key. The length of a note can be determined by adding up the delays of all the events between its noteOn event and its associated noteOff event.
  
**Processing MIDI:**
  
When given a file as input, our program records every note’s pitch (12 bits for the chromatic scale), octave (four bits for octaves zero through eight), and length (13 bits) in a 29-bit binary number. Based on tempo events in the file, we can convert from MIDI ticks into regular time and then into FPGA clock cycles. Depending on the song, more bits may have to be allocated to the note length, or the clock may have to be "slowed down" so that the length remains under the size limit. Rests and pauses between notes are treated as notes with a pitch/volume of zero. The Python library handles other aspects such as track organization.
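
As an illustration of the 29-bit format, the sketch below packs one note into a word. The exact bit ordering is an assumption made for this example; the real layout is defined by our Python script and Verilog.

<code python>
# Illustrative packing of one note into a 29-bit word (12-bit pitch,
# 4-bit octave, 13-bit length). The field order here is an assumption.
def pack_note(pitch, octave, length_ticks):
    assert 0 <= pitch < (1 << 12)          # one bit per chromatic note, 0 = rest
    assert 0 <= octave < (1 << 4)          # octaves zero through eight
    assert 0 <= length_ticks < (1 << 13)   # lengths must fit in 13 bits
    return (length_ticks << 16) | (octave << 12) | pitch

# Example: the note A (bit 9 set), octave 4, lasting 96 ticks
print(f"{pack_note(1 << 9, 4, 96):029b}")
</code>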
  
**FPGA:**
  
We used Caitlin Riley’s design and circuit. In case the link at the top of the page is broken: the FPGA contains a direct digital synthesizer, which uses a sine wave lookup table and a phase accumulator to output a sine wave at a particular frequency. In her case, the frequency was selected with the FPGA hardware switches; we use a value loaded from memory instead. This sine wave runs through an R-2R resistor ladder, which acts as a digital-to-analog converter so that an attached speaker can play the note. We will run through our additions to Caitlin’s design below.
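
As a rough software model of the idea (not the actual Xilinx DDS core), the sketch below shows how a phase accumulator and a sine lookup table produce samples. The table size and accumulator width are assumptions for illustration.

<code python>
# Rough model of a direct digital synthesizer: a phase accumulator indexes a
# sine lookup table. Table size and accumulator width are assumed values,
# not the parameters of the Xilinx DDS core we actually used.
import math

ACC_BITS = 26                              # phase accumulator width
LUT_BITS = 8                               # 256-entry sine table
SINE_LUT = [int(127 * math.sin(2 * math.pi * i / (1 << LUT_BITS)))
            for i in range(1 << LUT_BITS)]

def dds_samples(phase_increment, n):
    """Yield n samples of a sine wave; larger increments give higher pitch."""
    phase = 0
    for _ in range(n):
        phase = (phase + phase_increment) & ((1 << ACC_BITS) - 1)
        yield SINE_LUT[phase >> (ACC_BITS - LUT_BITS)]   # top bits index the LUT

# e.g. list(dds_samples(590, 1000)) -> samples of a ~440 Hz tone at a 50 MHz rate
</code>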
  
**Loading MIDI Data:**
  
We implemented a simple memory module which essentially loads the MIDI file into a large set of registers. When a valid address is provided, the memory outputs the 29-bit binary number stored in the register at that address. This memory is read-only.
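
In software terms, the memory behaves roughly like the sketch below, assuming the generated memory file holds one binary word per line (the file format here is an assumption, not a description of our exact output).

<code python>
# Rough software analogue of the read-only note memory: load the generated
# memory file (assumed to hold one binary word per line) into a list and
# return the 29-bit word at a given address.
def load_memory(path):
    with open(path) as f:
        return [int(line.strip(), 2) for line in f if line.strip()]

def read_word(memory, address):
    return memory[address] if 0 <= address < len(memory) else 0
</code>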
  
**Providing Phase Accumulator Input:**
  
The direct digital synthesizer takes a phase increment value, which is a 26-bit binary number representing the frequency of the desired note multiplied by approximately 1.3425. Instead of having the eight FPGA switches correspond to eight values, we use 12 bits of our input (there are twelve notes in a chromatic scale) to provide the correct value for the pitch, and then three bits of the input to shift the value to the right octave.
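
A minimal sketch of that calculation follows, assuming standard equal-tempered frequencies for a base octave and the ~1.3425 factor above (which is roughly 2^26 divided by the 50 MHz clock). The base frequencies are purely illustrative.

<code python>
# Sketch of deriving a phase increment: a base-octave note frequency times
# ~1.3425 (about 2^26 / 50 MHz), then shifted left to reach higher octaves.
# The base frequencies are standard equal-tempered values for C4..B4,
# used here only for illustration.
BASE_FREQS_HZ = [261.63, 277.18, 293.66, 311.13, 329.63, 349.23,
                 369.99, 392.00, 415.30, 440.00, 466.16, 493.88]

def phase_increment(note_index, octave_shift):
    return int(BASE_FREQS_HZ[note_index] * 1.3425) << octave_shift

print(phase_increment(9, 0))   # A4 (440 Hz) -> increment of about 590
</code>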
  
**Changing Memory Addresses:**
  
We use the FPGA internal clock, which runs at 50 MHz. Because longer musical notes can span a huge number of clock cycles, we subdivide this clock according to the values determined by the Python script (these values are entered into the FPGA code by hand). The outer loop runs at the speed of one MIDI “tick,” since all note lengths are expressed in whole numbers of ticks. When the outer loop has run a number of times equal to the length stored at the current memory address, the counter resets and the address increments by one. The synthesizer then takes in the new information to produce a new note or a rest.
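
The real logic lives in Verilog on the FPGA, but a rough Python model of the two nested counters might look like the sketch below. The cycles-per-tick value and the position of the length field are assumptions for illustration.

<code python>
# Rough software model of the nested counters: count 50 MHz cycles into MIDI
# ticks, count ticks against the current note's length, then advance the
# address. Cycles-per-tick and the length field position are assumed values.
CYCLES_PER_TICK = 52083

def clock_step(state, memory):
    """Advance the model by one 50 MHz clock cycle."""
    length_ticks = memory[state["address"]] >> 16    # assumed 13-bit length field
    state["cycles"] += 1
    if state["cycles"] == CYCLES_PER_TICK:           # one MIDI tick has elapsed
        state["cycles"] = 0
        state["ticks"] += 1
        if state["ticks"] == length_ticks:           # current note is finished
            state["ticks"] = 0
            state["address"] += 1                    # move to the next 29-bit word
</code>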
  
====== How Can It Be Built Upon? ======
**Code:**
  
The commented code is provided for download on the wiki. The archive file includes helpful links about MIDI files, the Python script, the Xilinx project, the sample MIDI file we used, and the generated memory file. You can download the Python MIDI library at [[https://github.com/vishnubob/python-midi|github]], but it is also included in the archive.

You can set the Python script to output every event in a MIDI file with the following lines:

<code python>
import midi

data = midi.read_midifile("FILE NAME HERE")
print(data)
</code>

If you want to simulate the synthTop.v file inside ModelSim, you have to make your own test bench and remove the black-box DDS, since the DDS core is available only in Xilinx tools.

To load the project, install the Xilinx ISE Design Suite and open NoteGeneratorCopy.xise. The code should already be synthesized (unless you alter it). You can then "configure target device" and load the code onto an FPGA.
  
**Schematics:**
  
**Debug Time!**
At the very end of our project, we encountered a problem with the circuit we were using. This was unexpected because we were using Caitlin’s proven design. Instead of playing music, the op-amp in the circuit became scalding hot. We had not budgeted time to debug the circuit and thus didn’t include any other output, such as LED lights or LCD text, so we couldn’t systematically determine whether only the circuit had broken while our code was working, or whether both were flawed in some way. This was also the first time we had code on the FPGA that ran without accepting any input once it was loaded, so we didn't know what it was doing at any given moment.
  
**Work Plan Reflection**
  
Since our meetings involved just the two of us, we weren’t able to meet as regularly as we had hoped due to overlapping obligations. The big flaws were not meeting for a few days at the beginning of the project and not including time for debugging (which we could have used that extra time at the beginning for). However, we were still able to reach our goal of becoming more familiar with MIDI files and understanding audio generation on an FPGA. We completed all green goals in our original project proposal except producing square waves (which we changed to producing sine waves).
  
**Possible To-Do’s:**
  
-Currently, the Python script parses tracks in order and assumes that only one note is playing at a time, meaning that, for example, the melody will play and then the bass will play afterwards without overlapping. The script could be altered to parse all tracks simultaneously or to combine all tracks.

-The FPGA code could be altered so that multiple notes can be played at once. Additive synthesis (adding sine waves together) can produce additional tones and chords. This would accompany the above suggestion of changing the output of the Python script.
  
--The FPGA code could be altered so that multiple ​notes could be played at once. Additive synthesis (adding sine waves together) can produce ​additional tones and chords.+-Since binary numbers are integers and we're shifting low pitches up to change octaves, the phase increment inputs will be slightly inaccurate for higher ​notes. Future work could include experimenting with different methods to produce ​accurate phase increment input for the entire range of notes.
  
{{:2014:finalzip.zip|}}