FPGA MIDI Synthesizer

by Tom Chen and Jaehee Park

What We Did

We extended Caitlin Riley's project from last year, which used the switches on an FPGA (Field-Programmable Gate Array, essentially a customizable integrated circuit) to produce sine waves corresponding to the eight notes of an A minor scale. Our extension allowed the FPGA to produce sine waves at the frequency of any note on an 88-key piano and to accept MIDI files as input.

A MIDI file contains instructions for playing music such as note pitch and duration, tempo, and velocity/volume. Compare this to an audio file such as an MP3, which actually contains sound data. MIDI files are used to communicate music between different devices such as computers or digital instruments. We used a short program written in Python to read through MIDI files and convert them into binary numbers, then loaded those numbers onto the FPGA as memory. The FPGA interprets these binary numbers to generate sine waves, which are then converted into an analog signal and fed to a speaker.

Why We Did It

We were looking to try something new; neither of us had any previous audio or MIDI experience, but we were both interested in the subject area (think chiptunes). Extending a previous project let us narrow our scope and focus on working with MIDI files.

How We Did It

MIDI Files:

We used a Python library for working with MIDI in order to break the files apart. MIDI files have an identifying header at the beginning, followed by a pattern containing tracks. Most importantly for us, the header contains the time resolution (ticks per quarter note), which lets us convert MIDI time units (ticks) into real time once we know the tempo. Tracks begin simultaneously and each has its own series of events. The format varies, but typically the first track contains the time signature and tempo changes for the song. Songs with multiple notes playing at a time may have either one track that contains all the notes or one track for each line (one for vocals, one for bass, one for guitar, etc.). Notes are specified with noteOn and noteOff events, which carry information about the note such as its channel, key (pitch), and velocity.
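
The write-up does not name the Python MIDI library we used, so the sketch below assumes the mido library, which exposes the same structure (header resolution, tracks, and noteOn/noteOff events); the file name is a placeholder.

  # Minimal sketch of walking a MIDI file's tracks and note events.
  # Assumes the "mido" library; 'song.mid' is a placeholder file name.
  import mido

  mid = mido.MidiFile('song.mid')
  print('ticks per quarter note:', mid.ticks_per_beat)

  for i, track in enumerate(mid.tracks):
      for msg in track:
          if msg.type == 'set_tempo':
              # Tempo is given in microseconds per quarter note.
              print(f'track {i}: tempo = {msg.tempo} us/quarter')
          elif msg.type in ('note_on', 'note_off'):
              # msg.note is the MIDI key number; msg.time is the delta in ticks.
              print(f'track {i}: {msg.type} key={msg.note} '
                    f'velocity={msg.velocity} delta={msg.time} ticks')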

Processing MIDI:

When given a file as input, our program records every note's pitch, octave, and duration (in FPGA clock cycles) as a 29-bit binary number and writes these numbers into a file. Based on the tempo events in the file, we can convert MIDI ticks into real time and then into FPGA clock cycles. Rests and pauses between notes are treated as notes with a pitch/volume of zero.
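
As a rough sketch, the tick-to-cycle arithmetic and the packing of one memory word might look like the following. The exact layout of the 29-bit word isn't documented above, so the field widths used here are assumptions for illustration only.

  # Sketch of the timing conversion; the 29-bit field layout is an assumption.
  FPGA_CLOCK_HZ = 50_000_000          # FPGA internal clock (50 MHz)

  def ticks_to_cycles(ticks, tempo_us_per_quarter, ticks_per_quarter):
      """Convert a duration in MIDI ticks to FPGA clock cycles."""
      seconds = ticks * (tempo_us_per_quarter / 1_000_000) / ticks_per_quarter
      return round(seconds * FPGA_CLOCK_HZ)

  def pack_word(duration_ticks, octave, pitch_value):
      """Hypothetical 29-bit packing: [duration:14][octave:3][pitch:12]."""
      return (duration_ticks << 15) | (octave << 12) | pitch_value

  # Example: a quarter note at 120 BPM (500,000 us/quarter) with a
  # resolution of 480 ticks/quarter lasts 25,000,000 clock cycles (0.5 s).
  print(ticks_to_cycles(480, 500_000, 480))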

FPGA:

We used Caitlin Riley's design and circuit, which can be found at http://wikis.olin.edu/ca/doku.php?id=projects:fpga_note_generator. In case the link is broken: the FPGA contains a direct digital synthesizer, which uses a sine wave lookup table and a phase accumulator to output a sine wave whose frequency is determined by the lowest switch that is flipped. This sine wave runs through an R-2R resistor ladder, which acts as a digital-to-analog converter so that an attached speaker can play the note. The sections below run through our additions to Caitlin's design.
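
For readers who can't reach the link, a behavioral Python model of the direct digital synthesis idea is sketched below; the accumulator width and table size are illustrative assumptions, not values taken from Caitlin's design.

  # Behavioral model of a direct digital synthesizer (DDS).
  # Accumulator width and table size are assumptions for illustration.
  import math

  CLOCK_HZ   = 50_000_000                 # FPGA clock
  ACC_BITS   = 26                         # phase accumulator width (assumed)
  TABLE_BITS = 8                          # 256-entry sine lookup table (assumed)

  SINE_LUT = [int(127 + 127 * math.sin(2 * math.pi * i / 2**TABLE_BITS))
              for i in range(2**TABLE_BITS)]

  def dds_samples(freq_hz, n_samples):
      """Yield n_samples of an 8-bit sine wave at freq_hz."""
      phase_inc = round(freq_hz * 2**ACC_BITS / CLOCK_HZ)
      acc = 0
      for _ in range(n_samples):
          acc = (acc + phase_inc) & (2**ACC_BITS - 1)     # accumulator wraps around
          yield SINE_LUT[acc >> (ACC_BITS - TABLE_BITS)]  # top bits index the table

  samples = list(dds_samples(440.0, 10))  # first 10 samples of an A4 tone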

Loading MIDI Data:

We implemented a simple memory module that essentially loads the converted MIDI data into a large register. When an address inside the register is provided, the memory outputs the 29-bit binary number stored at that address. This memory is read-only.
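
A behavioral sketch of that memory in Python is below. The on-disk format of the generated memory file isn't described above, so the assumption here is one 29-bit binary string per line, and 'notes.mem' is a placeholder name.

  # Behavioral model of the read-only note memory.
  # Assumes one 29-bit binary string per line; 'notes.mem' is a placeholder.
  def load_memory(path='notes.mem'):
      with open(path) as f:
          return [int(line.strip(), 2) for line in f if line.strip()]

  memory = load_memory()
  first_word = memory[0]     # providing address 0 returns the first 29-bit word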

Providing Phase Accumulator Input:

The direct digital synthesizer takes a phase increment value, which is the frequency of the desired note (in Hz) multiplied by approximately 1.3425. Instead of having the eight FPGA switches correspond to eight values, we use 12 bits of our input to provide the value for the pitch and three bits of the input to shift that value to the right octave.
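
A worked example of that calculation is below. The 1.3425 factor comes from the text above; for what it's worth, it is consistent with a 26-bit phase accumulator clocked at 50 MHz (2^26 / 50,000,000 ≈ 1.342). The octave shift shown is an assumption about how the 3-bit field is used.

  # Worked example of the phase increment calculation (illustrative values).
  PHASE_FACTOR = 1.3425                  # factor quoted in the text above

  def phase_increment(freq_hz):
      return round(freq_hz * PHASE_FACTOR)

  base = phase_increment(440.0)          # A4 at 440 Hz -> about 591
  one_octave_up = base << 1              # doubling the value raises it one octave (880 Hz)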

Changing Memory Addresses:

We use the FPGA's internal clock, which runs at 50 MHz. Because longer musical notes span a huge number of clock cycles, we subdivide this clock according to the value determined by the Python script (this value is entered by hand). The resulting outer loop runs at the rate of one MIDI “tick,” and all note lengths are expressed in whole numbers of ticks. When the outer loop has run a number of times equal to the length stored at the current memory address, the counter resets and the address increments by one. The synthesizer then takes in the new information to produce the next note or rest.
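
The divider value that gets entered by hand can be computed as in the sketch below; the tempo and resolution shown are example numbers, not values from a real song.

  # Sketch of computing the clock-divider value (cycles per MIDI tick).
  # Tempo and resolution below are example values, not from a real song.
  FPGA_CLOCK_HZ    = 50_000_000
  TEMPO_US_PER_QTR = 500_000             # 120 BPM
  TICKS_PER_QTR    = 480

  seconds_per_tick = TEMPO_US_PER_QTR / 1_000_000 / TICKS_PER_QTR
  cycles_per_tick  = round(FPGA_CLOCK_HZ * seconds_per_tick)   # 52,083

  # The FPGA counts clock cycles up to cycles_per_tick to mark one tick,
  # counts ticks up to the current note's length, then increments the
  # memory address to fetch the next note.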

How Can It Be Built Upon?

Code:

The code is provided for download on the wiki. A sample MIDI file and its accompanying generated memory file are included, as well as the Xilinx project and the Python script. The code is commented.

Schematics:

The schematics for the physical circuit do not differ from Caitlin's design; see her project page for circuit diagrams and pictures.

Gotchas:

Remembering Register Width:

I (Tom) spent an embarrassingly long time trying to subdivide the clock because I forgot to declare the clkdivide register with a width: I was asking a 1-bit register to count up to 46150.

Debug Time!

At the very end of our project, we ran into a problem with the circuit we were using, which was unexpected because it was Caitlin's proven design. Instead of playing music, the op-amp in the circuit became scalding hot. We had not budgeted time to debug the circuit and had not included any other output such as LED lights or LCD text, so we couldn't tell whether only the circuit had broken and our code was working, or whether both were flawed in some way. This was also the first time we had code on the FPGA that ran without accepting any input once loaded, so we had no way to tell what it was doing at any given moment.

Work Plan Reflection

Since the team was only two people, overlapping obligations meant we weren't meeting as regularly as we had hoped. However, we were still able to achieve our goal of becoming more familiar with MIDI files and understanding audio generation on an FPGA. The big flaws were not meeting for a few days at the beginning of the project and not including time for debugging (which that extra time at the beginning could have covered).

Possible To-Do’s:

-Currently, the Python script parses tracks in order and assumes that only one note is playing at a time, meaning that, for example, the melody will play and then the bass will play sequentially. The script could be altered to parse all tracks simultaneously or to combine all tracks.

-The FPGA code could be altered so that multiple notes could be played at once. Additive synthesis (adding sine waves together) can produce additional tones and chords.
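
As a starting point for that second item, additive synthesis is just a sample-by-sample sum of the individual sine waves; a minimal sketch (with an assumed sample rate and note pair) is below.

  # Minimal sketch of additive synthesis: summing two sine waves so that
  # two notes sound at once. Sample rate and frequencies are illustrative.
  import math

  SAMPLE_RATE = 48_000

  def chord_samples(freqs, n_samples):
      for n in range(n_samples):
          t = n / SAMPLE_RATE
          # Sum the sines, then scale so the result stays within [-1, 1].
          yield sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)

  samples = list(chord_samples([440.0, 554.37], 1000))   # A4 + C#5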
