By Steven Cooreman
I made a blazing fast video signal converter in VHDL.
Traditionally, critical video signals in aircraft have been transmitted as analog signals over copper coaxial cable. However, as manufacturers look for new ways to reduce the take-off weight of aircraft, the focus has shifted to Plastic Optical Fiber (POF). This requires a new protocol for transmitting video, because optical fiber can realistically only carry digital signals. The standards body that governs aircraft electronics has therefore drafted the ARINC-818 specification, which covers the transmission of uncompressed video and requires a link speed of 1.0625 to 8.5 Gbit/s depending on the resolution.
The company I am working with wants to introduce this new interface in its line of cockpit displays, and therefore needs an in-house implementation of both a receiver (to incorporate in the display) and a transmitter (to test the functioning of that receiver). Within the timeframe of this project, I worked on the transmitter, because it turns out to be the easier of the two to implement.
I had a couple of different options when I was looking for a thesis subject, but several reasons made me choose this one.
I first set out to read the actual specification of the ARINC-818 protocol. The protocol is packet-based, which means video data is encapsulated in packets and then sent over the link. The protocol does not specify the actual timing and data packetization formats, but leaves those up to an accompanying document for each implementation: the ICD (Interface Control Document). As a reference, I used the Great River XGA ICD. It stipulates that video is sent line-synchronously, which means that video data arrives line by line, and the line frequency is constant.
Between two video frames there are a couple of blank lines, which provide time to synchronize to the video source. This is especially important because of the timing issues that arise when communicating across two different clock domains: the speed at which the video comes in (dictated externally), and the fixed reference clock used for the fiber transmission.
With this in mind, I started drawing a preliminary systems diagram. In order to successfully translate the video, I knew that I had to buffer it somehow, because the actual timing at which pixel data arrives (see the XGA timing details w.r.t. blanking and sync pulses) varies. This was solved by adding a FIFO that buffers incoming pixels until they are ready to be sent out. The FIFO itself is implemented in on-chip Block RAM, and the implementation is taken care of by the Xilinx CORE Generator.
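To give an idea of what that looks like, here is a sketch of the dual-clock FIFO interface. The component name and port list are illustrative of what such a generated core typically looks like, not the project's actual core:

```vhdl
-- Illustrative dual-clock Block-RAM FIFO; the real component name and
-- ports depend on how the core is configured in the generator.
component pixel_fifo
  port (
    wr_clk : in  std_logic;                     -- pixel clock domain (video source)
    rd_clk : in  std_logic;                     -- packetizer clock domain
    rst    : in  std_logic;                     -- cleared at each start of frame
    din    : in  std_logic_vector(23 downto 0); -- one RGB pixel, 8 bits per color
    wr_en  : in  std_logic;                     -- high while incoming pixel data is valid
    rd_en  : in  std_logic;                     -- driven by the packetizer FSM
    dout   : out std_logic_vector(23 downto 0);
    full   : out std_logic;
    empty  : out std_logic
  );
end component;
```

Because the FIFO has independent read and write clocks, it also performs the clock domain crossing described above, so no hand-written synchronizers are needed on the pixel data path.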
I did have the advantage that the analog video signal would be converted to a digital one outside of the FPGA, so I did not have to handle any analog-to-digital conversion myself.
Another part of the diagram is the timing checker. This module checks the timing of the incoming video and verifies that it matches a supported resolution by measuring the horizontal and vertical periods against the reference clock. If the resolution matches, it indicates to the finite state machine that video is ready to be sent out. It also provides a pulse that indicates the start of a new frame, so that the packetizer can sync up to the incoming signal's refresh rate (see spec_timing.docx).
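As a concrete (and much simplified) illustration, here is a minimal sketch of the horizontal-period check, assuming XGA at 60 Hz; the entity name, counts, and tolerance window are my own, not the project's real values:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical cut-down timing checker: measures the hsync period in
-- reference-clock cycles and flags it when it matches XGA timing.
entity h_timing_check is
  port (
    ref_clk      : in  std_logic;  -- 106.25 MHz reference clock
    hsync_rising : in  std_logic;  -- one-cycle pulse on each hsync edge
    hperiod_ok   : out std_logic
  );
end entity;

architecture rtl of h_timing_check is
  -- XGA@60: 1344 total pixels at a 65 MHz pixel clock gives a line period
  -- of ~20.68 us, i.e. ~2197 reference clocks. Bounds are illustrative.
  constant H_MIN : unsigned(11 downto 0) := to_unsigned(2190, 12);
  constant H_MAX : unsigned(11 downto 0) := to_unsigned(2204, 12);
  signal   cnt   : unsigned(11 downto 0) := (others => '0');
begin
  process (ref_clk)
  begin
    if rising_edge(ref_clk) then
      if hsync_rising = '1' then
        -- compare the measured period against the expected window
        if cnt > H_MIN and cnt < H_MAX then
          hperiod_ok <= '1';
        else
          hperiod_ok <= '0';
        end if;
        cnt <= (others => '0');
      else
        cnt <= cnt + 1;
      end if;
    end if;
  end process;
end architecture;
```

The real module repeats the same idea for the vertical period and only asserts 'valid' when both measurements agree with a supported resolution.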
Then comes the clocking. The whole design runs off of a 106.25 MHz oscillator, because that is an integer fraction (one twentieth) of the prospective 2.125 Gbit/s line rate, so we can generate the transmit clock with a phase-locked loop. It also provides an opportunity to run our packetizer at exactly the right frequency for the transmission: we have to provide the transmitter 32 bits of data for every 40 bits sent on the line, so the data needs to be clocked in 32 bits at a time at 2.125 GHz / 40 = 53.125 MHz: exactly half of the reference clock.
These 32 bits per 40 sent are a direct result of the encoding applied at the link level: 8b/10b encoding, four times in parallel. This type of encoding provides a couple of benefits: the line stays DC-balanced, the receiver sees enough bit transitions to recover the clock, and special control characters (such as IDLE) can never be mistaken for ordinary data.
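To make the arithmetic explicit, the clock relationships can be written down as constants (the names here are mine, purely for illustration):

```vhdl
-- Clock relationships; names are illustrative, not from the project.
constant LINE_RATE : real := 2.125e9;          -- serial bit rate on the fiber
constant REF_CLK   : real := LINE_RATE / 20.0; -- 106.25 MHz on-board oscillator
constant WORD_CLK  : real := LINE_RATE / 40.0; -- 53.125 MHz: one 32-bit word
                                               --   per 40 encoded line bits
-- WORD_CLK = REF_CLK / 2, so the packetizer clock is a simple divide-by-two
-- off the same PLL that generates the transmit clock.
```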
Lastly, the state machine takes care of generating the packets. On reset, it waits for the 'valid' signal to become true, indicating that valid video is being received. When valid video is present, it waits until the start of a frame, resets (clears) the FIFO, waits a specified amount of time, and then transmits an "Object 0". In ARINC-818 terms, this is the start-of-frame object, which contains a couple of parameters relevant to the link status, video properties, number of the incoming frame, etc.
After that, it waits a predetermined number of line intervals (because the spec is line-synchronous) and starts issuing "Object 2" packets. The ICD determines that each Object 2 carries pixel data for half a video line, in RGB format with 8 bits per color. This comes down to 24 bits per pixel. Since XGA is 1024 pixels wide, this amounts to 512 * 3 = 1536 bytes of pixel data (not including the object header and CRC) per packet. XGA is also 768 lines high, so 768 * 2 = 1536 Object 2 packets have to be sent for one frame of video. When all of these packets have been sent, the FSM waits for the next start-of-frame signal from the timing checker. It does so on a line-synchronous basis, meaning that it will only start sending video at the start of a new blanking line. The concept of a blanking line is a relic from analog video, where time was needed to move the cathode ray back to its home position; we still use the term because nothing better is available. In Fibre Channel terms, a blanking line is just a run of subsequent "IDLE" characters for the time it would normally take to send a line of video.
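To make the flow concrete, below is a skeleton of that state machine. The state names, handshake signals (such as obj_done), and output strobes are my own sketch of the behavior described above, not the actual project code:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch of the packetizer FSM; the 'wait a specified amount of time'
-- steps are elided for brevity.
entity packetizer_fsm is
  port (
    word_clk       : in  std_logic;  -- 53.125 MHz packetizer clock
    video_valid    : in  std_logic;  -- from the timing checker
    start_of_frame : in  std_logic;  -- one-cycle pulse per incoming frame
    line_start     : in  std_logic;  -- one-cycle pulse per line interval
    obj_done       : in  std_logic;  -- assumed handshake: object fully sent
    fifo_clear     : out std_logic;
    send_obj0      : out std_logic;  -- strobe: send start-of-frame object
    send_obj2      : out std_logic   -- strobe: send half a line of pixels
  );
end entity;

architecture rtl of packetizer_fsm is
  type tx_state_t is (WAIT_VALID, WAIT_SOF, CLEAR_FIFO, SEND_OBJ0_S,
                      WAIT_LINE, SEND_OBJ2_S, FRAME_DONE);
  signal state     : tx_state_t := WAIT_VALID;
  signal obj2_sent : unsigned(10 downto 0) := (others => '0'); -- 0..1535
begin
  fifo_clear <= '1' when state = CLEAR_FIFO  else '0';
  send_obj0  <= '1' when state = SEND_OBJ0_S else '0';
  send_obj2  <= '1' when state = SEND_OBJ2_S else '0';

  process (word_clk)
  begin
    if rising_edge(word_clk) then
      case state is
        when WAIT_VALID =>                  -- wait for valid video
          if video_valid = '1' then state <= WAIT_SOF; end if;
        when WAIT_SOF =>                    -- sync to the incoming frame
          if start_of_frame = '1' then state <= CLEAR_FIFO; end if;
        when CLEAR_FIFO =>                  -- clear the pixel FIFO
          obj2_sent <= (others => '0');
          state     <= SEND_OBJ0_S;
        when SEND_OBJ0_S =>                 -- transmit the Object 0
          if obj_done = '1' then state <= WAIT_LINE; end if;
        when WAIT_LINE =>                   -- line-synchronous pacing
          if line_start = '1' then state <= SEND_OBJ2_S; end if;
        when SEND_OBJ2_S =>                 -- half a video line per object
          if obj_done = '1' then
            obj2_sent <= obj2_sent + 1;
            if obj2_sent = 1535 then        -- 768 lines * 2 objects sent
              state <= FRAME_DONE;
            elsif obj2_sent(0) = '0' then
              state <= SEND_OBJ2_S;         -- second half of the same line
            else
              state <= WAIT_LINE;           -- on to the next line
            end if;
          end if;
        when FRAME_DONE =>                  -- idle until the next frame
          if start_of_frame = '1' then state <= CLEAR_FIFO; end if;
      end case;
      if video_valid = '0' then             -- lost valid video: resynchronize
        state <= WAIT_VALID;
      end if;
    end if;
  end process;
end architecture;
```

Keeping the per-frame bookkeeping in a single object counter makes it easy to tell whether the next Object 2 is the first or second half of a line: the counter's least significant bit says so directly.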
This is still a work in progress, and I plan to continue with it. Anyone else picking it up would need the code and the accompanying design documents; because of confidentiality, though, the code is not provided for download on the wiki.
I originally planned to work on this throughout the semester, with the intention of having a finalized transmitter by Christmas. But, being an exchange student, I didn't fully realize how heavy the workload of four Olin classes would be, which pushed this project back. I was very glad to have gotten the opportunity to work on it as a CompArch final project, so that I don't have to go home empty-handed.
As far as the work for the final project goes, I'm actually quite pleased with my progress. Certain bits are still lacking, but generally speaking, I have functional modules and a good system-level design, so only fine-tuning is left (plus the implementation of the CRC, though I will have to check whether a core for that is already available internally).
In general, I'd say mission accomplished :)