LMLJPEG


What is LMLJPEG?
The LMLJPEG library provides streaming video functionality for the LML33 video card. Its primary use is to compress raw RGB data into MJPEG that can be displayed by the Zoran chipset.

Current applications include generating MJPEG streams off-line for later display by the LML. Future applications may include on-the-fly de-compression, modification, and re-compression at various frame-rates.

How is this different from Video4Linux? Well, for one, it's self-contained. And two, it's more than just a codec: it's a customizable, object-oriented video pipeline. Basically, LMLJPEG brings the whole process under one roof. Unlike V4L, it is not designed to address all types of configurations, only one: the Linux Media Labs LML33.


What is the State of the Library?
Houston, we have a prototype.

The code for this library is taking form. To date, there is sufficient code to both compress and decompress LML-compatible MJPEG files (LMJPEG). The code is not optimized yet, so 2 FPS (on a 500 MHz machine) is currently the best I have to offer at D1 resolution. Obviously, the goal is to increase this substantially, but the first pass is to build a library for offline stream generation. Real-time work is deferred until all the features are in. Contributions are welcome.

The library is written in C++. This is principally for prototyping reasons. As the library distills into an optimal implementation, the C++ will most likely become a simple wrapper class with the core functionality operating in C underneath. The DCT and IDCT routines will most likely need to be written in assembly, since they are truly the most CPU-intensive operations in the pipeline.
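
For reference, here is a minimal, unoptimized sketch of the kind of 1-D DCT-II kernel at the heart of the pipeline (the names are mine for illustration, not the library's JPEGMatrix code). A separable 8x8 transform applies it to each row and then to each column, and it is exactly this inner loop that hand-tuned assembly or MMX would replace.

    #include <cmath>

    // Naive orthonormal 8-point DCT-II over a single row or column.
    // A separable 8x8 block transform runs this over all 8 rows, then
    // over all 8 columns.  Illustrative only -- not the library's code.
    void dct8(const double in[8], double out[8])
    {
        const double pi = 3.14159265358979323846;
        for (int k = 0; k < 8; ++k) {
            double sum = 0.0;
            for (int n = 0; n < 8; ++n)
                sum += in[n] * std::cos((2 * n + 1) * k * pi / 16.0);
            const double scale = (k == 0) ? std::sqrt(1.0 / 8.0)
                                          : std::sqrt(2.0 / 8.0);
            out[k] = scale * sum;
        }
    }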

I intend to continue to add features until there is sufficient functionality to implement a rudimentary non-linear video editor.


How Do I Get Started?
As source for various objects is moved into CVS, small sample programs for these objects are added to the examples directory. Since the complete package is not on the site, your best bet is to pull down the project using CVS. You should also install SDL if you want to view any of the images on the monitor under X.

Once you have pulled down the lmljpeg module from CVS, you will need to run autoconf to build the configure file. Autoconf is available from the GNU website. The next step is the typical ./configure followed by make. There is no reason to do a 'make install' since the code is not configured to go anywhere yet. Moreover, it's configured to build a static library at present.

Change into the examples subdirectory and do another make. This will build the test programs. As of this writing, the programs are:

  • sdltest - displays a 720x480 RGBImage, changing shades from red to blue
  • yuvtest - converts a sample RGBImage into a YUVImage and outputs the three color planes to files. If you have the zjpeg utility on the Linux Media Labs website, you can reconstruct them into a single JPEG image from the command line.
  • strmtest - builds a memory-based stream and pipes it to a file-based stream mapped to stdout.
  • matrixtest - demonstrates matrix operations including DCT/IDCT.
  • qtest - quantizes sample data and demonstrates DCT/IDCT error.
  • htest - demonstrates compression using the DC luminance table.
  • imgtest - complete JPEG compression example, outputs to file
  • dcmptest - complete JPEG compression/decompression example, outputs to the display
  • frmtest - JPEG compression to LMJPEG spec, output to file and optionally to the LML.
  • cliptest - oh, happy day. Given a stream (D1 NTSC) recorded from the LML, cliptest decompresses the frames and displays them in a window on the screen. At 500MHz, I'm getting 2 FPS for the unoptimized code (2001.02.25). Improvements to follow.


What's Done?
A full compression/decompression pipeline exists end to end. It is not optimized.

The following objects are modestly complete.

  • RGBImage - image frame buffer (32-bit pixels for ARGB images)
  • YUVImage - luminance- and chrominance-based image. Chrominance planes are half horizontal resolution.
  • JPEGStream - abstraction for memory- and file-based streams.
  • JPEGMatrix - 8x8 integer matrix for DCT and IDCT operations
  • QuantTable - quantization matrices; supports variable Q values (see the quality-scaling sketch after this list).
  • HuffmanTable - Huffman compression implementation
  • JPEGCodec - JPEG compression/decompression implementation
  • JPEGImage - single-field JPEG image
  • LMLFrame - single-frame JPEG image, both fields, LMJPEG format
  • LMLClip - LMJPEG stream wrapper
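
As a concrete illustration of what "variable Q values" means for QuantTable, the sketch below applies the well-known IJG-style quality scaling to the standard JPEG (Annex K) luminance quantization table. The scaling formula and the base table come from the JPEG baseline spec and the IJG reference code; whether LMLJPEG's QuantTable uses precisely this formula is an assumption on my part.

    #include <algorithm>

    // Standard JPEG (Annex K) luminance quantization table, row-major.
    static const int kLumaBase[64] = {
        16, 11, 10, 16,  24,  40,  51,  61,
        12, 12, 14, 19,  26,  58,  60,  55,
        14, 13, 16, 24,  40,  57,  69,  56,
        14, 17, 22, 29,  51,  87,  80,  62,
        18, 22, 37, 56,  68, 109, 103,  77,
        24, 35, 55, 64,  81, 104, 113,  92,
        49, 64, 78, 87, 103, 121, 120, 101,
        72, 92, 95, 98, 112, 100, 103,  99
    };

    // Scale the base table for a quality setting in [1,100], IJG style.
    // Higher quality -> smaller divisors -> less quantization error.
    // (Hypothetical helper; the real QuantTable may scale differently.)
    void scaleQuantTable(int quality, int out[64])
    {
        quality = std::max(1, std::min(100, quality));
        const int scale = (quality < 50) ? 5000 / quality : 200 - 2 * quality;
        for (int i = 0; i < 64; ++i) {
            int q = (kLumaBase[i] * scale + 50) / 100;
            out[i] = std::max(1, std::min(255, q));   // baseline tables use 8-bit entries
        }
    }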


What's Next?
The following items are slated for completion. Order is loosely sorted from high priority to low.

  • Utilities which rely on the basic feature set. Need a motivating force behind each aspect of the library.
  • Pipeline performance profiling.
  • Optimized pipeline.
  • Core C functionality. Not everyone appreciates C++, no harm in that. Besides, a critical evaluation of the data structures will be useful in optimizing the pipeline.
  • Better support for PAL, SECAM. Some beta-testers would be helpful as I only have access to NTSC equipment.
  • Package clean up. Need a proper build and shared library.


Project Status
Date   Comment
2001.08.13   Time for an update. I am still actively working on both LMLJPEG and a new library called "Skate" which will handle all the image surface management and colorspace conversions. Also, I have written utilities which allow using the buz drivers instead of the LML-supplied drivers; however, they are not in the project yet. Let me know if you need them beforehand.
2001.02.25   Completed the LMLClip object which allows decompressing an entire stream. The example program cliptest will render the image to an SDL surface. Initial performance was surprisingly good considering that the utilities that I was working with beforehand were actually slower. The code is not geared for PAL or SECAM yet, but the necessary code changes aren't particularly complex; I just have no way to test it.
2001.02.24   Completed the decompression half of the JPEGFrame object. Example program compresses and decompresses a sample image. It then alternates display of the original and processed image for 5 seconds, which helps to highlight the JPEG artifacts. A gradient was chosen specifically to enhance this effect.
2001.02.24   Completed the compression half of the JPEGFrame object. The object takes an image, splits it into even and odd fields (if applicable), and generates an LMJPEG-compatible image. The example program will output to a file and optionally to the LML directly via the /dev/mvideo/frame device. As written, it only supports NTSC at the moment, but that's due to my having insufficient information rather than laziness. I will contact LML about the correct parameters.
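
To make the field-splitting step concrete, here is a generic sketch of separating an interlaced frame into its even and odd fields; the buffer layout and function name are mine, not LMLFrame's actual interface.

    #include <cstring>
    #include <vector>

    // Copy alternating scanlines of an interlaced frame into separate
    // even and odd field buffers.  'stride' is the number of bytes per
    // scanline.  (Generic illustration; the real LMLFrame may differ.)
    void splitFields(const unsigned char* frame, int height, int stride,
                     std::vector<unsigned char>& even,
                     std::vector<unsigned char>& odd)
    {
        even.resize((size_t)stride * ((height + 1) / 2));
        odd.resize((size_t)stride * (height / 2));
        for (int y = 0; y < height; ++y) {
            unsigned char* dst = (y % 2 == 0) ? &even[(size_t)stride * (y / 2)]
                                              : &odd[(size_t)stride * (y / 2)];
            std::memcpy(dst, frame + (size_t)stride * y, stride);
        }
    }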

Enjoy! The library is almost functional!
2001.02.24   The pipeline will now decompress images that it generates. It should be capable of decompressing anything that comes out of the LML. NOTE: the LML-specific JPEG extensions are not being parsed at this time. That's coming next.

The Matrix class and its superclasses have been prefixed with 'JPEG' (e.g., JPEGWorkMatrix) to avoid namespace conflicts with other libraries.
2001.02.20   I have committed the JPEGCodec and JPEGImage objects, which allow compression of JPEG images. The example program creates a 720x480 image and writes it out to a file. Using a program like ee, you can view the image. LML-specific code to follow.
2001.02.16   I committed the Huffman tables for stream encoding. There are separate objects for all four cases of AC/DC and luminance/chrominance. The tables were gleaned from the JPEG output of the LML33 card and expanded directly into lookup tables, thereby bypassing most of the Huffman algorithm dealing with trees and optimal allocation. I'm not even certain that the Zoran chipset can accept tables other than these. At present, this code does not handle decompression.
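
For anyone curious what "expanded directly into tables" means in practice, the sketch below hard-codes the standard baseline DC-luminance codes from the JPEG spec as a flat category-to-code lookup, so that encoding becomes a single array index rather than a tree walk. I am assuming the LML33's tables match the baseline spec; the HuffmanTable objects in CVS are the authoritative source.

    #include <cstdlib>

    struct HuffCode { unsigned code; int length; };   // bits are right-justified in 'code'

    // Standard baseline DC-luminance Huffman codes, indexed by the size
    // category (0..11) of the DC difference.
    static const HuffCode kDcLuma[12] = {
        {0x000, 2}, {0x002, 3}, {0x003, 3}, {0x004, 3},
        {0x005, 3}, {0x006, 3}, {0x00E, 4}, {0x01E, 5},
        {0x03E, 6}, {0x07E, 7}, {0x0FE, 8}, {0x1FE, 9}
    };

    // Size category: number of bits needed to represent |diff| (0 for zero).
    int dcCategory(int diff)
    {
        int mag = std::abs(diff);
        int cat = 0;
        while (mag) { mag >>= 1; ++cat; }
        return cat;
    }

    // To encode a DC difference: emit kDcLuma[cat].code in
    // kDcLuma[cat].length bits, then cat extra bits for the value itself.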
2001.02.15   Committed the stream, matrix, and quantization table objects. Huffman compression is next on the list. Sample code demonstrates DCT/IDCT errors, particularly with regard to quantization. Transforms do not use MMX yet.
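
The error the sample code demonstrates is easy to see in isolation: quantizing a coefficient and then dequantizing it loses whatever the divisor rounded away. A minimal stand-alone illustration (not the qtest source itself):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double coeff = 100.0;   // a hypothetical DCT coefficient
        const int    q     = 16;      // quantization table entry

        int    quantized     = (int)std::floor(coeff / q + 0.5);   // -> 6
        double reconstructed = (double)(quantized * q);            // -> 96
        std::printf("error = %g\n", coeff - reconstructed);        // -> 4

        return 0;
    }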
2001.02.14   I just got the SourceForge account working.

The initial work for this project is in C++, and I will be committing individual objects as I audit them. Future work will include an underlying C layer which will make most people happy. Presently, my primary focus is on functionality and code legibility. As such, I have built the library from a top-down perspective. C++ is my language of choice for this type of work.

The first three objects committed are the RGBImage, YUVImage, and Surface objects. The RGBImage is a predictable wrapper for a framebuffer. The YUVImage converts RGB to luminance and chrominance, while simultaneously decimating the chrominance values 2:1 per the JPEG format used by the LML. The Surface object is an SDL-based window which can display an RGBImage. This is handy for debugging the video stream pipeline.
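
As a rough sketch of what that conversion involves: the standard BT.601 RGB-to-YCbCr equations produce a full-resolution luminance plane, and the two chrominance planes are decimated 2:1 horizontally (here by simple averaging of adjacent samples). The exact coefficients and filtering inside YUVImage may well differ.

    #include <algorithm>

    static inline int clamp8(int v) { return std::min(255, std::max(0, v)); }

    // Convert two horizontally adjacent RGB pixels to BT.601 YCbCr,
    // producing two Y samples and one averaged Cb/Cr pair -- i.e. 2:1
    // horizontal chroma decimation.  Illustrative only.
    void rgbPairToYCbCr(const unsigned char rgb[6],   // R0 G0 B0 R1 G1 B1
                        unsigned char y[2],
                        unsigned char* cb, unsigned char* cr)
    {
        double cbSum = 0.0, crSum = 0.0;
        for (int i = 0; i < 2; ++i) {
            double r = rgb[3 * i + 0], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
            y[i] = (unsigned char)clamp8((int)(0.299 * r + 0.587 * g + 0.114 * b + 0.5));
            cbSum += -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0;
            crSum +=  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0;
        }
        *cb = (unsigned char)clamp8((int)(cbSum / 2.0 + 0.5));
        *cr = (unsigned char)clamp8((int)(crSum / 2.0 + 0.5));
    }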

Since I'm not especially proficient with autoconf, I have not added a configure-time check for SDL. If SDL is not installed, configure will probably choke. Any assistance with that would be appreciated.