QuickTime for Java: A Developer's Notebook/Video Media


It probably seems like half of this book has already been about video—I've assumed you had video media for the chapters on playback, editing, and components (Chapter 2 and Chapter 4), even though the material there would be perfectly well suited for use on audio-only media like MP3 files. Well, this chapter is only about video, showing a handful of useful tricks for working with video.

Because video is simply a progression of images, alternated quickly enough to suggest movement, you probably won't be too surprised to know that the material covered in the QuickDraw graphics chapter (Chapter 5) pays off in this chapter. QuickDraw and QD-like APIs are the means by which you create and/or manipulate video media. If you skipped that chapter and have problems herein with QDGraphics (a.k.a. GWorlds), Matrixes, GraphicsImporters, or compression, you might need to check back there. But I'll try to keep things fairly self-explanatory.

Combining Video Tracks

It's not hard to understand how two audio tracks can coexist in a movie—the sounds are mixed and played together. But the idea of combining video tracks is less intuitive.

By default, if you have two video tracks of the same size in a movie, one will totally overlap the other. But you can change the default behavior by specifying 2D transformations with Matrix objects, and the Z-axis ordering by setting "layering" behavior.

One way to play with Matrix-based spatial arrangement is to set up a picture-in-picture movie. In such a movie, the foreground video is scaled and moved into a corner relative to the background video.

How do I do that?

To do a picture-in-picture effect, you must have a movie with two video tracks and you must do three things to the foreground video track:

  • Scale it to a size smaller than the background track.
  • Optionally move it to a location other than (0,0).
  • Set layering to ensure it appears above the background track.

Fortunately, a few methods in the Track class provide all of this. The application in Example 8-1 brings up a window with a picture-in-picture effect achieved with matrix transformations and layering.

Note

Run this example from the downloaded book code with ant run-ch08-matrixvideotracks.

Example 8-1. Matrix-based video picture-in-picture

package com.oreilly.qtjnotebook.ch08;
 
import quicktime.*;
import quicktime.std.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.std.image.*;
import quicktime.io.*;
import quicktime.qd.*;
import quicktime.util.*;
import quicktime.app.view.*;
 
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
 
import java.awt.*;
 
public class MatrixVideoTracks extends Frame {
 
static Movie foreMovie, backMovie;
 
public static void main(String[] args) {
  try {
      QTSessionCheck.check( );
      // get background movie
      QTFile file = 
          QTFile.standardGetFilePreview (QTFile.kStandardQTFileTypes);
      OpenMovieFile omf = OpenMovieFile.asRead(file);
      backMovie = Movie.fromFile (omf);
      // get foreground movie
      file = QTFile.standardGetFilePreview (QTFile.kStandardQTFileTypes);
      omf = OpenMovieFile.asRead(file);
      foreMovie = Movie.fromFile (omf);
      // get frame
      Frame frame = new MatrixVideoTracks (backMovie, foreMovie);
      frame.pack( );
      frame.setVisible (true);
  } catch (QTException qte) {
      qte.printStackTrace( );
  }
}
 
public MatrixVideoTracks (Movie backMovie, Movie foreMovie)
  throws QTException {
  super ("Matrix Video Tracks");
  Movie matrixMovie = new Movie( );
  // build tracks
  Track foreTrack = addVideoTrack (foreMovie, matrixMovie);
  Track backTrack = addVideoTrack (backMovie, matrixMovie);
  // set matrix transformation
  Matrix foreMatrix = new Matrix( );
  // set matrix to move fore to bottom-right 1/4 of back
  QDRect foreFrom =
      new QDRect (0, 0,
                  foreTrack.getSize( ).getWidth( ),
                  foreTrack.getSize( ).getHeight( ));
  QDRect foreTo = 
      new QDRect (backTrack.getSize( ).getWidth( ) / 2,
                  backTrack.getSize( ).getHeight( ) / 2,
                  backTrack.getSize( ).getWidth( ) / 2,
                  backTrack.getSize( ).getHeight( ) / 2);
  System.out.println ("foreTo is = " + foreTo);
  foreMatrix.rect (foreFrom, foreTo);
  foreTrack.setMatrix (foreMatrix);
  // set foreTrack's layer
  foreTrack.setLayer (-1);
  // now get component and add to frame
  MovieController controller = new MovieController(matrixMovie);
  Component c = QTFactory.makeQTComponent(controller).asComponent( );
  add (c);
}
 
public Track addVideoTrack (Movie sourceMovie, Movie targetMovie)
  throws QTException { 
  // find first video track
  Track videoTrack = 
      sourceMovie.getIndTrackType (1,
                                   StdQTConstants.videoMediaType,
                                   StdQTConstants.movieTrackMediaType);
   if (videoTrack == null)
      throw new QTException ("can't find a video track");
  // add videoTrack to targetMovie
  Track newTrack =
      targetMovie.newTrack (videoTrack.getSize( ).getWidthF( ),
                            videoTrack.getSize( ).getHeightF( ),
                            1.0f);
  VideoMedia newMedia = 
      new VideoMedia (newTrack,
                      videoTrack.getMedia( ).getTimeScale( ),
                      new DataRef(new QTHandle( )));
  videoTrack.insertSegment (newTrack,
                           0,
                           videoTrack.getDuration( ),
                           0);
  return newTrack;
}
}

When this is run, the user is shown two consecutive movie-opening dialogs, for the background and foreground movies, respectively. Assuming that both have video tracks, the result looks like Figure 8-1.

Note

This example looks for a track with video media, so don't use audio-only files, or MPEG-1, which has a special "MPEG media" track instead of video.

What just happened?

After the two movies are loaded, this demo creates a new empty target movie and, through a convenience method called addVideoTrack( ), finds the video tracks of the selected movies, creates new video tracks in the target movie, and inserts the VideoMedia from the source movies. This produces a movie with two concurrent video tracks.

Figure 8-1. Matrix-based transformation of foreground video track


To scale and move the foreground track, you use a Matrix transformation. In this case, the example takes the background movie's video track size and finds its center point, then sets up a destination rectangle with that point as its upper-left corner, with width and height equal to half the background's width and height, respectively. Finally, it tells the foreground track to use this matrix by calling Track.setMatrix( ):

Note

Chapter 5 introduced Matrix. It's a mathematical object used in QuickTime to describe 2D transformations like scaling, rotation, etc.

QDRect foreFrom =
  new QDRect (0, 0,
              foreTrack.getSize( ).getWidth( ),
              foreTrack.getSize( ).getHeight( ));
QDRect foreTo = 
  new QDRect (backTrack.getSize( ).getWidth( ) / 2,
              backTrack.getSize( ).getHeight( ) / 2,
              backTrack.getSize( ).getWidth( ) / 2,
              backTrack.getSize( ).getHeight( ) / 2);
foreMatrix.rect (foreFrom, foreTo);
foreTrack.setMatrix (foreMatrix);

Next, to ensure that the foreground track draws above the background—if it doesn't, all this matrix work will be wasted—the demo calls Track.setLayer(-1). The layers are numbered from -32,767 to 32,767, with lower-numbered layers appearing above higher-numbered layers. The background track keeps its default layer, 0, so setting the foreground to -1 forces it to be on top.

What about...

...the point of this? Am I really ever going to want to overlay video tracks? It's more common than you might think. Consider Apple's iChat AV application—it uses a very similar picture-in-picture effect, so you can see yourself when you videoconference with a friend.

But there's one other interesting thing that iChat AV does: it shows the video of you as a mirror image. This, presumably, is more natural for users—if you raise your right hand, it somehow makes more sense to see your hand go up on the right side of the preview window, even if that's not what the camera is really seeing. Fortunately, a mirror image is really simple to do with a Matrix transformation.

In the preceding example, add the following two lines right after the Matrix is created:

foreMatrix.scale (-1, 1, 0, 0);
foreMatrix.translate ((float) foreTrack.getSize( ).getWidth( ), 0f );

The scale( ) call multiplies every x-coordinate by -1, effectively "flipping" the image horizontally. The y-coordinates are unchanged, so the scaling factor there is 1. The last two arguments define the "anchor point." By using 0, this says "flip around the line x=0" (the y-coordinate is similar but irrelevant here). Given an image width of w, this scaling operation makes the pixels run from -w to 0. The translate( ) call moves the coordinates back into positive coordinate space. Figure 8-2 shows this transformation conceptually.
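
To make the arithmetic concrete, here's the transform traced for a hypothetical track 320 pixels wide:

x' = -x          after scale (-1, 1, 0, 0):  a pixel at x = 10 maps to -10
x'' = x' + w     after translate (w, 0):     -10 + 320 = 310, which is w - 10

So every pixel ends up at w - x, its horizontally mirrored position.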

Figure 8-2. Matrix-based mirror image transformation steps: original, scaled by x-factor of -1, translated by adding width


For this to work, you also need to change the Matrix.rect( ) call to Matrix.map( ): rect( ) clears out any previous transformations, essentially defining a new matrix that represents only the translate-and-scale from one rectangle to another, while map( ) maintains the previous transformations and then applies the translate-and-scale.
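
Putting both pieces together, the foreground setup from Example 8-1 becomes something like the following sketch (foreFrom and foreTo are the rectangles computed earlier):

Matrix foreMatrix = new Matrix( );
foreMatrix.scale (-1, 1, 0, 0);
foreMatrix.translate ((float) foreTrack.getSize( ).getWidth( ), 0f);
// map( ) preserves the mirror already in the matrix; rect( ) would discard it
foreMatrix.map (foreFrom, foreTo);
foreTrack.setMatrix (foreMatrix);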

Figure 8-3 shows the demo running with this mirror image added to the foreground transformation. For this figure, I've used the same video source for foreground and background, to make the mirror transformation more obvious.

Figure 8-3. Matrix-based mirror image of foreground video track


This mirror effect is pretty handy, and you might use it all by itself for doing something like a capture preview. Because the Matrix can be used on movie tracks, GraphicsImporters, and various other parts of the QuickTime API, mastering Matrix transformations will get you pretty far.

Note

Did you notice the capture settings dialog in Chapter 6 showed a mirror image? You could use a Matrix to make the motion detector in that chapter render a mirror image, too.

Overlaying Video Tracks

When one video track is drawn on top of another, the top doesn't necessarily have to obscure the bottom. QuickTime gives you the option of specifying a GraphicsMode to combine pixels from multiple video layers to create interesting effects.

How do I do that?

You can create a GraphicsMode object to describe the means of combining overlapping colors. To try it out, take the previous lab's code and replace all the matrix stuff (after the foreTrack and backTrack are created, but before the MovieController is created) with the following:

GraphicsMode gm = new GraphicsMode (QDConstants.addMax,
                                  QDColor.green);
VisualMediaHandler foreHandler = 
  (VisualMediaHandler) foreTrack.getMedia( ).getHandler( );
foreHandler.setGraphicsMode(gm);
foreTrack.setLayer(-1);

Note

Run this example with ant run-ch08-compositevideotracks.

When run, this sample program asks you to open two movies, then creates a new movie with video tracks from the source movies' media, and combines the pixels of the foreground movie with the background, so the foreground appears atop the background. The result is shown in Figure 8-4.

Figure 8-4. Composited video tracks with addMax graphics mode


What just happened?

Setting a GraphicsMode instructs QuickTime to apply a specific behavior to combine overlapping pixels. The GraphicsMode has a "mode" int, which indicates which kind of behavior to use, and a QDColor that is used by some behaviors to indicate a color to operate on. For example, you might use mode QDConstants.transparent and QDColor.green to make all green pixels transparent. The default mode is srcCopy, which simply copies one set of pixels on top of another.
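
For instance, the transparent-green case just mentioned would look like this (a sketch, reusing the handler lookup from the example code):

GraphicsMode transparentGreen =
    new GraphicsMode (QDConstants.transparent, QDColor.green);
VisualMediaHandler foreHandler =
    (VisualMediaHandler) foreTrack.getMedia( ).getHandler( );
foreHandler.setGraphicsMode (transparentGreen);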

Note

Chapter 5 showed how to set up GraphicsMode compositing of still images. Video works in pretty much the same way.

To apply this GraphicsMode to overlapping video tracks, you call setGraphicsMode( ), a method not defined by Track but, rather, by the VideoMediaHandler. As a reminder, movies have tracks, tracks have media, and media have handlers. Actually, setGraphicsMode( ) is defined by the VisualMediaHandler interface, making it available for all visual media (MPEGMedia, TextMedia, etc.).

The addMax behavior combines background and foreground pixels, using the maximum red, green, and blue values of each. This has the effect of producing something of a washed-out combination of the two video tracks, because bright colors in either source will be copied over to the screen.

The modes available in QDConstants offer several dozen behaviors—check them out in the QuickTime documentation by searching Apple's site for "Graphics Transfer Modes"—though some of them aren't suitable for color images, and many of them produce garish results with real-world video. For example, Figure 8-5 shows the rather psychedelic effect of using the srcBic mode.

Figure 8-5. Composited video tracks with srcBic graphics mode


What about...

...practical uses for this? Granted, compositing two full-frame natural images is atypical, but composited video is used all the time in TV production. Modern video often represents many layers of overlapping sources. Watch a football game and you might see a shot of the game, overlaid by a graphic of a player and his stats (and maybe a video "head shot" of him), overlaid with a scoreboard for the corner, overlaid with a moving "bug" of the network's logo in another corner. Each source contains some amount of "useful" video, and the rest is a solid color (often black for synthetic video, green or blue for real-world video). The solid color becomes transparent, so only the useful data is copied over to the target. In terms of GraphicsModes, this would be the transparent mode, with the specified color as the operand.
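
In GraphicsMode terms, keying a synthetic overlay's solid black background out of a hypothetical logoTrack might look like this sketch:

GraphicsMode keyMode = new GraphicsMode (QDConstants.transparent, QDColor.black);
VisualMediaHandler logoHandler =
    (VisualMediaHandler) logoTrack.getMedia( ).getHandler( );
logoHandler.setGraphicsMode (keyMode);
logoTrack.setLayer (-1); // draw the overlay above the program video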

Tip

If you're serious about shooting bluescreen video, there are sites on the Internet that list the supplies you'll need. For example, http://www.studiodepot.com/ sells chroma-key-friendly fabric and tape for making bluescreen and greenscreen backdrops.

Building a Video Track from Raw Samples

You can create a video track "from scratch" by adding video samples, one by one, to the video media. This is perhaps the ultimate in low-level access to QuickTime video, because it makes you responsible for every pixel in every frame. One way to demonstrate this is by making a movie from a still image and using slightly different parts of it in each frame to suggest a camera moving across the image.

Tip

This concept is called the "Ken Burns Effect" in Apple's iMovie, after the filmmaker who used the technique extensively in documentaries like The Civil War, for which no film or video sources were available.

How do I do that?

To build a movie from samples taken from an image, use the following approach:

  1. Import an image.
  2. Pick source and destination rectangles.
  3. Calculate a series of rectangles between the source and destination. These represent which part of the source image will be used for each frame.
  4. Create an empty movie, new video track, and new video media.
  5. Use a Matrix to convert each source rectangle to the size of the movie.
  6. Compress each frame and add it to the VideoMedia.

You might already know how to do some of this; the new part is how to compress frames into a movie. Chapter 5 made use of the QTImage.compress( ) method to compress QDGraphics (a.k.a. GWorlds) into EncodedImages, but video is a little different in that you use a CSequence, short for compression sequence. The difference is that in many video compression formats, you may need information from previous or subsequent frames to render a specific frame. In other words, some frames are encoded as just the data that has changed from a previous frame. So, you can't compress a single image in isolation; you must work with a sequence of images. This is called temporal compression because it is time-based.

The VideoSampleBuilder demo, shown in Example 8-2, creates a movie called videotrack.mov from a source graphic.

Tip

This is the most involved example in the book and uses concepts from several chapters, such as enabling editing and adding a new Track (Chapter 3), using a GraphicsImporter (Chapter 4), setting up an off-screen GWorld (Chapter 5), using Matrix-based image manipulation (Chapter 5 and this chapter), and adding raw samples to a Media (a sound equivalent was shown in Chapter 7). So, don't be intimidated if it seems a little complicated the first time you read it.

Example 8-2. Building a video track from image samples

package com.oreilly.qtjnotebook.ch08;
 
import quicktime.*;
import quicktime.io.*;
import quicktime.util.QTPointer;
import quicktime.qd.*;
import quicktime.std.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.std.image.*;
import quicktime.util.*;
 
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
 
import java.io.*;
import java.util.Random;
import java.util.Properties;
 
public class VideoSampleBuilder extends Object {
 
  public static final int VIDEO_TRACK_WIDTH = 320;
  public static final int VIDEO_TRACK_HEIGHT = 240;
  public static final int VIDEO_TRACK_VOLUME = 0;
  public static final int KEY_FRAME_RATE = 30;
 
  Properties userProps = new Properties( );
  QDRect startRect = null;
  QDRect endRect = null;
 
  public VideoSampleBuilder( ) throws QTException, IOException {
 
       /* try to load "videosamplebuilder.properties" from
          current directory.  this contains file and
          start.x/y/width/height and end.x/y/width/height params
        */
      try {
          userProps.load (new FileInputStream (
              new File ("videosamplebuilder.properties")));
          System.out.println ("Loaded properties");
      } catch (Exception e) {
          System.out.println ("Couldn't load properties");
      }
 
      int CODEC_TYPE = QTUtils.toOSType ("SVQ3");
 
      // create a new empty movie
      QTFile movFile = new QTFile (new java.io.File("videotrack.mov"));
      Movie movie =
          Movie.createMovieFile(movFile,
                 StdQTConstants.kMoviePlayer,
                 StdQTConstants.createMovieFileDeleteCurFile |
                 StdQTConstants.createMovieFileDontCreateResFile);
      System.out.println ("Created Movie");
 
      // now create an empty video track
      int timeScale = 600; // 600 units per second
      Track videoTrack = movie.addTrack (VIDEO_TRACK_WIDTH,
                                         VIDEO_TRACK_HEIGHT,
                                         VIDEO_TRACK_VOLUME);
      System.out.println ("Added empty Track");
 
      // now we need media for this track
      VideoMedia videoMedia = new VideoMedia(videoTrack,
                                             timeScale);
 
      // get image file from props or dialog
      QTFile imgFile = getImageFile( );
      if (imgFile == null)
          return;
 
      // get a GraphicsImporter
      GraphicsImporter importer = new GraphicsImporter (imgFile);
      System.out.println ("Got GraphicsImporter - Bounds are " +
                          importer.getNaturalBounds( ));
 
      // Create an offscreen QDGraphics / GWorld that's the
      // size of our frames.  Importer will draw into this,
      // and we'll then hand it to the CSequence
      QDGraphics gw =
          new QDGraphics (new QDRect (0, 0,
                                      VIDEO_TRACK_WIDTH,
                                      VIDEO_TRACK_HEIGHT));
      System.out.println ("Created GWorld, - Bounds are " +
                          gw.getBounds( ));
 
      // get start, end rects
      getRects (importer);
      System.out.println ("startRect = " + startRect);
      System.out.println ("endRect = " + endRect);
 
      // set importer's gworld
      importer.setGWorld (gw, null);
      System.out.println ("Reset importer's GWorld, now: " +
                          importer.getGWorld( ));
                          
      // get to work
      videoMedia.beginEdits( );
 
      // figure out per-frame offsets
      QDRect gRect = new QDRect (0, 0,
                                    VIDEO_TRACK_WIDTH,
                                    VIDEO_TRACK_HEIGHT);
      int frames = 300;
      int startX = startRect.getX( );
      int startY = startRect.getY( );
      int endX = endRect.getX( );
      int endY = endRect.getY( );
      float xOffPerFrame = ((float)(endX - startX) / (float)frames);
      float yOffPerFrame = ((float)(endY - startY) / (float)frames);
      float widthOffPerFrame = ((float) (endRect.getWidth( ) -
                                         startRect.getWidth( )) /
                                (float) frames);
      float heightOffPerFrame = ((float) (endRect.getHeight( ) -
                                         startRect.getHeight( )) /
                                (float) frames);
 
      System.out.println ("xOffPerFrame=" + xOffPerFrame +
                          ", yOffPerFrame=" + yOffPerFrame +
                          ", widthOffPerFrame=" + widthOffPerFrame +
                          ", heightOffPerFrame=" + heightOffPerFrame);
 
      // reserve an image with enough space to hold compressed image
      // this is needed by the last arg of CSequence.compressFrame
      int rawImageSize =
          QTImage.getMaxCompressionSize (gw, 
                                         gRect, 
                                         gw.getPixMap( ).getPixelSize( ),
                                         StdQTConstants.codecNormalQuality, 
                                         CODEC_TYPE,
                                         CodecComponent.bestFidelityCodec);
      QTHandle imageHandle = new QTHandle (rawImageSize, true);
      imageHandle.lock( );
      RawEncodedImage compressedImage =
          RawEncodedImage.fromQTHandle(imageHandle);
 
      // create a CSequence
      CSequence seq = new CSequence (gw,
                                     gRect, 
                                     gw.getPixMap( ).getPixelSize( ),
                                     CODEC_TYPE,
                                     CodecComponent.bestFidelityCodec,
                                     StdQTConstants.codecNormalQuality, 
                                     StdQTConstants.codecNormalQuality, 
                                     KEY_FRAME_RATE,
                                     null,
                                     StdQTConstants.codecFlagUpdatePrevious);
 
      // remember an ImageDescription from this sequence definition
      ImageDescription imgDesc = seq.getDescription( );
 
      // loop through the specified number of frames, drawing 
      // scaled instances into our GWorld and compressing those
      // to the CSequence
      for (int i=1; i<frames; i++) {
          System.out.println ("i=  =" + i);
 
          // compute a rect for this frame
          int x = startX + (int) (xOffPerFrame * i);
          int y = startY + (int) (yOffPerFrame * i);
          int width = startRect.getWidth( ) + (int) (widthOffPerFrame * i);
          int height = startRect.getHeight( ) + (int) (heightOffPerFrame * i);
          QDRect fromRect = new QDRect (x, y, width, height);
 
          // create a Matrix to represent the move/scale from
          // the fromRect to the GWorld and make importer use it
          Matrix drawMatrix = new Matrix( );
          drawMatrix.rect (fromRect, gRect);
          System.out.println ("fromRect = " + fromRect);
          importer.setMatrix (drawMatrix);
 
          // have importer draw (scaled) into our GWorld
          importer.draw( );
          System.out.println ("Importer drew");
 
          // compress a frame
          CompressedFrameInfo cfInfo =
              seq.compressFrame (gw, 
                                 gRect, 
                                 StdQTConstants.codecFlagUpdatePrevious, 
                                 compressedImage);
          System.out.println ("similarity = " + cfInfo.getSimilarity( ));
 
          // is this a key frame?
          boolean syncSample = (cfInfo.getSimilarity( ) == 0);
          int flags = syncSample ? 0 : StdQTConstants.mediaSampleNotSync;
 
          // add compressed frame to the video media
          videoMedia.addSample (imageHandle, 
                                0, 
                                cfInfo.getDataSize( ),
                                20, // time per frame, in timescale
                                imgDesc,
                                1, // one sample
                                flags
                                );
      } // for
 
      // done editing
      videoMedia.endEdits( );
 
      // now insert this media into track
      videoTrack.insertMedia (0, // trackStart
                              0, // mediaTime
                              videoMedia.getDuration( ), // mediaDuration
                              1); // mediaRate
      System.out.println ("inserted media into video track");
 
      // save up 
      System.out.println ("Saving...");
      OpenMovieFile omf = OpenMovieFile.asWrite (movFile);
      movie.addResource (omf,
                         StdQTConstants.movieInDataForkResID,
                         movFile.getName( ));
      System.out.println ("Done");
 
  }
 
  /** Gets imageFile from props file, or file-preview if 
      that doesn't work.
   */
  protected QTFile getImageFile ( ) throws QTException {
      // is it in the props?
      QTFile imageFile = null;
      if (userProps.containsKey ("file")) {
          imageFile = new QTFile (userProps.getProperty("file"));
          if (! imageFile.exists( ))
              imageFile = null;
      }
 
      // if not, or if that failed, then use a dialog
       if (imageFile == null) {
           int[] types = {};
          imageFile = QTFile.standardGetFilePreview (types);
      }
      return imageFile;
  }
 
  /** Gets startRect, endRect from userProps, or selects
      randomly if that doesn't work
   */
  protected void getRects (GraphicsImporter importer) throws QTException {
      Random rand = new Random( );
      int rightStop =
          importer.getNaturalBounds( ).getWidth( ) - VIDEO_TRACK_WIDTH;
      int bottomStop =
          importer.getNaturalBounds( ).getHeight( ) - VIDEO_TRACK_HEIGHT;
 
      // try to get startRect from userProps
      try {
          int startX = Integer.parseInt (userProps.getProperty("start.x"));
          int startY = Integer.parseInt (userProps.getProperty("start.y"));
          int startWidth = 
              Integer.parseInt (userProps.getProperty("start.width"));
          int startHeight = 
              Integer.parseInt (userProps.getProperty("start.height"));
          startRect = new QDRect (startX, startY, startWidth, startHeight);
      } catch (Exception e) {
          // make random start rect
          int startX = Math.abs (rand.nextInt( ) % rightStop);
          int startY = Math.abs (rand.nextInt( ) % bottomStop);
          startRect = new QDRect (startX, startY, 
                                  VIDEO_TRACK_WIDTH,
                                  VIDEO_TRACK_HEIGHT);
      }
 
      // try to get endRect from userProps
      try {
          int endX = Integer.parseInt (userProps.getProperty("end.x"));
          int endY = Integer.parseInt (userProps.getProperty("end.y"));
          int endWidth = Integer.parseInt (userProps.getProperty("end.width"));
          int endHeight = Integer.parseInt (userProps.getProperty("end.height"));
          endRect = new QDRect (endX, endY, endWidth, endHeight);
 
      } catch (Exception e) {
          float zoom = (rand.nextFloat( ) - 0.5f); // -0.5 <= zoom <= 0.5
          System.out.println ("zoom = " + zoom);
          int endX = Math.abs (rand.nextInt( ) % rightStop);
          int endY = Math.abs (rand.nextInt( ) % bottomStop);
          endRect = new QDRect (endX, endY,
                                VIDEO_TRACK_WIDTH * zoom,
                                VIDEO_TRACK_HEIGHT * zoom);
      }
  }
 
  public static void main (String[] arrrImAPirate) {
      try {
          QTSessionCheck.check( );
          new VideoSampleBuilder( );
      } catch (Exception e) {
          e.printStackTrace( );
      }
      System.exit(0);
  }
}

Note

Run this demo with ant run-ch08-videosamplebuilder.

When run, the demo looks for a file called videosamplebuilder.properties , in which you can define the source image and the start and end rectangles. The properties file should have entries like this:

file=/Users/cadamson/Pictures/keagy/DSC01763.jpg
 
start.x=545
start.y=370
start.width=1500
start.height=1125
 
end.x=400
end.y=390
end.width=800
end.height=600

If no properties file is found, the demo queries the user for an image and randomly selects the start and end rectangles.

As each frame is compressed, the program prints an update to the console indicating the frame count, the source frame, and how "similar" the CSequence decided the frame was to its predecessor. The console log looks something like this:

cadamson% ant run-ch08-videosamplebuilder
Buildfile: build.xml
 
run-ch08-videosamplebuilder:
 [java] Couldn't load properties
 [java] Created Movie
 [java] Added empty Track
 [java] Got GraphicsImporter - Bounds are quicktime.qd.QDRect[x=0.0,y=0.0,width=800.0,
height=600.0]
 [java] Created GWorld, - Bounds are quicktime.qd.QDRect[x=0.0,y=0.0,width=320.0,
height=240.0]
 [java] zoom = -0.45799363
 [java] startRect = quicktime.qd.QDRect[x=158.0,y=30.0,width=320.0,height=240.0]
 [java] endRect = quicktime.qd.QDRect[x=282.0,y=158.0,width=146.55795,height=109.91846]
 [java] Reset importer's GWorld, now: quicktime.qd.QDGraphics@8f10820[size=108]
[PortRect=quicktime.qd.QDRect[x=0.0,y=0.0,width=320.0,height=240.0],isOffscreen=true]
 [java] xOffPerFrame=0.41333333, yOffPerFrame=0.42666668, widthOffPerFrame=-0.58, 
heightOffPerFrame=-0.43666667
 [java] i==1
 [java] fromRect = quicktime.qd.QDRect[x=158.0,y=30.0,width=320.0,height=240.0]
 [java] Importer drew
 [java] similarity = 0
 [java] i==2
 [java] fromRect = quicktime.qd.QDRect[x=158.0,y=30.0,width=319.0,height=240.0]
 [java] Importer drew
 [java] similarity = 128

When finished, you can play the videotrack.mov file in QuickTime Player, the player and editor examples in Chapters 2 and 3, or equivalent. Figure 8-6 shows two screenshots from different times in the movie to indicate the zoom effect that is created by using different parts of the picture.

Figure 8-6. Movie built via addSample( ) from portions of a static image


What just happened?

One of the first things to notice is the constant CODEC_TYPE, which is used later on in setting up the CSequence. This indicates which of the supported QuickTime video codecs is to be used for the video track. The codec is indicated by a FOUR_CHAR_CODE int, in this case "SVQ3", which identifies the Sorenson Video 3 codec. Most of the usable codecs exist as constants in the StdQTConstants classes—for example, I could have put this as StdQTConstants6.kSorenson3CodecType. The advantage of using the FOUR_CHAR_CODE directly is that you can use any supported codec, even those that don't have constants defined in QTJ yet. In fact, Sorenson Video 3 and MPEG-4 video (StdQTConstants6.kMPEG4VisualCodecType) didn't have constants in QTJ until I filed a bug report for them, and the Pixlet codec (whose 4CC is "pxlt") still doesn't, as of this writing.
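
For example, to use Pixlet despite the missing constant, you can build the codec type directly from its 4CC:

int pixletType = QTUtils.toOSType ("pxlt"); // works for any 4CC, constant or not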

Tip

"So, what's the best codec?" I hear someone asking. Don't go there. There's no such thing as a best codec. There are so many different codecs, because they're engineered to serve different purposes. For example, some codecs are difficult to compress (in terms of CPU power, encoder expertise, etc.) but easy to decompress, making them well suited for mass-distribution media like DVDs where the encoding is done only once. On the other hand, a codec used for video conferencing must be light enough to do on the fly, with minimal configuration. Others are tuned to specific bitrates and uses, losing their advantages outside their preferred realm. The new MPEG-4 codec, H.264 (AVC), claims to be able to scale from cell phone to HDTV bandwidths...we'll see if it delivers on this.

To build the image movie, create an empty movie file, add a track, and create a VideoMedia for the track. You do this by calling Movie.createMovieFile( ) with a file reference (so that QuickTime knows where to put the samples you'll be adding), calling Movie.addTrack( ) to create the track, and constructing a VideoMedia. Then call Media.beginEdits( ) to signal that you're going to be altering the VideoMedia.
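
Condensed from Example 8-2, that setup boils down to a few calls:

QTFile movFile = new QTFile (new java.io.File ("videotrack.mov"));
Movie movie = Movie.createMovieFile (movFile,
                  StdQTConstants.kMoviePlayer,
                  StdQTConstants.createMovieFileDeleteCurFile |
                  StdQTConstants.createMovieFileDontCreateResFile);
Track videoTrack = movie.addTrack (VIDEO_TRACK_WIDTH,
                                   VIDEO_TRACK_HEIGHT,
                                   VIDEO_TRACK_VOLUME);
VideoMedia videoMedia = new VideoMedia (videoTrack, 600); // 600 units/sec
videoMedia.beginEdits( );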

Note

These steps are similar to those in Chapter 7's square-wave sample-building example.

The next step is to get the image with a GraphicsImporter. This will be the source of every frame of the movie. However, it's not the right size. So create an off-screen QDGraphics (a.k.a. GWorld, the term used in the native API and all its getters and setters in QTJ) with the desired movie dimensions. By calling GraphicsImporter.setGWorld( ), you tell the importer that subsequent calls to draw( ) will draw pixels from the imported graphic into the off-screen GWorld, which will be the source of the compressed frames later on.
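
In code, the import-and-redirect step from the example is just:

GraphicsImporter importer = new GraphicsImporter (imgFile);
QDGraphics gw = new QDGraphics (new QDRect (0, 0,
                                            VIDEO_TRACK_WIDTH,
                                            VIDEO_TRACK_HEIGHT));
importer.setGWorld (gw, null); // draw( ) now renders into the offscreen GWorld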

Next, after calculating how far the source rectangle will move each frame, you set up the compression sequence. To do this, you need a buffer big enough to hold compressed images, which in turn requires a call to figure out how big that buffer needs to be. QTImage.getMaxCompressionSize( ) provides this size. You need to pass in the following data (in the order shown):

  1. The QDGraphics/GWorld to compress from.
  2. A QDRect indicating what part of the QDGraphics will be used.
  3. The color depth of the pixels (i.e., how many bits are in each pixel).
  4. A constant to indicate the compressed image quality level.
  5. The codec's FOUR_CHAR_CODE .
  6. A constant to indicate which codec component to pick if several can handle the codec. You can pass a specific component, or the behavior constants anyCodec, bestSpeedCodec, bestFidelityCodec, and bestCompressionCodec.

Given this, you can allocate memory for the image by constructing a new QTHandle, then wrapping it with a RawEncodedImage object. This is where the compressed frames will go.
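
Condensed from the example, the sizing-and-allocation step looks like this:

int rawImageSize =
    QTImage.getMaxCompressionSize (gw, gRect,
                                   gw.getPixMap( ).getPixelSize( ),
                                   StdQTConstants.codecNormalQuality,
                                   CODEC_TYPE,
                                   CodecComponent.bestFidelityCodec);
QTHandle imageHandle = new QTHandle (rawImageSize, true);
imageHandle.lock( );
RawEncodedImage compressedImage = RawEncodedImage.fromQTHandle (imageHandle);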

Now you have enough information to create the CSequence. Its constructor takes a whopping 10 arguments (a condensed sketch follows the list):

  • The QDGraphics/GWorld to compress from
  • A QDRect indicating what part of the QDGraphics will be used
  • The color depth of the pixels (i.e., how many bits are in each pixel)
  • The codec's FOUR_CHAR_CODE
  • A specific codec component or a selection strategy constant (anyCodec, bestSpeedCodec, etc.)
  • Spatial quality (in other words, the quality of images after 2D compression, using one of the constants codecMinQuality, codecLowQuality, codecNormalQuality, codecHighQuality, codecMaxQuality, or codecLosslessQuality)
  • Temporal quality (this uses the same constants as for spatial quality, but refers to quality maintained or lost when using data from adjacent frames; you also can set this to 0 to not use temporal compression)
  • Key frame rate (the maximum number of frames allowed between "key frames" [those that have all image data for a frame and don't depend on other frames], or 0 to not use key frames)
  • A custom color lookup table, or null to use the table from the source image
  • Behavior flags (these can include the codecFlagWasCompressed flag, which indicates the source image was previously compressed and asks the codec to compensate, and codecFlagUpdatePrevious and codecFlagUpdatePreviousComp, both of which hold on to previously compressed frames for temporal-compression codecs, the latter of which may produce better results but consumes more CPU power)
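
Here's the call from the example again, condensed, with the arguments annotated:

CSequence seq = new CSequence (gw,                    // source GWorld
                               gRect,                 // part to compress
                               gw.getPixMap( ).getPixelSize( ), // color depth
                               CODEC_TYPE,            // codec 4CC
                               CodecComponent.bestFidelityCodec,
                               StdQTConstants.codecNormalQuality, // spatial
                               StdQTConstants.codecNormalQuality, // temporal
                               KEY_FRAME_RATE,        // max frames between key frames
                               null,                  // default color table
                               StdQTConstants.codecFlagUpdatePrevious);
ImageDescription imgDesc = seq.getDescription( );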

Now you've got everything you need to build the frames: a GWorld for source images, a RawEncodedImage to compress into, a CSequence to compress frames, and a VideoMedia to put them into.

So, start looping. Each time through the loop, you draw a different part of the source image into the off-screen GWorld. This is done by resetting the GraphicsImporter's Matrix, using rect( ) to scale-and-translate from a source rectangle to a new rectangle at (0,0) with the dimensions of the off-screen GWorld. Use GraphicsImporter.draw( ) to draw from the source image into the GWorld.

With the frame's pixels in the GWorld, call CSequence.compressFrame( ) to compress the pixels into the RawEncodedImage. This returns a CompressedFrameInfo object that wraps the size of the compressed image and a "similarity" value that represents the similarity or difference between the current frame and the previous frame. The similarity is used to determine if this sample is a "key frame" (also called a "sync sample" in Apple's terminology), which in this context means an image so different from its predecessors that the compressor should encode all the data for this image in this frame instead of depending on any previous frames.
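
The per-frame compression step, condensed from the example:

CompressedFrameInfo cfInfo =
    seq.compressFrame (gw, gRect,
                       StdQTConstants.codecFlagUpdatePrevious,
                       compressedImage);
boolean syncSample = (cfInfo.getSimilarity( ) == 0); // 0 means key frame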

Finally, you call addSample() to add the frame to the VideoMedia. This call, inherited from Media, takes a pointer to the sample data, an offset into the data, the data size, the time represented by the sample (in the media's time scale), a description of the data (here an ImageDescription retrieved from the CSequence), the number of samples being added with the call, and a flag that indicates whether this sample is a key frame (if it's not, pass StdQTConstants.mediaSampleNotSync).
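
In the example, that call looks like this:

videoMedia.addSample (imageHandle,
                      0,                     // offset into the data
                      cfInfo.getDataSize( ), // size of this frame's data
                      20,                    // duration, in the media's time scale
                      imgDesc,               // describes the untyped bytes
                      1,                     // one sample
                      syncSample ? 0 : StdQTConstants.mediaSampleNotSync);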

Note

Notice addSample( ) has the same signature for any kind of media. That's why it needs a parameter like the ImageDescription to explain what's in the essentially untyped QTHandle.

When you're done adding frames, call Media.endEdits( ), then insert the media into the track with Track.insertMedia( ). Finally, save the movie with the Movie.addResource( ) call.
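
The wrap-up, condensed from the example:

videoMedia.endEdits( );
videoTrack.insertMedia (0,                         // trackStart
                        0,                         // mediaTime
                        videoMedia.getDuration( ), // mediaDuration
                        1);                        // mediaRate
OpenMovieFile omf = OpenMovieFile.asWrite (movFile);
movie.addResource (omf,
                   StdQTConstants.movieInDataForkResID,
                   movFile.getName( ));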

Note

Again, this wrap-up is the same as Chapter 7's audio sample-building technique.

What about...

...appropriate codecs to use? I've pointed out Sorenson Video 3 and MPEG-4 Visual, because they have very nice compression ratios and still look pretty good with natural images. Other codecs of interest in a standard QuickTime installation are shown in Table 8-1.

Table 8-1. Some standard QuickTime codecs

  • Animation (constant kAnimationCodecType, 4CC "rle"): Good for long runs of solid colors, such as those found in simple synthetic 2D graphics.
  • Cinepak (constant kCinepakCodecType, 4CC "cvid"): The most popular codec of the early to mid-1990s, thanks to a good compression/quality tradeoff, wide support (even Sun's JMF handles it), and the fact that it could run on very modest CPUs. Today, there are better options.
  • H.263 (constant kH263CodecType, 4CC "h263"): Originally designed for videoconferencing, yet surprisingly good across a wide range of bitrates.
  • Pixlet (no QTJ constant, 4CC "pxlt"): A wavelet-based codec, introduced in 2003, that achieves high compression ratios (20:1) without showing the graphics artifacts other codecs exhibit at similar compression levels. It requires powerful CPUs (PowerPC G4 or G5 at 1GHz and up) to decode.


As of this writing, Apple has demonstrated but not released an H.264 (aka AVC) codec for QuickTime. This is the newest and most powerful MPEG-4 codec, offering broadcast-quality video at 1.5 megabits per second (Mbps) and HDTV quality at 5-9Mbps, assuming your computer is powerful enough to decode it.

Also, other than making these "Ken Burns Effects," what else can I do by writing video samples? This technique is the key to creating anything you want in a video track. Want to make a movie of your screen? Use the screen-grab lab from Chapter 5 and compress its GWorld into a video track. Have some code to decode a format that QuickTime doesn't understand? Now you can transcode to any QuickTime-supported format. You can even take 3D images from an OpenGL or JOGL application and make them into movies.

Note

Considering Chapter 5 showed how to grab the screen (even with the DVD Player running) into a GWorld, and considering you can make video tracks from any GWorld...uh-oh.
