QuickTime for Java: A Developer's Notebook/Miscellaneous Media


Audio and video are the most obvious and prominent kinds of media that can be found in a QuickTime movie, but the story doesn't end there. Take a look at quicktime.std.movies.media, and you'll find more than a dozen subclasses of Media, each representing media types that can be referenced by tracks in QuickTime movies.

This chapter is going to show off four of these, as much to show the variety of QuickTime as to illuminate their practical uses. These four are:

  • Text media
  • HREF media (actually a special case of text)
  • Timecode media
  • Effects media (actually a special case of video)

Elsewhere in the book, I've also mentioned MPEG media, which isn't so much a new media type as it is a disappointing compromise—QuickTime can't present the audio and video of a multiplexed MPEG-1 or MPEG-2 file as separate tracks, so instead it uses a single track pointing to "MPEGMedia," which has both visual and audio characteristics (i.e., its media handler implements both VisualMediaHandler and AudioMediaHandler).

I'm not covering several media types for reasons of space and concision. Sprites (represented by SpriteMedia) and QuickTime VR (QTVRMedia) are plenty cool; however, each required an entire volume of the old Inside Macintosh series, making them too involved to handle in this format. ThreeDMedia is effectively deprecated and isn't even present in Mac OS X. A few other media types are present largely as implementations for higher-level features—for instance, MovieMedia came about as part of the implementation of SMIL (an XML markup that lets authors, among other things, make movies that contain movies).

Tip

If you really think I should cover one of these other media types, send an email to cadamson@oreilly.com, and I'll see about covering it in an online article or a future revision.


Creating Captions with Text Media

Have you ever turned on captions on a DVD, perhaps for a foreign-language film? Have you ever wondered how that works, especially given that the DVD might have captions for several different languages? QuickTime can do the same thing, easily and efficiently.

The idea is that a movie can have zero-to-many text tracks (literally, tracks with text media), each of which has a collection of text samples. Each sample contains some text and a time to display it. In that sense, they're like any other media samples—they have some data to be presented and a time and duration indicating when to present it. So, to do a caption, you'd just have a single text sample that begins at a relevant time in the movie (like when someone on-screen starts speaking) and has an appropriate duration (how long the person speaks).

How do I do that?

To keep things simple, I'll focus on creating a movie with a single text track. Once you know how to do that, it's easy to add your own text track to existing movies.

If you read the sample-building examples in Chapter 7 or Chapter 8, you probably already know what's coming. To build a text track, you take these steps (sketched in code right after the list):

  1. Add a track to a movie.
  2. Create new media for the track.
  3. Call Media.beginEdits( ).
  4. Add samples.
  5. Call Media.endEdits( ).
  6. Insert the media into the track.
  7. Save the movie.
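
Here's a minimal sketch of those seven steps in QTJ terms; the width, height, and timeScale values are placeholders, and the details (including the text-specific addSample( ) variant) are filled in by Example 9-1 later in this lab:

// Sketch only: assumes an existing Movie object and placeholder
// width/height/timeScale values
Track track = movie.addTrack (width, height, 0);   // 1. add a track
Media media = new TextMedia (track, timeScale);    // 2. create media
media.beginEdits( );                               // 3. begin edits
// ...one or more addSample( )/addTextSample( ) calls  // 4. add samples
media.endEdits( );                                 // 5. end edits
track.insertMedia (0, 0, media.getDuration( ), 1); // 6. insert media
// 7. save, e.g., with Movie.addResource( )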

Note

These are the steps for adding any kind of media.

The biggest difference between adding different kinds of media is the setup you have to do for the Media.addSample() call. In the case of text, use TextMedia.getTextHandler() to get a TextMediaHandler object, which offers a convenient addTextSample( ) call. This method lets you specify font, size, color, and various other options. In fact, it takes 14 parameters (amazingly, in this exact order):

  • A QTPointerRef to the string to be added (typically, you call getBytes() on a Java string and wrap them with a QTPointer to provide this argument)
  • A font number (you can look up the font number from a font family's name via the QDFont.getFNum() method, or just pass 0 for a sensible default font)
  • Font size
  • Text face, meaning style information like QDConstants.bold, QDConstants.italic, or QDConstants.underline, combined with the | operator
  • Text color, as a QDColor value (this defaults to black if you pass null)
  • Background color, as a QDColor value (this defaults to white if you pass null)
  • Text justification, using one of the QDConstants values teFlushLeft, teFlushRight, teCenter, or teFlushDefault (the "teJust..." constants in this class seem to do the same thing, too)
  • Text box, a QDRect defining the bounding rectangle of the text (don't worry about this matching the size of a movie you want to add it to—you can make a small text box at (0,0) and move it into position by adding a Matrix translation to the text track)
  • Display flags (covered later)
  • Scroll delay (covered later)
  • Highlight start (this is the index of the first character to be highlighted)
  • Highlight end (this is the index of the last character to be highlighted)
  • Highlight color, as a QDColor value
  • Duration, in the media's time scale

The display flags parameter takes any number of the df constants from StdQTConstants, combined with the | operator. The possible behaviors are shown in Table 9-1.

Note

Who knew QuickTime was optimized for karaoke?

Table 9-1. Text sample display flags

dfDontDisplay: Don't show this sample.
dfDontAutoScale: Don't scale text if the bounding rectangle is resized.
dfClipToTextBox: Clips to the size of the bounding rectangle; useful if overlaying video.
dfShrinkTextBoxToFit: Recalculates the size of the text box parameter to just fit the text.
dfScrollIn: Scrolls the text in. If set, the scroll delay argument determines how long the text lingers before being scrolled out.
dfScrollHoriz: Makes the text scroll in horizontally, instead of vertically (the default).
dfReverseScroll: Reverses the typical scroll direction, which is bottom-to-top for vertical scrolling and left-to-right for horizontal.
dfContinuousScroll: Causes new samples to force previous samples to scroll out. You must set dfScrollIn and/or dfScrollOut for this to do anything.
dfFlowHoriz: Allows text to flow within the bounding rectangle instead of going off to the right.
dfContinuousKaraoke: Ignores the highlight start argument and highlights from the beginning of the text to "highlight end." This allows you to progressively "grow" a highlight through a line of lyrics, presumably for a karaoke application.
dfDropShadow: Displays text with a drop shadow.
dfAntiAlias: Displays text with anti-aliasing.
dfKeyedText: Displays text without drawing a background color. This is ideal for putting captions on top of video.
dfInverseHilite: Highlights with inverse video instead of the highlight color.


Example 9-1 shows a simple application that creates a movie with a single text track, containing four samples, each lasting 2.5 seconds.

Example 9-1. Creating a text track

package com.oreilly.qtjnotebook.ch09;
 
import quicktime.*;
import quicktime.std.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.io.*;
import quicktime.util.*;
import quicktime.qd.*;
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
 
public class TextTrackBuilder extends Object {
 
  public static int TEXT_TRACK_WIDTH = 320;
  public static int TEXT_TRACK_HEIGHT = 24;
 
  static String[] MESSAGES = {
      "QuickTime for Java",
      "A Developer's Notebook",
      "from O'Reilly Media",
      "Coming Fall 2004"
  };
  static QDRect textBox = new QDRect(0, 0,
                                     TEXT_TRACK_WIDTH,
                                     TEXT_TRACK_HEIGHT);
 
  public static void main (String[] args) {
      try {
          QTSessionCheck.check( );
 
          QTFile movFile = new QTFile (new java.io.File("buildtext.mov"));
          Movie movie =
              Movie.createMovieFile(movFile,
                         StdQTConstants.kMoviePlayer,
                         StdQTConstants.createMovieFileDeleteCurFile |
                         StdQTConstants.createMovieFileDontCreateResFile);
          
          System.out.println ("Created Movie");
          
          // create an empty text track
          int timeScale = 10; // time measured in 1/10ths of a sec
          Track textTrack = movie.addTrack (TEXT_TRACK_WIDTH,
                                            TEXT_TRACK_HEIGHT, 0);
          System.out.println ("Added empty Track");
          
          // create media for this track
          Media textMedia = new TextMedia (textTrack,
                                           timeScale);
          TextMediaHandler handler =
              (TextMediaHandler) textMedia.getHandler( );
          System.out.println ("Created Media");
 
          textMedia.beginEdits( );
          for (int i=0; i<MESSAGES.length; i++) {
              byte[] msgBytes = MESSAGES[i].getBytes( );
              QTPointer msgPoint = new QTPointer (msgBytes);
              // add sample
              handler.addTextSample (msgPoint, // text
                                     0, // font number
                                     14, // font size,
                                     QDConstants.bold, // style,
                                     QDColor.yellow, // fg color,
                                     QDColor.black, // bg color,
                                     QDConstants.teCenter,// justification
                                     textBox, // box
                                     0, // displayFlags
                                     0, // scrollDelay
                                     0, // hiliteStart
                                     0, // hiliteEnd
                                     QDColor.white, // rgbHiliteColor
                                     25 // duration
                                     );
          } // for
 
          // done editing
          textMedia.endEdits( );
          
          // now insert this media into track
          textTrack.insertMedia (0, // trackStart
                                 0, // mediaTime
                                 textMedia.getDuration( ), // mediaDuration
                                 1); // mediaRate
          
 
          // save up at this point
          System.out.println ("Saving...");
          OpenMovieFile omf = OpenMovieFile.asWrite (movFile);
          movie.addResource (omf,
                             StdQTConstants.movieInDataForkResID,
                             movFile.getName( ));
          
          System.out.println ("Done");
          
 
      } catch (QTException qte) {
          qte.printStackTrace( );
      }
      System.exit(0);
  } // main
 
}

Note

If you downloaded the book code, run this example with ant run-ch09-texttrackbuilder.

Running this example creates a file called buildtext.mov in the current directory. It's a normal QuickTime movie, so you can open it with QuickTime Player, or the various players and editors from Chapter 2 and Chapter 3. Figure 9-1 shows what it looks like when played.

Figure 9-1. Text track movie


What just happened?

The application walks through the basic steps of creating a text track as described earlier. First, it creates an empty movie on disk (giving the movie a place to store the samples), and then adds an empty track and creates a TextMedia object for this track.

From there, it's a pretty simple matter of getting a TextMediaHandler and using it to make calls to addTextSample( ), looping through the array of Strings that are used as samples. For each String, get its bytes and wrap them with a QTPointer, creating a QTPointerRef that can be used for addTextSample( ). When this is done, add the media to the track, then save the movie to disk with Movie.addResource( ).

What about...

...adding this text track on top of an existing movie I've opened, to make actual captions? To do this, you'd want to do a few extra things. First, you'd add your samples with the dfKeyedText display flag, to remove the background color and thus have only the text appear above the video. You might also consider using dfAntiAlias to make the text easier to read, though this is a little more CPU-intensive at playback time.

Next, you'd want to move the captions to the bottom of the movie's box because this example uses a box anchored at (0,0). You do this by setting a Matrix on the text track, defining it as a translation to a box along the bottom of the movie's box (e.g., where the y-coordinate is the movie's height minus the height of the text track).
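
Putting those two ideas together, here's a hedged sketch, reusing the variables from Example 9-1 (plus quicktime.std.image.Matrix) and assuming the movie's box is taller than the text track:

// keyed, anti-aliased caption samples (no background drawn over video)
int captionFlags = StdQTConstants.dfKeyedText |
                   StdQTConstants.dfAntiAlias;
handler.addTextSample (msgPoint, 0, 14, QDConstants.bold,
                       QDColor.yellow, QDColor.black, QDConstants.teCenter,
                       textBox, captionFlags,
                       0, 0, 0, QDColor.white, 25);

// translate the text track from (0,0) to the bottom of the movie's box
QDRect moveFrom = new QDRect (0, 0, TEXT_TRACK_WIDTH, TEXT_TRACK_HEIGHT);
QDRect moveTo = new QDRect (0,
                            movie.getBox( ).getHeight( ) - TEXT_TRACK_HEIGHT,
                            TEXT_TRACK_WIDTH, TEXT_TRACK_HEIGHT);
Matrix matrix = new Matrix( );
matrix.rect (moveFrom, moveTo);
textTrack.setMatrix (matrix);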

Note

See Chapter 8 for coverage of transforming tracks with Matrix objects. The timecode example later in this chapter does this, too.

Creating Links with HREF Tracks

One peculiar trick you can do with text tracks is to use them to turn your movie into a set of time-based hyperlinks. The idea is that by adding a so-called "HREF track," you can make portions of your movie act like an anchor tag in HTML—clicking the movie takes you to a specified web page.

How do I do that?

Creating an HREF track is virtually identical to creating a text track—it is a real text track, after all—with URLs as the text samples. To actually activate its special features, though, you have to rename the track to HREFTrack. Also, because the URLs are not meant to be seen, you typically want to hide them by calling setEnabled(false) on the track.

Assuming there is an array of URL Strings called URLS, you can make the previous lab's movie linkable by adding the following code after the first text media has been inserted into its track:

// add HREF track
Track hrefTrack = movie.addTrack (TEXT_TRACK_WIDTH,
                                TEXT_TRACK_HEIGHT, 0);
// create media for this track
Media hrefMedia = new TextMedia (hrefTrack,
                               timeScale);
handler = (TextMediaHandler) hrefMedia.getHandler( );
System.out.println ("Created HREF Media");
 
hrefMedia.beginEdits( );
for (int i=0; i<URLS.length; i++) {
  byte[] msgBytes = URLS[i].getBytes( );
  QTPointer msgPoint = new QTPointer (msgBytes);
  // add sample
  handler.addTextSample (msgPoint, // text
                         0, // font number
                         14, // font size,
                         QDConstants.bold, // style,
                         QDColor.yellow, // fg color,
                         QDColor.black, // bg color,
                         QDConstants.teJustCenter,// justification
                         textBox, // box
                         0, // displayFlags
                         0, // scrollDelay
                         0, // hiliteStart
                         0, // hiliteEnd
                         QDColor.white, // rgbHiliteColor
                         25 // duration
                         );
} // for
 
// done editing
hrefMedia.endEdits( );
 
// now insert this media into track
hrefTrack.insertMedia (0, // trackStart
                     0, // mediaTime
                     hrefMedia.getDuration( ), // mediaDuration
                     1); // mediaRate
 
// disable href track because we don't want it visible
hrefTrack.setEnabled(false);
 
// change track name to HREFTrack
UserData userData = hrefTrack.getUserData( );
String trackName = "HREFTrack";
QTPointer namePtr = new QTPointer(trackName.getBytes( ));
userData.setDataItem (namePtr,
                    QTUtils.toOSType("name"),
                    0);

Note

Run this example with ant run-ch09-hreftrackbuilder.

When run, this demo creates a file called buildhref.mov. However, HREF tracks work only in the QuickTime plug-in—i.e., in a browser. In the book's downloadable code, the HTML file src/other/html/embed-hrefmovie.html has a simple web page that embeds this movie.

Note

To embed a QuickTime movie with HTML <embed> and <object> tags, see Chapter 1.

Figure 9-2 shows the page with the embedded buildhref.mov. If you click while the first text sample (QuickTime for Java) is showing, a new window opens up and goes to Apple's QTJ home page. The other text samples each have a different corresponding HREF. The last one launches its page automatically.

Figure 9-2. Browser showing movie with an HREF track; the page opened by clicking the movie is shown in the second window


Tip

Note that the arrangement of HREF samples to other tracks and samples in the movie is totally arbitrary—it depends only on when and for how long the HREF text sample appears. If you wanted to link a certain segment of video to a URL, you might add the sample at the time the video begins and make it have the same duration as the segment. This example makes the URLs correspond exactly to the text samples in the other track because that makes sense when you're playing with it, but it doesn't have to work like that.
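
For instance, here's a hedged sketch of linking one video segment to a URL; segmentStart (in the movie's time scale) and segmentDuration (in the HREF media's time scale) are assumed values describing the segment, and the other variables come from the code above:

// one URL sample whose length matches the video segment
byte[] urlBytes = "<http://www.example.com/> T<_blank>".getBytes( );
QTPointer urlPtr = new QTPointer (urlBytes);
handler.addTextSample (urlPtr, 0, 12, 0,
                       null, null, // default fg/bg colors
                       QDConstants.teCenter, textBox,
                       0, 0, 0, 0, QDColor.white,
                       segmentDuration);
// place it so the link is active only during that segment
hrefTrack.insertMedia (segmentStart,             // trackStart
                       0,                        // mediaTime
                       hrefMedia.getDuration( ), // mediaDuration
                       1);                       // mediaRate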

What just happened?

As QuickTime parses each URL in the HREF track, it enables a link to that URL. However, the URLs can be specially formatted to achieve different behaviors. Here's what the demo's URLS array looks like:

static String[] URLS = {
  "<http://developer.apple.com/quicktime/qtjava/> T<_blank>",
  "<http://devnotebooks.oreilly.com/> T<_blank>",
  "<http://www.oreilly.com/> T<_blank>",
  "A<http://www.oreilly.com/catalog/> T<_blank>"
};

As you can see, the URL itself is enclosed in angle brackets. In each case, there's a second entry, T<_blank>, which is used to indicate a target frame. By using the special value _blank, clicking these URLs will always open them in a new window. However, you could also use a consistent name to open URLs in a single new window, or a frame. If the T<...> is absent, the URL will be opened in the current window (which will, of course, navigate away from the page that contains the movie).

The last sample shows another interesting syntax. By preceding the URL and its angle brackets with an A, you can force the URL to be opened as soon as it is read, either by playing up to that point or scrubbing to it. There are lots of interesting uses for this approach, like an introductory movie (titles and credits) pulling up another movie, or automatically refreshing another frame on the page.
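
For reference, here's a hedged sketch of the three styles side by side (example.com stands in for real destinations):

static String[] TARGET_STYLES = {
  "<http://www.example.com/>",             // opens in the current window
  "<http://www.example.com/> T<myframe>",  // opens in a frame or window named "myframe"
  "A<http://www.example.com/> T<_blank>"   // auto-loads in a new window when reached
};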

Adding Timecodes

In the professional realm, videotapes often have a timecode track in addition to their audio and video tracks. This track enumerates every video frame and is typically used for purposes such as editing and logging what's on a tape. Professional tape decks usually have an LED or LCD display of the timecode, and can optionally display timecodes on-screen.

You might think the text track provides a convenient way to embed timecodes: they're string values, you can have one for every frame of video (or many, if you set your time scale really high), you can read them from the TextMedia object, you can turn their display on and off by enabling and disabling the track, and so on.

And this would be fine. But fortunately, QuickTime has a real timecode track that goes much further. Adding timecodes to a movie, in a format and resolution suitable for professional work, is a snap.

How do I do that?

No surprise, once again the key is to create a new track with a specific kind of media and to add samples to it. This time, the desired media class is TimeCodeMedia.

What's really interesting is that you don't actually write a sample for every video frame. You need to write only a single sample to define the timecode format and a start time, at the beginning of the period for which you want to provide timecodes. Because QuickTime already is measuring time in your track, at an arbitrary precision (i.e., the time scale you set for it), it can figure out the timecode for any time later in the movie.

To create the sample, first you need a TimeCodeDef object, which defines the timecode standard in terms of frames per second, duration per frame, and a time scale, each set with a method call. You also need a TimeCodeTime, which defines the starting point for your timecodes. Its constructor takes four arguments: hours, minutes, seconds, and frames.

Next, you need a TimeCoder, which is a MediaHandler for TimeCodeMedia. This object allows you to set flags to determine whether the time code is displayed and to set display options (font size, style, color, etc.) by passing it a TCTextOptions object. It also can generate a frame number, given the TimeCodeDef and TimeCodeTime, which is the data you need to pass to addSample( ).

Note

You would think this would be called a TimeCodeHandler, wouldn't you?

The application in Example 9-2 takes an existing QuickTime movie and adds a visible timecode track.

Note

Run this example with ant run-ch09-timecodetrackbuilder.

Example 9-2. Creating a timecode track

package com.oreilly.qtjnotebook.ch09;
 
import quicktime.*;
import quicktime.std.*;
import quicktime.std.image.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.std.qtcomponents.*;
import quicktime.io.*;
import quicktime.qd.*;
import quicktime.app.view.*;
import quicktime.util.*;
import java.awt.*;
 
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
 
public class TimeCodeTrackBuilder {
 
public static final int TIMECODE_TRACK_HEIGHT=24;
public static final int TIMECODE_TRACK_WIDTH=120;
 
public static void main (String[] args) {
  try {
      QTSessionCheck.check( );
      // open a movie
      QTFile file = QTFile.standardGetFilePreview (
                              QTFile.kStandardQTFileTypes);
      OpenMovieFile omf = OpenMovieFile.asRead(file);
      Movie movie = Movie.fromFile(omf);
      // add a timecode track
      addTimeCodeTrack (movie);
 
      // create GUI
      Frame f = new Frame ("Movie with TimeCode track");
      MovieController controller = new MovieController(movie);
      Component c = QTFactory.makeQTComponent(controller).asComponent( );
      f.add(c);
      f.pack( );
      f.setVisible(true);
 
  } catch (QTException qte) {
      qte.printStackTrace( );
  }
}
 
public static Track addTimeCodeTrack (Movie movie)
  throws QTException {
  int timescale = movie.getTimeScale( );
 
  TimeCodeDef tcDef = new TimeCodeDef( );
  tcDef.setTimeScale (2997); // ntsc drop-frame
  tcDef.setFrameDuration (100); // 1 frame in 30 fps dropframe
  tcDef.setFramesPerSecond (30);
  tcDef.setFlags (StdQTConstants.tcDropFrame); 
 
  // first record at 0 hrs, 0 min, 0 sec, 0 frames
  TimeCodeTime tcTime = new TimeCodeTime (0, 0, 0, 0);
 
  // create timecode track and media
  Track tcTrack = movie.addTrack (TIMECODE_TRACK_WIDTH,
                                  TIMECODE_TRACK_HEIGHT,
                                  0);
  TimeCodeMedia tcMedia = new TimeCodeMedia (tcTrack, timescale);
  TimeCoder timeCoder = tcMedia.getTimeCodeHandler( );
 
  // turn on timecode display, set colors
  timeCoder.setFlags (timeCoder.getFlags( ) |
                      StdQTConstants.tcdfShowTimeCode,
                      StdQTConstants.tcdfShowTimeCode);
  TCTextOptions tcTextOptions = timeCoder.getDisplayOptions( );
  tcTextOptions.setTXSize (14);
  tcTextOptions.setTXFace (QDConstants.bold);
  tcTextOptions.setForeColor (QDColor.yellow);
  tcTextOptions.setBackColor (QDColor.black);
  timeCoder.setDisplayOptions (tcTextOptions);
 
  // set up a sample as a 4-byte array in a QTHandle
  int frameNumber = timeCoder.toFrameNumber (tcTime, tcDef);
  int frameNums[] = new int[1];
  frameNums[0] = frameNumber;
  QTHandle frameNumHandle = new QTHandle (4, false);
  frameNumHandle.copyFromArray (0, frameNums, 0, 1);
 
  // create a timecode description (the sample to be added)
  TimeCodeDescription tcDesc = new TimeCodeDescription( );
  tcDesc.setTimeCodeDef (tcDef);
 
  // add the sample to the TimeCodeMedia
  tcMedia.beginEdits( );
  tcMedia.addSample (frameNumHandle,
                     0,
                     frameNumHandle.getSize( ),
                     movie.getDuration( ),
                     tcDesc,
                     1,
                     0);
  tcMedia.endEdits( );
 
  // now insert this media into track
  tcTrack.insertMedia (0, // trackStart
                         0, // mediaTime
                         tcMedia.getDuration( ), // mediaDuration
                         1); // mediaRate
 
  // move the timecode to the bottom of the movie and
  // set a transparent-background GraphicsMode
  int x = (movie.getBox( ).getWidth( )/2) - (TIMECODE_TRACK_WIDTH / 2);
  int y = movie.getBox( ).getHeight( ) - TIMECODE_TRACK_HEIGHT;
  QDRect moveFrom = new QDRect (0, 0,
                                TIMECODE_TRACK_WIDTH,
                                TIMECODE_TRACK_HEIGHT);
  QDRect moveTo = new QDRect (x, y,
                              TIMECODE_TRACK_WIDTH,
                              TIMECODE_TRACK_HEIGHT);
  Matrix matrix = new Matrix( );
  matrix.rect (moveFrom, moveTo);
  tcTrack.setMatrix (matrix);
  timeCoder.setGraphicsMode (new GraphicsMode (QDConstants.transparent,
                                               QDColor.black));
 
  return tcTrack;
}
 
}

When this is run, the user is prompted to open a QuickTime movie. It adds the timecode track and opens the movie in a new window, as shown in Figure 9-3. Notice that the timecode stays accurate whether you play the movie, jump to a specific time by clicking the time bar, or scrub back and forth.

Figure 9-3. Time code track added to a movie


What just happened?

The addTimeCodeTrack( ) method begins by creating a TimeCodeDef object and setting its time scale, frame duration, and frames per second. Then it creates a TimeCodeTime for 0 hours, 0 minutes, 0 seconds, and 0 frames (typically represented in the form 00:00:00;00, though you need to remember that the digits after the semicolon count frames, not hundredths of a second, so in this case they'll run from 0 to 29). It also creates a new Track with TimeCodeMedia.

With these objects, you can create the sample you'll need for the track. For that, you need the MediaHandler, namely the TimeCoder, which you get from the TimeCodeMedia via getTimeCodeHandler( ). But some things are worth setting up on the TimeCoder first, before you worry about the sample. If you want to make the timecodes visible, you need to set the tcdfShowTimeCode behavior flag. TimeCoder has a really weird syntax for behavior flags, requiring you to pass in two values: the new values of all the flags, plus a mask indicating which ones you changed. So, to set tcdfShowTimeCode, you have to do this:

timeCoder.setFlags (timeCoder.getFlags( ) |
                  StdQTConstants.tcdfShowTimeCode,
                  StdQTConstants.tcdfShowTimeCode);

Use the TimeCoder to set any display options: font, size, style, and foreground and background colors. To do this, get the TimeCoder's TCTextOptions object with getDisplayOptions( ), make method calls to set each parameter, and hand it back with setDisplayOptions( ).

Finally, you're ready to create the sample. The data needed for the addSample( ) call is just a 4-byte frame number, calculated by the TimeCoder from the TimeCodeDef and TimeCodeTime in the toFrameNumber( ) method. To get it into a QTHandleRef required by addSample( ), put it in a one-element int array, create a 4-byte QTHandle, and use the handle's copyFromArray( ) method to copy the int's bytes into the handle. The addSample( ) call also needs a SampleDescription to indicate what's in the QTHandle—get this by creating a new TimeCodeDescription object and setting its TimeCodeDef with setTimeCodeDef( ).

After adding the sample, and inserting the media into the track as always, the timecode is ready to display. However, it defaults to a position at the upper right of the movie, and it has a background box that obscures the movie below it. You can fix these problems by setting a track Matrix to move the timecode display to the bottom of the movie's box and by setting a transparent GraphicsMode to make the background color disappear.

Note

See Chapter 8 for information on how to reposition tracks with matrices and composite them with GraphicsModes.

What about...

...those weird values for TimeCodeDef ? What's with the "2997"? This shows off the power of QuickTime's timecode support. Imagine you had perfectly normal, 30-frames-per-second video. In that case, you'd expect the values for the TimeCodeDef would be:

Time scale: 3000
Frame duration: 100
Frames per second: 30


Notice how this is redundant: if the time scale is 3000 and there are 30 frames per second, of course each frame is 100 "beats" long. So, why did they define it this way?

Because "normal 30-frames-per-second video" isn't necessarily how things work in the real world.

In North America, most broadcast video is actually in a format called "drop frame," a misnamed concept indicating that two timecodes (but not actual frames) are dropped every minute, except every tenth minute, to sync the color video signal with the audio. This format is defined by:

Time scale: 2997
Frame duration: 100
Frames per second: 30


You can use these values with the TimeCodeDef methods setTimeScale( ) , setFrameDuration(), and setFramesPerSecond( ) to represent NTSC broadcast video in QuickTime. You'll also need to call setFlags() with the flag StdQTConstants.tcDropFrame to tell QuickTime you're doing drop-frame video. While you're at it, two other real-world flags to consider setting are tcNegTimesOK to allow negative times and tc24HoursMax, which limits timecodes to go up only to 24 hours (mimicking the behavior of analog broadcast equipment).
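
Pulled together, a hedged sketch of a TimeCodeDef for NTSC drop-frame video with those extra flags set would look like this (all of these calls appear in or are named by this lab):

TimeCodeDef tcDef = new TimeCodeDef( );
tcDef.setTimeScale (2997);                     // NTSC drop-frame
tcDef.setFrameDuration (100);
tcDef.setFramesPerSecond (30);
tcDef.setFlags (StdQTConstants.tcDropFrame |
                StdQTConstants.tcNegTimesOK |  // allow negative times
                StdQTConstants.tc24HoursMax);  // wrap at 24 hours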

And by the way, what is the timecode system buying me, other than accuracy? One important consideration with QuickTime's timecoding is to support the way things are done in the professional realm, with both digital and analog equipment. There are many different schemes for timecoding media, and QuickTime is designed to support any such system. Also, one of the nice things you can do with timecodes is to capture the timecode from an original tape and maintain it in QuickTime, even through editing, so the user always has a frame-accurate representation of where his original material came from. There are even advanced techniques to "name" timecode tracks, presumably after their original tapes (or "sources," as we move to a tapeless media world), which would allow you to use QuickTime as the basis of a content management system.

Creating Zero-Source Effects

QuickTime comes with an extensive collection of video effects, which you use by making movies with effects tracks—i.e., tracks whose media define a video effect.

These effects are grouped based on how many sources they operate on.

Zero-source effects
These effects are meant to be seen just by themselves. Apple includes a few of these, like fire, clouds, and water "ripples."
One-source effects (or filters)
These effects are applied to a single source. Examples of this kind of effect include color correction or tinting, edge detection, lens flare, etc.
Two-source effects (or transitions)
These are effects that apply to two sources at once. Typically, they're used to visually change the display from one video source to another. Examples of these include dissolves and wipes.

The simplest of these are the zero-source effects, because they don't require wiring up the effect to sources. Instead, you just put an appropriate effects sample into a video track and you're done.

How do I do that?

An effects track is really just a video track (literally, a track with VideoMedia), whose samples are descriptions of effects: the ID of the effect and any parameters it might take. In QuickTime, these are passed in the form of AtomContainers: tree structures in which each "atom" can contain children or data, but not both. Each atom has a size and a FOUR_CHAR_CODE type, and can be accessed by index and/or type (i.e., you can get the nth atom of type m from a parent). For effects, you basically need to pack an AtomContainer with an Atom to specify the desired effect and possibly other Atoms to specify behavior parameters. This AtomContainer is the QTHandle you pass to the addSample( ) method. Fortunately, you can get a properly structured AtomContainer from a user dialog, instead of having to build it yourself.

Note

Almost everything you do in QuickTime involves atom manipulation, but most of the time the API isolates you from it. Not this time, though.

To generate the user dialog, use an EffectsList object to create a list of installed effects—remember, the user could have installed third-party effect components, so you want to get the list of effects at runtime. Pass this list to ParameterDialog.showParameterDialog( ), which will return an AtomContainer describing the selected and configured effect.

The sample program in Example 9-3 shows how to create a zero-source effect movie, which is saved to disk as effectonly.mov.

Example 9-3. Creating a zero-source effect

package com.oreilly.qtjnotebook.ch09;
 
import quicktime.*;
import quicktime.std.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.io.*;
import quicktime.std.image.*;
import quicktime.util.*;
 
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
 
public class EffectOnlyTrackBuilder {
 
public static final int EFFECT_TRACK_WIDTH = 320;
public static final int EFFECT_TRACK_HEIGHT = 240;
public static final int TIMESCALE = 600;
 
public static void main (String[] args) {
  try {
      new EffectOnlyTrackBuilder( );
  } catch (QTException qte) {
      qte.printStackTrace( );
  }
  System.exit(0);
}
 
public EffectOnlyTrackBuilder( ) throws QTException {
  QTSessionCheck.check( );
 
  QTFile movFile = new QTFile (new java.io.File("effectonly.mov"));
  Movie movie =
      Movie.createMovieFile(movFile,
                            StdQTConstants.kMoviePlayer,
                            StdQTConstants.createMovieFileDeleteCurFile |
                            StdQTConstants.createMovieFileDontCreateResFile);
  Track effectsTrack = movie.addTrack (EFFECT_TRACK_WIDTH,
                                       EFFECT_TRACK_HEIGHT,
                                       0);
  VideoMedia effectsMedia = new VideoMedia(effectsTrack,
                                         TIMESCALE);
  // get list of effects
  // StdQTConstants.elOptionsIncludeNoneInList)
  EffectsList effectsList = new EffectsList (0, 0, 0);
  // show list of effects
  // flags are in StdQTConstants.pdOptions...
  AtomContainer effect =
      ParameterDialog.showParameterDialog (effectsList, // effectsList
                                           0, // dialogOptions
                                           null, // parameters
                                           "Pick an effect", // title
                                           null //pictArray
                                           );
  // find out the effect type by getting the "what" atom,
  // whose data is a FOUR_CHAR_CODE
  Atom what = effect.findChildByIndex_Atom (null, 
                                  StdQTConstants.kParameterWhatName,
                                             1);
  int effectType = effect.getAtomData(what).getInt(0);
  effectType = EndianOrder.flipBigEndianToNative32(effectType);
  System.out.println ("User chose " + 
                      QTUtils.fromOSType(effectType) +
                      " effect type");
 
  // make a sample description for the effect description
  ImageDescription imgDesc = ImageDescription.forEffect (effectType);
  imgDesc.setWidth (EFFECT_TRACK_WIDTH);
  imgDesc.setHeight (EFFECT_TRACK_HEIGHT);
 
  // add effect to the video media
  effectsMedia.beginEdits( );
 
  effectsMedia.addSample (effect, // QTHandleRef data,
                        0, // int dataOffset,
                        effect.getSize( ), // int dataSize,
                        1200, //int durationPerSample,
                        imgDesc, // SampleDescription sampleDesc,
                        1, // int numberOfSamples,
                        0 // int sampleFlags
                        );
 
  effectsMedia.endEdits( );
 
  // now insert this media into track
  effectsTrack.insertMedia (0, // trackStart
                          0, // mediaTime
                          effectsMedia.getDuration( ), // mediaDuration
                          1); // mediaRate
  System.out.println ("inserted media into effects track");
 
  // save up 
  System.out.println ("Saving...");
  OpenMovieFile omf = OpenMovieFile.asWrite (movFile);
  movie.addResource (omf,
                     StdQTConstants.movieInDataForkResID,
                     movFile.getName( ));
  System.out.println ("Done");
}
}

When run, it presents the user with an effects dialog, as seen in Figure 9-4.

Figure 9-4. ParameterDialog for a zero-source effect


This allows the user to choose the effect and configure it. For example, the fire effect allows the user to set the height of the flames, how quickly they burn out and restart, how much "water" is doused on them to vary their burn, etc. The resulting movie is shown in Figure 9-5.

Figure 9-5. An effect-only movie


What just happened?

After setting up an empty movie, track, and video media (effects tracks are actually a special case of video), ask QuickTime for a list of installed effects:

EffectsList effectsList = new EffectsList (0, 0, 0);

To specify which effects are returned, this call takes a minimum number of sources, a maximum number of sources, and a flags parameter. To signal that you want only zero-source effects, set the first two parameters to 0. elOptionsIncludeNoneInList is the only flag that can be passed in the third parameter; it causes a no-op "none" effect to be included in the list. (The one-source and two-source variations used later in this chapter are sketched below.)
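
A hedged sketch of those variations, using the same constructor:

EffectsList filters = new EffectsList (1, 1, 0);     // one-source effects only
EffectsList transitions = new EffectsList (2, 2, 0); // two-source effects only
EffectsList all =
    new EffectsList (0, 2,                           // everything, plus a
                     StdQTConstants.elOptionsIncludeNoneInList); // no-op "none"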

Then pass the zero-source list to ParameterDialog.showParameterDialog( ) to present the user with the list of discovered effects, as well as controls to configure each one. This call takes five parameters:

  • The EffectsList.
  • A dialog options int, which alters the dialog for effects that have "tweening" values—in other words, those that change the effect over time (like how much of a transition is actually performed). pdOptionsCollectOneValue causes tweenable options to not be tweenable, while pdOptionsAllowOptionalInterpolations puts tweenable parameters into a more advanced user-interface mode.
  • A "parameters" AtomContainer, which contains canned values for an effect. You could create such an AtomContainer by carefully studying the QuickTime native docs and constructing it manually with AtomContainer calls, or by getting an AtomContainer from this dialog and "canning" its bytes for future use. By passing null, you get the default values for all effects.
  • A String title for the dialog.
  • An array of Picts to use for previewing the effect. If none is provided, default images of the letters A and B are used for showing filter and transition effects.

When the user selects and configures an effect, it's returned as an AtomContainer. This is what you need to use for the addSample( ) call on the VideoMedia object. What's tricky is getting the SampleDescription to tell addSample( ) what to do with the effect AtomContainer. ImageDescription.forEffect( ) will create such a description, but you need to pass it the FOUR_CHAR_CODE of the effect—easy to do if you built the AtomContainer by hand, less easy if you got it from the dialog. The effect type is in an atom whose type is "what", so you can retrieve that atom by calling findChildByIndex_Atom( ) and asking for the first instance of the type kParameterWhatName. AtomContainer.getAtomData( ) will return an AtomData object, from which you can get an int with getInt( ).

There's an interesting concern with this int, because you must account for "endianness." QuickTime structures are defined as being "big-endian," meaning that in a given 32-bit value, the most significant byte comes first. That's convenient for 680x0 and PowerPC CPUs, which Macs run on, but not Intel CPUs. On Windows, when you get this int from the AtomContainer, it's big-endian, making it wrong for use in calls to any QuickTime method that takes an int. You fix this with the self-describing convenience method EndianOrder.flipBigEndianToNative32( ). On the Mac, this call does nothing, because the native endianness is already big-endian.

Finally, you have everything you need to add the sample. It's interesting to note that zero-source effects aren't necessarily "played" in the same sense that other movie data is. When you open the resulting movie, the flame starts immediately, regardless of whether the movie is playing, and it keeps burning even if you stop the movie.

What about...

...the simpler version of showParameterDialog() ? Because this example just wants default values for everything, why not use that? Unfortunately, as of this writing, it's buggy. The native API has separate calls for creating the dialog, getting an event from it, and dismissing it. QTJ is supposed to catch the event and dismiss the dialog for you if you click OK, whereas a "cancel" throws an exception, like with other QTJ dialogs. Unfortunately, clicking OK also throws an exception, meaning you don't get the returned AtomContainer, and because there's not a ParameterDialog instance you can hold on to—the showParameterDialog( ) call was static, after all—there's no way to go back and find out what the user selected. Oops.

Note

Always file bugs at bugreport.apple.com when you find things that are obviously wrong. This one is #3792083.

Anyway, the fancy version of the dialog doesn't have the bug, so that's what I've used here.

Also, what can I do with these zero-source effects other than just look at them? Remember, they're normal video tracks, so they can be composited with other tracks, as shown in Chapter 8. For example, you could take the fire effect, put it in the foreground by setting its layer to a lower value, use a transparent GraphicsMode to punch out the black background, and voilà, the contents of your movie are on fire! And that's always a nice way to spice up your boring home movies.
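
Here's a hedged sketch of that compositing setup, assuming the effectsTrack and effectsMedia variables from Example 9-3 and at least one other video track already in the movie:

// pull the effect in front of the other video
effectsTrack.setLayer (-1); // lower layer values draw in front
// key out the effect's black background so the video shows through
effectsMedia.getHandler( ).setGraphicsMode (
    new GraphicsMode (QDConstants.transparent, QDColor.black));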

Creating One-Source Effects (Filters)

Filtering a video track by applying an effect to it is a critically important tool for doing color correction, adding special effects like lens flare, or offering novelties such as converting the video to black and white or pseudo-antique sepia tone. The technique of creating the effect is effectively the same as with zero-source effects, although in this case you need to create an object that tells the effect where its video source comes from.

How do I do that?

You create a one-source effect just like you do the zero-source version—create a track, create video media, get an EffectsList (this time of one-source effects), and get an AtomContainer describing an effect from a ParameterDialog.

But before adding the AtomContainer as the effects media sample, you need to map it to a video source, which is another video track in the movie. You do this by creating an input map, which is an AtomContainer indicating the sources that are inputs to an effect. Next, create a track modifier reference to redirect the track's output to the effect. You use the reference in building up the Atoms in the input map. Once built, the input map is set on the effect's media with setInputMap() .

Example 9-4 exercises this technique by opening a movie, getting its first video track, and applying a user-selected filter to it.

Note

Run this example with ant run-ch09-filtertrackbuilder.

Example 9-4. Creating a one-source effect (filter)

package com.oreilly.qtjnotebook.ch09;
 
import quicktime.*;
import quicktime.std.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.io.*;
import quicktime.std.image.*;
import quicktime.util.*;
import quicktime.qd.*;
 
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
 
public class FilterTrackBuilder {
 
  public static final int EFFECT_TRACK_WIDTH = 320;
  public static final int EFFECT_TRACK_HEIGHT = 240;
  public static final int TIMESCALE = 600;
 
  public static void main (String[] args) {
      try {
          new FilterTrackBuilder( );
      } catch (QTException qte) {
          qte.printStackTrace( );
      }
      System.exit(0);
  }
  public FilterTrackBuilder( ) throws QTException {
      QTSessionCheck.check( );
 
      QTFile movFile = new QTFile (new java.io.File("filter.mov"));
      Movie movie =
          Movie.createMovieFile(movFile,
                      StdQTConstants.kMoviePlayer,
                      StdQTConstants.createMovieFileDeleteCurFile |
                      StdQTConstants.createMovieFileDontCreateResFile);
 
      Movie sourceMovie = queryUserForMovie( );
      Track sourceTrack = addVideoTrack (sourceMovie,
                                         movie,
                                         0,
                                         sourceMovie.getDuration( ),
                                         0);
 
      Track effectsTrack = movie.addTrack (EFFECT_TRACK_WIDTH,
                                           EFFECT_TRACK_HEIGHT,
                                           0);
      effectsTrack.setLayer(-1);
 
      VideoMedia effectsMedia = new VideoMedia(effectsTrack,
                                               TIMESCALE);
 
      // set up input map here
      AtomContainer inputMap = new AtomContainer( );
 
      int trackRef =
          effectsTrack.addReference (sourceTrack,
                                     StdQTConstants.kTrackModifierReference);
      // add input reference atom
      Atom inputAtom = 
          inputMap.insertChild (null,
                                StdQTConstants.kTrackModifierInput, 
                                trackRef,
                                0);
 
      // add name and type
      inputMap.insertChild (inputAtom,
                            StdQTConstants.kTrackModifierType,
                            1,
                            0,
          EndianOrder.flipNativeToBigEndian32(StdQTConstants.videoMediaType));
 
      inputMap.insertChild (inputAtom,
                            StdQTConstants.kEffectDataSourceType,
                            1,
                            0,
          EndianOrder.flipNativeToBigEndian32(QTUtils.toOSType ("srcA")));
      System.out.println ("set up input map atom");
 
      // show list of effects
      // flags are in StdQTConstants.pdOptions...
      Pict[] previewPicts = new Pict[1];
      previewPicts[0] = sourceMovie.getPosterPict( );
      // get list of effects
      EffectsList effectsList = new EffectsList (1, 1, 0);
      AtomContainer effect =
          ParameterDialog.showParameterDialog (effectsList,
                                               0, // dialogOptions
                                               null, // parameters
                                               "Pick an effect", // title
                                               previewPicts //pictArray
                                               );
      // find out the effect type by getting the "what" atom,
      // whose data is a FOUR_CHAR_CODE
      Atom what = effect.findChildByIndex_Atom (null, 
                                     StdQTConstants.kParameterWhatName,
                                                 1);
      int effectType = effect.getAtomData(what).getInt(0);
      effectType = EndianOrder.flipBigEndianToNative32(effectType);
      System.out.println ("User chose " + 
                          QTUtils.fromOSType(effectType) +
                          " effect type");
 
      // make a sample description for the effect description
      ImageDescription imgDesc = ImageDescription.forEffect (effectType);
      imgDesc.setWidth (EFFECT_TRACK_WIDTH);
      imgDesc.setHeight (EFFECT_TRACK_HEIGHT);
 
      // give the effect description a ref to the source
      effect.insertChild (null,
                          StdQTConstants.kEffectSourceName,
                          1,
                          0,
                          QTUtils.toOSType ("srcA"));
 
      // add effect to the video media
      effectsMedia.beginEdits( );
 
      effectsMedia.addSample (effect, // QTHandleRef data,
                            0, // int dataOffset,
                            effect.getSize( ), // int dataSize,
                            sourceTrack.getDuration( ), //int durPerSample,
                            imgDesc, // SampleDescription sampleDesc,
                            1, // int numberOfSamples,
                            0 // int sampleFlags
                            );
      effectsMedia.setInputMap (inputMap);
 
      effectsMedia.endEdits( );
 
      // now insert this media into track
      effectsTrack.insertMedia (0, // trackStart
                                0, // mediaTime
                                sourceTrack.getDuration( ), // mediaDuration
                                1); // mediaRate
      System.out.println ("inserted media into effects track");
 
      // save up 
      System.out.println ("Saving...");
      OpenMovieFile omf = OpenMovieFile.asWrite (movFile);
      movie.addResource (omf,
                         StdQTConstants.movieInDataForkResID,
                         movFile.getName( ));
      System.out.println ("Done");
 
  }
 
  public static Movie queryUserForMovie( )
      throws QTException {
      QTFile file =
          QTFile.standardGetFilePreview (QTFile.kStandardQTFileTypes);
      OpenMovieFile omf = OpenMovieFile.asRead (file);
      return Movie.fromFile (omf);
  }
 
  public static Track addVideoTrack (Movie sourceMovie,
                                     Movie targetMovie,
                                     int srcIn,
                                     int srcDuration,
                                     int targetTime)
      throws QTException { 
      // find first video track
      Track videoTrack = 
          sourceMovie.getIndTrackType (1,
                                       StdQTConstants.videoMediaType,
                                       StdQTConstants.movieTrackMediaType);
      if (videoTrack == null)
          throw new QTException ("can't find a video track");
      // add videoTrack to targetMovie
      Track newTrack =
          targetMovie.newTrack (videoTrack.getSize( ).getWidthF( ),
                                videoTrack.getSize( ).getHeightF( ),
                                1.0f);
      VideoMedia newMedia = 
          new VideoMedia (newTrack,
                          videoTrack.getMedia( ).getTimeScale( ),
                          new DataRef(new QTHandle( )));
      videoTrack.insertSegment (newTrack,
                                srcIn, // 0
                                srcDuration, // videoTrack.getDuration( )
                                targetTime);
      return newTrack;
  }
}

When run, this application queries the user to open a QuickTime movie. Then it opens a dialog to choose and configure the effect, as seen in Figure 9-6. Notice that a frame from the movie is used in the preview section of the dialog.

Figure 9-6. ParameterDialog for a one-source effect


After the effect is chosen, the new movie—consisting of just a video track and an effects track—is written to filter.mov. Figure 9-7 shows a video that is modified by the emboss effect.

Figure 9-7. Video track filtered through emboss effect


What just happened?

After grabbing the source movie's first video track and adding it as a video track in a new movie, the example creates an effects track. The video track's output is redirected by giving the effects track a reference to it, via the addReference( ) call.

Next, you need to set up the input map. This is a normal AtomContainer, into which you'll insert child atoms. First, create the "track modifier" atom, with the four-argument version of insertChild( )—this creates and returns a parent atom (the five-argument versions all create leaf atoms). To work, this atom requires two children: an atom of type kTrackModifierType whose data is the type of track being modified (videoMediaType in this case), and an atom of type kEffectDataSourceType whose data is a name for the track as a FOUR_CHAR_CODE int. Apple's recommended standard is that source tracks be named "srcA," "srcB," etc.; you can get this 4CC name with QTUtils.toOSType("srcA").

Again, there is an endianness issue—QuickTime expects what you're building to be big-endian, so you have to be careful to account for the endianness of the data you insert. In this case, the videoMediaType constant and the srcA name are native ints, so they need to be flipped to big-endian with EndianOrder.flipNativeToBigEndian32( ).

Now that it's initialized, set this atom aside while creating the effect and adding its sample to the effects media. Two important to-dos for filters are to ask the EffectsList constructor for only one-source effects (by passing 1 for the minimum and maximum number of sources to get effects for) and to provide the ParameterDialog with a Pict[] that contains an image from your source movie for previewing the effect. Once the effect has been added, provide the input map with a call to Media.setInputMap( ).

What about...

...applying the filter to just part of the source track? Ah, this will turn up a nasty surprise...go ahead and make the effect cover just half the length of the source video, by changing the duration parameters in effectsMedia.addSample( ) and effectsTrack.insertMedia( ) from sourceTrack.getDuration( ) to sourceTrack.getDuration( ) / 2. You might reasonably expect that halfway through your movie, the filter simply would go away, because the duration of the effect would have expired and the video would be the only valid media at that point. Instead, the display goes blank!

Here's the deal: using a track for an effect makes it usable only by the effect. Setting up the track reference redirects the output of the source video track into the effect.

So, what can you do about it? One option is to use two different video tracks in addition to the effect. The first is the source to the effect and the second is all the source media not to be used in the effect. In adding this second track, you set its in-time (the "destination in" argument of Track.insertSegment( )) to come after the end of the effect. A somewhat cheesier alternative is to add another, "no-op" effect, like a color conversion configured to not actually do anything, allowing the source video to get to the screen by way of the effect.
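
A hedged sketch of the first option, reusing Example 9-4's addVideoTrack( ) helper, assuming the effect covers only the first half of the source and that the two movies' time scales line up:

int effectDur = sourceMovie.getDuration( ) / 2;
// second copy of the source video, starting where the effect ends,
// so the rest of the clip reaches the screen directly
Track tailTrack = addVideoTrack (sourceMovie,  // source
                                 movie,        // target
                                 effectDur,    // srcIn: skip the effect-covered part
                                 sourceMovie.getDuration( ) - effectDur,
                                 effectDur);   // targetTime: right after the effect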

Note

The next lab shows the first technique.

Creating Two-Source Effects (Transitions)

Effects that combine two sources, such as dissolves and wipes, are called transitions. You've probably seen wipes on TV, and less frequently in film, where they're considered somewhat artificial because they call attention to themselves (the Star Wars films are probably the most prominent films to use wipes, perhaps as a nod to old black-and-white adventure films and weekly cliff-hangers).

Note

Technically, a cut from one scene to another is also a transition, but that doesn't involve any kind of effect.

To show off a transition, this lab will open two movies and create a user-selected transition between them.

How do I do that?

In coding terms, the only significant difference from a one-source effect is, predictably, that you need to set up an input map that references both source tracks for the effect.

But in practical terms, although you might apply a filter to a long sequence of video, a transition will typically be very short: only a few seconds at most. Because a video track used as a source to an effect is shown only as part of that effect, to show all of one video source transitioning into all of another, you need five tracks:

  • All of source A, up to the beginning of the transition
  • The portion of source A to be used for the transition (i.e., its last n seconds)
  • The portion of source B to be used for the transition (i.e., its first n seconds)
  • All of source B after the transition (i.e., everything but its first n seconds)
  • The effects track

So, to change the previous filter example into a transition example, ask for two source movies and create the new target movie:

Movie sourceAMovie = queryUserForMovie( );
Movie sourceBMovie = queryUserForMovie( );
QTFile movFile = new QTFile (new java.io.File("transition.mov"));
Movie movie =
  Movie.createMovieFile(movFile,
                        StdQTConstants.kMoviePlayer,
                        StdQTConstants.createMovieFileDeleteCurFile |
                        StdQTConstants.createMovieFileDontCreateResFile);

Next, add the four video tracks with the addVideoTrack( ) convenience method from the last lab (a sketch of it follows the calls), which grabs the first video track from the source, creates a new track, and inserts the specified segment of video media into the new track:

// all of source A up to the transition, starting at time 0
Track preEffectTrack = addVideoTrack (sourceAMovie,
     movie,
     0,
     sourceAMovie.getDuration( ) - TRANSITION_DURATION,
     0);
// the last TRANSITION_DURATION of source A, for use by the effect
Track sourceATrack = addVideoTrack (sourceAMovie,
     movie,
     sourceAMovie.getDuration( ) - TRANSITION_DURATION,
     TRANSITION_DURATION,
     sourceAMovie.getDuration( ) - TRANSITION_DURATION);

// the first TRANSITION_DURATION of source B, overlapping source A's tail
Track sourceBTrack = addVideoTrack (sourceBMovie,
     movie,
     0,
     TRANSITION_DURATION,
     movie.getDuration( ) - TRANSITION_DURATION);
// the rest of source B, picking up where the transition ends
Track postEffectTrack = addVideoTrack (sourceBMovie,
     movie,
     TRANSITION_DURATION,
     sourceBMovie.getDuration( ) - TRANSITION_DURATION,
     movie.getDuration( ));
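
For reference, here is a sketch of what addVideoTrack( ) plausibly looks like; the fixed 320 x 240 track size and the omission of error handling are simplifications of the real lab code:

// sketch of addVideoTrack( ): copy a segment of the source's first
// video track into a new track in the target movie
private static Track addVideoTrack (Movie sourceMovie,
                                    Movie targetMovie,
                                    int srcIn,
                                    int srcDuration,
                                    int dstIn)
    throws QTException {
  Track sourceTrack =
    sourceMovie.getIndTrackType (1,
                                 StdQTConstants.videoMediaType,
                                 StdQTConstants.movieTrackMediaType);
  Track newTrack = targetMovie.addTrack (320.0f, 240.0f, 0.0f);
  new VideoMedia (newTrack,
                  sourceTrack.getMedia( ).getTimeScale( ));
  sourceTrack.insertSegment (newTrack, srcIn, srcDuration, dstIn);
  return newTrack;
}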

After this, create the effects track as before, except that:

  • You ask the EffectsList constructor for two-source effects.
  • You provide two Picts to ParameterDialog, one from each source.
  • You create the input map with two track modifier atoms, each of which refers to a different track reference (as returned by calls to addReference( )). Their contents differ only by name: one is srcA, and the other is srcB:
// redirect both source tracks into the effects track
int trackARef =
  effectsTrack.addReference (sourceATrack,
                             StdQTConstants.kTrackModifierReference);
int trackBRef =
  effectsTrack.addReference (sourceBTrack,
                             StdQTConstants.kTrackModifierReference);

// add input reference atoms: one parent atom per source
Atom aInputAtom =
  inputMap.insertChild (null,
                        StdQTConstants.kTrackModifierInput,
                        trackARef,
                        0);
inputMap.insertChild (aInputAtom,
                      StdQTConstants.kTrackModifierType,
                      1,
                      0,
                      EndianOrder.flipNativeToBigEndian32 (StdQTConstants.videoMediaType));
inputMap.insertChild (aInputAtom,
                      StdQTConstants.kEffectDataSourceType,
                      1,
                      0,
                      EndianOrder.flipNativeToBigEndian32 (QTUtils.toOSType ("srcA")));

Atom bInputAtom =
  inputMap.insertChild (null,
                        StdQTConstants.kTrackModifierInput,
                        trackBRef,
                        0);
inputMap.insertChild (bInputAtom,
                      StdQTConstants.kTrackModifierType,
                      1,
                      0,
                      EndianOrder.flipNativeToBigEndian32 (StdQTConstants.videoMediaType));
inputMap.insertChild (bInputAtom,
                      StdQTConstants.kEffectDataSourceType,
                      1,
                      0,
                      EndianOrder.flipNativeToBigEndian32 (QTUtils.toOSType ("srcB")));

Because the effect has two sources, you need two calls to insert the corresponding source-name atoms into the effects description:

effect.insertChild (null,
                    StdQTConstants.kEffectSourceName,
                    1,
                    0,
                    EndianOrder.flipNativeToBigEndian32 (QTUtils.toOSType ("srcA")));
effect.insertChild (null,
                    StdQTConstants.kEffectSourceName,
                    2,
                    0,
                    EndianOrder.flipNativeToBigEndian32 (QTUtils.toOSType ("srcB")));

When run, this example queries the user twice for input movies, then shows a dialog of all installed two-source effects, as seen in Figure 9-8.

Figure 9-8. ParameterDialog for a two-source effect

Note

Run this example with ant run-ch09-transitiontrackbuilder.

Once an effect is selected, the resulting movie is saved as transition.mov. Figure 9-9 shows an example of a movie in mid-transition, using a vertical "barn door" wipe with 5-pixel-wide borders.

Figure 9-9. Two video tracks as sources to a transition effect

What just happened?

In general, this isn't very different from the one-source case: an effects description defines the effect, and an input map indicates where the sources come from. Probably the biggest hassle is that, because an effect by itself isn't very interesting, this example carves out the pre-effect and post-effect video as separate tracks, so you can actually see one video clip transitioning into another.

What about...

...all these tracks? Who sends out QuickTime movies with five tracks, one of which QuickTime Player identifies by the name of the effect, like "Wipe"? Fair enough: this is the form you would want your movie in while editing, so that you can make changes easily, tossing or reworking the effect on the fly with minimal CPU or I/O cost (because, as always, you're mostly just copying pointers). For end-user delivery, you would probably want to export the movie. Even if you export to another QuickTime movie (as opposed to a foreign format like MPEG-4), the export process will render and compress each frame of the transition, leaving you with just a single video track.
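
For example, a flattening export might look like the following sketch; the convertToFile( ) flags shown are an assumption based on typical QTJ export code, and export.mov is a made-up filename:

// export the five-track movie; the user picks a format, and the
// transition is rendered into a single flattened video track
QTFile exportFile = new QTFile (new java.io.File ("export.mov"));
movie.convertToFile (exportFile,
                     StdQTConstants.kQTFileTypeMovie,
                     StdQTConstants.kMoviePlayer,
                     IOConstants.smSystemScript,
                     StdQTConstants.showUserSettingsDialog |
                     StdQTConstants.movieToFileOnlyExport |
                     StdQTConstants.movieFileSpecValid);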

Also, is there a list of all the effects I can check out? Sure, but there are more than 100...too many to list here. If you look in Inside Macintosh: QuickTime (on Apple's web site or installed by developer tools for Mac OS X), the section "Built-in QuickTime Video Effects" lists all the effects provided by QuickTime, with examples and information about the parameters each one takes. Several dozen of them are defined and standardized by the industry trade group SMPTE (Society of Motion Picture and Television Engineers) and will be familiar to anyone who's worked with a television switcher. Remember, though, the user may have installed third-party effects, so it's important to be able to use the EffectsList to do runtime discovery of what's available to your program.
