QuickTime for Java: A Developer's Notebook/Capture

Much of this book has assumed you already had media of some kind to play and edit—but where does this media come from in the first place? Digital media has to come from one of two places: either it's completely synthetic or it's captured from a real-world source. Capture, via devices like microphones and video cameras, is far more common.

The problem is that capture doesn't officially work in QuickTime for Java. This dates back to Apple's Java 1.4.1 rearchitecture, which broke QTJ and forced massive changes to the API in QTJ 6.1. One thing that was not updated was the ability to get an on-screen component from a SequenceGrabber, QuickTime's capture component. Instead, Apple just put a statement in the QTJ 6.1 documentation:

Although sequence grabbing is currently not supported in QuickTime for Java 1.4.1, it may be provided in future releases.

But if you think back to how the QTJ 6.1 situation was described in Chapter 1, you might recall that QTJ classes that didn't require working with AWT—such as the quicktime.std classes that simply wrapped straight-C calls—were unaffected by the Java 1.4.1 changes and still worked. Given that, notice in the Javadoc the package called quicktime.std.sg, which contains the SequenceGrabber class among several others. Besides, capture, per se, doesn't necessarily imply using the screen, so shouldn't it still work?

The good news is that it does. In this chapter, I'll introduce the parts of the capture API that still work in QTJ, even without official support: capturing audio, capturing to disk, and even getting captured video on screen with a little QuickDraw voodoo. QTJ still needs proper support for on-screen, video-capture preview, but there's plenty to do in the meantime.

Capturing and Previewing Audio

Audio capture is a good place to start because it sidesteps the problem of the broken video preview. There's plenty to be learned just by opening up the default microphone and looking at the incoming level data—that is, how loud or soft the incoming sound is.

How do I do that?

Setting up audio capture requires a number of steps. You start by constructing a SequenceGrabber. This object coordinates all the capture channels (audio, video...even text capture) and lets you set capture parameters: whether to save the captured data to disk, the maximum amount of time to let the capture run, and so on.

Note

Don't scoff—there really are text-capture devices. For example, you could capture the closed captions off regular TV (also called "line 21" data).

Once you have the SequenceGrabber, you use an optional prepare( ) call to indicate whether you intend to preview the captured media, record it to disk, or both.

To work with sound, you create a sound channel by calling the SGSoundChannel constructor and passing in the SequenceGrabber. This object allows you to configure the audio capture, choose among audio capture devices (see the next lab), and get the device's driver. The driver, exposed by the SPBDevice class, provides methods for checking the input line level.

As an example, compile and run the AudioCapturePreview application as shown in Example 6-1. Note that you need to have at least one audio capture device hooked up to your computer. Most Macs come with a built-in microphone. If you don't have one, you can use a USB capture device (like a headset or external microphone) or a FireWire device (like an iSight).

Note

Compile and run this example from the book's downloadable code with ant run-ch06-audiocapture-preview.

Example 6-1. Previewing captured audio

package com.oreilly.qtjnotebook.ch06;
 
import quicktime.*;
import quicktime.io.*;
import quicktime.std.*;
import quicktime.std.sg.*;
import quicktime.std.movies.*;
import quicktime.std.image.*;
import quicktime.qd.*;
import quicktime.sound.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.Timer;
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
public class AudioCapturePreview extends Frame 
  implements ItemListener {
  static final Dimension meterDim = new Dimension (200, 25);
  Checkbox previewCheck;
  AudioLevelMeter audioLevelMeter;
  SequenceGrabber grabber;
  SGSoundChannel soundChannel;
  SPBDevice inputDriver;
  boolean grabbing = true;
  public AudioCapturePreview( ) throws QTException {
      super ("Audio Preview");
      QTSessionCheck.check( );
      setLayout (new GridLayout (3, 1));
      add (new Panel( )); // reserved for next lab
      previewCheck = new Checkbox ("Preview", false);
      previewCheck.addItemListener (this);
      add (previewCheck);
      audioLevelMeter = new AudioLevelMeter( );
      add (audioLevelMeter);
      // 4th row is reserved for later lab
      setUpAudioGrab( );
      grabbing = true;
  }
  public void itemStateChanged (ItemEvent e) {
      try {
          if (e.getSource( ) == previewCheck) {
              if (previewCheck.getState( ))
                  soundChannel.setVolume (1.0f);
              else
                  soundChannel.setVolume (0.0f);
          }
      } catch (QTException qte) {
          qte.printStackTrace( );
      }
  }
  protected void setUpAudioGrab( ) throws QTException {
      grabber = new SequenceGrabber( );
      soundChannel = new SGSoundChannel (grabber);
      System.out.println ("Got SGAudioChannel");
      System.out.println ("SGChannelInfo = " +
                          soundChannel.getSoundInputParameters( ));
      System.out.println ("SoundDescription = " + 
                          soundChannel.getSoundDescription( ));
      // prepare and start previewing
      grabber.prepare(true,false);
      soundChannel.setUsage (StdQTConstants.seqGrabPreview);
      soundChannel.setVolume (0.0f);
      grabber.startPreview( );
      inputDriver = soundChannel.getInputDriver( );
      inputDriver.setLevelMeterOnOff (true);
      int[] levelTest = inputDriver.getActiveLevels( );
      System.out.println (levelTest.length + " active levels");
      // set up thread to update level meter
      ActionListener timerCallback =
          new ActionListener( ) {
              public void actionPerformed(ActionEvent e) {
                  if (grabbing) {
                      try {
                          grabber.idle( );
                          audioLevelMeter.repaint( );
                      } catch (QTException qte) {
                          qte.printStackTrace( );
                      }
                  }
              }
          };
      Timer timer = new Timer (50, timerCallback);
      timer.start( );
  }
  public static void main (String[] args) {
      try {
          Frame f = new AudioCapturePreview( );
          f.pack( );
          f.setVisible(true);
      } catch (QTException qte) {
          qte.printStackTrace( );
      }
  }
  public class AudioLevelMeter extends Canvas {
      public void paint (Graphics g) {
          // get current level if available
          int level = 0;
          if (inputDriver != null) {
              try {
              int[] levels = inputDriver.getActiveLevels( );
                  if (levels.length > 0)
                      level = levels[0];
              } catch (QTException qte) {
                  qte.printStackTrace( );
              }
          }
          float levelPercent = level / 256f;
          System.out.println (level + ", " + levelPercent);
          // draw box
          g.setColor (Color.green);
          g.fillRect (0, 0,
                      (int) (levelPercent * getWidth( )),
                      getHeight( ));
      }
      public Dimension getPreferredSize( ) { return meterDim; }
  }
}

When run, the application brings up a small window with a green bar that indicates the current level on the line, as seen in Figure 6-1. At maximum input volume—if you're speaking loudly and directly into the microphone—it will stretch all the way to the right of the window.

Figure 6-1. Audio capture preview window

Audio capture preview window

There is also a Preview checkbox that is off initially. Clicking this will play the captured audio over the headset or speakers.

What just happened?

The constructor does some simple AWT business, adding the Preview checkbox and an AudioLevelMeter, which is an inner class that will be explained shortly. Then it calls setUpAudioGrab( ).

setUpAudioGrab( ) is responsible for initializing the audio capture. As described earlier, the first step is to create a new SequenceGrabber object. Next, tell the grabber what you intend to do with it, via the prepare( ) method, which takes two self-explanatory booleans: prepareForPreview and prepareForRecord.

Tip

You don't have to call prepare( ). If you don't, SequenceGrabber will take care of its setup when you start grabbing, possibly making the startup take longer.

You also need to tell the SGSoundChannel what you want to do via setUsage( ), inherited from SGChannel. As with all methods that take behavior flags, you logically OR together constants to describe your desired usage. In this case, seqGrabPreview indicates that the application is only previewing the captured sound, but you can use (and combine) four other usage constants:

seqGrabRecord
Include this if you want to record the captured media to disk.
seqGrabPlayDuringRecord
Add this to play while recording.
seqGrabLowLatencyCapture
Used to get the freshest frame possible (used for video conferencing and live image processing).
seqGrabAlwaysUseTimeBase
Used by video channels to get more accurate audio/video sync.
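
For example, to record to disk while also playing the audio through as it's captured (a hypothetical combination, not used in this lab's code), you would OR the constants together:

soundChannel.setUsage (StdQTConstants.seqGrabRecord |
                       StdQTConstants.seqGrabPlayDuringRecord);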

At this point, the capture is initialized. Begin capturing audio with SequenceGrabber's startPreview( ) method.

To create the level meter, it's necessary to get an SPBDevice, which provides low-level access to the incoming data. This object provides level metering as an array of ints: first enable monitoring with setLevelMeterOnOff(true), then call getActiveLevels( ). The returned ints range from 0 (silence) to 255 (maximum input volume). In the example, the AudioLevelMeter inner class gets the first level on each repaint and draws a box whose width is proportional to the audio level. A Swing Timer calls repaint( ) on the meter every 50 milliseconds to keep it up to date.

Note

There may be multiple levels in the array, usually two for stereo input.

The repaint thread also calls idle( ) on the SequenceGrabber, which is something you have to call as frequently as possible to give the SequenceGrabber time to operate.

Note

SequenceGrabber.idle( ) is a lot like "tasking" back in Chapter 2, except there's no convenience class to do it for you.
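
There's nothing stopping you from writing one, though. Here's a minimal sketch of such a convenience class (the class name and design are my own, not part of QTJ), wrapping the same Swing Timer pattern Example 6-1 uses:

import java.awt.event.*;
import javax.swing.Timer;
import quicktime.QTException;
import quicktime.std.sg.SequenceGrabber;

/** A minimal sketch of a "tasker"-style idler for a SequenceGrabber. */
public class GrabberIdler implements ActionListener {
    private final SequenceGrabber grabber;
    private final Timer timer;
    public GrabberIdler (SequenceGrabber grabber, int intervalMillis) {
        this.grabber = grabber;
        this.timer = new Timer (intervalMillis, this);
    }
    public void start() { timer.start(); }
    public void stop() { timer.stop(); }
    public void actionPerformed (ActionEvent e) {
        try {
            grabber.idle();   // give the grabber time to operate
        } catch (QTException qte) {
            qte.printStackTrace();
        }
    }
}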

What about...

...defaulting the volume off with SGSoundChannel.setVolume( )? This is a common practice because some users' speakers will be close enough to their microphones to cause feedback when previewing the audio to the speakers. On the other hand, users with headphones probably do want to hear the preview. So, the best practice is "default off, but let the user turn it on."

Warning

One thing this demo lacks is a call to SequenceGrabber.stop() when the user quits. This is something you should usually do, but I've left it out to make a point. On Mac OS X, if you don't stop the SequenceGrabber and you leave the volume on, you will keep grabbing sound—feedback included—even after the application quits. I've even seen this behavior survive a restart. So, try it out, don't blow your speakers, and then remember to have your programs turn off the volume and call SequenceGrabber.stop( ) when they quit.

Selecting Audio Inputs

It's not realistic to think the user has only one audio input device. The computer might be connected to a headset for audio conferencing, a webcam for video conferencing, and a camcorder for dumping pictures of the summer vacation into iMovie. Ideally, it should be possible to discover connected devices at runtime and specify which is to be used for capture.

How do I do that?

To provide a list of devices, you need to query the SGSoundChannel for the available devices and then present the choice to the user. So, take the code from the previous lab and add an AWT Choice called deviceChoice in the constructor (replacing a line with a comment that said "reserved for next lab"). Next, after the SGSoundChannel is created in setUpAudioGrab( ), insert this block of code to search for audio devices, adding the name of each to the deviceChoice:

// create list of input devices
SGDeviceList devices = soundChannel.getDeviceList(0);
int deviceCount = devices.getCount( );
for (int i=0; i<deviceCount; i++) {
  SGDeviceName deviceName = devices.getDeviceName(i);
  // is it available?
  if ((deviceName.getFlags( ) &
       StdQTConstants.sgDeviceNameFlagDeviceUnavailable) == 0)
      deviceChoice.add (deviceName.getName( ));
}

You need to update the itemStateChanged() callback to handle AWT events on the deviceChoice—in other words, when the user changes the selection. Fortunately, QuickTime allows you to change the input device by passing in a name, so switching devices is pretty easy. Add this to itemStateChanged( ), inside the try-catch block:

} else if (e.getSource( ) == deviceChoice) {
  System.out.println ("changed device to "+
                      deviceChoice.getSelectedItem( ));
  grabbing = false;
  // grabber.stop( );
  soundChannel.setDevice (deviceChoice.getSelectedItem( ));
  // also reset inputDriver
  inputDriver = soundChannel.getInputDriver( );
  inputDriver.setLevelMeterOnOff (true);
 
  grabbing = true;
}

The boolean named grabbing is a simple gate to keep the repaint thread from trying to get levels while this device change is underway, because the old inputDriver will be invalid once the new device is set.

A demo of this technique, SelectableAudioCapturePreview, is shown in Figure 6-2.

Figure 6-2. Discovering and displaying audio capture devices

Discovering and displaying audio capture devices

Note

Run this example with ant run-ch06-selectableaudiocapturepreview.

What just happened?

The key to switching capture devices is a single call, SGSoundChannel.setDevice( ), which lets you change devices mid-grab, without pausing or doing other reinitializations. It takes a device by name, the same name that was retrieved by walking through the SGDeviceList.

What about...

...the "0" parameter on getDeviceList( ) ? This method takes flags, only one of which is even relevant to QTJ.

Actually, it's easier to explain by starting further down, with the test for whether to add a device to the Choice. The SGDeviceName used to identify the capture devices wraps not just a name string, but also an int with some flag values. sgDeviceNameFlagDeviceUnavailable is the only publicly documented flag. As seen in this example, to test for whether such a flag is set, you AND the value with the flag you're interested in and check whether the result is nonzero. If so, it means that bit is set. So, in this case, if the value is 0, the device is available (literally, it's "not unavailable"), so it's OK to let the user select it.
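
In code, the availability test reduces to a bitwise AND (a restatement of the check in the snippet above):

boolean unavailable =
    (deviceName.getFlags() &
     StdQTConstants.sgDeviceNameFlagDeviceUnavailable) != 0;
if (!unavailable)
    deviceChoice.add (deviceName.getName());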

Returning to getDeviceList( ), the only flag available is sgDeviceListDontCheckAvailability, which skips the device availability check. That means the flag in SGDeviceName would never be set, so a device would never be reported as unavailable. That's clearly undesirable behavior here—you don't want to give the user an option that's only going to throw an exception when she chooses it.

Capturing Audio to Disk

Typically, you don't just capture media and immediately dispose of it—you want to save the media to disk as you capture so that you can use it later. Fortunately, the SequenceGrabber makes this pretty easy.

How do I do that?

Adding to the previous labs' code, the calls that set up the SequenceGrabber need to change to prepare for grabbing to disk. Specifically, the SGSoundChannel's setUsage( ) call gets an additional flag to indicate that it will be writing the captured audio to disk:

soundChannel.setUsage (StdQTConstants.seqGrabPreview |
                     StdQTConstants.seqGrabRecord);

Next, add a call to give the user an opportunity to configure the audio capture:

soundChannel.settingsDialog( );

Warning

The settingsDialog( ) call will crash Java 1.4.2 on Mac OS X if called from the AWT event-dispatch thread. Yes, it's a bug. To work around this until the bug is fixed, you can stash the call in another thread and block on it. For instance, in this example you could replace the settingsDialog( ) call with the following:

final SGSoundChannel sc = soundChannel;
Thread t = new Thread( ) {
    public void run( ) {
        try {
            sc.settingsDialog( );
        } catch (QTException qte) {
            qte.printStackTrace( );
        }
    }
};
t.start( );
while (t.isAlive( ))
    Thread.yield( );

After starting the preview, tell the SequenceGrabber where it should save the captured audio:

// create output file
grabFile = new QTFile (new java.io.File ("audiograb.mov"));
if (grabFile.exists( ))
  grabFile.delete( );
grabber.setDataOutput(grabFile,
                    StdQTConstants.seqGrabToDisk 
                    //seqGrabDontAddMovieResource);
                    );

Finally, start recording to this file with startRecord( ):

grabber.startRecord( );

The last step is to provide a Stop button because the data is written to disk only when the SequenceGrabber.stop() method is called. This Stop button is added near the bottom of the constructor, before the SequenceGrabber is set up:

stopButton = new Button ("Stop");
stopButton.addActionListener (this);
add (stopButton);

The button needs an ActionListener to make the SequenceGrabber.stop( ) call and shut down the sample program:

public void actionPerformed (ActionEvent e) {
  if (e.getSource( ) == stopButton) {
      System.out.println ("Stop grabbing");
      try {
          if (grabber != null) {
              grabber.stop( );
          }
      } catch (QTException qte) {
          qte.printStackTrace( );
      } finally {
          System.exit (0);
      }
  }
}

Note

Run this example with ant run-ch06-audiocapturetodisk.

When this AudioCaptureToDisk sample program runs, the user sees an audio settings dialog, as shown in Figure 6-3.

Figure 6-3. Audio channel settings dialog

Audio channel settings dialog

After OKing the settings dialog, the capture begins. When the user clicks Stop, the SequenceGrabber writes and closes the audiograb.mov file and the program exits.

What just happened?

Requesting that the SequenceGrabber save to disk requires just the few extra steps detailed earlier:

  1. Add seqGrabRecord to the channel's setUsage( ) call.

    Tip

    At this point, you can optionally call the channel's settingsDialog( ) to give the user a chance to configure the capture.

  2. Call setDataOutput( ) on the SequenceGrabber.
  3. Call SequenceGrabber.startRecord().

Also, the SequenceGrabber must be explicitly stop( )ped to write the captured data to disk.
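
Condensed, the record-to-disk additions look like this (a restatement of the code above, with the error handling and file-deletion housekeeping omitted):

soundChannel.setUsage (StdQTConstants.seqGrabPreview |
                       StdQTConstants.seqGrabRecord);
soundChannel.settingsDialog();   // optional; see the warning above
grabber.startPreview();
grabber.setDataOutput (grabFile, StdQTConstants.seqGrabToDisk);
grabber.startRecord();
// ...idle() repeatedly while grabbing, then, when the user clicks Stop:
grabber.stop();                  // writes and closes the file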

What about...

...the SequenceGrabber.prepare( ) call? If the second argument is prepareForRecord, why isn't it set to true for this example? Well, inexplicably, when I did set it to true, I started getting erroneous "dskFulErr" exceptions every time I called idle( ), even though I had 9 GB free. No, I don't know why—it's totally insane. But given the choice between what should work and what does work, I'll go with the latter.

And what is the deal with the settings dialog? Could that have been used in the preview examples? Yes, absolutely. In fact, it's important to let the user adjust things like gain, or to specify a compressor before grabbing begins. But that's more important when you're actually grabbing to disk, so I held off introducing it until now.

Note

Actually, it's usually best to capture uncompressed, so the CPU doesn't get bogged down with compression and possibly slow down the capture rate.

Capturing Video to Disk

Audio capture is nice, but if you bought this book because the sticky-note on the cover lists "capture" as one of the topics to be covered, you probably figured it meant video capture. Is there an iSight on the top of your monitor that wants some attention? OK, here's how to turn it on and grab some video.

How do I do that?

As with audio capture, the basics of setting up capture are:

  1. Create a SequenceGrabber.
  2. Create and configure (with setUsage( ) and the settingsDialog( )) the channels you're interested in—in this case, an SGVideoChannel.
  3. Call SequenceGrabber.setDataOutput( ) to indicate the file to capture to.
  4. Call SequenceGrabber.startRecord( ) to begin grabbing to disk.
  5. Finish up with SequenceGrabber.stop( ).

There is, however, a big difference with video. With no on-screen preview component available in QTJ 6.1, you must indicate where the SequenceGrabber can draw to. The workaround is to create an off-screen QDGraphics and hand it to the SequenceGrabber via the setGWorld( ) call.

The VideoCaptureToDisk program, presented in Example 6-2, offers a bare-bones video capture to a file called videograb.mov.

Note

Run this example with ant run-ch06-videocapturetodisk.

Example 6-2. Recording captured video to disk

package com.oreilly.qtjnotebook.ch06;
 
import quicktime.*;
import quicktime.io.*;
import quicktime.std.*;
import quicktime.std.sg.*;
import quicktime.std.movies.*;
import quicktime.std.image.*;
import quicktime.qd.*;
import quicktime.sound.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.Timer;
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
public class VideoCaptureToDisk extends Frame 
  implements ActionListener {
  SequenceGrabber grabber;
  SGVideoChannel videoChannel;
  QDGraphics gw;
  QDRect grabBounds;
  boolean grabbing;
  Button stopButton;
  QTFile grabFile;
  public VideoCaptureToDisk( ) throws QTException {
      super ("Video Capture");
      QTSessionCheck.check( );
      setLayout (new GridLayout (2, 1));
      add (new Label ("Capturing video..."));
      stopButton = new Button ("Stop");
      stopButton.addActionListener (this);
      add (stopButton);
      setUpVideoGrab( );
  }
  public void actionPerformed (ActionEvent e) {
      if (e.getSource( ) == stopButton) {
          System.out.println ("Stop grabbing");
          try {
              grabbing = false;
              if (grabber != null) {
                  grabber.stop( );
              }
          } catch (Exception ex) {
              ex.printStackTrace( );
          } finally {
              System.exit (0);
          }
      }
  }
 
  protected void setUpVideoGrab( ) throws QTException {
      grabber = new SequenceGrabber( );
      System.out.println ("got grabber");
      // force an offscreen gworld
      grabBounds = new QDRect (320, 240);
      gw = new QDGraphics (grabBounds);
      grabber.setGWorld (gw, null);
      // get videoChannel and set its bounds
      videoChannel = new SGVideoChannel (grabber);
      System.out.println ("Got SGVideoChannel");
      videoChannel.setBounds (grabBounds);
      // get settings
      // yikes! this crashes java 1.4.2 on mac os x!
      videoChannel.settingsDialog( );
      // prepare and start previewing
      // note - second prepare arg should seemingly be false,
      // but if it is, you get erroneous dskFulErr's
      videoChannel.setUsage (StdQTConstants.seqGrabRecord);
      grabber.prepare(false, true);
      grabber.startPreview( );
      // create output file
      grabFile = new QTFile (new java.io.File ("videograb.mov"));
      grabber.setDataOutput(grabFile,
                            StdQTConstants.seqGrabToDisk 
                            //seqGrabDontAddMovieResource);
                            );
      grabber.startRecord( );
      grabbing = true;
      // set up thread to idle
      ActionListener timerCallback =
          new ActionListener( ) {
              public void actionPerformed(ActionEvent e) {
                  if (grabbing) {
                      try {
                          grabber.idle( );
                          grabber.update(null);
                      } catch (QTException qte) {
                          qte.printStackTrace( );
                      }
                  }
              }
          };
      Timer timer = new Timer (50, timerCallback);
      timer.start( );
  }
  public static void main (String[] args) {
      try {
          Frame f = new VideoCaptureToDisk( );
          f.pack( );
          f.setVisible(true);
      } catch (QTException qte) {
          qte.printStackTrace( );
      }
  }
}

When it starts up, the program shows a settings dialog for your default camera, as seen in Figure 6-4. The video settings dialog is even more important for users than the audio settings dialog, as the video dialog gives them a chance to aim the camera, check the lighting, adjust brightness and color, etc.

Figure 6-4. Video channel settings dialog

Video channel settings dialog

Warning

Just like its audio equivalent, calling SGVideoChannel.settingsDialog( ) will crash the virtual machine in Mac OS X Java 1.4.2 if called from the AWT event-dispatch thread. And just as before, you can work around this bug by firing off the settingsDialog( ) call in its own thread and blocking until the thread finishes. I've filed it as a bug, but feel free to file a duplicate to get Apple's attention.

Once you click the Stop button, the video is written to videograb.mov and the application terminates. You can view the captured movie in any QuickTime application—Figure 6-5 shows it in the BasicQTController demo from Chapter 2.

Figure 6-5. Captured video playing in a window

Captured video playing in a window

What just happened?

The critical step in doing video capture, at least until QuickTime adds on-screen preview, is to create an off-screen QDGraphics and set that as the SequenceGrabber's GWorld:

// force an offscreen gworld
grabBounds = new QDRect (320, 240);
gw = new QDGraphics (grabBounds);
grabber.setGWorld (gw, null);

In previous versions of QTJ, this wasn't necessary because the on-screen preview provided a GWorld that the grabber could use. With no on-screen preview currently available in QTJ, this is a handy technique.

The next step is to create an SGVideoChannel from the SequenceGrabber and set its bounds. After optionally showing a settings dialog, set the usage (just seqGrabRecord this time because there's no preview) and then call prepare(false, true), which prepares the SequenceGrabber for recording but not for previewing.

Note

This time, setting the second prepare( ) argument to true is the right thing to do.

Just as with audio, the final steps are to call setDataOutput( ) on the SequenceGrabber, followed by startRecord( ). When SequenceGrabber.stop( ) is called, the file is written out and closed up.

What about...

...using this on Windows...it doesn't find my webcam! This example presupposes that a video digitizer component for your camera will be found, and a lot of video cameras don't ship with a Windows QuickTime "vdig", supporting only Microsoft's media APIs instead. However, there's hope: you can use SoftVDIG from Abstract Plane (http://www.abstractplane.com.au), which acts as a proxy to bring captured video from the Microsoft DirectShow world into QuickTime.

Capturing Audio and Video to the Same File

So, it's possible to capture audio and video in isolation. With QuickTime's editing API, it would be possible to put them in the same movie by adding each as a separate track (see Chapter 3). But wouldn't it be nice to just capture both audio and video into the same file at once, presumably keeping them in sync along the way? Fortunately, SequenceGrabber supports this, too.

How do I do that?

Starting with the previous lab's video-only example, you just need to add an SGSoundChannel in the setUpVideoGrab( ) method:

soundChannel = new SGSoundChannel (grabber);

The setUsage( ) and prepare( ) calls are identical to what was shown in the audio-only and video-only labs:

// prepare and start previewing
videoChannel.setUsage (StdQTConstants.seqGrabRecord);
soundChannel.setUsage (StdQTConstants.seqGrabPreview |
                     StdQTConstants.seqGrabRecord);
soundChannel.setVolume (0.0f);
grabber.prepare(false, true);
grabber.startPreview( );

Beyond that, everything is the same as in the video-only case. Because the setDataOutput() call is made on the SequenceGrabber—not just on an individual channel—the grabber writes data from all the channels it's capturing into the same file, called audiovideograb.mov in this case.
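
Condensed, the combined setup looks like this (a sketch assembled from this lab's and the previous lab's snippets; exception handling omitted):

grabber = new SequenceGrabber();
grabBounds = new QDRect (320, 240);
grabber.setGWorld (new QDGraphics (grabBounds), null);
videoChannel = new SGVideoChannel (grabber);
videoChannel.setBounds (grabBounds);
soundChannel = new SGSoundChannel (grabber);
videoChannel.setUsage (StdQTConstants.seqGrabRecord);
soundChannel.setUsage (StdQTConstants.seqGrabPreview |
                       StdQTConstants.seqGrabRecord);
soundChannel.setVolume (0.0f);
grabber.prepare (false, true);
grabber.startPreview();
grabFile = new QTFile (new java.io.File ("audiovideograb.mov"));
grabber.setDataOutput (grabFile, StdQTConstants.seqGrabToDisk);
grabber.startRecord();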

Note

Run this example with ant run-ch06-audiovideocapturetodisk.

What just happened?

For once, the SequenceGrabber APIs behave pretty much as you might expect them to. With no obvious prohibition on creating both audio and video channels from the same SequenceGrabber, and assigning the grabber's output to a file, the captured data from both channels goes into a single movie file.

Making a Motion Detector

Capture isn't just about writing data to disk. You can grab images as they come in and analyze or manipulate them.

Tip

A great example of "grabbing different" is Lisa Lippincott's ScrollPlate, a demo shown at ADHOC 2004. She used her iSight camera as a scroll wheel, by holding up a Styrofoam plate with either a large green arrow (for up) or a large red arrow (for down). Her code presumably grabbed from the camera, looked at the grabbed image for an abundance of green or red, and scrolled the top window in response.

This example offers a simple motion detector, which will display an alarm message if two subsequent grabs are markedly different. The idea is that if the camera is not moving, a significant difference between two subsequent grabs indicates that something in view of the camera has moved.

How do I do that?

In this case, you set up video-only capture, but instead of saving the data to disk, you do a little bit of image processing each time you call idle( ). Specifically, there is a method in QTImage called getSimilarity( ), which compares two images (one as a QDGraphics and the other as an EncodedImage). Motion—objects entering, exiting, or significantly moving within the camera's field of vision—can be understood as a significant difference between two consecutive grabbed images.

Note

See Chapter 5 for more on QTImage, QDGraphics, and EncodedImage.

Unfortunately, this requires jumping through quite a few QuickDraw hoops once an image is grabbed from the camera. Example 6-3 shows the SimpleMotionDetector code.

Note

Run this example with ant run-ch06-simplemotiondetector.

Example 6-3. Detecting motion by comparing grabbed images

package com.oreilly.qtjnotebook.ch06;
 
import quicktime.*;
import quicktime.io.*;
import quicktime.std.*;
import quicktime.std.sg.*;
import quicktime.std.movies.*;
import quicktime.std.movies.media.*;
import quicktime.std.image.*;
import quicktime.qd.*;
import quicktime.sound.*;
import quicktime.app.view.*;
import quicktime.util.*;
import java.awt.*;
import java.awt.event.*;
import javax.swing.Timer;
import java.text.*;
import com.oreilly.qtjnotebook.ch01.QTSessionCheck;
public class SimpleMotionDetector extends Frame 
  implements ActionListener {
  SequenceGrabber grabber;
  SGVideoChannel videoChannel;
  QDGraphics gw;
  QDRect grabBounds;
  boolean grabbing;
  Button stopButton;
  Pict grabPict;
  byte[] importPictBytes;
  Component importerComponent;
  Label motionLabel;
  GraphicsImporter importer;
  RawEncodedImage lastImage;
  ImageDescription lastImageDescription;
  byte[] lastImageBytes;
  QDGraphics newImageGW;
  int thumbcount = 0;
  // lesser numbers are more different (0 == totally different)
  // public static float trigger = 0.0002f;
  public static float trigger = 0.002f;
  // formats the similarity printout below; this declaration is missing
  // from the original listing, so a four-place DecimalFormat is assumed
  NumberFormat formatter = new DecimalFormat ("0.0000");
 
  public SimpleMotionDetector( ) throws QTException {
      super ("Simple Motion Detector");
      QTSessionCheck.check( );
      setLayout (new BorderLayout( ));
      motionLabel = new Label ( );
      motionLabel.setForeground (Color.red);
      add (motionLabel, BorderLayout.NORTH);
      stopButton = new Button ("Stop");
      stopButton.addActionListener (this);
      add (stopButton, BorderLayout.SOUTH);
      importer = new GraphicsImporter (StdQTConstants.kQTFileTypePicture);
      importerComponent =
          QTFactory.makeQTComponent(importer).asComponent( );
      add (importerComponent, BorderLayout.CENTER);
      setUpVideoGrab( );
  }
  public void actionPerformed (ActionEvent e) {
      if (e.getSource( ) == stopButton) {
          System.out.println ("Stop grabbing");
          try {
              grabbing = false;
              if (grabber != null) {
                  grabber.stop( );
              }
          } catch (Exception ex) {
              ex.printStackTrace( );
          } finally {
              System.exit (0);
          }
      }
  }
  protected void setUpVideoGrab( ) throws QTException {
      grabber = new SequenceGrabber( );
      System.out.println ("got grabber");
      // force an offscreen gworld
      grabBounds = new QDRect (320, 240);
      gw = new QDGraphics (grabBounds);
      grabber.setGWorld (gw, null);
      // get videoChannel and set its bounds
      videoChannel = new SGVideoChannel (grabber);
      System.out.println ("Got SGVideoChannel");
      videoChannel.setBounds (grabBounds);
      // get settings
      // yikes! this crashes java 1.4.2 on mac os x!
      // videoChannel.settingsDialog( );
      // prepare and start previewing
      videoChannel.setUsage (StdQTConstants.seqGrabPreview);
      grabber.prepare(true, false);
      grabber.startPreview( );
      // get first grab, so we're ready
      // to calc diff's and draw component
      scanForDifference( );
      updateImportedPict( );
      grabbing = true;
      // set up thread to idle
      ActionListener timerCallback =
          new ActionListener( ) {
              public void actionPerformed(ActionEvent e) {
                  if (grabbing) {
                      try {
                          grabber.idle( );
                          grabber.update(null);
                          scanForDifference( );
                          updateImportedPict( );
                      } catch (QTException qte) {
                          qte.printStackTrace( );
                      }
                  }
              }
          };
      Timer timer = new Timer (2000, timerCallback);
      timer.start( );
  }
  protected void scanForDifference( ) throws QTException {
      // this seems like overkill, but the GW we give
      // the grabber doesn't get updated.  Picts returned
      // from grabber are different each time, so use 'em
      if (newImageGW == null)
          newImageGW = new QDGraphics (grabBounds);
      grabPict = grabber.grabPict (grabBounds, 0, 0);
      grabPict.draw (newImageGW, grabBounds);
      if (lastImage != null) {
          // compare to last image
          float similarity = QTImage.getSimilarity (newImageGW,
                                                    grabBounds,
                                                    lastImageDescription,
                                                    lastImage);
          System.out.println ("similarity =  = " +
                              formatter.format(similarity));
          if (similarity < trigger) {
              System.out.println ("*** Motion detect ***");
              motionLabel.setText ("motion detect");
          } else {
              motionLabel.setText ("");
          }
      }
      // create a new lastImage from grabber GWorld
      int bufSize =
          QTImage.getMaxCompressionSize (newImageGW,
                                         newImageGW.getBounds( ),
                                         0,
                                         StdQTConstants.codecNormalQuality,
                                         StdQTConstants.kRawCodecType,
                                         CodecComponent.anyCodec);
      // make new lastImage
      lastImageBytes = new byte[bufSize];
      lastImage = new RawEncodedImage (lastImageBytes);
      lastImageDescription =
          QTImage.compress(newImageGW,
                           newImageGW.getBounds( ),
                           StdQTConstants.codecNormalQuality,
                           StdQTConstants.kRawCodecType,
                           lastImage);
  }

  protected void updateImportedPict( ) throws QTException {
      importPictBytes = new byte [grabPict.getSize( ) + 512];
      grabPict.copyToArray (0,
                            importPictBytes,
                            512,
                            importPictBytes.length - 512);
      Pict wrapperPict = new Pict (importPictBytes);
      DataRef ref = new DataRef (wrapperPict,
                                 StdQTConstants.kDataRefQTFileTypeTag,
                                 "PICT");
      importer.setDataReference (ref);
      importer.draw( );
      if (importerComponent != null)
          importerComponent.repaint( );
      // wrapperPict.disposeQTObject( );
  }
 
  public static void main (String[] args) {
      try {
          Frame f = new SimpleMotionDetector( );
          f.pack( );
          f.setVisible(true);
      } catch (QTException qte) {
          qte.printStackTrace( );
      }
  }
}

While the application runs, if two consecutive frames differ by more than a specified amount, the label "motion detect" appears at the top of the window. Figure 6-6 shows the running application.

Figure 6-6. Video motion detector window

Video motion detector window

What just happened?

This is a huge example, but much of it draws on the video-grabbing techniques of the previous two labs. setUpVideoGrab( ) sets up the SequenceGrabber for grabbing video, but in this case it doesn't need to save to disk, so the setUsage( ) argument is seqGrabPreview, and the arguments to prepare( ) are true and false (for preview and record, respectively). A Swing Timer calls back every two seconds—the long delay is intentional, so the potential for change between grabbed frames is greater—and calls the SequenceGrabber's idle( ) and update( ) methods, followed by calls to the brains of this example: scanForDifference( ) and updateImportedPict( ).

scanForDifference( ) evaluates the difference between the current frame and the last one. It does this by grabbing a Pict from the SequenceGrabber and drawing it into a QDGraphics (also known as a GWorld). It compares this GWorld to an EncodedImage of the last grab via the QTImage.getSimilarity( ) method, which returns a float expressing the similarity of the two grabbed images: 0 means the images are totally different, and 1 means they're identical. At the end of the method, QTImage.compress( ) compresses the grabbed GWorld into a new EncodedImage for use on the next call to scanForDifference( ).

Note

It might be better to call scanForDifference( ) on another thread, so the image analysis doesn't block the repeated calls to SequenceGrabber.idle( ).

updateImportedPict( ) updates a GraphicsImporter that provides the preview image in the middle of the window. This uses the Pict-to-GraphicsImporter trick introduced in Chapter 5's Section 5.4 lab. In this case, it's used not to get a Java AWT Image, but to get new pixels into a GraphicsImporter, which is wired up to a QTComponent for on-screen preview.

What about...

...the ideal value for triggering a difference? It probably depends on lighting, your camera, and other factors. In a professional application, you'd want to give the user a slider or some similar means of configuring the sensitivity of the detection.
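
For instance, an AWT Scrollbar could drive the trigger field directly (a sketch only; the mapping from scrollbar position to threshold is a guess you would tune empirically):

// assumes this runs in the SimpleMotionDetector constructor
Scrollbar sensitivity =
    new Scrollbar (Scrollbar.HORIZONTAL, 20, 1, 1, 100);
sensitivity.addAdjustmentListener (new AdjustmentListener() {
    public void adjustmentValueChanged (AdjustmentEvent e) {
        // map positions 1-100 to triggers of roughly 0.0001-0.01
        trigger = e.getValue() / 10000f;
    }
});
// Example 6-3 already puts the Stop button in BorderLayout.SOUTH,
// so a real UI would rearrange the layout to make room
add (sensitivity, BorderLayout.EAST);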

Also, there seems to be a lot of inefficient code here, particularly with drawing into the newImageGW. Why is that necessary when the Grabber was initially set up with a brand-new off-screen QDGraphics/GWorld? This, admittedly, is weird. When I was debugging, I found that the GWorld used to set up the Grabber is drawn to once and never again. On the other hand, the Pict generated from SequenceGrabber.grabPict( ) is always fresh, so that's what's used for testing similarity. However, to apply the getSimilarity( ) method, you need to have a GWorld, so you Pict.draw( ) the pixels from the Pict into the GWorld.

Come to think of it, with this application updating the component with a new grab every couple of seconds, isn't that effectively an on-screen preview? Yes, it is, in an extraordinarily roundabout way. You could take out the motion-detecting stuff and make a preview component by just grabbing a Pict each time, making a new Pict with a 512-byte header, setting the GraphicsImporter to read that, and calling GraphicsImporter.draw( ) to draw into its on-screen component. I didn't split that out as its own lab because the performance is pathologically bad (one frame per second—at best), and because it's an awkward workaround in lieu of a better way of getting a component from a SequenceGrabber. Presumably, someday there will be a proper call to get a QTComponent from a SequenceGrabber—maybe another overload of QTFactory.makeQTComponent( )—and kludgery like this won't be necessary.
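
The core of that preview loop, pulled from Example 6-3's scanForDifference( ) and updateImportedPict( ) methods, would look something like this (a sketch, subject to the performance caveat just mentioned):

// inside the Swing Timer callback, in place of the motion detection
grabber.idle();
Pict grabPict = grabber.grabPict (grabBounds, 0, 0);
// prepend the 512-byte header the Pict would have as a file on disk
byte[] pictBytes = new byte [grabPict.getSize() + 512];
grabPict.copyToArray (0, pictBytes, 512, pictBytes.length - 512);
Pict wrapperPict = new Pict (pictBytes);
DataRef ref = new DataRef (wrapperPict,
                           StdQTConstants.kDataRefQTFileTypeTag, "PICT");
importer.setDataReference (ref);
importer.draw();               // renders into the QTComponent
importerComponent.repaint();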
