Example GStreamer Pipelines

From IGEP - ISEE Wiki


Notes

To make the following examples work, I had to install (apt-get install <package>) the following packages on the target:

  • gstreamer0.10-tools
  • gstreamer0.10-plugins-base
  • gstreamer0.10-alsa for alsasrc and alsasink
  • gstreamer0.10-plugins-ugly for mad

You may also want to install gstreamer0.10-plugins-good and gstreamer0.10-plugins-bad.
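If you prefer a single command, the packages listed above can be installed in one shot (package names as listed; availability depends on your distribution's feeds):

```shell
# Install the GStreamer 0.10 packages listed above in one go.
apt-get install gstreamer0.10-tools gstreamer0.10-plugins-base \
    gstreamer0.10-alsa gstreamer0.10-plugins-ugly
```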

For all examples I had to run gst-launch-0.10 instead of gst-launch.
When using the pipelines that use the TI codecs on the DSP, make sure you execute the gst-launch command in the directory where the codec server (cs.x64P) is present.
Alternatively, you can create a link to the codec server in the directory where you execute your command.
If you do not do this, you will get an error like gst-launch-0.10: BufTab.c:440: BufTab_getNumBufs: Assertion `hBufTab' failed
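As a sketch of the link workaround, you can symlink the codec server into your working directory before launching. The codec server path below is a placeholder; adjust it to wherever cs.x64P actually lives on your filesystem:

```shell
# Hypothetical codec server location -- adjust to your image's actual path.
CODEC_SERVER=/usr/share/ti/gst/omap3530/cs.x64P
# Link the codec server into the current directory so gst-launch can find it.
ln -sf "$CODEC_SERVER" ./cs.x64P
```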

Purpose

This page provides example pipelines that can be copied to the command line to demonstrate various GStreamer operations. Some of the pipelines may need modification for things such as file names, IP addresses, etc.

Refer to this GStreamer article for more information on downloading and building TI GStreamer elements.

Testing

These pipelines have not yet undergone any extensive testing. If you find an error in a pipeline, please correct it.

Media files

You should be able to use any audio and video media file that conforms to the appropriate standard.

Creating an AVI file

The following ffmpeg command takes a .mov file (say, from the Apple movie trailers site) and makes an AVI file. Run the command on your host computer.

ffmpeg -i tropic_thunder-tlr1a_720p.mov -r 60 -b 6000000 -vcodec mpeg2video -ab 48000000 -acodec libmp3lame -s 1280x544 tropic.avi

Supported Platforms

The following is a list of supported platforms, with links that jump directly to the pipeline examples for each platform.

OMAP35x

This section covers pipelines for common use cases for the OMAP3530 or DM3730 processor.

Environment Requirements

cd /opt/gstreamer_demo/dmxxx/
./loadmodules.sh
export GST_REGISTRY=/tmp/gst_registry.bin
export LD_LIBRARY_PATH=/opt/gstreamer/lib
export GST_PLUGIN_PATH=/opt/gstreamer/lib/gstreamer-0.10
export PATH=/opt/gstreamer/bin:$PATH
cat /dev/zero > /dev/fbx 2> /dev/null

If you use IGEP GST FRAMEWORK 2.00.20 you can use "omapdmaifbsink" instead of "TIDmaiVideoSink" to display the video inside the X windowing system.
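For example, the video loopback pipeline below could then be written as follows (untested sketch; omapdmaifbsink may not accept the TIDmaiVideoSink-specific properties, so they are dropped):

```shell
# Video test pattern displayed inside X via omapdmaifbsink
# (requires IGEP GST FRAMEWORK 2.00.20 or later).
gst-launch -v videotestsrc ! omapdmaifbsink sync=false
```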

Loopback: Video

gst-launch -v videotestsrc ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD accelFrameCopy=FALSE sync=false

Loopback: Audio

To get access to the alsasrc and alsasink plugins, run 'apt-get install gstreamer0.10-alsa' on the IGEP board.

gst-launch audiotestsrc freq=1000 num-buffers=100 ! alsasink

If you want to route audio in to audio out (your very own P.A. system), try:

gst-launch alsasrc num-buffers=1000 ! alsasink

If you get a "Could not open audio device for recording" error, your ALSA configuration is likely incorrect. I fixed it with

mv /etc/asound.conf /etc/asound.conf.orig

to move the ALSA configuration file out of the way.

Loopback: Audio+Video

No pipelines here yet. Please feel free to add your own.
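As a starting point, gst-launch can run two independent pipelines in a single invocation, so the video and audio loopback examples above can be combined (untested sketch):

```shell
# Two independent pipelines in one gst-launch invocation:
# a video test pattern to the LCD and a 1 kHz test tone to ALSA.
gst-launch videotestsrc ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD accelFrameCopy=FALSE sync=false \
    audiotestsrc freq=1000 ! alsasink
```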

Decode Video files

H.264/VGA:

gst-launch -v filesrc location=sample.264 ! TIViddec2 codecName=h264dec engineName=codecServer ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD sync=false

MPEG-4/VGA:

gst-launch -v filesrc location=sample.m4v ! TIViddec2 codecName=mpeg4dec engineName=codecServer ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD sync=false

MPEG-2/VGA:

gst-launch -v filesrc location=sample.m2v ! TIViddec2 codecName=mpeg2dec engineName=codecServer ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD sync=false

Decode Audio Files

AAC:

gst-launch -v filesrc location=sample.aac ! TIAuddec1 codecName=aachedec engineName=codecServer ! alsasink sync=false

Decode .MP4 Files

The following pipeline assumes you have a VGA .MP4 file with H.264 Video and AAC Audio.

gst-launch -v filesrc location=sample.mp4 ! qtdemux name=demux demux.audio_00 ! queue max-size-buffers=8000 max-size-time=0 max-size-bytes=0 ! TIAuddec1 ! alsasink demux.video_00 ! queue ! TIViddec2 ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD

Decode .AVI Files

The following pipeline assumes you have a VGA .AVI file with MPEG-2 or MPEG-4 video and MP1L2 or MP3 audio.

gst-launch -v filesrc location=sample.avi ! avidemux name=demux demux.audio_00 ! queue max-size-buffers=1200 max-size-time=0 max-size-bytes=0 ! mad ! alsasink demux.video_00 ! queue !  TIViddec2 ! queue max-size-buffers=2 max-size-time=0 max-size-bytes=0 ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD

Decode .TS Files

The following pipeline assumes you have a VGA .TS file with H.264 video and MP1L2 or MP3 audio.

gst-launch filesrc location=sample.ts ! typefind ! mpegtsdemux name=demux demux. ! queue max-size-buffers=1200 max-size-time=0 max-size-bytes=0 ! typefind ! mad ! alsasink demux. ! typefind ! TIViddec2 ! queue max-size-buffers=2 max-size-time=0 max-size-bytes=0 ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD

Encode Video Files

H.264/QVGA:

gst-launch -v videotestsrc num-buffers=2000 ! TIVidenc1 codecName=h264enc engineName=codecServer ! filesink location=sample.264

MPEG-4/QVGA:

gst-launch -v videotestsrc num-buffers=2000 ! TIVidenc1 codecName=mpeg4enc engineName=codecServer ! filesink location=sample.m4v

Encode Video in Container

Encode H.264 in quicktime container (Capture):

gst-launch -v videotestsrc num-buffers=2000 ! TIVidenc1 codecName=h264enc engineName=codecServer byteStream=FALSE ! qtmux ! filesink location=sample.mp4


Image Encode

A simple pipeline to encode a raw UYVY image as JPEG (YUV420P output color space):

gst-launch filesrc location=sample.yuv ! TIImgenc1 resolution=720x480 iColorSpace=UYVY oColorSpace=YUV420P qValue=75 ! filesink location=sample.jpeg

Image Decode

A simple pipeline that converts a JPEG image into UYVY format.

gst-launch filesrc location=sample.jpeg ! TIImgdec1 codecName=jpegdec engineName=codecServer  ! filesink location=sample.uyvy

Resize

A simple pipeline receiving CIF from videotestsrc and resizing to VGA.

gst-launch videotestsrc ! 'video/x-raw-yuv,width=352,height=288' ! TIVidResize ! 'video/x-raw-yuv,width=640,height=480' ! TIDmaiVideoSink videoStd=VGA videoOutput=LCD sync=false

Network Streaming

Audio RTP Streaming

Although these examples are using a target device and a host PC, you could use two target devices as well.

Case 1: sending audio from target (BeagleBoard in my case) to Ubuntu host:

On target:

gst-launch audiotestsrc freq=1000 ! mulawenc ! rtppcmupay ! udpsink host=<HOST_PC_IP> port=5555

On host:

gst-launch udpsrc port=5555 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink


Case 2: sending audio from Ubuntu host to target (BeagleBoard)

On host:

gst-launch audiotestsrc freq=1000 ! mulawenc ! rtppcmupay ! udpsink host=<TARGET_PC_IP>  port=5555

On target

gst-launch udpsrc port=5555 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink

The above example experienced dropped audio; please update the pipeline when you get it working properly.
I had the same problem using my IGEP WLAN interface; after connecting directly with an Ethernet cable, the dropped-audio problem was solved.

I also used these pipelines:
On host:

gst-launch filesrc location=DownUnder.mp3 ! mad ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,\
rate=44100 ! rtpL16pay  ! udpsink host=192.168.2.8 port=5000

On target:

gst-launch udpsrc port=5000 ! "application/x-rtp, media=(string)audio, clock-rate=44100, width=16, height=16, \
encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! \
gstrtpjitterbuffer do-lost=true ! rtpL16depay ! audioconvert ! alsasink sync=false

And if you want to multicast your stream to multiple computers try this one:

On host:

gst-launch filesrc location=DownUnder.mp3 ! mad ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16, \
rate=44100 ! rtpL16pay  ! udpsink host=224.0.0.15 port=5000

On multiple targets:

gst-launch-0.10 udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16,\
 encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! \
gstrtpjitterbuffer do-lost=true ! rtpL16depay ! audioconvert ! alsasink sync=false

H.264 RTP Streaming

This section gives an example where the EVM acts as a streaming server, which encodes and transmits video via UDP. A host PC can be used as the client to decode it.

H.264 Encode/Stream/Decode A simple RTP server to encode and transmit H.264.

gst-launch -v videotestsrc ! TIVidenc1 codecName=h264enc engineName=codecServer ! rtph264pay pt=96 ! udpsink host=<HOST_PC_IP> port=5000

When the pipeline starts to run, you'll see something that looks like this:

 /GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, profile-level-id=(string)42801e, sprop-parameter-sets=(string)\"Z0KAHukCg+QgAAB9AAAdTACA\\,aM48gA\\=\\=\", payload=(int)96, ssrc=(guint)3417130276, clock-base=(guint)2297521617, seqnum-base=(guint)48503

Make a note of the caps="application/x-rtp, media=(string)video ................" string and pass it to the client below.

A simple RTP client that decodes H.264 and displays it on the host machine:

gst-launch -v udpsrc port=5000 caps="<CAPS_FROM_SERVER>" ! rtph264depay ! ffdec_h264 ! xvimagesink


MPEG-4 Receive/Decode/Display:

This section gives an example where the EVM acts as an RTP client, which receives the encoded stream via UDP, then decodes and displays the output. The host PC can be used as the server to transmit the encoded stream.

A simple RTP server (running on the host PC) which encodes and transmits MPEG-4 to the OMAP3530 EVM:

gst-launch-0.10  videotestsrc  ! 'video/x-raw-yuv,width=640,height=480' ! ffenc_mpeg4 ! rtpmp4vpay ! udpsink host=<EVM_IP_ADDR> port=5000  -v

Make a note of the caps="application/x-rtp, media=(string)video ................" string and pass it to the client below.

A simple RTP client to receive and decode the MPEG-4 encoded stream.

gst-launch -v udpsrc port=5000 caps="<CAPS_FROM_SERVER>" ! rtpmp4vdepay  ! TIViddec2 !  TIDmaiVideoSink videoStd=VGA videoOutput=LCD sync=false

All

This section covers pipelines that should work on all processors, and indeed on any other platform (such as your desktop Linux machine). They are included because we have been asked for these examples previously.

Debugging

Verbose output

If you want to see what capabilities are being used or are expected, add -v.

gst-launch -v alsasrc ! alsasink

Element debug output

To enable debug output for a specific element:

gst-launch --gst-debug=audiotestsrc:4   audiotestsrc ! alsasink

Adjust the element name and debug level until you get the data you are looking for.

To enable debug output for more than one element:

gst-launch --gst-debug=audio*:3  audiotestsrc ! audioconvert ! alsasink

or for all elements

gst-launch --gst-debug=*:3  alsasrc ! alsasink

You can see the list of element names that support debug output using

gst-launch --gst-debug-help

Audio pipelines

Controlling the sample rate and bit depth

Find the default audio capabilities:

gst-launch -v alsasrc ! alsasink

The output will be similar to

caps = audio/x-raw-int, endianness=(int)1234, signed=(boolean)true, width=(int)32, depth=(int)32, rate=(int)44100, channels=(int)2

Using that pattern (minus the data type indication, which is not needed with gst-launch):

gst-launch -v alsasrc ! audio/x-raw-int, endianness=1234, signed=true, width=32, depth=32, rate=44100, channels=2 ! alsasink

Adjust the capabilities as needed.
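For example, to force 16-bit mono capture at 16 kHz (an illustrative variation of the caps above; your hardware must support the requested rate):

```shell
# Same loopback, but constrained to 16-bit, mono, 16 kHz.
gst-launch -v alsasrc ! audio/x-raw-int, endianness=1234, signed=true, width=16, depth=16, rate=16000, channels=1 ! alsasink
```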

Generic network audio streaming example

These pipelines do not depend on the TI DMAI GStreamer plug-in.

Sender (host):

TARGET_IP=10.111.0.194
gst-launch audiotestsrc freq=1000 ! mulawenc ! rtppcmupay ! queue ! udpsink host=$TARGET_IP port=5555

Receiver (target):

gst-launch udpsrc port=5555 caps="application/x-rtp" ! queue ! rtppcmudepay !  mulawdec  ! alsasink


DSS2 Video Driver

The DSS2 video driver documentation for kernel 2.6.35.y can be found in the kernel source tree under Documentation/arm/OMAP/DSS

Some useful examples:

omapfb.mode=dvi:1024x768MR-16@60
omapfb.mode=dvi:1280x720MR-16@60 (for 720p HDTV with 1:1 pixel mapping)
omapfb.mode=dvi:1360x768MR-16@60 (works nice on 720p HDMI TV which crops edges due to overscan)

Note: the M indicates that the kernel will calculate a VESA mode on the fly instead of using a modedb lookup. The R indicates reduced blanking, which is for LCD monitors. Most HDTVs will probably only operate with a VESA mode.

omapfb.mode should be passed on the kernel command line; here is a U-Boot example:

bootargs-base=mem=430M console=ttyS2,115200n8 console=tty0 omapfb.mode=dvi:1280x720MR-16@60 vram=32M omapfb.vram=0:8M,1:16M,2:8M


Performance

Here are some test results I measured on my IGEPv2 board.
Decoding an AAC music file using the software decoder (faad) requires 60% CPU load (top output).
Decoding the same file using the TI DSP framework results in 4% CPU load (top output) and 176 kbps / 21 fps DSP "load" (dmaiperf output).

To get performance figures from the DSP, add dmaiperf to the pipeline.

I used the following pipelines:

  • gst-launch -v filesrc location=sample.aac ! faad ! audioconvert ! audioresample ! alsasink
  • gst-launch -v filesrc location=sample.aac ! TIAuddec1 codecName=aachedec engineName=codecServer ! dmaiperf ! alsasink sync=false