Video streaming device

From Finninday

background

I'm trying to come up with something like a raspberry pi with a webcam attached that I can use to stream video to any old web browser on the local network.

The first use of this may be to put a "window" into a room that has no natural light: set up a webcam looking out a real window, then dedicate a tablet in the windowless room to display the streamed view.

Extra points for layering an image of curtains on top of the stream to make it look more like a real window.
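As a sketch of the curtain idea: ffmpeg's overlay filter can composite a transparent PNG over the camera feed. Untested here; the curtains.png file and device path are assumptions, and the output goes to a local file just to check the compositing.

```shell
# Sketch: layer a transparent curtains.png on top of the webcam feed.
# curtains.png (with an alpha channel) and /dev/video0 are assumptions.
ffmpeg -f v4l2 -i /dev/video0 -i curtains.png \
       -filter_complex "[0:v][1:v]overlay=0:0" \
       -vcodec libtheora -b:v 800k \
       -t 10 curtains_test.ogv
```

If that looks right locally, the same filter_complex chain should drop into whatever streaming output ends up working.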

old webcam

I'm able to use cheese or VLC to view the output of the Venus webcam just fine.

find the right protocol inside vlc

My first working attempt at this uses ogg (theora vorbis)

The server has an attached webcam via usb and runs vlc with these settings:

:sout=#transcode{vcodec=theo,vb=800,acodec=vorb,ab=128,channels=2,samplerate=44100}:http{dst=:8080/stream.ogg} :sout-keep

The client displays this url (substitute the server's address for localhost when the client is a different machine):

http://localhost:8080/stream.ogg
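The same setup can be launched from the command line instead of the vlc gui. A sketch, assuming the webcam is /dev/video0 and the mic is alsa device hw:0:

```shell
# Headless vlc with the same sout chain as above.
# /dev/video0 and hw:0 are assumptions; adjust to match your devices.
cvlc v4l2:///dev/video0 :input-slave=alsa://hw:0 \
  --sout '#transcode{vcodec=theo,vb=800,acodec=vorb,ab=128,channels=2,samplerate=44100}:http{dst=:8080/stream.ogg}' \
  --sout-keep
```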

That arrangement works when hosted from my fedora 20 laptop using its built-in webcam, but not so well when I host from my ubuntu 14.10 desktop with an external USB webcam.

The lag is pretty bad compared to a video call like skype or hangouts. I should be able to do better with some tuning.
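One tuning knob worth trying: vlc buffers about a second of network input by default, which shows up as lag. These values are guesses to experiment with, not tested settings:

```shell
# Client side: shrink the network buffer (default is ~1000 ms).
# "server" is a placeholder for the streaming host's address.
vlc --network-caching=200 http://server:8080/stream.ogg

# Server side: shrink the capture buffer for the live v4l2 source.
# The sout chain is the same one shown above, elided here.
cvlc v4l2:///dev/video0 --live-caching=100 --sout '#transcode{...}'
```

Too small a buffer trades lag for stutter, so this probably needs a few rounds of trial and error.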

find details of external webcam video output

vlc recognizes the external webcam as a video capture device titled "Venus USB2.0 Camera"

Clicking on "Information..." about the Venus capture device shows this codec info:

codec: packed YUV 4:2:2, Y:U:Y:V (YUY2)
resolution: 800x600
frame rate: 12.5

When I'm using VLC to open the Venus webcam, and then display codec information, I see this:

video codec: Packed YUV 4:2:2, Y:U:Y:V (YUY2)
resolution: 1280x1024
frame rate: 7.5
audio codec: PCM S32 LE (s32l)
channels: stereo
sample rate: 48,000Hz
bits per sample: 32


using motion works

http://www.instructables.com/id/Raspberry-Pi-remote-webcam/

But the video frame rate is pretty low and it uses all of the raspberry pi's cpu.
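For reference, the motion setup from that instructable boils down to a few lines of config. A minimal sketch; the directive names come from motion's stock config file (older versions use webcam_port instead of stream_port), and the values are assumptions to try:

```
# /etc/motion/motion.conf -- minimal live-stream sketch
videodevice /dev/video0
width 640
height 480
framerate 15

# serve an mjpeg stream on port 8081 to any browser on the LAN
stream_port 8081
stream_localhost off
stream_maxrate 15

# skip motion-detection output if all we want is the live stream
output_pictures off
ffmpeg_output_movies off
```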

other tools

ffmpeg sounded cool so I gave it a try and got a deprecation error:

*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.

So I tried avconv on debian.

However, fedora doesn't claim that ffmpeg is deprecated. And "yum search" on fedora doesn't find avconv.

avconv

This works on the server, but I'm not able to view it on the client:

avconv -f video4linux2 -i /dev/video0 -vcodec mpeg2video -r 25 -pix_fmt yuv420p -me_method epzs -b 2600k -bt 256k -f rtp rtp://10.0.0.5:8888
avconv version 9.14-6:9.14-1rpi1rpi1, Copyright (c) 2000-2014 the Libav developers
  built on Jul 22 2014 15:08:12 with gcc 4.6 (Debian 4.6.3-14+rpi1)
[video4linux2 @ 0x1697740] Estimating duration from bitrate, this may be inaccurate
Input #0, video4linux2, from '/dev/video0':
  Duration: N/A, start: 4674.494925, bitrate: 48660 kb/s
    Stream #0.0: Video: rawvideo, yuyv422, 352x288, 48660 kb/s, 1000k tbn, 30 tbc
Output #0, rtp, to 'rtp://10.0.0.5:8888':
  Metadata:
    encoder         : Lavf54.20.4
    Stream #0.0: Video: mpeg2video, yuv420p, 352x288, q=2-31, 2600 kb/s, 90k tbn, 25 tbc
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo -> mpeg2video)
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.5
t=0 0
a=tool:libavformat 54.20.4
m=video 8888 RTP/AVP 32
b=AS:2600

Press ctrl-c to stop encoding
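A likely reason the client can't view this: a raw rtp stream doesn't describe itself, so the player needs the SDP block printed above to know what's arriving on port 8888. One common fix (untested here) is to save that SDP text to a file on the client and open the file instead of the url:

```shell
# On the client (10.0.0.5): paste the SDP block that avconv printed
# into a file, then point the player at the file.
cat > stream.sdp <<'EOF'
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.5
t=0 0
a=tool:libavformat 54.20.4
m=video 8888 RTP/AVP 32
b=AS:2600
EOF

vlc stream.sdp
```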

This worked to make a video file without audio:

root@raspberrypi:~# avconv -f video4linux2 -r 25 -i /dev/video0 -f alsa -i plughw:U0x46d0x81b,0 -ar 22050 -ab 64k -strict experimental -acodec aac -vcodec mpeg4 -y webcam.mp4

avconv has this test source which I should be able to use to get my streaming sorted:

The testsrc source generates a test video pattern, showing a color pattern, a scrolling gradient and a timestamp. This is mainly intended for testing purposes. 
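A sketch of using it, so streaming problems can be separated from webcam capture problems. The lavfi input device and testsrc options are assumptions from the avconv docs; the rtp target matches the earlier attempt:

```shell
# Stream the synthetic test pattern instead of the webcam.
avconv -f lavfi -i testsrc=size=320x240:rate=25 \
       -vcodec mpeg2video -b:v 2600k \
       -f rtp rtp://10.0.0.5:8888
```

If the test pattern streams cleanly but the webcam doesn't, the problem is on the capture side rather than the network side.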

To use rtmp (real-time messaging protocol), I need an rtmp server configured:

https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/
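The guide comes down to a small nginx config fragment. A sketch following that writeup; it assumes nginx built with the nginx-rtmp-module, and the application name "live" is arbitrary:

```
# nginx.conf fragment -- minimal rtmp server sketch
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
        }
    }
}
```

A source would then publish to rtmp://server/live/somestream and players would pull the same url.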

ffmpeg

Since fedora still seems to prefer ffmpeg over avconv, I'll try some experiments there, where there is more infrastructure and tests are quick and easy. Also, there is the chance that the documentation will be better than avconv's. I haven't found the cookbook-style docs that I have been hoping for on the avconv side.

yum install ffmpeg
yum install v4l-utils

That lets me do this:

[root@servo rday]# v4l2-ctl --list-devices
UVC Camera (046d:081b) (usb-0000:00:14.0-4.1.1.2.1):
	/dev/video1

Laptop_Integrated_Webcam_HD (usb-0000:00:1a.0-1.5):
	/dev/video0


[root@servo rday]# ffmpeg -f v4l2 -list_formats all -i /dev/video1
ffmpeg version 2.1.6 Copyright (c) 2000-2014 the FFmpeg developers
  built on Nov 29 2014 12:07:56 with gcc 4.8.3 (GCC) 20140911 (Red Hat 4.8.3-7)
...
[video4linux2,v4l2 @ 0x2538fa0] Raw       :   yuyv422 :     YUV 4:2:2 (YUYV) : 640x480 160x120 176x144 320x176 320x240 352x288 432x240 544x288 640x360 752x416 800x448 800x600 864x480 960x544 960x720 1024x576 1184x656 1280x720 1280x960
[video4linux2,v4l2 @ 0x2538fa0] Compressed:     mjpeg :                MJPEG : 640x480 160x120 176x144 320x176 320x240 352x288 432x240 544x288 640x360 752x416 800x448 800x600 864x480 960x544 960x720 1024x576 1184x656 1280x720 1280x960
/dev/video1: Immediate exit requested
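Since the camera can emit compressed mjpeg itself, it should be possible to capture at a higher resolution without encoding on the cpu. A sketch built from the -list_formats output above; 25 fps is a guess, since the listing doesn't show which frame rates each resolution supports:

```shell
# Grab the camera's own mjpeg output at 1280x720 and write it
# unre-encoded (-c:v copy) to a local file as a quick test.
ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -framerate 25 \
       -i /dev/video1 -t 10 -c:v copy mjpeg_test.avi
```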

That information goes nicely with using arecord to see the audio devices:

root@ferret:~# arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: SB [HDA ATI SB], device 0: VT2020 Analog [VT2020 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: SB [HDA ATI SB], device 2: VT2020 Alt Analog [VT2020 Alt Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 2: Camera [Venus USB2.0 Camera], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

This is the best reference on ffmpeg I've seen so far:

https://trac.ffmpeg.org/wiki/StreamingGuide

In particular, I'm interested in the "point to point" streaming section. Other options like multicasting just complicate my task of finding a way to stream point to point.
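A minimal point-to-point sketch in the spirit of the StreamingGuide: push an mpeg transport stream over udp straight at the one client. The client address, port, and bitrate are all placeholders:

```shell
# sender: encode the webcam and push it to the client at 10.0.0.5
ffmpeg -f v4l2 -i /dev/video0 -vcodec mpeg2video -b:v 1000k \
       -f mpegts udp://10.0.0.5:1234

# receiver (run on 10.0.0.5): listen on the same port
vlc udp://@:1234
```

Unlike rtp, mpegts is self-describing, so the receiver needs no sdp file.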

The documentation for ffmpeg is pretty robust, but I still don't know the basics of specifying parameters. So I'm going here:

http://linuxers.org/book/export/html/593

More good fundamental background on streaming:

https://trac.ffmpeg.org/wiki/Streaming%20media%20with%20ffserver

ffserver

I started with the assumption that a point-to-point stream would be easiest to set up, but I'm beginning to think that's not necessarily true. Since there will likely be several of these streams to manage, and I need a solution that can run on a raspberry pi, I should jump straight to the configuration that uses an intermediary server between the video source and the video player. Docs for ffserver are here:

https://www.ffmpeg.org/ffserver.html

ffserver is part of the ffmpeg package on fedora 20.

I start ffserver with the stock config file like this:

[root@servo lf]# ffserver -f /usr/share/doc/ffmpeg/ffserver.conf

That binds to port 8090 listening for incoming source streams.

This is the command that the documentation says will connect an input stream:

ffmpeg \
	-f v4l2 -s 320x240 -r 25 -i /dev/video0 \
	-f alsa -ac 1 -i hw:0 \
	http://localhost:8090/feed1.ffm

It fails if I only specify video. So list my audio devices:

[rday@servo ~]$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: PCH [HDA Intel PCH], device 0: ALC3226 Analog [ALC3226 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 3: U0x46d0x81b [USB Device 0x46d:0x81b], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Actually, it is easier to look at vlc's gui to see what audio devices it detects. They are:

  • hw:1,0
  • hw:3,0

OK, knowing all this, we can make it work like so:

  • start ffserver
ffserver -f ffserver.conf

That stock config file is copied from the ffmpeg package documentation. It defines the port as 8090. It indicates that the source feed will be sent to http://localhost:8090/feed1.ffm. It indicates that stream players should point to localhost:8090/test1.mpg. It sets the format of the stream to mpeg.
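The relevant parts of that stock file reduce to something like the sketch below. The directive names are from the ffserver docs; the stream parameters are assumptions that happen to match the low-quality feed command used next:

```
# ffserver.conf sketch -- trimmed from the stock example
Port 8090
BindAddress 0.0.0.0
MaxClients 10
MaxBandwidth 1000

<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200K
</Feed>

<Stream test1.mpg>
Feed feed1.ffm
Format mpeg
VideoFrameRate 25
VideoSize 320x240
AudioBitRate 32
AudioSampleRate 22050
</Stream>
```

The Stream block is where the resolution/framerate/codec coherence constraints mentioned below live.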

  • connect the webcam feed to ffserver
ffmpeg -f v4l2 -s 320x240 -r 25 -i /dev/video0 -f alsa -ac 1 -i hw:3,0 http://localhost:8090/feed1.ffm

In this configuration, ffmpeg uses about 10% of my laptop's cpu and generates a load average of 0.4. Increasing the quality/load is a bit of an ordeal as ffserver stream definition and the source feed must match in ways that I don't fully grok yet. That is, simply increasing the source feed resolution is incompatible with the ffserver config. And the ffserver only accepts certain resolutions, framerates, and codecs which all must be coherent with each other.

Hopefully, with tuning I can make the clients see the stream as realtime. My first try buffers quite a bit of the stream if the client joins some amount of time after the stream begins on the server. I don't want that.

Sweet. Let's move the whole deal to the raspberry pi.