If you’re working on a fairly large development project that runs on a Raspberry Pi, you might be interested in speeding up the build time.
You have several options. For projects that don’t have many dependencies, you can use the official Raspberry Pi toolchain.
If you have more dependencies, many tutorials will point you to a rootfs or chroot technique.
The first basically means keeping a copy of the RPi’s /lib and /usr folders somewhere locally on your build host.
The second means running commands in a kind of sandbox, where they can’t do much harm to your computer and where the folder containing the RPi’s /usr and /lib is believed to be the (fake) root.
Those two methods are tedious and can lead to very strange behavior: some libraries inside /usr are linked to binaries inside /lib, and those links break when copying, because the /lib on your system is not the Pi’s.
Another solution is to emulate the Raspberry Pi, inside QEMU for example, and set up a build environment there. But this could take even longer than building on a Pi itself…
For the JamomaPureData project, I need to build on Travis-CI to test each commit and detect regressions.
So having a lightweight toolchain is a real need for that.
I started with the official toolchain and added the libraries I need to it.
To do so, I installed all the libraries I need (libxml2-dev, libsndfile-dev and their dependencies) on my RPi, then copied them one by one, include and lib, to the toolchain folder.
In the official toolchain, the root folder is :
You can also download the packages, then run
dpkg -x on each .deb in that folder. It can be faster, but it may install unneeded files such as manual pages or programs.
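The dpkg route can be sketched like this. The sysroot path is hypothetical and should point at your toolchain’s root folder, and the .deb files would first be fetched (on the Pi, or with the armhf architecture enabled on the host) with apt-get download:

```shell
# Hypothetical path: point SYSROOT at your toolchain's root folder.
SYSROOT=${SYSROOT:-./sysroot}
mkdir -p "$SYSROOT"

# The .deb files come from e.g. `apt-get download libxml2-dev libsndfile1-dev`
# run on the Pi (or on an armhf-enabled host).
for pkg in *.deb ; do
    [ -e "$pkg" ] || continue   # no packages downloaded yet
    dpkg -x "$pkg" "$SYSROOT"   # unpack without installing
done
```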
My toolchain is available here: https://github.com/avilleret/tools/tree/Jamoma
And now I can build for Raspberry Pi on Travis-CI.org!
For some projects I’ve been asked to plug a MIDI controller into the network, either to send its parameters to many computers or simply to increase the distance between controller and computer.
To do so, I plugged the controller into a Raspberry Pi, then the RPi into the network. From there I had several options.
The first is to make a Pd patch that sends all MIDI events over the network. This works and can be cross-platform: the receiver can run on Linux, OSX or even Windows, and then forward MIDI events to other programs if necessary with a platform-specific protocol (the IAC bus on OSX, for example).
But some controllers (like the Novation Launch Control XL I’m using) don’t work with OSS MIDI on Linux, only with ALSA. So you have to wire the device to Pd with aconnect or something similar, and that can be tedious.
I also found a small command-line program, multimidicast, that creates an ALSA client with several ports and multicasts MIDI events over the network. It works fine on Linux. It is said to work on Windows too, but I can’t test that. This is the solution I’m using.
Another solution, if all computers are running Linux, is to use aseqnet, the ALSA sequencer network client/server. It works, but you have to know the name or IP of the server to connect to.
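For reference, a typical aseqnet session looks like this. The host name and client numbers below are placeholders and must be read from aconnect’s own listings:

```shell
# On the machine acting as server:
#   aseqnet                  # opens a new ALSA sequencer client
# On each client (server-host is a placeholder for the server's name or IP):
#   aseqnet server-host
# aseqnet shows up as a new ALSA sequencer client on both ends; wire it to
# your controller or synth with aconnect, e.g.:
#   aconnect 20:0 128:0      # controller port -> aseqnet port (adjust numbers)
aconnect -l 2>/dev/null || echo "aconnect not available here"
```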
And finally, it should be possible to do the same with Jack. Jack MIDI can reportedly be bridged to Apple MIDI, but I couldn’t find any resources on the internet describing such a setup.
For a theater show I need to stream Raspicam to Gem.
I found some solutions with VLC. Here is a good comparison of several approaches: http://stephane.lavirotte.com/perso/rov/video_streaming.html
VLC is a good choice since there is a VLC backend to play video in Gem, so one can use it to display network stream into Gem.
But the VLC-based solutions suffer from high latency, around 1 second, a bit too much for me.
So I dug a bit and found this very good article: http://antonsmindstorms.blogspot.nl/2014/12/realtime-video-stream-with-raspberry-pi.html.
And this one, pretty similar : http://blog.tkjelectronics.dk/2013/06/how-to-stream-video-and-audio-from-a-raspberry-pi-with-no-latency/.
But both use gstreamer-1.0, which doesn’t work with Puredata and Gem.
To feed Gstreamer into Gem, there are mainly two solutions : v4l2loopback (https://github.com/umlaeute/v4l2loopback) or pdgst (https://github.com/umlaeute/pdgst).
But neither works (yet) with gst-1.0.
So I found a way to make a gst-0.10 pipeline to decode the stream and send it to a v4l2loopback device.
First you need gstreamer-0.10 and v4l2loopback:
sudo apt-get install gstreamer0.10-tools gstreamer0.10-plugins-good v4l2loopback-dkms
enable v4l2loopback with :
sudo modprobe v4l2loopback
After that you should have a new /dev/video* device. For example, on my laptop with an integrated webcam (which is /dev/video0), I get a /dev/video1 device, which is the v4l2loopback device.
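If your patches need the loopback device to keep the same number across reboots, v4l2loopback’s video_nr module parameter lets you pin it; the number below is just an example:

```shell
# Pin the loopback device to a fixed number so patches always find it:
#   sudo modprobe v4l2loopback video_nr=1   # creates /dev/video1
# Then check which devices exist:
ls -l /dev/video* 2>/dev/null || echo "no /dev/video* devices found"
```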
Then you’ll need some ffmpeg modules. FFmpeg is no longer available in Ubuntu, since it has been replaced by avconv, and while gst-1.0 supports avconv, gst-0.10 does not.
Here you can find some tips to install ffmpeg on Ubuntu 14.04+ : https://groups.google.com/forum/#!topic/clementine-player/JnGgRyUEuc4
Note that there is no utopic (14.10) repository, but the trusty (14.04) one works for utopic.
Now here are the GStreamer pipelines I use. On the Pi:
raspivid -t 0 -b 2000000 -fps 60 -w 1280 -h 720 -o - | gst-launch-1.0 -e -vvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=10.42.0.1 port=5001
Don’t forget to change the IP address to match your computer’s.
And on my laptop :
gst-launch -v udpsrc port=5001 ! application/x-rtp, payload=96 ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! v4l2sink device=/dev/video1
Then I can display the stream in Gem with 10-11 frames of latency at 60 Hz, i.e. around 165-185 ms. Which is great!
On recent versions of Raspbian (I think since the January 7th, 2014 release) the password rules have changed and you can no longer use a simple password like `pi`.
To relax this requirement, just change line 25 of
/etc/pam.d/common-password. Remove the
obscure keyword and add
minlen=2 (or whatever you want).
The line should look like:
25 password [success=1 default=ignore] pam_unix.so sha512 minlen=2
See man pam_unix for more options.
If you’re using a Raspberry Pi, you might know the famous command line utility
dd, useful to write a Raspbian image to a blank SD card (cf. http://elinux.org/RPi_Easy_SD_Card_Setup).
You can also use this tool to back up the whole disk, but it has two drawbacks:
- when you copy the whole disk, the image is as big as the disk, even if there is lots of empty space on it.
- when you restore the backup, you need a disk at least as big as the original one.
Those two disadvantages led me to look for a way to make backups smaller and more versatile. The solution I’ll describe here is an adaptation of Ubuntu’s documentation: https://help.ubuntu.com/community/BackupYourSystem/TAR and has been tested on Ubuntu 14.04.
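A minimal sketch of the tar approach, assuming the card’s root partition is mounted somewhere like /mnt/rpi. It is demonstrated below on a scratch directory so the commands can run anywhere:

```shell
# Stand-in for the mounted SD card root (normally e.g. /mnt/rpi).
MNT=$(mktemp -d)
echo hello > "$MNT/etc-stub"        # pretend filesystem content

# Archive the filesystem; --one-file-system keeps /proc, /sys, etc. out
# when you run this against a real mounted root.
BACKUP=/tmp/rpi-backup-demo.tar.gz
tar -czpf "$BACKUP" --one-file-system -C "$MNT" .

tar -tzf "$BACKUP"                  # list what went into the archive
# Restore onto any card big enough for the *data* (not the whole disk):
#   sudo tar -xzpf "$BACKUP" -C /mnt/rpi
```

The archive only stores the files, not the empty space, which addresses both drawbacks of the dd image.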
Here are a few modifications that allow a very quiet boot: nothing will appear on the screen before the login prompt.
So if you start a visual application before that (in /etc/init.d for example), nothing will show on the screen before your application starts.
First, modify /boot/cmdline.txt like this:
dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty3 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=noop rootwait loglevel=3 logo.nologo vt.global_cursor_default=0
Here are the details of the changes:
console=tty3 redirects all the boot messages to the third console (hit CTRL + ALT + F3 to see them after boot).
loglevel=3 makes the kernel less verbose: only errors are reported.
logo.nologo disables the Raspberry Pi logo on boot.
vt.global_cursor_default=0 disables the blinking cursor.
Moreover you can add
disable_splash=1 to /boot/config.txt in order to disable the rainbow splash on power on.
Finally, you can completely disable the prompt #1 by editing the file
/etc/inittab and commenting out the following line:
1:2345:respawn:/sbin/getty --noclear 38400 tty1
That’s all !
Here are two command lines to play 2 different video files in sync with VLC.
network clock client :
vlc --network-synchronisation --netsync-master-ip 127.0.0.1 sync-test-rouge.mp4
network clock master :
vlc --network-synchronisation --netsync-master sync-test.mp4
It seems to work only with videos of the same length.
Both audio and video are synchronized. The jitter-compensation algorithm takes some time to settle; after a dozen seconds, all the videos are in sync.
It was tested on Ubuntu 13.10 with VLC 2.0.8 with the two players on the same machine, but it should work over the network.
Also, the client must be started before the master.
Here is a new video about the interactive laser installation Silhouette:
Silhouette from Antoine Villeret on Vimeo.
For the das Körperrauschen project, I use an array of 5 Arduino Micros, and since they are plugged inside the sculpture, it is not convenient to unplug them to upgrade them one by one. And as I’m a lazy boy, I looked for a way to upgrade them all at once. Since they all run the same code, I just made a simple script that uploads to all the connected boards one by one.
Here is the script :
for arduino in /dev/ttyACM* ; do
    ino upload -m micro -p $arduino
    sleep 3 # give the board time to reset before moving to the next one
done
It loops over all serial interfaces that look like an Arduino (/dev/ttyACM* on Ubuntu) and uses inotool to upload the sketch to each of them. Of course the folder must have the structure required by inotool.
Without the delay, I can’t access the next board, and I don’t know why…
The R(Pianophone) has been upgraded since last year (see http://antoine.villeret.free.fr/?p=427).
Here is the version 0.2 :
It has a nice 16×2 LCD screen and two push buttons.
The LCD displays Pd patches available on the SDcard and you can go up and down in the list thanks to the buttons.
I also added a USB sound card : ESI UGM96 with 2 inputs (mic and hi-Z) and 2 outputs.
The LCD is a Midas I²C device connected to the Pi through GPIO.
A small command line tool writes on the LCD.
The two buttons are matrixed with the keypad and scanned in an infinite while loop.
When a patch is selected, its name is sent over OSC to a main Pd patch, which loads it.
To update the patches on the SDcard, just plug a USB key with some Pd stuff and all patches will be available on the (R)Pianophone.
The USB sound card drove me crazy with lots of issues.
First, I have to disable the Ethernet chip to get audio input through USB, with these commands:
dhclient -r # release the DHCP lease
echo -n "1-1.1:1.0" | sudo tee /sys/bus/usb/drivers/smsc95xx/unbind # disable the chip
I also need to downgrade to the firmware revision of April 26, 2013:
sudo rpi-update 994e46341bd190ef4ce6ee011e3f9fb8173e2bbf
With the up-to-date firmware, I only get crackles when I disable the Ethernet chip (and my USB keyboard goes crazy too…).
Moreover, as the analog synth emulation eats a lot of CPU, I have to disable the audio input in that patch and re-enable it in the others.
This is done inside Pd by sending these messages to pd:
audio-dialog 2 0 0 0 2 0 0 0 2 0 0 0 2 0 0 0 48000 10 -1 64 to enable
audio-dialog 2 0 0 0 -2 0 0 0 2 0 0 0 2 0 0 0 48000 10 -1 64 to disable.
But those messages are not portable: they depend on the ALSA configuration. You can find out which message to send by looking at the Tcl → Pd communication.
To do so, just put a
[r pd] connected to a
[print] somewhere in your patch and watch Pd’s console when you click “Apply” in the ALSA configuration dialog.
One more thing: I get crackles when Pd is launched directly from
/etc/rc.local, and I don’t know why…
But a workaround is to start Pd from the
/home/pi/.bashrc script instead.
It’s a bit crappy since Pd launches every time the
pi user logs in, so several Pd instances can end up running at the same time.
But I can’t find a better way to avoid the crackles…
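To at least avoid piling up Pd instances, a guard at the end of .bashrc can help. This is only a sketch, and the patch path is a placeholder:

```shell
# Only start Pd if no instance is running yet (e.g. at the end of
# /home/pi/.bashrc).  /home/pi/main.pd is a placeholder for the real patch.
if command -v pd >/dev/null && ! pgrep -x pd >/dev/null ; then
    pd -nogui -alsa /home/pi/main.pd &
fi
```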