
Off-The-Shelf Hacker: Picking a Brain for Hedley, the Robotic Skull

15 Nov 2017 1:00pm

Over the past few weeks, we’ve been building from scratch an interactive robotic skull (catch up with parts one, two and three).

The JeVois machine vision camera is mounted, as is the skull itself, on the old movable arm project base. The physical-computing-packed bonehead will be a focal point at my live conference tech talk appearances and should draw considerable attention from attendees. My local robotics club liked it at our recent meeting. Head panning, person tracking (using the vision sensor) and a whole host of other crazy features are planned.

Did I tell you, I named the skull Hedley Boneaparte? It seemed clever and appropriate considering my eventual planned steampunky theme.

Any robot with a name like Hedley deserves a proper brain. So, what do we use?

Fortunately, I have a pile of cadaver microcontrollers collected from past projects over the years. A hollow skull provides plenty of room to stuff in just about any type of microcontroller from the inventory. We assume, of course, that we're talking about Arduino and Raspberry Pi form factor devices. Larger boards like the Pine64 or a traditional small Intel PC motherboard, although probably capable, certainly wouldn't fit inside Hedley's cranial dimensions.

Let’s take a little break from the construction and delve into picking a board for our Frankensteinian creation.

A good place to start is by asking what we think the “brain” should be able to do.

Coordination and Interfacing

One of the bleeding-edge centerpiece technologies of the robotic skull is the JeVois smart vision sensor. Readers will recall that the JeVois is a tiny video sensor attached to a quad-core ARM processor and packed with artificial intelligence machine vision algorithms. It runs Linux and spits out an augmented video, as well as various types of vision analysis data over both USB and external serial ports.

Even though the JeVois runs Linux, we have to think of it as a dedicated sensor. There are no digital input/output pins or connections. And while, theoretically, we could run C programs or Bash scripts, the purpose of the device is to be a dedicated sensor, one that passes its data to another device. In JeVois jargon, that other device is known as a host computer.

With all that in mind, one of the requirements for a “host computer” is that it will accept and coordinate data coming from the JeVois sensor. We might do additional processing, over and above the results provided by the machine vision algorithms, or pass the data along to other machines, perhaps over a network. We might also take the data and use it locally to drive servos, motors or other actuators in the robot skull itself. Data from the sensor is what it’s all about.
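As a sketch of that local “drive the actuators” idea, here’s how the host computer might turn a detected object’s horizontal position into a head-pan correction. The field values, frame width and servo range below are my own illustrative assumptions, not anything defined by the JeVois or a particular servo library.

```python
# Sketch: turning JeVois detection data into a head-pan command.
# The frame width, gain and servo range are assumptions for
# illustration, not values from the JeVois documentation.

def pan_correction(box_center_x, frame_width=320, gain=0.1):
    """Return a signed pan adjustment (degrees) that nudges the
    head toward the detected object's horizontal center."""
    # Positive error means the object sits right of frame center.
    error = box_center_x - frame_width / 2
    return gain * error

def clamp_angle(angle, lo=0, hi=180):
    """Keep the servo command inside its mechanical range."""
    return max(lo, min(hi, angle))

# Example: object centered at x=260 in a 320-pixel-wide frame,
# head currently at 90 degrees (straight ahead).
new_angle = clamp_angle(90 + pan_correction(260))
print(new_angle)  # nudges the pan servo toward the object
```

On real hardware, the new angle would then go out to a servo driver (on the Pi or a downstream Arduino) each time a fresh detection arrives.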

Another aspect of the JeVois is that you can actually watch the processed video stream coming from the sensor on your host computer. For example, if you connect the camera to a USB port on a Linux notebook, you’ll see augmented boxes around the objects that the sensor recognizes. This feature is great for developing your prototypes because you’re able to visually verify that the “data” streaming from the sensor (probably via the serial line) is what you actually want.

I sat in Panera Bread one day with the sensor pointed out toward the main lobby area. It was cool to see a superimposed box around people as they walked through the sensor’s field of view. Above each box was a tag, object: person. The sensor could simultaneously identify and track three or four people without any trouble. At the same time, static objects like chairs (object: chair tag) and tables were also easily tagged in the scene. By “identify,” I mean the sensor can recognize objects with human characteristics, not necessarily specific individuals with a name or anything.
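Those same tags arrive as plain text on the serial line, which is what the host computer actually consumes. Here’s a minimal parsing sketch; the exact line syntax below (“object: person” followed by box numbers) is an assumption for illustration, since the real JeVois serial format depends on which vision module and serial style you’ve configured.

```python
# Sketch: parsing one line of detection text from the sensor's serial
# stream. The "object: label x y w h" layout is an assumed format for
# illustration; check your JeVois module's docs for the real syntax.

def parse_detection(line):
    """Turn a line like 'object: person 120 80 60 140' into a dict
    with a label and a bounding box (x, y, width, height)."""
    parts = line.strip().split()
    if len(parts) < 6 or parts[0] != "object:":
        return None  # not a detection line; ignore other chatter
    label = parts[1]
    x, y, w, h = (int(p) for p in parts[2:6])
    return {"label": label, "box": (x, y, w, h)}

print(parse_detection("object: person 120 80 60 140"))
```

A host loop would read lines from the serial port, run each through a parser like this, and ignore anything that isn’t a detection.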

The video program I use is called guvcview. It’s just a simple GTK video viewing program. Plug a webcam or the JeVois into a USB port, fire up guvcview and watch as the world goes by in fairly real time. On the Asus dual-core Linux notebook, the video is smooth and responsive, though running guvcview on a Raspberry Pi type device can be a challenge.

Screenshot of guvcview, with video from the JeVois sensor on the Pi.

Choosing a Device for Guvcview

Using data from a serial port is pretty straightforward for just about any physical computing device, from the lowly Arduino Pro Mini up through the hottest multicore ARM Raspberry Pi clone. So, really, the ability to handle serial, or even I2C or SPI inter-device communications, is a no-brainer for anything I choose. All modern microcontrollers and nano-Linux machines handle that stuff.

The skull will be used at conference talks, demos and for training purposes. As such, an important capability is to be able to hook the skull up to a monitor or projector and show what the camera sensor sees to the audience.

We’d need to be able to run guvcview on the skull host computer, as well as an optional accompanying Linux notebook.

Several devices jumped right to mind. Obviously, there were the Raspberry Pi 2 and 3. I also had a BeagleBone Black sitting around unused. And I was itching to use the hotrod RoseApple Pi, a quad-core, 2GB-memory device, in something as well.

The BeagleBone was a very logical first choice, with its onboard Linux implementation (not on a micro-SD card like the Pi clones), generous input/output pins and HDMI video. While a nice thought, it dropped out of the running as I recalled the performance of guvcview on this particular board. It either was very slow and choppy or didn’t run at all. I seem to remember problems with dependencies, and it just didn’t seem worth it to try to sort things out.

Next up: the RoseApple Pi. It worked reasonably well with guvcview, albeit a little slow. Video frame rates of the processed video coming from the JeVois sensor were pretty disappointing, at between one and four frames per second. That kind of performance would be painful to use for a demo or during a tech talk. Additional programs like the Arduino and Processing IDEs would be used on the skull host computer too, and they ran reasonably well, without problems, on the RoseApple.

Unfortunately, the latest pre-built Linux image I could find for the RoseApple that worked was Debian 8 from 2015. I tried doing an “update” and “upgrade” to bring the software up to the latest version. No go. Unresolved dependencies caused all kinds of errors, and I simply wasn’t ready to spend hours, or perhaps days, sorting it all out. In spite of a quad-core chip, 2GB of memory and a USB 3.0 port, the RoseApple dropped off the list. There hasn’t been much activity on the RoseApple Pi site, so you can draw your own conclusions on that note.

The takeaway is that while hardware devices may look awesome, it’s hard to see their future without regular vendor activity or strong industry/community interest.

Going with the Regular Old Pi

I ended up going with a regular old Raspberry Pi 2 model B, then downloading and burning the latest Raspbian Stretch image onto an 8GB micro-SD card. Resetting the Pi username and upping the speed to “performance mode” was accomplished via the usual “sudo raspi-config” setup exercise. The guvcview program was installed via apt-get.
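The setup steps described above boil down to a few commands on a fresh Raspbian image. This is a sketch using the standard Raspbian tools; your raspi-config menu choices (password, performance/overclock settings) will vary by image version.

```shell
# Assumed setup on a fresh Raspbian Stretch micro-SD card.
sudo raspi-config            # change the password, set performance mode, etc.
sudo apt-get update          # refresh the package lists
sudo apt-get install guvcview
```

After a reboot, plugging the JeVois into a USB port should make it appear as a standard video device that guvcview can open.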

In the past, guvcview ran OK on the Pi 2. Performance wasn’t quite as bad as the RoseApple experience, but still not stellar. RoseApple Pi speed might have improved with a current Linux kernel and apps.

Running guvcview on Raspbian Stretch made a world of difference. The video from the JeVois sensor was smooth and fast, typically at between 15 and 30 frames per second.

One caveat is that Stretch has a problem with guvcview and audio program dependencies. Just running “guvcview” crashes with a segmentation fault. The way around this problem was to add the “-a none” option along with the video format and resolution.

The entire command line to make guvcview run right was as follows:

pi%  guvcview -a none -f YUYV -x 544x240

Wrap Up

All things considered, the Pi turned out to be a good choice. It runs guvcview well, has lots of general-purpose inputs/outputs, and we can suck in data, then process it with a variety of applications. Not only that, the Pi communicates easily with Arduino and other boards that will probably end up in the skull.

By the way, I chose to mount the Pi “brain” on the skull baseboard, instead of intracranially. Demos and steampunk are all about exposed gadgets, and putting the device right out front, where everyone can see it, made a lot of sense.

We’ll put an Arduino Pro and maybe an ESP8266 inside the skull. Don’t worry, it will work out.

See you next week.

Feature image: Brooklyn street art by Secret handshake’s Andrew Steiner.
