This week, we’ll delve deeper into the guts of the robotic skull project we started last week, discussing some of the algorithms I plan to use and covering the mounting of the JeVois camera subsystem. As always, physical computing stack projects span a range of topics, from software and hardware to mechanical systems, electronics and aesthetics. Make sure to document everything, including written notes and photographs, so you don’t forget details and can later easily write your books and engineering papers.
Let’s start with the camera, shall we?
The JeVois smart vision camera module, which arrived last week, is by far the most complex sensor I’ve used to date. Various algorithms, running under Linux on the module’s embedded ARM processor, analyze a variety of behaviors and characteristics in each video frame. The results appear either on-screen, when the module is connected to a host computer running a video viewing program like guvcview, or as a simple text stream over the module’s serial line. I’d like to push that serial data to a Raspberry Pi or an Arduino to drive the robotic skull’s physical reactions in real time. Eye movement and panning of the skull come to mind.
There’s actually already a tutorial on building a JeVois pan/tilt mount. I’ll adapt the techniques for use in the robot skull.
Assuming the hardware serial connection is working, one thing you have to do is select a vision module that streams data to the hardware serial port. Not all of them do that at this time. You also have to enable data streaming to that port via an update to the main configuration file. Once data is streaming from the hardware port, you can capture it using the “cat /dev/ttyXXX” method. I plan to stream the data to a Python program on a Pi or to the serial port on an Arduino. We’ll get into more details on using the data in a later article.
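To give a flavor of what that Python program might look like, here’s a minimal sketch of a parser for the kind of one-line messages the JeVois sends. The message formats shown (a terse “T2 x y” target message and a normal “N2 id left top width height” bounding-box message) are my reading of the JeVois serial message styles; check what your chosen module actually emits with cat /dev/ttyXXX before relying on them.

```python
# Minimal sketch: turn one line of JeVois serial output into a dictionary.
# The "T2" and "N2" formats below are assumptions based on the JeVois
# docs; verify them against your module's real output first.

def parse_jevois_line(line):
    """Return a dict describing one JeVois serial message, or None."""
    parts = line.strip().split()
    if not parts:
        return None
    key = parts[0]
    if key == "T2" and len(parts) == 3:      # terse: 2D target center
        return {"style": "terse", "x": int(parts[1]), "y": int(parts[2])}
    if key == "N2" and len(parts) == 6:      # normal: id plus bounding box
        return {"style": "normal", "id": parts[1],
                "left": int(parts[2]), "top": int(parts[3]),
                "width": int(parts[4]), "height": int(parts[5])}
    return None                              # unknown or malformed line
```

On the Pi, the same function would sit inside a loop reading lines from a serial port (pySerial is the usual choice); on an Arduino, the equivalent parsing would happen in the sketch’s serial-event handling.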
Studying algorithms helps to understand what’s possible. Take a look at the model demo page.
Several algorithms capture “saliency,” the quality that makes an object attract attention within the camera’s field of view. The programs mimic what humans notice, like motion, a familiar shape, a shiny object and so on.
There are also object recognition and tracking models. The JeVois uses neural networks to identify objects and track them. A couple of models can identify and track me as a “human.” My scheme is to have the skull sitting near the front row of the audience, facing the stage. I’d be the only human up there, so it should easily be able to follow me.
Dr. Frankenstein, eat your heart out, Dr. Torq is in town now.
Combining saliency and object recognition/tracking ought to give reliable relative coordinates of my location that I can then use to move servos, motors or control other devices. Perhaps, I’ll just have the skull track me by panning left and right as I move. Another idea is to follow me with a remote spotlight or video camera, maybe over a WiFi connection. The physical computing stack certainly crosses device boundaries.
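The servo math for that panning behavior can start out very simple: linearly map the target’s horizontal pixel coordinate to a pan angle. The frame width and angle limits below are placeholder assumptions for illustration; match them to your camera resolution and your servo’s safe range.

```python
# Sketch: map a tracked target's x coordinate to a pan servo angle.
# frame_width, min_angle and max_angle are illustrative defaults only.

def x_to_pan_angle(x, frame_width=640, min_angle=0.0, max_angle=180.0):
    """Linearly map a pixel x coordinate to a pan angle in degrees."""
    x = max(0, min(x, frame_width - 1))       # clamp to the visible frame
    fraction = x / (frame_width - 1)          # 0.0 (left) .. 1.0 (right)
    return min_angle + fraction * (max_angle - min_angle)
```

In practice you would probably also add a small dead zone and rate limit so the skull doesn’t twitch on every frame, which echoes the “no slack, overrun or hunting” point about the mount below.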
If I have two projector screens, the skull might instruct one projector to light up when I’m on that side of the room, while turning off the projector on the other side. Wouldn’t that be an interesting attention-drawing effect for the audience? Also, it would be a fine excuse to possibly integrate MQTT messaging into the mix.
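The two-projector decision is also easy to sketch in code. The key detail is a dead band around the center of the frame, so the projectors don’t flicker back and forth as I cross the middle of the room. The frame width and dead-band size are assumed values, and publishing the result as an MQTT message (say, to a hypothetical skull/projector topic via a client library like paho-mqtt) is left as the integration step.

```python
# Sketch: choose which projector to light, based on where the tracked
# person is in the frame. The dead band prevents rapid flip-flopping
# near the center. frame_width and deadband are illustrative values.

def choose_projector(x, current, frame_width=640, deadband=60):
    """Return "left" or "right"; keep the current side near center."""
    center = frame_width / 2
    if x < center - deadband:
        return "left"
    if x > center + deadband:
        return "right"
    return current          # inside the dead band: no change
```

Each time the choice changes, the controller would publish one message and the projector-side subscribers would switch accordingly.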
During testing at my off-site office, Panera Bread, the JeVois device actually tracked people as they walked in front of the camera. You could see the little highlight box appear around each person, with a caption of “object: person.” It seemed pretty fast too, tracking a few people at once with only about a tenth of a second of delay.
That’s the plan on the software side. Now, let’s look at putting the camera in the robot skull.
Building the Camera Mount
One thing I learned working with the Pixy is that reliable image processing needs a steady camera. It may have to move quickly, but the movement needs to be repeatable, with no slack, overrun or hunting. Also, the JeVois module is pretty small and has a little fan for cooling. Understandably, the fan is fragile and should be protected. A fairly robust camera mount seemed in order.
The skull will have a decidedly steampunk look. Naturally, it made sense to build one of my trademark brass frames to hold the camera. A simple square design mounting the camera in the right eye socket of the skull looked good and wasn’t too hard to make. Of course, I had to perform surgery on the skull with the Dremel.
Actually, I hacked the skull up pretty well. The internal framework will structurally support all the hardware, so the plastic skull really becomes the packaging and will be attached to the framework by screws or bolts. Mods to the processor, camera mounting, jaw servo, internal radios and such will be much easier with removable skull panels.
Here’s a shot of the interior of the skull after the operation.
The camera mount passes through the eye socket and bolts onto the yet-to-be-built internal skull framework.
I tried to use a minimum of brass tubing and flat stock, to keep the mount as light as possible while keeping it strong and rigid. Three 10-32 brass screws arranged in a triangle secure the camera to the skull’s internal frame bracket. We’ll probably cover the internal framework next week. The 10-32 screws soldered nicely into the 3/16-inch diameter brass tubing, after I cut the heads off with the Dremel and an abrasive cut-off wheel. The design was inspired by the method used to mount small piston engines to aircraft airframes.
Not only is the mount design simple, it’s also adjustable. I can just insert washers behind one or more bolts to change the way the camera points out of the skull. Adjustments could also be done with two nuts on each stud, instead of using washers.
The piece of tubing on top also shields the fan. I may add a loop of 10-gauge copper wire for more protection, in the future.
Next week we’ll look at constructing the skull’s internal framework and attaching the camera mount sub-system. With the framework in place, we can then start engineering the pan/tilt mechanism and perhaps mount the skull to some kind of a desk stand.
The JeVois documentation also has a ton of pages covering things like optimizing the camera’s performance under different lighting conditions and going deeper into saliency and visual attention.
The robot skull is shaping up to be a fun and educational platform. See you next week.
The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Torq.