The JeVois smart machine vision algorithms run great on the little quad-core ARM processor in the $50 JeVois smart camera package. Coupled with a Raspberry Pi 3 sporting the latest Raspbian “Stretch” distribution, the combination gives you a nice system to explore machine vision concepts and put them to use in a project like Hedley, the robotic skull.
The software also runs on regular 64-bit Intel notebooks, like my old warhorse dual-core ASUS Xubuntu machine. While you can simply plug the JeVois camera into the notebook’s USB port, fire up guvcview, and go, you can also leave the JeVois bolted into the skull and get the same results with an everyday cheapo webcam. I used my hacked Logitech model C310.
Why put the camera algorithms on a notebook?
Well, for one, because we can. It’s another shining example of why Linux is the operating system of choice for serious physical computing jobs, without too much regard for the hardware. Whether it’s a notebook or a tiny ARM nano-Linux box (like the JeVois camera), everything is standard and just works.
Running the algorithms on a 64-bit Intel processor is also very fast. Maybe you’d like to do object recognition from the desktop. Could I use the setup (with the built-in camera or a webcam) as an intelligent thief sensor to guard my notebook, while it sits on a table at Panera Bread, when I step away to refill my coffee mug? That’s possibly an idea for an upcoming story.
Let’s go over installing the JeVois software, so you can start finding your own desktop smart vision applications.
Installation on a Xubuntu Notebook
My ASUS notebook is almost a decade old, with a dual-core Intel chip clocked at around 2.3 GHz and 4 GB of RAM. There’s also a 750 GB disk, a gig of video memory, 5 USB 2.0 ports, WiFi and 1280×800 display resolution. It’s an old-school hot rod by today’s standards and still certainly performs all the daily writing and physical computing stack tasks I throw at it.
Find the complete instructions on GitHub.
Getting the basic run-time programs onto the notebook, for use with a webcam, took just a few steps and about 15 GB of disk space. The GitHub page also recommends using Ubuntu version 17.04. I used Xubuntu 17.04 without issues.
Open a terminal and type in the following commands, while connected to the internet.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DD24C027
sudo add-apt-repository "deb http://jevois.usc.edu/apt zesty main"
sudo apt update
sudo apt upgrade
sudo apt install jevois-host jevoisbase-host
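If you want to confirm the packages landed before moving on, a quick query against dpkg does the trick (the package names below simply match the install command above):

```shell
# Show any installed packages with "jevois" in the name; print a
# fallback message if the install did not complete.
dpkg -l | grep -i jevois || echo "jevois packages not found"
```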
The first line imports the repository’s signing key and the second adds the JeVois package repository to the notebook’s software sources. We then refresh the package list, upgrade all the installed Linux packages to their latest versions (taking care of any dependencies), and finally install the JeVois smart vision camera base system on the notebook.
The Smart Vision Desktop Rig
Once the software is installed on the notebook, plug your webcam into a USB port.
Then, start the jevois-daemon program from the command line, in a terminal:

jevois-daemon --cameradev=/dev/video1
Note that the cameradev option points to “/dev/video1”, which is the externally connected USB webcam. The default camera would be the built-in device, usually located at the top of the display screen; it is most likely called “/dev/video0” if you want to use it instead. Maybe I could use the built-in camera for face-activated commands or eye-tracking projects.
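If you are not sure which device node your webcam received, listing the video devices before starting the daemon removes the guesswork. The numbering below is the usual arrangement, not a guarantee; yours may differ:

```shell
# List the video device nodes; the built-in camera usually shows up first
# (/dev/video0) and an external USB webcam after it (/dev/video1).
ls -l /dev/video* 2>/dev/null || echo "no video devices found"
```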
As soon as you hit Return, a 640×300 two-part video display will pop up, showing the camera view with augmented annotations on the left and the saliency map on the right.
There are about two dozen different video mappings for the JeVois camera algorithms. Each mapping corresponds to a certain type of behavior for the intelligent neural network engine.
The default mapping overlays “saliency” results on the frames coming from the camera. Think of saliency as things that attract attention: a baby noticing movement, a bright color, or a human face.
Move around in front of the camera and watch the saliency map pane. Movement and various points of interest cause bright spots to appear (in the right pane), while the green attention circle moves to those locations in the field of view (in the left pane). The square pink boxes mark instantaneous points of interest that, when factored into the saliency calculations, roll up into the green circle.
You can choose other mappings, such as using the Darknet framework to recognize objects, reading various ArUco markers, reading dice, doing road navigation, or watching for surprise objects.
While the program runs, click down into the terminal to execute a handful of commands: “quit” stops the program and returns you to the command line; “listmappings” prints out a complete list of all the video modes, plus what they do. The list might be a little hard to read because the screen scrolls as algorithm result data appears in the terminal window.
Choose a different mapping by using the “videomapping” option when you start the program.
jevois-daemon --videomapping=20 --cameradev=/dev/video1
Notice that we don’t have to start guvcview to see the augmented analysis results, as you do when the standalone JeVois smart vision camera is plugged into the USB port. When the JeVois algorithms run on the notebook, jevois-daemon provides its own built-in video viewer.
Don’t forget to move down into the terminal and type “quit” to shut down the camera and return to the command line.
Next: Play with Data
There are a lot of other command line options to explore that let you tweak the behaviors of the algorithms and data being produced. Of course, the data is what we’re after, so we can feed it into Python programs or pipe it to other applications. We’ll talk about that in an upcoming article.
Here’s a hint: use file redirection or a Linux pipe for data output.
jevois-daemon --videomapping=19 --cameradev=/dev/video1 --serout=All > jevois-data.txt
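Once the serial-style messages are in a file, ordinary shell tools can slice out the fields. The message below is a made-up example for illustration only; run “listmappings” to see which modules emit data, and check the JeVois documentation for the exact message formats:

```shell
# A hypothetical serout-style line, as might be captured in jevois-data.txt.
line="N2 salientpoint 320 240 12 12"

# Split the line into positional fields and pull out the x/y coordinates.
set -- $line
echo "x=$3 y=$4"
# prints x=320 y=240
```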
Running the JeVois programs on a notebook also lets you compile new vision modules, cross-compile for the standalone JeVois camera or simply take a look at the programs before you order a standalone camera.