Off-The-Shelf Hacker: Make Your Wearable Device Talk to You

Oct 11th, 2017 12:00pm

Building wearable devices is a great way to explore “alternative” interfaces. What if your wearable device could do things and then tell you the results, kind of like Amazon’s Alexa voice assistant?

Back in January, I wrote a column discussing how to integrate Amazon Alexa on a Raspberry Pi. Hands-free jobs could present results through earphones or a small speaker. Today, we’ll look at making your wearable talk. The best part is that it works pretty well and isn’t very complicated to get going. We’ll start with my own wearable, a Raspberry Pi 3 with an Arduino Pro Mini and ultrasonic sensor hooked up to the serial port.

Keep Things Simple, Stick with Python

After a little research, I found that using Python with the Google Text-to-Speech (gTTS) library seemed like a reasonable way to go on my Pi-powered wearable. Python is available on every Raspberry Pi, so it makes sense to take advantage of it. Inspiration came from a page on a Python programming language site; I expanded on a couple of its examples.

The first order of business was to install gTTS. I used the Python package installer, pip, for that task.

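That’s a one-liner, something like this (assuming the stock pip on the Pi):

sudo pip install gTTS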

Sadly, the installation didn’t work, saying that it had trouble finding libraries and such.

I’m using the Adafruit 3.5″ PiTFT color touchscreen display on my gadget and running one of its slightly customized Linux images. Running an upgrade on the system tends to break the setup of the PiTFT display, causing it to go dark. I’ve had to re-burn the image a couple of times to get the TFT screen working again after a general “sudo apt-get upgrade” attempt. My solution is to stay as “stock” as possible and tweak individual applications as needed. It’s no surprise that old versions of support programs might be causing the pip installation problems.

No worries. The “upgrade” option in pip did the trick.

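That is, roughly:

sudo pip install --upgrade gTTS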

With gTTS installed, I just needed an MP3 player to “speak” the output.

mpg321 is a lightweight audio player that works great on the Raspberry Pi. This time I used the --upgrade option with apt-get.

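For reference, the basic apt-get install looks like this (the exact options may vary):

sudo apt-get install mpg321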

Here’s the code I borrowed to send text out to the earphones.

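In sketch form (the file name and the spoken text are placeholders):

# speak.py: minimal gTTS example (file name and spoken text are placeholders)
from gtts import gTTS
import os

# Hand the text to Google's text-to-speech service and save the result as an MP3
tts = gTTS(text="Hello from the wearable", lang="en")
tts.save("speech.mp3")

# Play the MP3 through the earphones with a system call to mpg321
os.system("mpg321 speech.mp3")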

It’s executed with the following:

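Assuming the script is saved as speak.py (a placeholder name):

python speak.py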

Notice that gTTS takes the text and creates an MP3 audio file, which mpg321 then plays via a system call.

Sensor Talk

The real fun begins when you read sensor data, turn it into a variable and feed it into the speech engine.

In this case, I used an ultrasonic rangefinder, hooked up to an Arduino Pro Mini, to send values out through the serial port. Once the values were pulled into the Pi, gTTS could then “tell” me the distance through the earphone.

First, we need to capture the rangefinder data into a file. I used the old tried-and-true “cat /dev/ttyS0” method.

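That boils down to redirecting the serial device into a file and backgrounding the command:

cat /dev/ttyS0 > rob.txt &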

Distance data from the Arduino Pro-Mini/ultrasonic sensor serial line.

As data arrives from the Arduino, it’s copied into a file named rob.txt. The “&” causes the command to run in the background and lets you use the same window to run the Python script.

Here’s the Python program that reads in the serial line data (from the file rob.txt), does the text-to-speech processing with gTTS and then outputs the resulting audio to the earphones on the Pi.

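In sketch form, grabbing the latest line with tail -n 1 via subprocess.check_output and playing the result with mpg321 -q (file names and the spoken phrasing are placeholders):

# talk_distance.py: sketch of the sensor-to-speech loop (names and phrasing are placeholders)
from gtts import gTTS
import subprocess

while True:
    # Grab the most recent reading from the serial capture file
    reading = subprocess.check_output(["tail", "-n", "1", "rob.txt"])
    reading = reading.decode().strip()
    if not reading:
        continue

    # Turn the reading into speech and save it as an MP3
    tts = gTTS(text="distance " + reading, lang="en")
    tts.save("distance.mp3")

    # Play it; -q suppresses mpg321's header and status output
    subprocess.call(["mpg321", "-q", "distance.mp3"])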

The “while” loop reads the latest line (via tail -n 1) from the rob.txt file and speaks the distance on each pass. There’s enough lag in the whole process that you get about one reading per second from the rangefinder.

Notice that I used an os.system() call in the first Python script and subprocess.check_output() in the second one. The subprocess method is preferred and has a lot of options. Note, too, that I used mpg321’s “-q” option to suppress the standard header and command-status output usually sent to the terminal.

I ran the program with the following:

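With the loop script saved as, say, talk_distance.py (a placeholder name):

python talk_distance.py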

Kicking off the process and then holding my hand in front of the rangefinder gave me a steady stream of readings corresponding to the distance.

Here’s a screenshot WITHOUT the -q option to show you what status and info are printed to the screen:

mpg321 output to the terminal, with status and info visible (no -q option).

You’ll notice that the screenshots are from a full-sized terminal window on the HDMI monitor. I admit it: I used the monitor to develop this project and the article. Everything worked the same when I was wearing the device, using the 3.5″ TFT touchscreen and running on battery power.

Off-The-Shelf Rocks

Mixing and matching things like the Raspberry Pi, gTTS, an Arduino and a rangefinder to get an audio distance is pretty interesting to me. Don’t forget that all of this stuff is off-the-shelf. Five years ago, this was pretty difficult.

You don’t have to make huge plays right out of the box, either. Using current off-the-shelf parts and Linux tools to build the MVG (minimum viable gadget), then adding features and capabilities as you learn is an organized and effective way to go.

Remember to be methodical, document your journey and note where something could fit into future projects.

Google is a sponsor of The New Stack.

Feature image via PXHere, copyright cc0.
