
Meet the Neural Network AI You Can Train Easily — Like a Dog

Jun 25th, 2017 3:00am by

For most ordinary folk, the words “artificial intelligence” probably conjure up images of brilliant people writing complex, enigmatic code that helps machines do seemingly magical things — accurately diagnosing disease, accelerating the discovery of new life-saving drugs, or beating humans at their own games. So to many who don’t have the technical know-how, the idea of training an artificial intelligence seems complicated and out of reach.

To counter this assumption, Amsterdam-based designer Bjørn Karmann created The Objectifier, an artificially intelligent interface that can be easily trained by anyone to perform simple tasks, such as turning lights on and off with a gesture. To do this, it’s equipped with a camera that is enhanced with computer vision technology, and a simple neural net.

Users can train the system with any gestures they want, using a mobile app interface to associate a certain movement with a certain result. For example, the system can be programmed to respond when a book is closed, or a palm is thrust forward, thus turning off a light or a music player. Watch this fun footage of the Objectifier being trained by regular citizens:

The overall concept here was to transform people’s ambiguous relationship with artificial intelligence, and to prove that training one doesn’t necessarily require a lot of knowledge.

“Objectifier empowers people to train objects in their daily environment to respond to their unique behaviors,” said Karmann on his website. “It gives an experience of training an artificial intelligence; a shift from a passive consumer to an active, playful director of domestic technology. With computer vision and a neural network, complex behaviors are associated with your command. For example, you might want to turn on your radio with your favorite dance move. Connect your radio to the Objectifier and use the training app to show it when the radio should turn on.”

From left to right: Different stages of the prototyping process in developing the Objectifier – latest version at left.

Dog Training AI

To emphasize this point, Karmann takes this simplification even further, deliberately drawing a parallel between the Objectifier and coaching a dog to follow simple commands. It might not get it right every time, but it is capable of learning. “Interacting with Objectifier is much like training a dog — you teach it only what you want it to care about,” explained Karmann. “Just like a dog, it sees and understands its environment.”

Karmann’s analogy is inspired in part by a recent article in Wired that pointed to a recent paradigm shift in computing, brought on by the growing use of deep neural networks in all kinds of applications. These distributed computational systems are powerful and are capable of learning, but it’s not always clear how exactly this happens — much like how we humans have yet to fully understand the underlying mechanisms in the brain, and behind the processes of cognition, learning and consciousness itself. So for his own research into this analogy, Karmann talked to professional dog trainers to understand what training a dog actually entails.

“Observing training techniques, tools and interactions revealed a world full of inspiration and similarities to machine learning,” Karmann wrote. “The power of the dog analogy is that everyone can understand how this complicated technology works without any knowledge of programming.”

As with training a dog (or a human baby), you don’t need to know how the brain works in order to train it. In this case, the Objectifier uses machine learning to act upon information it receives from its environment, but it’s the user who determines which information is actually relevant. For instance, as we see in the video above, a user who cares about reading books by a certain lamp at night would connect the Objectifier to that lamp. To train it to turn the lamp on, they would turn on the light, pick a gesture or movement — perhaps opening the cover of a book — and perform it in front of the Objectifier’s camera while pressing the “1” button in the mobile app. To teach it to turn the light off, they close the book while pressing the “0” button, and the Objectifier learns to associate the closing of the book cover with turning off the light. All of this can be done in under a minute.
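The train-by-example loop described above can be sketched in a few lines of code. This is a hypothetical toy, not Karmann’s actual implementation (which uses a convolutional neural net via openFrameworks): here a simple nearest-centroid classifier stands in for the neural network, and tiny hand-made feature vectors stand in for camera frames. The class and names (`GestureSwitch`, `train`, `predict`) are invented for illustration.

```python
def centroid(frames):
    """Average a list of equal-length feature vectors."""
    n = len(frames)
    return [sum(vals) / n for vals in zip(*frames)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class GestureSwitch:
    """Toy stand-in for the Objectifier's classifier: the user shows a
    gesture while pressing "1" (on) or "0" (off); labeled frames are
    stored, and new frames are classified by the nearest class centroid."""

    def __init__(self):
        self.examples = {"on": [], "off": []}

    def train(self, frame, label):
        # label is "on" when the user presses "1", "off" for "0"
        self.examples[label].append(frame)

    def predict(self, frame):
        centroids = {k: centroid(v) for k, v in self.examples.items() if v}
        return min(centroids, key=lambda k: distance(frame, centroids[k]))

# "Open book" vs. "closed book" demonstrations, reduced to tiny vectors.
switch = GestureSwitch()
switch.train([0.9, 0.8, 0.7], "on")    # open-book gesture -> light on
switch.train([0.8, 0.9, 0.8], "on")
switch.train([0.1, 0.2, 0.1], "off")   # closed-book gesture -> light off
switch.train([0.2, 0.1, 0.2], "off")

print(switch.predict([0.85, 0.8, 0.75]))  # -> on
```

Even in this stripped-down form, the essential point survives: the user never touches the learning code — they only supply demonstrations and labels, exactly as a dog trainer supplies behavior and reward.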

How the Objectifier’s mobile app interface works, and how to train the system to associate a gesture or event with turning something “on” or “off”.

Training the Objectifier to turn on the light when a book is opened, and to shut off the light when the book is closed.

For now, the Objectifier is just a prototype, but it has already won an award from the Google I/O Experiments Challenge. Karmann is now working to improve it so that it can handle more than these binary “on” and “off” states, and to add an audio interface so that users can train it with voice-based commands. His aim is to continue developing the Objectifier as an open-source project (built on the ConvnetClassifier openFrameworks application) so that anyone with the desire — and access to a laser cutter and 3D printer — can build their own.

It’s a pretty simple but effective way to give people a hands-on glimpse into what training an AI might look like. All the heavy lifting in code is done by the app itself, but even non-experts will intuitively grasp how uncomplicated the process can be. After all, rudimentary forms of AI already surround us — in our smartphones, in digital assistants like Siri, Cortana, and Alexa, and in the recommendation engines underpinning services like Netflix and Amazon. AI’s growing and pervasive role in our lives is something we seem to take for granted, yet on a deep level, many of us are uncomfortable with it, and it’s important to address the reasons why. So as AI continues to infiltrate our lives, it’s vital for this technology to be demystified, and for us humans to understand that AI won’t bite — if it’s trained well.

Images: Bjoern Karmann
