Stephen Thaler Claims He’s Built a Sentient AI

Thaler's AI company claims to have mapped thinking itself onto a system of neural networks. He demoed the technology for the Chicago ACM.
Mar 16th, 2023 8:10am

Stephen Thaler made headlines last year when courts around the world began ruling on whether his AI system could be named as the inventor on a patent. (No.)

But as one of the AI industry’s pioneers, Thaler has led a fascinating and storied career — and he recently shared some highlights during an online talk for a March 1 Meetup of the Chicago and Washington D.C. branches of the Association for Computing Machinery (ACM). Thaler told his audience that he’d first started playing with neural networks back in the 1970s, to create alternate versions of the “have a nice day” smiley face.

By 1995, Thaler had founded Imagination Engines Incorporated, a pioneering AI company that claims to have mapped the act of thinking itself onto a system of neural networks.

Ultimately, Thaler’s company even patented the idea of using artificial neural nets for “noise-driven brainstorming sessions,” according to the ACM’s introduction of Thaler — a process which led to “significant” results in everything from materials discovery to personal hygiene products, entertainment, and even creative robots. The company then pushed the technique further, trying to combine knowledge domains. More specifically, they encoded the consequences of ideas — or “salient outcomes” — using chains of interconnected neural modules that triggered simulations of a human mind’s positive reactions, thus selectively reinforcing the most impactful ideas.

[Slide from Stephen Thaler’s ACM talk: “The Non-Protoplasmic Sentience Driving DABUS”]

The ACM’s introduction argues that the end result was “the subjective feelings (i.e., sentience) of an arguably conscious machine intelligence” — with valuable ideas formed through a computational process “now challenging our long-held beliefs about biological intelligence and personhood.”

Sure enough, by the end of his talk, Thaler was explaining how to build a Sentient Artificial General Intelligence — while making it all sound so simple. But along the way, his audience learned an awful lot about how the human brain works, about thought itself — and ultimately, about themselves.

The Case for Sentience

Thaler began by casually stressing that he doesn’t believe claims that our current large language models are sentient. “But I think what I’m about to talk about is sentient,” he added, “and I will make the appropriate case.”

Thaler’s company has built a system they call DABUS, an acronym that stands for “Device for the Autonomous Bootstrapping of Unified Sentience.” Thaler clarified that the company’s DABUS system isn’t just one algorithm, but an entire system — with both computerized and electro-optical components, “each of the subsystems governed by their own particular algorithm.” Thaler claimed that “it’s like the brain — subsystems and algorithms at work in each one of those.”

The Non-Protoplasmic Sentience Driving DABUS

But to start at the beginning, Thaler described some early experiments with neural networks back in the 1970s. While pinning the network’s inputs at constant values, he steadily increased the “injected perturbations” (later described as “internal noise”), then tracked the resulting increase in generated patterns, or “notions.” As the noise increased, the patterns degraded into “generic patterns, oftentimes nonsense” — until eventually the system couldn’t produce any pattern at all. “I call it ‘the deer in the headlights’ regime,” he said. “It’s as though the whole neural network is flooded with the equivalent of cortical adrenaline, and it can’t think of anything else.”
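The basic experiment is easy to sketch. The following toy sketch (an invention for illustration, not Thaler’s code) pins a tiny network’s inputs at constant values, injects Gaussian “internal noise” into its hidden layer, and counts how many distinct output patterns emerge at each noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fixed network standing in for the 1970s experiments: inputs are
# pinned at constant values while Gaussian "internal noise" is injected
# into the hidden layer. All sizes and weights are illustrative.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 8))
pinned_input = np.full(8, 0.5)  # inputs held constant

def generate_patterns(noise_level, n_samples=200):
    """Sample output patterns ("notions") under a given level of injected noise."""
    hidden = np.tanh(pinned_input @ W1)
    patterns = set()
    for _ in range(n_samples):
        perturbed = hidden + rng.normal(scale=noise_level, size=hidden.shape)
        out = (np.tanh(perturbed @ W2) > 0).astype(int)  # binarize the pattern
        patterns.add(tuple(out))
    return patterns

# With zero noise the network is deterministic: exactly one frozen pattern.
# Raising the noise changes how varied the generated "notions" are.
for sigma in (0.0, 0.3, 3.0):
    print(sigma, len(generate_patterns(sigma)))
```

At zero noise the output never varies; at high noise the binarized outputs approach random bit patterns, which loosely mirrors the “nonsense” and “deer in the headlights” regimes Thaler describes.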

But the experiment had also identified a “Goldilocks zone” — where the levels were just right for optimal pattern generation. So Thaler attached a second neural network, leaving the first one to work as the “Imagination Engine,” and tasking the second one with reinforcing the best ideas (serving as both a filter for an idea’s utility and a kind of memory). Thaler called the resulting system the “Creativity Machine,” a kind of neural network-implemented associative memory that stored the ability to replicate a specific set of input patterns. “What you have going on here is not really what I would call creativity,” Thaler said later in his talk. “At its utmost, essentially it is an optimization process going on, to find global optimal solutions, but it’s not really combining conceptual spaces to create whole new ideas.”
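The two-network loop can be caricatured in a few lines. In this minimal sketch (every detail invented for illustration), one network babbles noisy candidate patterns while a second network scores them, and only high-scoring candidates are retained:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal loop in the spirit of the "Creativity Machine": an Imagination
# Engine proposes noisy patterns; a second net keeps the useful ones.
# The critic here is just a fixed projection -- a stand-in for a trained
# utility/memory network, not Thaler's actual architecture.
W_imagine = rng.normal(size=(6, 6))
w_critic = rng.normal(size=6)
seed = np.ones(6)

def imagine(noise=0.5):
    """Imagination Engine: perturb internal activity to propose a pattern."""
    return np.tanh(seed @ W_imagine + rng.normal(scale=noise, size=6))

def critic(pattern):
    """Second network: score a candidate pattern's utility."""
    return float(pattern @ w_critic)

# Keep only the candidates the critic rates above a threshold.
kept = [p for p in (imagine() for _ in range(100)) if critic(p) > 0.5]
print(f"retained {len(kept)} of 100 candidate notions")
```

As Thaler notes, this loop is closer to optimization than creativity: the critic only ranks what the noise happens to produce.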

Simulating Consequences

The shortcomings of this process inspired more experiments to try to get the neural network to recognize a “core idea” chain and a “consequence” chain — as well as a so-called “hot button” that affects the entire system. To a human being, this might be “something existential,” such as getting nutrition and surviving. “We’re all supplied with these hot buttons at birth,” Thaler told his audience, joking that “they’re factory installed.” But what’s important is that “hot button” perceptions — in humans — trigger the release of neurotransmitters.

To simulate these, Thaler used those same perturbations in his system. “If a hot button activates, then it will essentially secrete the appropriate neurotransmitter simulation.” The idea is to reinforce the chain that triggered the hot button, “so we get a memory, complete with the base concept, along with the consequences and the accompanying hot button. And then, later on, we can reuse some of those consequence chains.”
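The reinforcement step Thaler describes can be sketched as a toy program. Here, when a concept/consequence chain touches an existential trigger, a simulated “neurotransmitter release” strengthens every link in that chain so it is more likely to be recalled later. The hot buttons, chains, and numbers are all illustrative inventions, not taken from DABUS:

```python
# Toy sketch of the "hot button" mechanism described above. Chains are
# lists of steps; links between consecutive steps carry a strength that
# is boosted whenever the chain activates a hot button.
HOT_BUTTONS = {"nutrition", "survival"}

chain_strength = {}  # (step, next_step) -> link strength

def experience(chain, boost=0.5):
    """Store a concept/consequence chain; reinforce its links if a hot button fires."""
    hit = any(step in HOT_BUTTONS for step in chain)
    for link in zip(chain, chain[1:]):
        chain_strength[link] = chain_strength.get(link, 0.1)
        if hit:  # "secrete the appropriate neurotransmitter simulation"
            chain_strength[link] += boost

experience(["see berries", "eat berries", "nutrition"])  # triggers a hot button
experience(["see cloud", "it rains"])                    # no hot button, no boost
print(chain_strength)
```

After these two experiences, the berry chain’s links are strengthened while the weather chain stays at its baseline — a crude version of “a memory, complete with the base concept, along with the consequences and the accompanying hot button.”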

DABUS is a large array of such memories. “In training sessions, we essentially read it information, talk to it, show it pictures, show it all sorts of media,” Thaler said.

Could it all really be that simple? “What you wind up with is a thousand-ring circus,” he told his audience, “[with] many memories forming up, a latent idea that can be the product of nothing more than transient noise in the system. You have old ideas that have formed up the base idea, plus the consequence chain and a hot button. And then you have new ideas that are forming up complete with their consequence chains.”

DABUS, obviously, would want to distinguish new ideas from old ones. So as Thaler describes it, “Essentially the whole picture — the whole neural landscape — is passed through a big auto-associative net that is constantly training.” Its job? Screen out past chains that weren’t particularly novel, while emphasizing new ones — identifying those which appear as anomalies when compared to a long history. There’s a phase called “optical compression/integration of sub-models,” which basically involves photographing the various combined chains, all at once. The resulting images — in JPEG format — end up creating a handy “condensation of all this activity” which Thaler then passes along to other machines.
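The screening job described here — flag chains that look anomalous against a long history — can be approximated very simply. The sketch below (an illustrative stand-in, since a real system would use reconstruction error from a continuously trained auto-associative net) scores each new pattern by its distance to everything seen before:

```python
import numpy as np

# A long "history" of previously seen chains, encoded as binary vectors.
# Values are made up for illustration.
history = np.array([
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
], dtype=float)

def novelty(pattern, threshold=1.0):
    """Return (score, is_novel): distance to the nearest remembered chain."""
    score = float(np.min(np.linalg.norm(history - pattern, axis=1)))
    return score, score > threshold

print(novelty(np.array([1, 0, 0, 1.0])))  # seen before: score 0.0, not novel
print(novelty(np.array([0, 1, 1, 0.0])))  # unlike anything stored: flagged novel
```

Old ideas score near zero and are screened out; anomalous ones are emphasized, which is the behavior the auto-associative net is said to provide in DABUS.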

So where does this all lead? The combinations grow larger and larger. Some may be dead ends (or circular paths that just return to their base), some may be growing gradually like plants. Thaler says it looks like the way molecules form — even ending up with specialized subgroups. But in a way, it ends up defining things by the functions of each component, which Thaler argues is ultimately less abstract. It is, in essence, the formation of a thought. “Whole conceptual spaces are being compressed into associative memories… Concepts and strategies are represented as shapes formed by linked associative memories.”

And in a crucial imitation of human reinforcement, “if a hot button activates, then it will essentially secrete the appropriate neurotransmitter simulation.” Thaler argues you end up with “a crude simulation of consciousness — because the noise produced a turnover of different, pattern-based ideas.” Thaler says that’s identical to how human brains work. “We may have no inputs from the environment, but we relax and essentially we observe a stream of ideas flowing through our heads. And that, I claim, is because of ‘noise’ — I’ve got a lot of papers along those lines, but essentially the whole stream of consciousness has the same fractal distribution as a human stream of consciousness.”

Thaler sees patterns in the “consequence” chains activating hot buttons — which reinforces the chain (associating it with the hot button). And he believes they’re ultimately the equivalent of sharp instincts, or synthetic feelings — “the chain of memories, combined with just the diffusing disturbances throughout the brain… create a range of different feelings.”

Thaler’s DABUS system “evaluates its stream of consciousness via such sentience,” explains one slide, “ripening certain notions while culling others.”

“The result of all this shows that consciousness, sentience, and cognition are all combined together to do what they do,” Thaler tells his audience. “But essentially, it is that sentience that is driving the idea formation.

“And it’s all done with non-protoplasmic parts,” Thaler adds as an aside. “So it may turn out that sentience and consciousness are not limited to the protoplasmic stuff.”


Beyond Sentience

Thaler quickly ticked off some of his other projects, including a financial application that watches various stock markets and classifies trajectories, and a 2006 CD of music conceived by the AI (entitled “Song of the Neurons”). Thaler also showed off a fractal-shaped beverage container that the system dreamed up. (“It has about three times the area of an equivalent volume in a cylindrical soup can, for instance… It has faster heat transfer, and is more grab-able by human hands or robotic handlers.”)

[Screenshot from Stephen Thaler’s ACM talk: the fractal-shaped beverage container designed by the system]

In 2022, his company started working on medical applications, “but we’re also starting to get indications that investors are interested in the Artificial General Intelligence aspect of DABUS.”

When someone asked what would happen if the system was grafted to a robot, Thaler replied that “We’ve actually done that with the Air Force, with Air Force Research Laboratory, way back, probably 15 years ago. But for some reason, we couldn’t fund beyond phase 2.”

“But yeah, that is a tempting area to get into. Trouble is, I’m inundated. I drown in the technology. There are too many applications and too many papers to be written, too many patents to be patented.”

Here is the full video:
