
Can We Teach an AI to Play Dungeons and Dragons?

Mar 28th, 2021 6:00am
Feature image by James Pond on Unsplash.

There was a remarkable moment on a recent podcast by the team from the BBC’s Science Focus magazine, in which AI researcher Lara Martin shared a story generated by an AI.

“I think what’s good about this story is it kind of shows the strengths and the weaknesses of the system I built for my Ph.D.,” Martin explained to the podcasters. “What’s interesting about this story is that it’s starting to make causal and coherent sense — until it reaches a point where it just spits out garbage.”

It’s an interesting question to ponder: what if computers could tell stories? They could entertain us better — but also advise us better. Since storytelling is a fundamental part of how we communicate, it could wildly improve how we interface with devices, Martin explained to the podcasters. “All types of talking that we do is actually kind of telling a story.”

So Martin, a Computing Innovation Fellow (and a recent postdoctoral researcher) at the University of Pennsylvania, is “trying to teach AI to tell stories,” explained the podcast. And the issues can be summed up in a single question: could an AI play Dungeons and Dragons?

Unique Stories

AI’s recent attempts at storytelling have been intriguing but imperfect. Martin cites the story of Oscar Sharp, who with technologist collaborator Ross Goodwin trained a neural network on hundreds of science fiction scripts in 2017 — and then filmed the resulting screenplay. (“At the end there, in case you were curious, the script said ‘He picks up a light screen and fights the security force of the particles of a transmission on his face.'”)

The stories may be unique, “but the coherence is terrible,” Martin explained in a September 2020 talk as part of Georgia Tech’s NLP Seminar Series.

More recently, researchers in Prague used the open source GPT-2 model to generate a play, which was later performed there as well as in London, New York and Chicago.

But Martin’s talk also explained why storytelling is a hard problem. Among other things, actual live conversations, while collaborative, also have unexpected consequences baked into the process. And in addition, “We live in a dynamic and continuous world. So this is a messy world that can’t easily be turned into discrete states like computer scientists like.

“It’s also a distributed world, so people rely on their own experiences of the world, and we’re constantly creating our own idea of what the world is like. And we also have ideas of what we think other people think of the world — known as the theory of mind…”

“I know what you’re thinking. These are also things in Dungeons and Dragons.”

The popular tabletop game requires players to collaborate on a story — and even to plan and act together — while interacting with a designated “Dungeon Master” who relays the outcomes of their decisions and supplies the rest of the story. Dungeons and Dragons would be a great test, Martin told the Science Focus podcasters this month. “You’re not just telling a story… you’re telling a story with like three or four other people. And if you’re the Dungeon Master, it might even be a harder task… because you’re creating a whole world and you have to relay this information on to other people to allow them to create a theory of this fictional world that you’re trying to share. And so there’s a lot of theory of mind going on.”

Slide from Lara Martin’s September 2020 Georgia Tech NLP Seminar Series talk: the “Christopher and the King” story.

To explore this realm, Martin first retrained a neural network on science fiction stories. “I scraped from fan wikis because science fiction TV show nerds are very thorough with their plot summaries.”

The problem? Most stories try to resemble the actual world we live in — and thus start with some basic assumptions. But unfortunately, a neural network knows nothing about the real world. For instance, it has to learn which actions naturally precede other actions, knowledge Martin has tried adding manually. “I kind of inject that into these systems to guide the generation and have it make more sense.”
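
As a rough illustration of what “injecting” that kind of knowledge could mean, here is a hypothetical sketch (not Martin’s actual rules or code) in which candidate next events are filtered against hand-written preconditions:

```python
# A hypothetical sketch of manually injected "what comes before what"
# knowledge: a generated event is kept only if the story state already
# satisfies its preconditions. The rules below are invented examples.
PRECONDITIONS = {
    "unlock": {"has_key"},
    "escape": {"unlock"},  # you can't escape before something is unlocked
}

def plausible(verb, story_state):
    """Keep a generated event only if its preconditions already happened."""
    return PRECONDITIONS.get(verb, set()) <= story_state

state = {"has_key"}
print(plausible("unlock", state))  # True
print(plausible("escape", state))  # False: nothing has been unlocked yet
```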

One of her early systems really surprised her, Martin tells the Science Focus podcast. “I had a story about, like, a horse becoming a lawn chair entrepreneur, and I just think that that concept is really, really interesting and I think about that every now and then.” While she’d specified that the output needed an “agent” of some kind for its narrative, “it just happened to come up with this interesting concept” — just by filling in random things.

In her 40-minute talk, Martin describes the technical details of her earlier research — breaking up the moments of a story into smaller linked chunks, and then performing “knowledge engineering” — which still leaves you constrained by a highly specific system. The end result? “It’s certainly still interesting, but even these large models are still losing coherence.” Martin then described further experiments in creating event representations — subject, verb, direct object, modifier — to help bootstrap stories. (This also involved creating a variable for story characters to be used throughout the story.)
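
As a concrete illustration, here is a minimal sketch (hypothetical code, not Martin’s actual system) of that four-part event representation, including the reusable character variables she describes:

```python
# A minimal sketch of the <subject, verb, direct object, modifier> event
# representation described above. Character names are swapped for numbered
# variables so the same entity can be tracked across a chain of events.
from dataclasses import dataclass

@dataclass
class Event:
    subject: str        # who acts, e.g. "<PERSON>0"
    verb: str           # what they do
    direct_object: str  # what is acted on
    modifier: str       # extra detail, such as a location or instrument

def eventify(parts, characters):
    """Swap known character names for numbered variables (hypothetical helper)."""
    variables = {name: f"<PERSON>{i}" for i, name in enumerate(characters)}
    return Event(*(variables.get(p, p) for p in parts))

# "Christopher fights the king in the castle" reduces to an abstract event:
print(eventify(["Christopher", "fight", "king", "castle"], ["Christopher"]))
# Event(subject='<PERSON>0', verb='fight', direct_object='king', modifier='castle')
```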

Multiple systems were combined — words were narrowed down using the University of Colorado’s VerbNet, which its creators describe as “the largest online network of English verbs that links their syntactic and semantic patterns.”
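
For readers who want to poke at the resource themselves, NLTK bundles a VerbNet reader; a small, purely illustrative lookup (not Martin’s pipeline) might look like this:

```python
# Illustrative only: map a surface verb to its VerbNet class(es). Many
# concrete verbs share a class, shrinking the space a generator must cover.
import nltk

nltk.download("verbnet", quiet=True)  # fetch the corpus on first use
from nltk.corpus import verbnet

classes = verbnet.classids("escape")
print(classes)                          # e.g. ['escape-51.1', ...]
print(verbnet.lemmas(classes[0])[:5])   # sibling verbs in the same class
```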

In a complex process she describes as separating semantics from syntax, Martin ultimately created “a hybrid system for reasoning and to maintain state” to “create generative systems that are consistent within themselves.” After all the “event-to-event loops” and policy gradient deep reinforcement learning, there’s still the question of evaluating the output, and measuring “perplexity” — how surprised the model is by the new data it receives.
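
Perplexity has a simple closed form: the exponentiated average negative log-probability the model assigns to each token of held-out text. A minimal sketch:

```python
# Perplexity over a sequence of model-assigned token probabilities.
# Lower perplexity means the model is less "surprised" by the text.
import math

def perplexity(token_probs):
    """token_probs: probability the model gave each token in a held-out story."""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# A confident model (high probabilities) yields low perplexity:
print(perplexity([0.9, 0.8, 0.95]))  # ~1.13
print(perplexity([0.1, 0.2, 0.05]))  # 10.0
```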

But in the end, it’s still not ready to play Dungeons and Dragons. Reincorporating things introduced from outside its system remains on Martin’s list of “future work.”

Slide from Lara Martin’s September 2020 Georgia Tech NLP Seminar Series talk: Dungeons and Dragons and natural language use.

Even then, there’s always the perennial question of whether an AI can be truly creative (instead of just spitting back variations on the input it’s received). Even with this earlier prototype, Martin told the podcasters that more often the output was just plain weird — things like “a really specific type of flower or something… because it would just pull this from the database it had.” Maybe the pattern — or rather that random lack of a pattern — became more evident as the output grew. “It’s not as surprising when you can’t understand it.”

But this brings Martin to an important point. “I think that it’s really important for people to realize that computers are not as smart as they think they are… They’re not people, they don’t have agency, they’re just tools that other people have used to work on these things.” So the best use of so-called “creative” AI might be “as a tool to augment human creativity…” Martin tells the podcasters. “Computers are really good at looking through large, large spaces of data so they can come up with things that you’ve never seen before and never thought of connected with this.” Then the humans, in turn, “are really good at making those connections, connecting ideas that the computer might present to them.”

So even when the computer cleverly suggests a horse as a lawn-chair entrepreneur, “the computer knows nothing about what that means, [it] is just spitting out stuff. But having a human take that and run with it… That would be fantastic… Humans have that ability to connect these things, and I think that’s a really good symbiotic relationship that needs to be used more.”

Her research seems to have left her keenly aware of the limits of our current technology. Asked if we’ll ever see an actual AI Dungeon Master in our lifetimes, Martin thinks about the need for a high-quality performance, and then concludes, “To see a good Dungeon Master AI in our lifetime, I think, um — I don’t know, I’m a little skeptical that’ll happen.”

