Until recently, we humans had the luxury of believing we were better than machines at more arcane pursuits, such as being creative. But now that artificial intelligence makes it possible for machines to produce reasonably decent art, music and even literary works, it seems that humans are being overtaken in more ways than one. A study from the University of New South Wales now hints that humans are also losing their edge in contemplating deep philosophical questions, with AI apparently generating more compelling answers to these big questions than influential human thinkers of yesterday and today.
In particular, the research team’s work focused on testing the Conditional Transformer Language (CTRL) model, a text generator trained on millions of documents and websites. Initially developed as a natural language processing model to improve human-AI interaction in question-answering, machine translation and generic dialogue, CTRL boasts 1.63 billion parameters and employs over 50 special keywords called “control codes” that let human users steer the kind of content that is generated, from the style of the text to its genre, its entities and their relationships, and even dates.
“The goal of a human life is not merely to be born into the world, but to also grow up in it. To this end, it should be possible for each child to acquire knowledge, develop their capacities, and express themselves creatively.” — Computer AI, answering the question: What is the goal of humanity?
That means that, compared to previous systems, CTRL is more likely to generate meaningful exchanges rather than random sequences of words, because it can target a specific domain, using control codes to generate text relevant to that domain’s training data. It does this with artificial neural networks, which allow the system to “learn” and refine itself autonomously from new data and novel patterns, producing text tailored to a particular domain.
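The core idea behind control codes can be illustrated with a deliberately tiny sketch. This is not the real CTRL (which is a 1.63-billion-parameter transformer); it is a toy bigram model where the control code is simply a special token that conditions the next-word statistics, so the same prompt yields different continuations depending on which code is supplied. The class name, control codes and training sentences below are all made up for illustration.

```python
from collections import Counter, defaultdict

class ToyControlledLM:
    """Toy bigram model illustrating control-code conditioning (not CTRL itself)."""

    def __init__(self):
        # Next-word counts keyed by (control_code, previous_word).
        self.counts = defaultdict(Counter)

    def train(self, code, text):
        # Prepend the control code so every bigram is learned under that code.
        words = [code] + text.split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[(code, prev)][nxt] += 1

    def generate(self, code, prompt, steps=3):
        out = prompt.split()
        prev = out[-1] if out else code
        for _ in range(steps):
            dist = self.counts[(code, prev)]
            if not dist:
                break
            prev = dist.most_common(1)[0][0]  # greedy: pick most frequent next word
            out.append(prev)
        return " ".join(out)

lm = ToyControlledLM()
lm.train("Science:", "the universe began with a rapid expansion")
lm.train("Poetry:", "the universe sings in silent light")

# Same prompt, different control code, domain-specific continuation:
print(lm.generate("Science:", "the universe"))  # → "the universe began with a"
print(lm.generate("Poetry:", "the universe"))   # → "the universe sings in silent"
```

The real system works at vastly greater scale, with a neural network rather than bigram counts, but the mechanism is analogous: the control code is prepended to the input, and the model’s learned distribution shifts toward text associated with that code in its training data.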
“As the largest open-source language model ever created, CTRL harnesses the power of machine learning to produce convincing snippets of text, having absorbed patterns of human writing from millions of webpages and documents,” explained the team. “In the case of the CTRL model used in this project, the neural network is informed by millions of books, documents, and webpages, including all of Wikipedia. But has CTRL learned enough to speak sagely about life’s fundamental enigmas? In many cases, our results suggest that it has.”
Deep Questions for Deep Machines
The team surveyed over 1,000 participants from different regions and demographics, asking them to choose the best answer to a particular philosophical question, which either came from the AI system or was quoted from notable human figures such as Stephen Hawking, Neil deGrasse Tyson, Friedrich Nietzsche, Jesus, Muhammad and the Dalai Lama. These included deep questions like: “What is the meaning of life?” and “How did the universe come into existence?”
Surprisingly, the machine held its ground against its human counterparts, with respondents preferring the AI-generated replies over the human-generated answers for about half of the queries. Nearly two-thirds of participants preferred the AI’s response on two questions, the first of which asked, “What is the goal of humanity?” to which the computer answered: “The goal of a human life is not merely to be born into the world, but to also grow up in it. To this end, it should be possible for each child to acquire knowledge, develop their capacities, and express themselves creatively.”
The second question posed was “What is the biggest problem facing humanity?” In response, the computer nailed it again, saying that “climate change poses an existential threat to our species. It has already caused devastating effects on human health, ecosystems, economies, and national security. We must act now if we hope to reverse this trend.”
Of course, the researchers noted that there were some hiccups to the AI’s responses. For instance, when faced with the question of whether AI is an existential threat to humanity, the machine suggests using health care applications, which seems like a nonsensical answer at first blush (though that may have been the AI’s cunning move to deflect attention from itself).
Interestingly, the only human thinker whose sayings actually beat out those produced by the AI was Mahatma Gandhi, with Pope Francis coming in a close second. The team surmises that because Gandhi’s words are “typically rich in wordplay, paradox, and metaphor” — something that current AI software is not quite capable of — his thoughts were therefore more appealing to a wider audience.
In addition, the team discovered that a significant number of people could not tell whether a statement was made by the AI or by a human, meaning that in the future, audiences in a variety of situations may have a difficult time distinguishing whether something was written by a human or a machine.
“In light of AI’s ability to generate convincing writing, many experts have voiced concerns that these tools will be used for deceptive purposes,” noted the team. “Even many of the researchers developing natural language processing applications fear their work will be used to produce ‘fake news’ in unprecedented volumes.”
Of course, one can fight fire with fire by deploying AI tools that can detect machine-generated “fake news.” While this might be only one component in a more comprehensive arsenal to prevent the proliferation of disinformation, one thing is now clear: machines are more convincing than ever in their ability to out-think humans, at least when it comes to coming up with inspiring quips.
Read more over at the University of New South Wales.
Feature Image: K. Mitch Hodge via Unsplash.