The Year in AI: What’s Behind Us in 2020, and What’s Ahead
It’s been an incredible — and controversial — year for machine learning and artificial intelligence in general. For the past several years we’ve witnessed the gradual infiltration of AI into our everyday lives, but 2020, in particular, has been an exceptional year for AI in analyzing language, precision medicine, and in more nefarious applications like the automated mass surveillance of ordinary people.
With AI poised to make additional inroads into all areas of our daily lives, it’s important to take a look back at the immense gains and discoveries of the year, while also keeping one eye looking ahead toward the new possibilities that might await us in the future.
AI Supercharges Natural Language Processing
Some of the biggest breakthroughs of the year centered around the development of new models in natural language processing (NLP), a subfield of AI that gives machines the ability to read, understand and extract meaning from human languages, whether it’s in the form of audible speech or written texts. Possible applications for NLP can range widely, from language translation applications (such as Google Translate), to chatbots and personal virtual assistants like Siri, Alexa or Cortana.
In particular, the launch of OpenAI’s GPT-3 was a huge leap forward for NLP, thanks to its enormous size of 175 billion parameters, which eclipses previous state-of-the-art models by at least tenfold. Unlike earlier NLP models, which required significant amounts of manual fine-tuning, GPTs (generative pre-trained transformers) rely on “transformer” deep learning neural networks, which are capable of learning and understanding contextual relationships between words in a text. What’s significant about GPT-3 is that it is able to learn from only a few examples before it’s capable of producing output on its own — ranging from long passages of machine-generated text, summaries, and even short stories that would be indistinguishable from those written by a human, to unscrambling words, solving simple math problems, and even writing code.
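The “learn from only a few examples” behavior is often called few-shot prompting: worked examples are packed into the prompt itself, and the model is asked to complete the next case. A minimal sketch of how such a prompt is assembled, using made-up examples (the word-unscrambling task mentioned above) — sending the prompt to an actual model is left out, since that depends on a specific API:

```python
def build_few_shot_prompt(task_description, examples, query):
    """Format a few-shot prompt from (input, output) example pairs.

    The model never sees gradient updates here; the examples in the
    prompt text alone steer its completion.
    """
    lines = [task_description, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # End with an unanswered case for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [("tac", "cat"), ("godo", "good")]
prompt = build_few_shot_prompt("Unscramble each word.", examples, "plepa")
print(prompt)
```

The same pattern covers the other tasks mentioned above (summaries, arithmetic, code): only the task description and examples change, not the model.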
As NLP models continue to evolve, we will see more intriguing use cases, from machines that can intelligently answer deep philosophical questions, to so-called “moral choice machines” that can automatically learn the difference between right and wrong, to the enhancement of useful tools like automated fact-checkers to combat the proliferation of online disinformation.
Of course, there are still a lot of flaws to be worked out. But with few exceptions, as the size and performance of these NLP models grow, more computational power and time — and therefore larger budgets — will be required to train them, meaning that bigger tech companies will likely have the advantage over smaller outfits looking to make advances in this field.
AI Accelerating Scientific Research
With an expanding number of research publications incorporating machine learning methods, AI is also further propelling research in chemistry, medicine and biology this year. Notably, machine learning is being used to automate a number of tasks that might have been tedious or even impossible for human researchers, such as predicting quantum mechanical wavefunctions, or analyzing, deconstructing and classifying the multifaceted behaviors of animals — some of which might be too subtle for the human eye to perceive.
In addition, AI is being used to help correct the decades-old problem of sex bias in clinical research trials, which test the safety of new drugs. Data in such trials is typically heavily skewed, due to the larger numbers of male participants. Since men’s and women’s bodies have different physiologies, such discrepancies can mean that either sex can experience adverse reactions with drugs or dosages that might be considered safe for the other. Besides correcting such data imbalances, machine learning is also being used to power “prescriptive analytics,” where patient data can be analyzed to establish preventative measures that can ultimately help reduce hospital readmission rates.
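One simple way to correct the kind of data imbalance described above is to oversample the underrepresented group until the classes are the same size. This is an illustrative sketch of that general strategy, not the specific method used in any trial — the field names are hypothetical:

```python
import random

def oversample_balance(records, key, seed=0):
    """Balance a dataset by randomly oversampling underrepresented groups.

    records: list of dicts; key: the field to balance on (e.g. "sex").
    Each smaller group is padded with random resamples of its own
    members until it matches the largest group.
    """
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# A skewed toy trial: 70 male participants, 30 female.
trial = [{"sex": "M"} for _ in range(70)] + [{"sex": "F"} for _ in range(30)]
balanced = oversample_balance(trial, "sex")
counts = {}
for r in balanced:
    counts[r["sex"]] = counts.get(r["sex"], 0) + 1
print(counts)  # both groups now have 70 records
```

Real rebalancing pipelines tend to use more careful techniques (reweighting, stratified sampling, synthetic examples), but the goal is the same: stop the majority group from dominating what the model learns.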
AI Expanding Mass Surveillance
One of the more questionable advances this year is the increasing use of AI in mass surveillance. In particular, the sale of facial recognition technology by companies like Clearview and Amazon to law enforcement agencies around the world raises disturbing concerns about privacy and the possible erosion of civil liberties. What’s even worse is that the biometric data that powers these tools — as was the case with Clearview — is sometimes scraped wholesale off social media sites, without the knowledge or consent of users.
Such powerful technologies can help to automatically identify and track individuals, which can mean solving more crimes. However, the industry is largely unregulated, meaning that facial recognition technology can also be misused, making it easier for police to identify and monitor law-abiding citizens exercising their right to freely protest and assemble, or to detain innocent people on the basis of an incorrect facial recognition match.
Given the public outcry over the potential for misuse of the large-scale expansion of mass surveillance, it was heartening to see some big tech companies declare a moratorium on the development and sale of facial recognition tech to law enforcement — but it may be too little, too late, unless privacy regulations are strengthened on the federal level.
The Rise of MLOps
The rapid development of AI models of all kinds has translated into an increasing need to ensure that they are also ready to be scaled up for production. The emergence of machine learning operations — also known as MLOps or AIOps — is centered around a series of best practices on how lab-trained machine learning models can then be efficiently operationalized and managed in the real world. Such transitions can often be inefficient and disorganized, as data scientists, data engineers, IT professionals, and ML engineering teams often work in their own silos — sometimes even within the same company — resulting in complex challenges to creating, managing and deploying ML models.
Similar to the DevOps approach from which it takes its inspiration, MLOps aims to automate and integrate the processes of development, integration, testing, and deployment into a single, efficient pipeline. To respond to this growing need, tools like feature stores for machine learning have recently cropped up, providing a central interface where different teams can create, publish, store and consume new features (i.e. an individual, measurable property or characteristic of whatever is being observed). The growing list of MLOps tools will help streamline the lifecycle of machine learning model development and usage, especially as the field matures and the number of ML models being pushed into production grows.
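To make the feature store idea concrete, here is a deliberately tiny in-memory sketch of the publish-and-consume interface described above. It is hypothetical and for illustration only — production systems (such as Tecton, mentioned in the image credits) add offline/online storage, versioning, and point-in-time-correct joins:

```python
from datetime import datetime

class FeatureStore:
    """A toy in-memory feature store.

    One team publishes computed features for an entity; another team
    fetches the latest values by name, without recomputing them.
    """

    def __init__(self):
        # (entity_id, feature_name) -> (value, timestamp)
        self._features = {}

    def publish(self, entity_id, name, value):
        """Store the latest value of a feature for an entity."""
        self._features[(entity_id, name)] = (value, datetime.now())

    def get(self, entity_id, names):
        """Fetch the current values of the named features for one entity."""
        return {n: self._features[(entity_id, n)][0] for n in names}

store = FeatureStore()
# A data engineering team publishes features...
store.publish("user_42", "avg_session_minutes", 12.5)
store.publish("user_42", "purchases_30d", 3)
# ...and an ML team consumes them at training or serving time.
print(store.get("user_42", ["avg_session_minutes", "purchases_30d"]))
```

The value of the shared interface is exactly the silo-breaking described above: the team that computes a feature and the team that trains on it agree on one named, reusable definition.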
Emerging AI Trends in 2021
Beyond these notable trends in 2020, the year ahead will likely see larger and larger language models being built, with experts declaring the possibility of a 10-trillion parameter model making its first appearance in 2021. AI will continue to accelerate new discoveries in biology and medicine, whether by tackling previously intractable problems like how proteins fold, or by developing customized pharmaceuticals for personalized medical treatments. Given the existing problems of algorithmic bias and a multitude of privacy concerns, ethics and data privacy in AI will continue to be major issues, meaning that interest in potential solutions like de-biasing algorithms, anonymizing algorithms, and federated learning will only continue to intensify.
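Of the privacy-preserving approaches just mentioned, federated learning is the most mechanical to illustrate: models are trained locally where the data lives, and only the resulting weights (never the raw data) are sent back and combined. A minimal sketch of one aggregation round in the federated averaging style, with short lists of numbers standing in for real model parameters:

```python
def federated_average(client_weights, client_sizes):
    """Combine locally trained model weights into one global model.

    Each client trains on its own private data and ships back only its
    weights; the server averages them, weighted by how much data each
    client holds. Raw data never leaves the client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with different amounts of local data:
global_model = federated_average(
    [[1.0, 2.0], [3.0, 4.0]],  # weights trained locally on each client
    [10, 30],                  # local sample counts
)
print(global_model)  # [2.5, 3.5] — the larger client pulls the average
```

In a full system this round repeats: the averaged model is pushed back out, clients train further on local data, and the cycle continues — which is why it pairs naturally with the data-privacy concerns raised above.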
Images: Markus Winkler, Bill Oxford and Nick Loggie via Unsplash; Tecton