The Move to Unsupervised Learning: Where We Are Today

Mar 3rd, 2023 10:00am

Deep learning is a subfield of machine learning that focuses on a class of algorithms inspired by the structure and function of the human brain. These algorithms use Artificial Neural Networks (ANNs) to learn from large amounts of data and represent the world as a layered hierarchy of concepts.

For instance, in image recognition, light and dark regions are detected first, then edges, then forms and shapes. ANNs existed five decades ago, but back then they were just two layers deep, because that was all the processing power of the day could handle. Now we can go much deeper by adding more and more layers to the ANN, and we can better observe, understand and react to complex events. Hence the “deep” in the name.
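
To make that layering concrete, here is a minimal sketch in PyTorch (one reasonable framework choice; the article names none) of a small image classifier built by stacking layers, where earlier layers can pick up low-level features and later layers more abstract ones:

```python
import torch
import torch.nn as nn

# A minimal sketch: a small classifier whose stacked layers mirror the
# "layered hierarchy of concepts" described above. Layer sizes are illustrative.
model = nn.Sequential(
    nn.Flatten(),                     # 28x28 grayscale image -> 784 inputs
    nn.Linear(784, 256), nn.ReLU(),   # early layers: low-level features (edges)
    nn.Linear(256, 128), nn.ReLU(),   # middle layers: forms and shapes
    nn.Linear(128, 10),               # final layer: scores for 10 classes
)

logits = model(torch.randn(1, 1, 28, 28))  # one random stand-in "image"
print(logits.shape)  # torch.Size([1, 10])
```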

Common Deep Learning Applications

The biggest strength of deep learning lies in its ability to learn complex patterns from huge volumes of data. Multiple layers of processing elements, the capacity to exploit large amounts of compute and improved training procedures collectively empower deep learning algorithms in this regard. Currently, some of the most common applications for deep learning are in image and speech recognition. Among the most promising are self-driving cars, virtual assistants, drug discovery, personalized recommendations in online retail and image processing.

Technologies that can mimic and improve on human behavior have been the subject of books and movies for decades. For most companies, realizing such solutions has been a long-standing pursuit, but advances in deep learning are enabling many businesses to start to realize these aspirations.

Deep learning is marking its presence in a variety of ways, from humble day-to-day tasks such as cataloging one’s pictures to “moon-shot” aspirations such as safe self-driving cars and automated high-precision surgeries. Here are just a few examples of business sectors that can take advantage of the innovations in deep learning:

  • Healthcare: The healthcare industry holds enormous potential for deep learning. Advances are being made in areas ranging from drug discovery through to disease prediction and medical diagnosis.
  • Cybersecurity: Historically, cybersecurity focused on detecting attacks that occurred in the past. Advances in technology have enabled live monitoring and attack detection. Building on that foundation, deep learning can leverage data and add context and intelligence to both detect attacks that have never been seen before and even predict when and where the next attack is likely to happen.
  • Manufacturing: Manufacturers use machine learning to streamline every phase of production, from procurement to manufacturing scheduling to order fulfillment. Deep learning applications will further empower manufacturers with abilities such as predictive maintenance of equipment, “experience centers” to simulate the impact of design changes, and virtual environments that enable remote troubleshooting.
  • Automotive: The automobile industry is seeing a shift from traditional automobile manufacturing to advanced technology applications, from core design to infotainment systems. Deep learning is further pushing the boundaries with driving assistance, mishap prevention and self-driving cars and trucks.
  • Retail: Retail industries have access to huge pools of data relating to customer behavior and preferences. Deep learning offers immense possibilities to provide personalized experiences, understand demand and make both products and services stand out.

Despite all these promising opportunities, the adoption of deep learning in the industry still faces several challenges, the main ones being a lack of “explainability” (more on that below) and the need for labeled training data.

Increasing Deep-Learning Adoption by Making It Explainable

As companies look to embed greater levels of deep learning into their data management systems, it is important to make deep-learning solutions “explainable,” meaning the solution should be able to explain to business users why it made a given prediction. This explanation needs to be communicated in an easy-to-understand and transparent manner to gain the comfort and confidence of users.

Explainability not only builds trust in the teams using these solutions in production, but it also leads to the adoption of a more responsible approach to development. It helps developers ensure that the system is working as expected, confirm existing knowledge and challenge it when necessary.

Deep learning algorithms often offer higher operational accuracy due to the ability to create complex models, address high-dimension space and better capture feature interactions. However, such solutions come with the disadvantage of a lack of explainability, as the increasing complexity of these algorithms can make it difficult to infer how an algorithm reached a certain outcome.

In fact, these solutions often become so complex that even the data scientists who created them cannot trace how the algorithm arrived at a specific result. This lack of explainability can lead to issues such as spurious correlations, unexpected behaviors and potential biases or unfairness, among others.

The wider range of AI solutions can be broadly classified into “white-box” and “black-box” models.

White-box solutions are transparent as to how they reach a certain conclusion, with users able to view and understand which factors influenced an algorithm’s decisions and how the algorithm behaves. Decision trees and linear regression are some examples of white-box algorithms. Such algorithms are often not able to derive complex relationships or deal with high-dimension space but provide high degrees of transparency in their functioning.
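
As a quick illustration of what this transparency looks like in practice, here is a minimal sketch using scikit-learn (an assumed library choice; the article names none) that trains a shallow decision tree and prints the exact rules it learned:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A white-box model: every prediction can be traced through an explicit
# chain of if/else rules over the input features.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)

# Print the learned rules in human-readable form.
print(export_text(tree, feature_names=iris.feature_names))
```

Each path from root to leaf is a rule a domain expert can inspect directly, which is precisely the property black-box models give up.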

Black-box algorithms, on the other hand, are far less transparent in letting users know how a certain outcome is reached. Deep neural networks are an example of black-box algorithms, as are boosting algorithms, which combine many simpler learning algorithms to iteratively improve accuracy. Black-box solutions often offer higher accuracy due to their ability to better capture complex feature interactions in a high-dimension space, but this comes at the cost of explainability.

Consider, for example, the problem of predicting customer churn for a telecom company. Creating a model to perform this prediction entails considering many features, such as customer age, gender, geography, usage patterns, plans used and many more. Black-box algorithms perform this prediction without revealing the details of how they reached a certain conclusion.

Such algorithms carry the risk of not leveraging the help of domain experts to prevent incorrect inferences, while white-box algorithms will provide the specific set of rules or conditions used to infer whether a particular customer will stay or leave.

Both white-box and black-box solutions have their place in real-world scenarios, but there’s an opportunity to balance and blend the two. It’s here that post-hoc explainability methods offer a promising middle path.

These methods analyze the responses of a machine learning model to interpret the reasoning logic behind the model. An example of one such method is Local Interpretable Model-Agnostic Explanations (LIME), which analyzes the inputs and outputs of a black-box model and uses this data to construct simpler local models that explain why an individual prediction was made.
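
A minimal sketch of how LIME could be applied to the churn example, assuming the lime package plus a hypothetical fitted classifier `model` and feature arrays `X_train`/`X_test` (these names are illustrative, not from the article):

```python
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical setup: X_train/X_test are NumPy arrays of churn features and
# `model` is any fitted classifier exposing predict_proba; all three are
# assumed to exist already.
feature_names = ["age", "monthly_bill", "data_usage_gb", "tenure_months"]

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["stays", "churns"],
    mode="classification",
)

# Fit a simple local surrogate model around one customer's prediction.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(exp.as_list())  # e.g. [("monthly_bill > 80.00", 0.31), ...]
```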

Another promising post-hoc explainability technique is SHapley Additive exPlanations (SHAP), which analyzes how much each feature contributes to the derivation of the predicted value, thus helping explain the output of a black-box algorithm. Continuing the customer churn example, deploying SHAP would help infer which attributes (for example, customer usage pattern, monthly bill or competition) played a significant role in predicting a customer’s churn.
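
Continuing the same hypothetical churn setup, here is a sketch of SHAP applied to a fitted tree-based classifier (again assuming `model`, `X_test` and `feature_names` already exist; none of these names come from the article):

```python
import shap

# TreeExplainer computes, for each prediction, how much each feature
# pushed the output toward or away from churn.
explainer = shap.TreeExplainer(model)       # `model` assumed tree-based
shap_values = explainer.shap_values(X_test)

# Aggregate view: which features matter most, and in which direction.
shap.summary_plot(shap_values, X_test, feature_names=feature_names)
```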

It is important to find the right balance in the trade-off between explainability and accuracy. Users need to understand how much accuracy improvement a black-box solution delivers over a white-box solution, then decide which option is the best fit for their use case.

A common industry practice is to start with a white-box solution wherever possible and progress to a black-box solution only when there is a compelling case for prioritizing accuracy over explainability. When using black-box solutions, applying explainability techniques such as those outlined above can bring greater transparency to the modeling, building trust in the process with stakeholders.

Making Deep Learning Sustainable Through Self-Supervised Learning

In addition to the need for explainability, another significant challenge to the widespread adoption of deep learning is its reliance on labeled data, that is, raw data such as text files and images annotated with labels that identify them and provide context that machine learning models can recognize and learn from. Supervised learning has made significant and impressive advances in recent years, demonstrating the ability to learn from massive amounts of labeled data.

There is, however, a limit to how much AI can advance using supervised learning alone. In many real-world scenarios, obtaining large amounts of labeled data is a challenge, either due to a lack of resources or the inherent nature of the problem itself. Ensuring class balance in the labeled data presents another challenge: it’s often the case that some classes make up a large proportion of the data, while others are not adequately represented.

Furthermore, ensuring the trustworthiness of labeled data can present yet another challenge. If we analyze how we, as humans, learn, it’s not entirely based on past training data. Over the years, we develop generalized models of various concepts in the world, as if by osmosis from experience, forming what we know as common sense. This common sense helps us learn new concepts and put them in context without massive amounts of training data.

For years, developing something akin to this common sense has eluded AI systems. That said, there are promising developments in this area in the form of self-supervised learning (SSL), which attempts to develop such background knowledge.

Self-supervised learning is a paradigm where the deep learning algorithm is fed unlabeled data as input, and automatically generates data labels, which are then used in subsequent iterations.

Unsupervised learning uses unlabeled datasets and works toward clustering and grouping, whereas supervised learning uses explicitly labeled datasets and works toward more conclusive tasks such as classification and regression. Self-supervised learning takes unlabeled data as input but internally generates labels and uses them in subsequent iterations to learn, working toward tasks such as classification and regression. Thus, an unsupervised problem is transformed into a supervised one by autogenerating labels.
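
As a toy sketch of this label autogeneration, consider masked-word prediction as the pretext task (the details below are illustrative, not prescribed by the article): each raw sentence yields its own training label, with no human annotation involved.

```python
import random

# From each unlabeled sentence, hide one word and use it as the label.
# The model's job is to predict the hidden word back; real systems do
# this at scale with tokenizers, but the principle is the same.
def make_masked_examples(sentences):
    examples = []
    for sentence in sentences:
        words = sentence.split()
        i = random.randrange(len(words))
        label = words[i]          # autogenerated label
        words[i] = "[MASK]"       # input with the label hidden
        examples.append((" ".join(words), label))
    return examples

corpus = ["the cat sat on the mat", "deep learning needs lots of data"]
for masked, label in make_masked_examples(corpus):
    print(masked, "->", label)
```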

Self-supervised solutions are based on the concept of energy-based models (EBMs). An EBM is a trainable system where the compatibility of two inputs, let’s call them “x” and “y,” is measured in the form of energy. If the energy is low, then “x” and “y” are considered compatible. Consider the common natural language processing (NLP) problem of completing an incomplete sentence. To solve this problem, the SSL engine would be fed a large volume of unlabeled text datasets.

The SSL engine will use the structure of the data itself to train, without relying on any external labels. The engine can then compute the energy between words to predict the most suitable candidates for missing or hidden words in a sentence. The SSL engine computes a prediction score for each possible output, giving a low score when it is uncertain of an outcome. Similar concepts can be applied to audio and video, where the SSL engine can be trained on large volumes of audio or video datasets to predict hidden parts of the input from the audible or visible parts.
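
For a concrete taste of this in practice, here is a sketch using Hugging Face’s fill-mask pipeline (an assumed tool choice; the article names no specific library). BERT was pretrained self-supervised on precisely this masked-word objective:

```python
from transformers import pipeline

# Downloads a pretrained model on first run; internet access assumed.
fill = pipeline("fill-mask", model="bert-base-uncased")

# The model scores candidate words for the hidden position.
for candidate in fill("Deep learning models need large amounts of [MASK]."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```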

Self-supervised learning has found significant success in the fields of NLP and computer vision, where it is used to identify hidden parts of an input from the unhidden parts, for example, predicting the missing words in a sentence, predicting the past or future frames of a video, or predicting the missing portions of an image. Some common applications are in medical image analysis, signature detection, image colorization and video motion prediction.

Conclusion

Deep learning technology is developing quickly, offering industries a better understanding of their world today and opening new, exciting technological possibilities that are influencing future direction. Advances in explainability and self-supervised learning provide greater flexibility and choice for intelligent learning tools to be applied based on specific use cases. These technologies hold significant promise to bring deep learning-based solutions closer to human-like autonomous and contextualized learning, freeing up human capital to focus on developments in other operational areas.
