
Deep Learning Broadens the Reach of Artificial Intelligence

Apr 17th, 2019 12:15pm by Tareq Aljabar

Deep learning has become almost synonymous with machine learning and that is no surprise given its effectiveness and adaptability. Deep learning models are being used to detect cancer, make autonomous driving possible, translate texts, predict maintenance needs, assist with drug discovery and so much more.

Learning without Feature Engineering

Tareq Aljabar
Tareq Aljabar is the head of marketing and growth hacking at MissingLink.ai. He formerly worked at Macromedia, Adobe, Atlassian, and Microsoft. He also co-founded @iGitSocial and @Klippings.

Learning begins with understanding the features of a problem or object. Features can be as simple as the number of words in a text or the attributes in a database table describing a customer. Before deep learning, machine learning (ML) engineers had to identify features that represented the objects to be learned about in such a way that those objects could be classified. Deep learning enables machine learning without feature engineering, along with the ability to learn increasingly complex structures.

A classic training problem in machine learning is to identify different species of irises using only sepal length and width along with petal length and width. For a problem like this, a small set of simple, hand-picked features is sufficient to distinguish between the three species.
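To make that concrete, a minimal sketch of the iris exercise might look like the following; the use of scikit-learn's bundled copy of the dataset and an ordinary (non-deep-learning) classifier are illustrative assumptions, not part of the original exercise.

```python
# Minimal sketch: four hand-chosen measurements are enough for a simple,
# non-deep-learning classifier to separate the three iris species.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 150 samples x 4 features: sepal/petal length and width
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # typically well above 0.9
```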

While the iris exercise uses a simple set of learning criteria, often the most obvious features are not sufficient for the accuracy and precision needed in real-world AI applications. That’s where deep learning comes into play.

Deep learning does not require ML engineers to define a fixed set of features. Instead, it starts with a representation of an object, scene, or text. In image classification, for instance, the deep learning algorithm takes a list of pixels as input. If an application is trying to identify handwritten digits in an image, the input would be the pixels of the image and the output would be a digit from 0 to 9.
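As a rough illustration of that pixels-in, digit-out setup, here is a small sketch; the use of scikit-learn's bundled digits dataset and a generic multilayer network are assumptions made only for the example.

```python
# Sketch: handwritten-digit classification takes raw pixel values as input
# and produces a digit (0-9) as output, with no hand-engineered features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                              # small 8x8 grayscale images of digits
X = digits.images.reshape(len(digits.images), -1)   # flatten each image into a 64-pixel list
y = digits.target                                   # the digit 0-9 shown in each image

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```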

Deep learning algorithms use neural networks, which are sets of interconnected nodes. Each node takes in one or more inputs and produces a single output, which can be used as input to other nodes in the network. The output is produced by a mathematical rule, or function, that adjusts during training. By making thousands to millions of small adjustments, the nodes in the network can be trained to approximate any mapping from inputs to outputs. This is in contrast to traditional ML, in which engineers must specify the features before training a model.
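The behavior of a single node can be sketched in a few lines of NumPy. The inputs, target, and learning rate below are made-up values chosen only to show the repeated small adjustments described above.

```python
import numpy as np

# A single node: a weighted sum of its inputs passed through a nonlinearity.
def node(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # sigmoid activation

# Training nudges w and b by many small steps to shrink the gap between
# the node's output and the desired target (illustrative values only).
x, target = np.array([0.5, -1.2, 3.0]), 1.0
w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(1000):
    out = node(x, w, b)
    grad = (out - target) * out * (1 - out)  # derivative of squared error through the sigmoid
    w -= lr * grad * x                       # small adjustment to each weight
    b -= lr * grad
print(node(x, w, b))  # approaches the target as the adjustments accumulate
```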

In many cases, learning is enhanced by using multiple layers of nodes. By adding layers, the neural network can learn features which can become more complex as the number of layers increases. For example, early layers of an image analysis application using a neural network may learn to identify points, while another layer learns lines, and a third layer learns to respond to polygons.
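A hypothetical layered network for the digit example might be stacked as follows; the Keras API and the layer sizes are illustrative assumptions, since the article does not prescribe a framework.

```python
# Sketch of a layered network: each layer's output feeds the next, so later
# layers can build more complex features out of the simpler ones found earlier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # raw pixels in
    tf.keras.layers.Dense(128, activation="relu"),    # early layer: simple patterns
    tf.keras.layers.Dense(64, activation="relu"),     # deeper layer: combinations of patterns
    tf.keras.layers.Dense(10, activation="softmax"),  # output: one of 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```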

Specialized deep learning techniques, like convolution, allow a network to recognize an object even if it appears in a different part of a scene than it did in the training examples. Recurrent neural networks help capture information about inputs already seen, which is useful for analyzing a text or a stream of time-series data. The flexibility and adaptability of deep learning enable machine learning to be applied to a wide range of complicated problem domains.
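A minimal sketch of those two layer types, again using Keras purely for illustration and with made-up input shapes:

```python
import tensorflow as tf

# Convolutional layers slide the same filters across an image, so an object
# can be recognized wherever it appears in the scene.
conv_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Recurrent layers carry a hidden state forward, capturing information about
# inputs already seen in a text or time series.
rnn_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(100, 8)),  # 100 time steps, 8 features each
    tf.keras.layers.Dense(1),
])
```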

Effectiveness of Deep Learning

Deep learning not only obviates the need for feature engineering, but it also produces results superior to other ML approaches. For example:

  • Google Translate reduced errors by 60% by switching to a deep learning model for translation from its previous non-deep learning model.
  • Autonomous vehicles are using deep learning techniques to learn multiple tasks, such as controlling speed while adjusting steering.
  • Deep learning models are out-performing standard finance methods for pricing securities and constructing investment portfolios.

Why deep learning networks are as effective as they are is difficult to answer, but some contributing factors are discussed in the article, The Unreasonable Effectiveness of Deep Learning.

Where Is Deep Learning Headed?

Deep learning is rapidly emerging as the most effective machine learning approach for many problem domains across different industry sectors. Michele Goetz, principal analyst at Forrester, expects companies to weave AI into business processes and focus on fundamentals like data management and understanding AI decision-making processes.

In a similar vein, Santanu Bhattacharya, chief data scientist at Airtel, predicts that non-tech companies will adopt AI with a focus on AI-driven results rather than vague AI strategies. Andrew Ng, co-founder of Google Brain and founder of Landing AI, has a similar outlook, predicting the biggest opportunities are outside the tech sector in areas such as manufacturing, agriculture, and healthcare.

Moving deep learning into more application areas comes with new challenges. Deep learning works well when there are large volumes of relatively clean data to learn from. This is not always the case, and both Ng and Yann LeCun, chief AI scientist at Facebook, expect to see more research on learning from smaller amounts of data and from lower-quality data.

Deep learning has achieved a rare status among AI methods and techniques. It is widely recognized as an effective and pragmatic tool for solving challenging AI problems. Both business analysts and ML researchers anticipate wide adoption of deep learning as new learning models are refined, which will help move deep learning from the research lab to the marketplace. As deep learning goes mainstream, it will give businesses across industry sectors the ability to harness massive amounts of data more efficiently for nuanced decision-making that solves complex problems quickly and enhances the productivity of human workers.

To learn more about deep learning, check out the video presentation by Yuval Greenfield of MissingLink.ai at the 2018 Samsung Developers Conference.

Feature image via Pixabay.
