
The Year Ahead: What’s in Store for Machine Learning?

Dec 31st, 2015 9:45am

It’s been a breakthrough year for machine learning. This form of artificial intelligence — which uses algorithms to give computers the ability to learn from data and make predictions without explicit programming — had largely been confined to academia since its beginnings in the 1950s. But there’s a noticeable shift in the air, as we’re seeing machine learning (ML) incorporated into conspicuous items like autonomous cars, robots and drones, and even computers smart enough to beat humans on quiz shows. Machine learning is also increasingly being adopted in the business world, as computing gets cheaper and more powerful, and data ever more abundant.

Whether we are conscious of it or not, many of us have already had some experience with machine learning through our smartphones and computers. Companies like Google, IBM, Microsoft, Amazon and Facebook are investing heavily in AI, and have already integrated forms of ML into products designed to perform image, text and speech recognition, natural language processing, search and recommendation tasks, fraud detection, and more. Even Tesla’s Elon Musk is wading in with a consortium to back OpenAI, a non-profit initiative to develop responsible AI with a positive impact on humanity. If you’ve used Google’s mobile Translate app, or clicked through Amazon’s Recommendations, then you’ve used machine learning. It’s more prevalent than we know, and from the looks of it, machine learning is set to become even more entrenched in our daily lives. So what might we anticipate for this rapidly evolving field in 2016?


Smarter Apps and Agents

For starters, we can expect machine learning to become even more mainstream in the coming year, driving deeper personalization of products and services: smarter mobile applications and smarter personal digital assistants that learn and adapt to user needs over time.

“Enterprise apps, productivity apps, and mobile apps will become increasingly smart by embedding learning capabilities or other forms of AI into the apps,” says Mark Koh of consulting firm Frost & Sullivan. “For example, increasing use of NLP [natural language processing], machine vision and voice interface.”

Preliminary versions of these embedded learning capabilities are already at work in familiar digital personal assistants like Apple’s Siri, Microsoft’s Cortana and Google Now. These virtual assistants utilize ML for speech recognition and natural language processing in answering simple questions like “What’s the weather today?”
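
What this looks like under the hood varies by vendor, but the final step, mapping a transcribed utterance to an intent, is easy to sketch. The toy Python example below is our own illustration, not any assistant’s actual pipeline; the training phrases and labels are invented:

    # Toy intent classifier: invented examples, not a production pipeline.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # A handful of labeled utterances; real assistants train on far more.
    questions = [
        "what's the weather today", "will it rain tomorrow",
        "set an alarm for 7am", "remind me to call mom",
    ]
    intents = ["weather", "weather", "reminder", "reminder"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(questions, intents)
    print(model.predict(["is it going to be cold today"]))  # ['weather']

Real assistants replace the bag-of-words model with far richer acoustic and language models, but the routing problem has the same shape.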

While the long-term goal is to use predictive technology to understand patterns in user preferences in order to accomplish tasks, today’s digital assistants are still relatively limited in their capabilities, and aren’t able to handle more complicated requests involving many pieces of nuanced information. Fast forward ten years, and if radical innovations in computing architectures do take shape, then future assistants will have the capacity to seamlessly synthesize information from many different sources and services, and to juggle more complex, multi-step jobs.

Others, like Facebook’s M, could supplement machine learning algorithms with behind-the-scenes human trainers to create a more powerful virtual assistant to complete relatively complex assignments like booking restaurant reservations or finding a gift for a loved one. If this strategy proves more popular and profitable, then we may see other hybrid digital assistants being developed in the near future. Ultimately, these virtual personal assistants may be forerunners to conversational interfaces and “autonomous agents” that will mediate between a human user, their agglomeration of smart devices, wearables and the Internet of Things.

“Over the next five years we will evolve to a post-app world with intelligent agents delivering dynamic and contextual actions and interfaces,” says David Cearley of Gartner, an IT research firm. “IT leaders should explore how they can use autonomous things and agents to augment human activity and free people for work that only people can do. However, they must recognize that smart agents and things are a long-term phenomenon that will continually evolve and expand their uses for the next 20 years.”


Drug Discovery, Diagnosis and Environmental Predictions

The thrust of ML-powered deep personalization could also manifest in unlikely places. Deep learning — a form of machine learning that trains multiple-layered artificial neural nets to learn higher-level features from large datasets — could potentially be used to accelerate the discovery of effective drugs for a wide range of diseases.
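
As a rough illustration of what those multiple layers buy you, here is a minimal Python sketch; the network size, learning rate and toy XOR data are our own choices for illustration, not any production system. The hidden layer learns intermediate features that no single linear model can express:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: XOR, which no single linear layer can separate.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Two layers of weights: input -> hidden features -> output.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
    W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: the hidden layer computes intermediate features.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: squared-error gradients, layer by layer.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d_out)
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]

Deep learning stacks many more such layers and trains them on far larger datasets, but the mechanics are the same.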

To this end, Google is leveraging its large-scale neural network training system to conduct virtual drug screening, reducing the enormous time and cost that traditional methods require. In a similar vein, IBM recently acquired over 30 billion medical images to train Watson, its artificial intelligence system, to help doctors diagnose diseases much more accurately.

Besides medicine, machine learning could also help tackle challenging environmental problems. IBM researchers are now testing Green Horizon, a system that uses “adaptive machine learning” to predict air pollution levels in Beijing 72 hours in advance by assimilating large amounts of complex data from several different models. Experts say that this approach is about 30 percent more accurate than conventional methods. This practical machine learning approach to persistent environmental problems could someday be part of the solution for optimized pollution reduction strategies, wildlife conservation, and perhaps even climate change mitigation.
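
The details of Green Horizon aren’t public, but one simple way to assimilate several models is to weight each model’s forecast by its recent accuracy. The Python sketch below is entirely hypothetical: the models, the numbers and the exponential weighting scheme are our own, not IBM’s:

    import numpy as np

    def adaptive_weights(forecasts, observed, eta=0.5):
        """Weight each model by its recent accuracy (exponential weighting)."""
        errors = np.mean((forecasts - observed) ** 2, axis=1)
        weights = np.exp(-eta * errors)
        return weights / weights.sum()

    # Three invented 'models' forecasting pollution levels over six hours.
    forecasts = np.array([
        [80, 95, 110, 120, 100, 90],   # model A
        [60, 70,  85, 140, 130, 60],   # model B: often far off
        [78, 92, 108, 118, 102, 88],   # model C: tracks reality closely
    ], dtype=float)
    observed = np.array([79, 93, 109, 119, 101, 89], dtype=float)

    w = adaptive_weights(forecasts, observed)
    blended = w @ forecasts[:, -1]   # blend the models' latest forecasts
    print(w.round(3), blended.round(1))  # model C dominates the blend

As models drift in and out of accuracy, the weights adapt, which is the sense in which such a system “learns.”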


Cyber Security and Democratization

In light of this year’s massive security breaches, machine learning models could give IT security a much-needed boost in the near future by automating the analysis of security data and establishing adaptive authentication systems, rather than relying on static rules or signature-based detection of security threats.
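
One common way to move beyond static rules is anomaly detection: train a model on what normal activity looks like, then flag deviations. Here is a hedged sketch using scikit-learn’s IsolationForest; the features and numbers are invented for illustration, and real systems draw on far richer telemetry:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Toy login records: [hour of day, kilobytes transferred, failed attempts]
    normal = np.column_stack([
        rng.normal(13, 3, 500),     # activity clusters around working hours
        rng.normal(200, 50, 500),   # typical transfer sizes
        rng.poisson(0.2, 500),      # failed logins are rare
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(normal)

    # A 3 a.m. login moving 5 MB after nine failed attempts:
    suspicious = np.array([[3, 5000, 9]])
    print(model.predict(suspicious))  # -1 means flagged as anomalous

Because the model learns the baseline from data rather than from hand-written signatures, it can flag novel attacks that no existing rule describes.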

A democratization of the machine learning field is also emerging, with companies offering more user-friendly tools that can be used by businesses and non-academics. In addition to existing machine learning APIs, open sourcing the technology is another way this democratization is happening, as we’ve already seen with Google’s TensorFlow, IBM’s SystemML and Microsoft’s Distributed Machine Learning Toolkit (DMLT). And with more and more data repositories, data marketplaces and data search engines popping up, more fuel is added to the fire: open source machine learning, combined with the community’s collective knowledge, will drive artificial intelligence innovations further in the next few years.
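
TensorFlow, for instance, was open sourced just weeks ago. In its released form a program first declares a graph of operations and then executes it in a session; the minimal example below is our own and trivially small, but it shows the shape of the API:

    import tensorflow as tf

    # TensorFlow (as released) separates building a graph of operations
    # from executing that graph inside a session.
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    total = a * b + a   # nothing is computed yet, only described

    with tf.Session() as sess:
        print(sess.run(total))  # 8.0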

Keep a Human in the Loop

Though the technology has grown by leaps and bounds in the last few years, it will still be critical for businesses that use off-the-shelf machine learning products to keep a knowledgeable human in the loop — whether it’s to “clean the data,” define the problem or to develop context-specific solutions. All machine learning models have flaws, so the tools won’t be replacing people anytime soon, but they will help people be better at what they do.

As machine learning breaks out of academia and into the larger world, the general focus will shift to scaling, applying and deploying established algorithms to solve real-world problems.

“There’s theoretical machine learning, where people are very focused on idealised problems, creating new algorithms,” says Poul Petersen, the chief infrastructure officer of BigML, a company that offers a machine learning API. “But it’s not enough just to invent algorithms, it’s really only useful if you can do something with them and put them into practice.

“The focus will eventually move away from the learning layer [to] where everybody will use machine learning, as it’s just another part of the [computing] stack.”

IBM is a sponsor of The New Stack.

Images: CASIS / Jon Fingas (CC BY-ND 2.0) / Enzylogic (CC BY-SA 2.0) / xdxd_vs_xdxd (CC BY-SA 2.0)
