Salesforce Ramps Up Deep Learning Research, Offers Pre-Packaged Machine Learning for Developers

Oct 10th, 2016 10:53am

Enterprise software cloud provider Salesforce.com is building out a continuum of artificial intelligence-driven services, from prepackaged solutions to bring intelligence to common workday routines, to a platform that would support companies in building their own deep learning models.

Part of Salesforce’s momentum in this emerging field comes from its acquisition of MetaMind earlier this year. The company appointed MetaMind founder and CEO Richard Socher to head a research lab that will investigate how artificial intelligence (AI) and machine learning (ML) can be used in the enterprise.

The lab will publish its findings rather than issue yet another deep learning toolkit, Socher, now Salesforce's chief scientist, told us at the company's annual Dreamforce user conference last week.

Socher came to Salesforce through MetaMind, the startup he founded to pursue the dynamic memory approach to machine learning, a topic he had been researching at Stanford University.

Socher’s approach adds more memory to a neural network to help it store and update details as it parses information, so it can deal with a stream of facts or with information coming from different sources. He’d shown the system detecting the sentiment of complex sentences and answering questions about what was happening in photographs, tasks machine learning systems often struggle with.
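
Socher’s published work uses trained neural architectures; the toy sketch below only illustrates the general idea of a memory vector being gated and updated as facts stream in. The dimensions, random "facts" and sigmoid gate are all made up for illustration and are not his actual Dynamic Memory Network.

```python
# Toy sketch (not Socher's actual architecture): a memory vector that is
# updated as a stream of "fact" vectors arrives, with each fact weighted by
# its similarity to a question vector. Real systems learn these updates.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
facts = rng.normal(size=(5, dim))      # five incoming facts (e.g. sentences)
question = rng.normal(size=dim)        # what we want to answer
memory = np.zeros(dim)                 # episodic memory, updated per fact

for fact in facts:
    # Attention: how relevant is this fact to the question and current memory?
    score = fact @ question + fact @ memory
    gate = 1.0 / (1.0 + np.exp(-score))      # squash relevance to (0, 1)
    # Gated update: more relevant facts overwrite more of the memory.
    memory = gate * fact + (1.0 - gate) * memory

print("final memory vector:", memory)
```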

His lab will be working on fundamental research in deep learning, natural language processing and computer vision rather than on product features, although the results will show up as part of Einstein, a set of machine learning features that Salesforce recently added to its cloud services.

“We’re covering a broad range, from things that we know how to solve like lead scoring but we could do it better with the latest and greatest techniques, to things we don’t know how to do,” Socher said. “We don’t know how to reason over large amounts of text, we can’t do a perfect translation. We’re doing some basic, fundamental research, improving neural networks, really hard tasks that nobody can do well yet like question answering. Computer vision and multitask learning we’re very excited about, rather than the single model approach.”

Don’t expect Salesforce to come out with its own deep learning framework the way Google, Microsoft, Amazon, Intel and Baidu have, though.

“We will publish academic, peer-reviewed papers; we’ll take them forward and we’ll publish the insights we have,” Socher told us. “We don’t have to reinvent the framework. We think there are enough frameworks already. The question is how you use those frameworks and what you do with them. In a sense, they’re a commodity, like programming languages and operating systems are commodities; we don’t need to go back to assembly language, we can go to much higher-level languages.”

The techniques the research lab is working on go beyond the Einstein machine learning features built into Salesforce tools like Sales Cloud and Marketing Cloud, which offer predictions, recommendations and alerts based on machine learning models that are custom to each business using Salesforce.

“We can learn patterns and make predictions about customer data,” explained Salesforce head of data science Shubha Nabar. “How likely is your customer to churn? How should you reach out to them? Even what should the text of your communication be?”

That could mean scoring leads so that priority opportunities go to senior sales staff, suggesting the best time of day to send a message so it gets read quickly, spotting that you’re trying to close a deal with the wrong person at a company and suggesting who else should be involved in the conversation, or detecting a competitor mentioned in an email using entity recognition.
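
Salesforce hasn’t published Einstein’s internals, but the lead and churn scoring described above boils down to a probability-of-conversion model trained on CRM features. Here is a minimal sketch with made-up features and synthetic labels, using scikit-learn rather than anything Salesforce-specific.

```python
# A generic lead-scoring sketch with hypothetical CRM features; it only
# illustrates the kind of "probability this lead converts" prediction
# described in the article, not Einstein's actual per-customer models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: emails opened, days since last contact, deal size ($k)
X = np.column_stack([
    rng.poisson(3, n),
    rng.integers(0, 60, n),
    rng.exponential(50, n),
])
# Synthetic label: leads contacted recently and engaging often tend to convert
y = (X[:, 0] > 2) & (X[:, 1] < 20)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_lead = [[5, 3, 80.0]]   # opened 5 emails, contacted 3 days ago, $80k deal
print("conversion score:", model.predict_proba(new_lead)[0, 1])
```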

“We can identify what company is mentioned in the context of other companies, who is the CEO, what are the personal networks like there,” claimed Socher.

Einstein can segment an audience by shopping habits, from dormant customers you can win back, to selective subscribers who have specific interests, to window shoppers who open a lot of pages but don’t click on much. It could also be used to change the sort order of products on a shopping site, using previous sales to predict what specific customers will be interested in. It can help route customer support cases to the most qualified agent or automatically escalate unanswered customer questions into cases.
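
Behavioral segmentation of this sort can be approximated with ordinary clustering. The sketch below groups hypothetical shoppers by a few made-up engagement features; it shows the general technique, not Einstein’s actual method.

```python
# Minimal behavioral segmentation sketch over hypothetical shopper features
# (window shoppers vs. dormant customers vs. regulars), using plain k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Columns: pages viewed last month, items clicked, days since last purchase
shoppers = np.column_stack([
    rng.integers(0, 100, 300),
    rng.integers(0, 20, 300),
    rng.integers(0, 365, 300),
])
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shoppers)
print("segment sizes:", np.bincount(segments))
```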

Image recognition capabilities could look at customer photos, allowing the organization to drill into the features they’re interested in. Gender, hair color and hair length data could be used to offer hats to men with short hair and headbands to brunettes with long hair.

Salesforce is also working on predictive analytics and predictive device scoring for its Internet of Things services.

Those are all drag-and-drop options developers can use when building apps on Salesforce with its Lightning tools for web and mobile development (the latter is based on the Cordova mobile framework), and Salesforce Lightning-based apps will also work inside Microsoft Outlook.

But two of MetaMind’s more advanced options have also made their way into Salesforce: you can train your own deep learning models for the predictive vision and sentiment services by uploading a training set, which is as straightforward as picking files to upload in a browser.

If none of the pre-trained classifiers suit the images you need to work with, you can train your own model to detect logos, process medical images or tell the difference between flat and pitched roofs. You can use the natural language processing services to look for positive and negative sentiment in text. You can also ask questions about what’s going on in an image in an interactive way, so you could build that into a chatbot for customer service, for example. The predictive services are APIs that you can call from any app.
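
Because the predictive services are exposed as REST APIs, calling a custom vision model is essentially an HTTP request carrying an image and a model identifier. The endpoint URL, auth header and field names below are placeholders rather than Salesforce’s documented API; the sketch only shows the shape of such a call.

```python
# Hedged sketch of calling a predictive vision API over HTTP. The URL, token
# and field names are placeholders, not the documented Einstein endpoint; the
# point is that prediction is a plain REST call any app can make.
import requests

API_URL = "https://vision.example.com/v1/predict"   # placeholder endpoint
TOKEN = "YOUR_ACCESS_TOKEN"                          # placeholder credential

with open("roof_photo.jpg", "rb") as image:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        files={"sampleContent": image},              # image to classify
        data={"modelId": "FlatVsPitchedRoof"},       # your custom-trained model
    )

# Assumed shape of the reply: label/probability pairs for the top classes.
for result in response.json().get("probabilities", []):
    print(result["label"], result["probability"])
```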

You can also go a step further and use the in-development Apache PredictionIO open source machine learning server in the Heroku Private Spaces that Salesforce offers. These are dedicated instances that give you your own version of Heroku, so you can limit which IP ranges can access the instance or choose what geography it runs in. PredictionIO lets you use machine learning libraries like Spark MLlib and OpenNLP or build your own custom machine learning models.
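
A deployed PredictionIO engine is also queried over REST: by default it answers JSON queries at /queries.json on port 8000. The query fields below assume a typical recommendation-template schema, so your own engine’s query format may differ.

```python
# Querying a deployed PredictionIO engine over its REST interface.
# The "user"/"num" fields assume a recommendation-style engine template.
import requests

engine_url = "http://localhost:8000/queries.json"   # default deploy address
query = {"user": "u123", "num": 5}                   # ask for 5 recommendations

response = requests.post(engine_url, json=query)
print(response.json())                               # e.g. {"itemScores": [...]}
```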

In short, Salesforce now offers a continuum of ML services, from the canned Einstein services that add ML to common sales and support processes, to the emerging and more experimental predictive services for images and natural language, to a full machine learning environment that you can set up, customize and program against in Java, PHP, Python and Ruby.

It’s quite a jump to move between those different levels of tools, but if your business is using Salesforce, you can choose how you want to use machine learning with it, depending on the skills and resources you have available.
