DeText: LinkedIn’s Open Source Deep Learning Framework for Natural Language Processing
U.S. social networking company LinkedIn has released DeText, an open source natural language processing framework that uses deep neural networks to facilitate tasks such as search and recommendation ranking, multiclass classification, query understanding, and sequence completion.
So what does DeText add to the ever-growing field of machine learning? As one might imagine, getting machines to understand human language isn't as easy as it might initially appear: different word choices and contexts all add to the complexity of any given utterance. In the field of artificial intelligence, natural language processing (NLP) is what machines use to read, understand, and derive meaning from human language. Well-known NLP models include Bidirectional Encoder Representations from Transformers (BERT), which allows machines to perform tasks like machine translation, speech recognition, and text classification. When put into practice, such models can extract data from the internet to automate research, or detect disinformation.
Not surprisingly though, such tasks can require quite a bit of computational power to perform, depending on the AI model used. “DeText is a framework for efficiently leveraging deep learning models (such as BERT) to understand the semantics of text data,” as LinkedIn senior engineering manager Weiwei Guo explained to us via email. “It is able to perform word sense disambiguation and identify similar words such as ‘software developer’ versus ‘programmer.’ BERT models require a large amount of computation time, so our focus in building and productionizing BERT-based DeText models has been to simplify the process of using BERT in commercial applications.”
Flexible and Swappable
To tackle this issue of computational inefficiency, DeText has been designed with flexibility in mind, so that different NLP models can be "swapped" in as needed, depending on the requirements of different production processes. For instance, DeText supports a range of state-of-the-art semantic understanding models such as convolutional neural networks (CNNs) and long short-term memory networks (LSTMs), as well as BERT.
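To make the "swappability" idea concrete, here is a minimal sketch of the pattern: a common encoder interface plus a registry keyed by name, so the model can be chosen at configuration time. All class and function names below are illustrative stand-ins, not DeText's actual API; the real framework wires heavyweight TensorFlow models behind its configuration rather than the toy classes shown here.

```python
# Illustrative sketch of a swappable-encoder registry (hypothetical names,
# not DeText's real interface). Each encoder exposes the same encode()
# method, so the application can switch models via configuration alone.

class CNNEncoder:
    """Stand-in for a CNN text encoder."""
    def encode(self, text: str) -> list[float]:
        # Toy "embedding": character- and word-count features,
        # just to demonstrate the shared interface.
        return [float(len(text)), float(text.count(" ") + 1)]

class LSTMEncoder:
    """Stand-in for an LSTM text encoder."""
    def encode(self, text: str) -> list[float]:
        return [float(sum(ord(c) for c in text) % 97), float(len(text))]

class BERTEncoder:
    """Stand-in for a (much heavier) BERT text encoder."""
    def encode(self, text: str) -> list[float]:
        words = text.split()
        return [float(len(words)), float(len(set(words)))]

# The registry is what makes the encoder a configuration choice
# rather than a code change.
ENCODERS = {"cnn": CNNEncoder, "lstm": LSTMEncoder, "bert": BERTEncoder}

def build_encoder(name: str):
    return ENCODERS[name]()

encoder = build_encoder("cnn")   # swap "cnn" for "bert" with no other changes
vector = encoder.encode("software developer")
```

Because every encoder satisfies the same interface, downstream ranking code never needs to know which model produced the vectors, which is what lets one framework serve applications with very different latency budgets.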
“The ‘swappability’ component is what enables the wide applicability of the DeText framework,” said Guo. “In addition, users can conveniently search for the optimal network architecture for use in their applications.”
DeText is optimized for efficiency so that its models can be deployed in a production environment, and it offers a high degree of flexibility in module configuration. These features make DeText more versatile, and potentially more powerful, than BERT or other standalone NLP models, especially for fine-tuning search and for understanding what a word means in one domain versus what the same word might signify in another. Compared to using BERT directly, DeText also offers a far more efficient way to rank search results in production, especially when a pre-trained BERT model is used.
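The efficiency claim above reflects a common production-ranking pattern: encode the documents once offline, cache their vectors, and pay for only one encoder call per incoming query. The sketch below illustrates that general pattern; the `toy_encode` function is a deterministic placeholder for a real model such as BERT, and nothing here is DeText's actual code.

```python
# Sketch of why pre-computed embeddings make production ranking cheap:
# documents are encoded once offline, so serving a query costs one
# encoder call plus fast dot products over cached vectors.
import math

def toy_encode(text: str) -> list[float]:
    # Deterministic 4-dim placeholder embedding (NOT a real model),
    # L2-normalized so dot product equals cosine similarity.
    vec = [0.0, 0.0, 0.0, 0.0]
    for word in text.lower().split():
        vec[len(word) % 4] += 1.0
        vec[sum(ord(c) for c in word) % 4] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Offline: encode the corpus once and cache the vectors.
docs = ["senior software engineer", "fruit delivery driver", "ios developer"]
doc_vecs = [toy_encode(d) for d in docs]

# Online: one encoder call per query, then cheap similarity scoring.
def rank(query: str) -> list[str]:
    q = toy_encode(query)
    scored = sorted(zip(docs, doc_vecs), key=lambda dv: dot(q, dv[1]),
                    reverse=True)
    return [d for d, _ in scored]
```

With a heavyweight encoder, this split between offline document encoding and online query scoring is what keeps per-query latency within production limits.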
“LinkedIn has deployed DeText models across three major applications: search ranking, query intent prediction, and query auto-completion,” explained Guo. “We use DeText with a version of BERT that is trained and calibrated on LinkedIn data, LiBERT. This is because LinkedIn is a professional social network with many domain-specific keywords. For example, applying LiBERT in DeText allows us to associate a query of ‘apple’ with ‘software engineer’ or ‘iOS’ instead of ‘fruits’. As you can see, the ‘swappability’ makes DeText a general language understanding tool that can be applied to many different tasks. In addition, users can quickly try out all the possible model structures to maximize the performance of their models.”
Notably, DeText is being offered as open source software, meaning that others can download it and easily adapt it for their own use. DeText features different components that can be modified through templates, such as an embedding layer, text encoding models, an interaction layer, feature processing, and a multilayer perceptron (a type of feedforward artificial neural network).
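The component list above can be walked through end to end with a toy example. Every function below is an illustrative stand-in for one of the named layers (embedding, text encoding, interaction, multilayer perceptron); the made-up embedding table, weights, and pooling choice are assumptions for illustration only, not DeText's real implementation, which builds these stages as configurable TensorFlow modules.

```python
# Toy walkthrough of the layer stack the article lists:
# embedding -> text encoding -> interaction -> MLP scoring.
import math

EMBED = {"software": [1.0, 0.0], "developer": [0.0, 1.0],
         "programmer": [0.1, 0.9]}           # tiny made-up embedding table

def embed(tokens: list[str]) -> list[list[float]]:
    # Embedding layer: look up a vector per token (zeros if unknown).
    return [EMBED.get(t, [0.0, 0.0]) for t in tokens]

def encode(vectors: list[list[float]]) -> list[float]:
    # Text encoding: mean pooling stands in for CNN/LSTM/BERT.
    n = max(len(vectors), 1)
    return [sum(v[i] for v in vectors) / n for i in range(2)]

def interact(q_vec: list[float], d_vec: list[float]) -> float:
    # Interaction layer: cosine similarity between query and document.
    dp = sum(a * b for a, b in zip(q_vec, d_vec))
    nq = math.sqrt(sum(a * a for a in q_vec)) or 1.0
    nd = math.sqrt(sum(b * b for b in d_vec)) or 1.0
    return dp / (nq * nd)

def mlp(features: list[float]) -> float:
    # One-layer "MLP" with a single made-up weight, for illustration.
    weights = [2.0]
    return sum(w * f for w, f in zip(weights, features))

def score(query: str, doc: str) -> float:
    q = encode(embed(query.split()))
    d = encode(embed(doc.split()))
    return mlp([interact(q, d)])
```

Under this sketch, "programmer" scores higher against the query "software developer" than an unrelated token would, mirroring the word-similarity behavior Guo describes.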
In the future, Guo says LinkedIn is looking to extend DeText as a general language understanding and generation library so that it can perform other important NLP tasks like sequence tagging (predicting labels on each word of an input sequence), sequence generation (creating a sequence from an input sequence), and unsupervised representation learning (so that it can learn language representations to act as a foundational model for the other deep learning NLP tasks).
To learn more, read LinkedIn’s blog post, and download DeText over at GitHub.
Images: Photo by Jason Leung via Unsplash; LinkedIn.