NLUX can be used with any large language model (LLM) backend service, but it comes with pre-built adapters to connect with OpenAI’s ChatGPT and Hugging Face’s LLMs. It’s also possible to build custom streaming or promise adapters to connect with other LLMs or APIs.
“Let’s say a big company that doesn’t use the public OpenAI large language model, but they want to use a custom model that’s hosted in their own servers,” said NLUX creator Salmen Hichri. “They can actually build and customize their own models, but they still can use NLUX to connect to those models. They need to build a custom adapter for their own model and API.”
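A custom adapter of the kind Hichri describes boils down to an object that takes the user’s prompt, forwards it to the company’s own model endpoint, and pushes response chunks back to an observer as they arrive. The sketch below is a minimal, self-contained illustration of that streaming pattern; the `streamText`/observer shape mirrors NLUX’s documented custom-adapter approach, but the exact interface names here are assumptions for illustration, and the in-house model is mocked.

```typescript
// Minimal sketch of a streaming chat adapter, modeled on the
// streamText(prompt, observer) pattern NLUX uses for custom adapters.
// Interface names are assumptions for illustration.
interface StreamObserver {
  next: (chunk: string) => void; // called for each streamed chunk
  complete: () => void;          // called when the response ends
  error: (err: Error) => void;   // called on failure
}

interface ChatAdapter {
  streamText: (prompt: string, observer: StreamObserver) => void;
}

// A fake in-house model: in a real adapter this would be a fetch()
// against the company's own inference server.
const fakeModelAdapter: ChatAdapter = {
  streamText(prompt, observer) {
    const reply = `You said: ${prompt}`;
    // Simulate streaming by emitting one word at a time.
    for (const word of reply.split(' ')) {
      observer.next(word + ' ');
    }
    observer.complete();
  },
};

// Usage: collect the streamed chunks into a full reply.
let output = '';
fakeModelAdapter.streamText('hello', {
  next: (chunk) => { output += chunk; },
  complete: () => console.log(output.trim()),
  error: (err) => console.error(err),
});
```

The chat UI only ever talks to the adapter interface, which is what lets the same front end work against a public API or a privately hosted model.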
Hichri said there are older chatbot libraries, but to his knowledge, NLUX is the first library that’s AI-specific.
Currently, there are two “flavors” of NLUX:
- NLUX React JS, which includes React components and hooks; and
- NLUX JS, a vanilla JavaScript library that can be used with any web framework.
Giving Chatbots Personality
Developers can personalize their chatbot with natural language cues and a few lines of code to give the conversation a bit of personality. They can also instruct the bot to be serious, funny, modest or confident.
“For the adapters, we already provide for OpenAI and for Hugging Face, we allow the customization through what are called system messages,” he explained. “So when developers use NLUX, the system message — which is not visible to the user — this is like the developer telling the chatbot to behave in a certain way.”
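A system message works the same way across chat-style LLM APIs: it is an instruction prepended to the conversation that the model sees but the end user never does. The sketch below shows the idea with an OpenAI-style messages array; `buildConversation` is a hypothetical helper, not part of NLUX’s API.

```typescript
// A system message is an instruction the end user never sees: it is
// prepended to the conversation before the payload is sent to the model.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Hypothetical helper: builds the payload a chat backend would receive.
function buildConversation(systemMessage: string, userPrompt: string): ChatMessage[] {
  return [
    { role: 'system', content: systemMessage }, // hidden personality instruction
    { role: 'user', content: userPrompt },      // visible user input
  ];
}

const messages = buildConversation(
  'You are a modest, slightly funny assistant.',
  'Explain what NLUX does.'
);
console.log(messages[0].role); // the hidden instruction leads the conversation
```

Changing only the system message is enough to shift the bot’s tone from serious to funny without touching the rest of the application.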
The NLUX team is also currently building an adapter for LangChain, along with support for server-side rendering. Voice chat is also on the roadmap for the library.
Beyond Chatbots: The Next Stage of AI-Powered Apps
Right now, the focus for generative AI application development is on building what are essentially AI-powered chatbots. But Hichri and others in the space say the focus will soon shift toward AI copilots, which will be able to perform actions within applications after receiving natural language commands.
“It’s not just a conversation, but it’s a smart system that can perform actions on behalf of the user and it’s embedded within the application or the software,” he said. “The user still needs to define what kinds of actions can be performed on their software, but the trigger won’t be a click or looking for a menu — the trigger will be natural language expression.”
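The copilot pattern Hichri describes can be sketched as an action registry: the developer declares which actions the app exposes, and a natural-language expression, rather than a click, selects one. This is a simplified illustration under stated assumptions; a naive keyword matcher stands in for the LLM that would normally map the utterance to an action, and all names here are hypothetical.

```typescript
// Sketch of a copilot-style action registry: developers define the actions
// their software allows, and natural language triggers them.
type Action = {
  name: string;
  keywords: string[];
  run: (arg: string) => string;
};

const actions: Action[] = [
  { name: 'createInvoice', keywords: ['invoice', 'bill'], run: (arg) => `invoice created for ${arg}` },
  { name: 'exportReport', keywords: ['export', 'report'], run: (arg) => `report exported as ${arg}` },
];

// In a real copilot an LLM would interpret the utterance and extract
// arguments; here we just look for a registered keyword.
function handleUtterance(utterance: string): string {
  const lower = utterance.toLowerCase();
  const match = actions.find((a) => a.keywords.some((k) => lower.includes(k)));
  if (!match) return 'no matching action';
  return match.run(lower.split(' ').pop() ?? '');
}

console.log(handleUtterance('please export the quarterly report as pdf'));
```

The key design point is that the action set stays under the developer’s control; only the trigger moves from menus and clicks to natural language.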
This copilot mode is on the roadmap for NLUX and should be coming very soon, he added.
The other trend that will impact AI apps, Hichri predicted, will be spatial awareness coupled with augmented reality. Specifically, he pointed to Apple’s work on the Vision Pro, but noted that OpenAI is also starting to offer features that would position it as an augmented reality assistant.
“That might not be a big deal for someone who’s working in an office, but for certain types of jobs in industries or in architecture, or some other jobs where spatial awareness is very important, giving access to a smart AI system through augmented reality assistance — it’s a game changer,” he said.