Large Language Models Aren't the Silver Bullet for Conversational AI

While Large Language Models have become an important foundation for conversational AI systems, many people incorrectly assume they'll solve all conversational AI problems — they won't.
Feb 28th, 2023 10:00am

Machine learning's large language models (LLMs) — like ChatGPT, GPT-3 and BERT — have recently captured the attention of the world. And for good reason.

Put simply, LLMs are artificial intelligence (AI) tools that read, summarize, translate and generate text. They’re able to predict which words would come next in a sentence with high confidence, which allows them to generate language similar to how humans speak and write. These models are so advanced, in fact, that some have even questioned their ability to achieve sentience.
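The "predict the next word" idea can be illustrated with a deliberately tiny toy: a bigram model built from word counts. Real LLMs learn this objective with billions of parameters over tokens rather than a lookup table, but the core task — rank likely continuations of the text so far — is the same.

```python
# Toy illustration (not a real LLM): predict the next word from bigram
# counts gathered over a tiny corpus. LLMs scale this same next-token
# prediction objective up enormously.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word . "
    "the model generates text . "
    "the next word follows the context ."
).split()

# Count which word follows each word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent successor of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("next"))  # "next" is always followed by "word" here
```

Sampling from these counts instead of always taking the top word is what lets such models produce varied, human-sounding text rather than one fixed continuation.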

But, while it’s no secret that LLMs have become an important foundation for conversational AI systems, many people incorrectly assume that LLMs will eventually be the silver bullet that will solve all conversational AI problems — and that’s just not the case.

There are several reasons why:

Significant Hallucination Rates

Conversational systems trained on LLMs are known to generate statements that are not grounded in any source content — or worse, that contradict the source content — a phenomenon known as hallucination.

Although each model’s rate varies, hallucination is present in every LLM available today. The largest LLM available today, GPT-3, has a 41% hallucination rate — meaning roughly one in every 2.5 prompts will yield a response that includes a hallucination.

For example, you could ask the model if 1-800-APLCARE is a real support line, to which it might respond, “1-800-APLCARE is not a real Apple support line. Apple provides support through their website, phone numbers and online forums, but 1-800-APLCARE is not one of them.” In actuality, 1-800-APLCARE is very much a real support line, according to the Apple website. In this example, the generated text is simply factually incorrect.

Quoting from different sources can’t solve the problem either, since the model often takes sentences out of context and reassembles them into a paragraph to create an incorrect answer. For example, you could ask the model, “How much does AWS charge for a g4dn.16xlarge GPU instance?” To which it might respond, “The g4dn.16xlarge GPU instance from AWS costs $0.526 per hour.” In this instance, the pricing of g4dn.xlarge is grafted onto g4dn.16xlarge.

These inaccuracies are relatively harmless in the examples above, but they have much broader implications when dealing with more sensitive subjects. In its research, OpenAI states that hallucination poses very real threats when LLMs are used for real-world applications — like responding to employee questions in a business setting, or providing automated patient support in a healthcare setting. And, while hallucination rates are expected to improve in the future, there is no mechanism for fact-checking as of today. So to the untrained eye, these hallucinations could very well seem true.

The problem is that LLMs are a black box — and offer almost zero explainability. But to produce a conversational AI system that generates truthful responses with high confidence, you need additional layers of algorithms that help ensure predictability.
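The article doesn’t prescribe what those additional layers look like, but one common shape is a grounding check: verify a generated answer against trusted source text before returning it, and refuse when support is too weak. The sketch below uses a naive token-overlap heuristic purely as a stand-in for a real verification model; the threshold and strings are illustrative.

```python
# Minimal sketch of a grounding layer: compare a generated answer against
# trusted source text and withhold answers that aren't well supported.
# Token overlap is a crude stand-in for a real fact-checking model.
def token_overlap(answer: str, source: str) -> float:
    a, s = set(answer.lower().split()), set(source.lower().split())
    return len(a & s) / len(a) if a else 0.0

def grounded_answer(generated: str, source: str, threshold: float = 0.6) -> str:
    if token_overlap(generated, source) >= threshold:
        return generated
    return "I'm not confident enough to answer that."

source = "1-800-APLCARE is a real Apple support line listed on apple.com"
print(grounded_answer("1-800-APLCARE is a real Apple support line", source))
print(grounded_answer("The sky is green today", source))  # refused
```

A production system would replace the overlap heuristic with retrieval plus an entailment or verification model, but the control flow — generate, check against sources, gate the response — is the point.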

A Lack of Controllability

LLMs aren’t built like conventional systems — like Google’s search stack — which are constructed from hundreds of layers of algorithms that ultimately need to be connected. LLMs are so powerful because they offer an out-of-the-box, end-to-end system that essentially fuses these layers together.

On the one hand, this significantly reduces the time needed to build and train complex systems. However, it’s also very limiting because it offers little controllability — meaning there’s no way to manipulate the model to produce responses beyond the data it’s been trained on.

For example, let’s say you want to leverage conversational AI for employee support. Your employee might ask where a particular conference room that goes by the name of “Elvis Presley” is located. Without the added controllability needed for custom use cases like this, the model would spit out a nonsensical response based on the data it’s been fed about the entity “Elvis Presley.” In this situation, the domain-specific context is critical to produce a meaningful and actionable response.
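As a sketch of what that added controllability might look like — assuming a hypothetical internal room directory that is not part of any real product — domain-specific entities are resolved against company data before the query ever reaches the general-purpose model.

```python
# Hypothetical controllability layer: resolve domain entities (conference
# room names) from internal company data first, and only hand other
# questions to the general-purpose LLM (stubbed out here).
ROOM_DIRECTORY = {  # assumed internal data source, purely illustrative
    "Elvis Presley": "Building 2, Floor 3, next to the kitchen",
}

def answer_with_context(question: str) -> str:
    for room, location in ROOM_DIRECTORY.items():
        if room.lower() in question.lower():
            return f"The {room} conference room is in {location}."
    # No domain entity matched: fall back to the general-purpose model.
    return "(forward question to the LLM)"

print(answer_with_context("Where is the Elvis Presley room?"))
```

The design point is that the model never has to disambiguate “Elvis Presley” on its own; the domain layer supplies the context the model cannot know.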

Stale Knowledge

LLMs, like ChatGPT and GPT-3, are trained to memorize knowledge and perform reasoning in one shot. However, the knowledge LLMs are trained on becomes outdated very quickly — especially in an enterprise domain. That’s because knowledge is fluid, with the volume of data increasing every year. The result is inaccurate responses based on the model’s current dataset.

For example, a model trained on data from 2020 would not be aware of recent developments — like the James Webb Space Telescope that revealed the universe in a way never before seen by the human eye. Instead, it would say that James Webb is still in development, unaware of the past year’s success.

The aforementioned lack of controllability makes it challenging to separate this stale knowledge from the rest of the model’s data. There is also no obvious mechanism to override the model’s knowledge base and teach it the most appropriate answer to a specific prompt.

And, re-training LLMs requires a large amount of computational resources to be effective — making it an expensive endeavor every time the model needs to be re-trained. For enterprise applications, like customer or employee-facing chatbots, this just isn’t realistic or effective.

For enterprise LLM applications like these, it’s critical for the model to be living and breathing — meaning it ingests and delivers the most up-to-date information at all times.
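One simplified way to approximate this without retraining is an override layer: a curated, continuously updated answer store consulted before the frozen model. Everything below is hypothetical — `stale_model` stands in for a real LLM and the stored answer is illustrative.

```python
# Sketch of a "living and breathing" pattern: fresh, curated facts
# override the frozen model's stale training data, with no retraining.
OVERRIDES = {  # maintained by the business, updated at any time
    "james webb": "JWST launched in December 2021 and is already returning images.",
}

def stale_model(prompt: str) -> str:
    # Stand-in for an LLM trained on data from 2020.
    return "The James Webb Space Telescope is still in development."

def answer(prompt: str) -> str:
    # 1. Check the curated, up-to-date store first.
    for topic, fresh in OVERRIDES.items():
        if topic in prompt.lower():
            return fresh
    # 2. Only then fall back to the model's (possibly stale) knowledge.
    return stale_model(prompt)

print(answer("What is the status of James Webb?"))
```

Real systems do this matching with retrieval over an indexed knowledge base rather than substring checks, but the ordering — fresh store first, frozen model second — is the pattern.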

So, What Are LLMs Good for Today?

The excitement around large language models is similar to what we saw early on with computer vision. When AlexNet first came out, many people were quick to say that computer vision had been “solved,” but this really wasn’t the case. Transforming such powerful technologies into consequential, real-life products that actually solve problems in daily life still requires tons of innovation.

Similarly, LLMs provide a new frontier to build conversational AI use cases on — but they were never meant to be a one-size-fits-all for conversational AI problems.

Instead, a significant amount of additional innovation is needed if businesses want to create meaningful outcomes with them.

For example, you could use an LLM as a starting point for a conversational AI system built for customer support. It would do an incredible job at understanding and interpreting language, but you would need to build custom algorithms that understand context, can identify domain-specific language and are able to take necessary action resulting from the interaction with the user.
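As a rough sketch of those custom layers — assuming a hypothetical customer-support flow, with the LLM itself stubbed out — a domain-specific intent classifier routes each request and triggers the necessary action, leaving only general language handling to the model.

```python
# Hypothetical support pipeline: intent classification and action-taking
# are custom layers around the LLM, which is only a stub here.
def classify_intent(text: str) -> str:
    # Stand-in for a trained, domain-specific intent classifier.
    if "refund" in text.lower():
        return "refund_request"
    return "general"

def handle(text: str) -> str:
    intent = classify_intent(text)
    if intent == "refund_request":
        # Take a concrete action, e.g. open a ticket in the support system.
        return "Refund ticket opened."
    return "(hand off to the LLM for a general reply)"

print(handle("I'd like a refund for my order"))
```

The LLM interprets language; the surrounding layers decide what actually happens as a result — which is the “additional innovation” the article is pointing at.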

How Do LLM Applications, Like ChatGPT, Change This?

The development of ChatGPT is a remarkable accomplishment. The speed with which we moved from classical natural language understanding (NLU) techniques to transformer models and LLMs to ChatGPT is way beyond what was anticipated a few years ago. And, its development brought conversational AI to the mainstream almost overnight due to the speed and creativity with which it generates responses to any given prompt.

For instance, you can ask it to draft a thoughtful email to your customer thanking them for their business, or you can use it to read your child an inventive bedtime story. It’ll handle those tasks with ease, and it’ll definitely impress and entertain anyone who interacts with it.

However, ChatGPT is not immune to the challenges above. It still suffers from a 21% hallucination rate. That’s roughly one in five! And, in its current interface, ChatGPT is limited to the prompt’s input and output — meaning the only way to leverage ChatGPT is through OpenAI’s existing chat function.

The reality is that the true potential of ChatGPT is still largely unknown. Its full power will reveal itself once the model is opened up for developers from around the world to leverage and innovate on top of. As with traditional LLMs, additional layers of controllability will allow businesses to create never-before-seen custom conversational AI use cases with ChatGPT.

But make no mistake: LLMs and applications like ChatGPT will require heavy innovation on top of existing LLM systems to create meaningful outputs. That also means businesses that are not already architected to leverage LLMs will need to realign their machine learning strategies to include LLM and LLM-application adoption — otherwise they will quickly fall behind.
