
7 Best Practices for Developers Getting Started with GenAI

With a little experience, you can tackle some pretty hard problems with GenAI, and like every new technology, the best way to learn is by doing.
Dec 13th, 2023 8:40am
Image from VP Photo Studio on Shutterstock.

With the advent of accessible generative AI in the mainstream, and the resulting ability to work with vast stores of human knowledge through plain language, every enterprise is scrambling to integrate AI into its technical toolkit. For developers, the pressure is on, but so is a world of exciting possibilities.

You can tackle some pretty hard problems with GenAI if you have a little experience, and like every new technology since the dawn of HTML, the best way to learn is by doing. Let’s look at seven steps you can take to start laying a GenAI foundation and eventually work your way to a fully functioning, scalable application.

1. Play Around with Available GenAI Tools

The best way to get started with GenAI is to practice, and the barrier to entry is incredibly low. With many free tools now on the market, including Bard, ChatGPT, Bing Chat and Anthropic’s Claude, there are plenty of options to learn from.

Experiment (and encourage your team to experiment) with GenAI tools and code-gen solutions, such as GitHub Copilot, which integrates with every popular IDE and acts as a pair programmer. Copilot offers programmers suggestions, helps troubleshoot code and generates entire functions, making it faster and easier to learn and adapt to GenAI.

A word of warning when you first use these off-the-shelf tools: Be wary of using proprietary or sensitive company data, even when just feeding the tool a prompt. GenAI vendors may store your data and use it in future training runs, a major no-no under your company’s data policy and info-security protocol. Make sure you communicate this golden rule to your teams early and directly.

2. Understand What You Can Get from GenAI

Once you start playing around with GenAI, you’ll quickly learn which prompts produce what type of output. Most GenAI tools can work with text in a variety of ways, including:

  • Generating new stories, ideas, articles or bodies of text of any length.
  • Transforming existing text into a different format such as JSON, markdown or CSV.
  • Translating text into a different language.
  • Conversing back and forth in a chat style.
  • Scrutinizing text to surface certain elements.
  • Summarizing long-form content to get insights.
  • Analyzing the sentiment of a piece of text.

Anyone can produce these kinds of generative text results with zero programming skills. You simply type in a prompt, and out comes text. However, the more training a large language model (LLM) has had — the more bits and pieces of language it’s ingested — the more accurate it gets over time at producing, changing and analyzing text.

3. Learn Prompt Engineering

One of the first steps to deploying GenAI well is to master writing prompts, which is both an art and a science. While prompt engineer is an actual job description, it’s also a good moniker for anyone looking to improve their use of AI. A good prompt engineer knows how to develop, refine and optimize text prompts to get the best results and improve the overall AI system performance.

Prompt engineering doesn’t require a particular degree or background, but those doing it need to be skilled at explaining things well. This matters because the available LLMs are stateless: there’s no long-term memory, and every interaction exists only within a short session.

These three things become important in prompt engineering:

  1. Context: The questions you’ve asked, the chat history and the parameters you’ve set.
  2. Knowledge: The combination of what the LLM has been trained on and what new information you’ve given it with your prompt.
  3. Form: The tone in which you expect information to be generated.

The combination of context, knowledge and form shapes GenAI’s massive store of knowledge into the type of response you’re hoping to get.
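To make that concrete, here’s a minimal sketch of how the three ingredients might be assembled into a chat-style prompt. The function and example text are hypothetical, not part of any specific API:

```python
# Illustrative sketch: combining context, knowledge and form into a prompt.
# The function name and fields are invented for this example, not an API.

def build_prompt(context: str, knowledge: str, form: str, question: str) -> list[dict]:
    """Assemble a chat-style message list from the three prompt ingredients."""
    system = (
        f"{form}\n\n"                                 # form: the tone/format you expect
        f"Background you may rely on:\n{knowledge}"   # knowledge: new info you supply
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{context}\n\n{question}"},  # context + the ask
    ]

messages = build_prompt(
    context="We are debugging a Python web service.",
    knowledge="The service logs show repeated 502 errors after each deploy.",
    form="Answer as a concise bulleted checklist.",
    question="What should we check first?",
)
```

Because the model is stateless, everything it needs — tone, background facts and conversation history — has to travel inside the message list on every call.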

4. Explore Other GenAI Prompt Approaches

So far we’ve been talking about zero-shot prompting, which essentially means asking a question with some context around it. If you’re not getting the desired results from this approach, there are four more ways to prompt GenAI.

  1. Single-shot prompting: Provide an example of the type of output you’re looking for. This is particularly useful if you want a specific type of format, such as [Headline] and [4 bullet points].
  2. Few-shot prompting: Similar to single-shot prompting, but you offer three to five examples instead of just one.
  3. “Let’s think step by step”: This hack can work just as well on an LLM as it does on a person. If you have a complex question with multiple parts, type this phrase at the end and wait for the LLM to break things down.
  4. Chain-of-thought prompting: For questions involving complex arithmetic or other reasoning tasks, chain-of-thought prompting instructs the tool to “show its work” and explain how it got to the answer.
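A chain-of-thought prompt might look like the following sketch; the exact wording is an illustrative convention, not an official feature of any tool:

```python
# Illustrative chain-of-thought prompt: ask the model to reason out loud
# before giving its final answer. The question and phrasing are made up.

question = (
    "A warehouse ships 120 boxes on Monday and 30% more on Tuesday. "
    "How many boxes were shipped across both days?"
)

cot_prompt = (
    f"{question}\n"
    "Show your work: reason through the problem step by step, "
    "then state the final answer on its own line."
)
print(cot_prompt)
```

The trailing instruction nudges the model to emit intermediate steps (Tuesday = 120 × 1.3 = 156, total = 276), which tends to improve accuracy on multi-step reasoning.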

5. Check Out Other Examples of GenAI Work

Once you’re familiar with GenAI tools and understand how to write a great prompt, check out some of the examples posted by OpenAI to learn what other people are doing — and what else might be possible. As you experiment, you’ll get more comfortable with the chat interface and learn how to refine your prompts so you can deftly narrow down the response and even transform responses into a CSV file or other kind of table.

Think about how you could apply your GenAI knowledge to your business to streamline difficult or repetitive tasks, generate ideas and make information easily accessible to a broader audience. What new use cases can you dream up? What’s now possible that wasn’t before?

6. Integrate with Third-Party GenAI Tools and APIs

Consider using LLMs via APIs such as those behind ChatGPT, Bard and Claude 2. These tools each have robust APIs and the documentation to support them, so the barrier to entry is low. Most of these APIs are priced by usage, so they’re affordable to play around with.

With API integration, you can also bring custom or private data into the LLM prompt via semantic search and embeddings powered by a vector database, a pattern called retrieval-augmented generation (RAG).

Breaking down these two terms:

  • Semantic search: Uses word embeddings to compare the meaning of a query to the meaning of the documents in its index for more relevant results even without exact word matches.
  • Embeddings: Numerical representations of objects like words, sentences or entire documents in a multidimensional space. This makes it possible to evaluate the relationship between different entities.

In an embedding space, for example, the concepts of “cat” and “dog” sit closer to each other than either does to “human” or “spider,” while “vehicle car” sits furthest away as the least related of the concepts.
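The mechanics can be sketched in a few lines. Real systems use a learned embedding model and a vector database; the three-dimensional vectors and documents below are made up purely to show how retrieval feeds the prompt:

```python
import math

# Toy RAG sketch. The documents and their 3-dimensional "embeddings" are
# invented for illustration; production systems use a real embedding model.

DOCS = {
    "Cats are small domesticated felines.":  [0.9, 0.1, 0.0],
    "Cars have four wheels and an engine.":  [0.0, 0.2, 0.9],
    "Dogs are loyal domesticated canines.":  [0.8, 0.3, 0.1],
}

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 means identical direction in the space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings best match the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine_similarity(query_vec, DOCS[d]),
                    reverse=True)
    return ranked[:k]

# Pretend this vector came from embedding the question "What is a cat?"
query_vec = [0.95, 0.15, 0.05]
context = retrieve(query_vec)[0]

# The retrieved text is prepended to the prompt before calling the LLM.
augmented_prompt = f"Using this context: {context}\nAnswer: What is a cat?"
```

Note that the cat document wins even though no exact words are compared — only the geometry of the vectors — which is what makes semantic search work without keyword matches.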

7. Train Your Own Model from Scratch

This last tip is actually less of a tip and more of an “optional next step.” Training your own GenAI model is not for everyone, but you might consider it if you:

  • Have a unique and valuable knowledge base.
  • Want to perform certain tasks that aren’t possible with a commercial LLM.
  • Find that the inference costs of commercial LLMs don’t make business sense.
  • Have specific security requirements that mean you need to host your own LLM and aren’t comfortable passing data through a third-party API.

One way to train your own model is to use an open source model such as Llama 2, Mosaic MPT-7B, Falcon or Vicuna — many of which also provide commercial-use licenses. These are typically labeled according to the number of parameters they have: 7B, 13B, 40B, etc. The “B” represents the billions of parameters the model has and how much information it can process and store. The higher the number, the more complex and sophisticated the model, but also the more expensive it will be to train and run. If your use case is not complex, and if you’re planning to run the model off a fairly powerful modern laptop, a lower-parameter model is the best and most cost-effective way to start.
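A rough back-of-envelope estimate helps here. A common rule of thumb (an approximation for the weights alone, ignoring activations and overhead) is about 2 bytes per parameter at 16-bit precision, with quantization roughly halving that per step:

```python
# Back-of-envelope memory estimate for running an open source LLM locally.
# Rule of thumb (approximate, weights only): 2 bytes/parameter at fp16,
# 1 byte at 8-bit, 0.5 bytes at 4-bit quantization.

def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

for params in (7, 13, 40):
    fp16 = model_memory_gb(params, 2.0)
    int4 = model_memory_gb(params, 0.5)
    print(f"{params}B model: ~{fp16:.0f} GB at fp16, ~{int4:.1f} GB at 4-bit")
```

By this estimate a 7B model needs roughly 14 GB at fp16 but only about 3.5 GB when 4-bit quantized, which is why a lower-parameter, quantized model is the realistic choice for a laptop.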

Mid to large organizations may choose to build and train an LLM model from scratch. This is a very expensive, resource-intensive and time-consuming path to AI. You need technical talent that’s hard to hire and the runway to iterate for quite a while, so this path is not realistic for most organizations.

Fine-Tuning an LLM

Some organizations choose the middle path: fine-tuning a base-level open source LLM to achieve specific things beyond the pretrained abilities of the model. This is a great path if you’re looking to create a virtual assistant in the unique voice of your brand or a recommendation system built on real customer purchases. These models can keep improving over time as you fold highly rated user interactions back into further fine-tuning rounds. In fact, OpenAI reports that fine-tuning can reduce prompt length by up to 90% while maintaining performance. In addition, recent enhancements to OpenAI’s commercial API make fine-tuned models as powerful and accessible as the model that powers ChatGPT and Bing AI.
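For OpenAI-hosted fine-tuning, training data is supplied as JSONL, with one chat example per line. The sketch below shows the shape of a single example; the brand-voice content is invented for illustration:

```python
import json

# One training example in the JSONL format OpenAI's fine-tuning API expects
# for chat models: a JSON object with a "messages" list per line. The
# company name and dialogue here are made up.

example = {
    "messages": [
        {"role": "system", "content": "You are Acme's upbeat support assistant."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant",
         "content": "Great question! Let me look that up for you right away."},
    ]
}

# A training file is simply many such objects, one JSON object per line.
jsonl_line = json.dumps(example)
```

Because every example carries the brand voice in its system and assistant turns, the fine-tuned model learns to reproduce that tone without needing it restated in every production prompt — which is where the prompt-length savings come from.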

Confluent offers helpful resources to learn more about building real-time AI applications. With these seven steps in mind, start getting your hands dirty, learning from your mistakes and revolutionizing your organization with GenAI.
