
Prompt Engineering: Get LLMs to Generate the Content You Want

May 19th, 2023 8:00am
Feature Image by Alexandra_Koch from Pixabay.

Generative AI models are trained to emit content based on the input they receive. The more descriptive the input instruction is, the more accurate and precise the output is. The input instructions fed to a generative AI model are aptly called prompts, and the art of crafting the most suitable prompt leads us to prompt engineering.

This article introduces prompt engineering to developers using large language models (LLMs) such as GPT-4 and PaLM. I will explain the types of LLMs, the importance of prompt engineering, and various types of prompts with examples.

Understanding Large Language Models

Before getting started with prompt engineering, let’s explore the evolution of LLMs. This will help us understand the significance of prompts.

Generative AI is based on foundation models trained on a large corpus of data using unsupervised learning techniques. These foundation models serve as the base for multiple variants, each fine-tuned for a specific use case or scenario.

Large language models can be classified into base LLMs and instruction-tuned LLMs.

The base LLMs are foundation models trained on massive datasets available in the public domain. Out of the box, these models are good at word completion: they can predict what comes next in a sentence. Examples of base LLMs include OpenAI's GPT-3.5 and Meta's LLaMA. When you pass a string as input to a base model, it generates another string that typically follows the input string.
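As a rough sketch, here is what a string-in, string-out completion call looks like with the OpenAI Python library (the v0.x SDK current at the time of writing); the API key is a placeholder, and "davinci" stands in for whichever base completion model you have access to:

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# A base model simply continues the input string.
response = openai.Completion.create(
    model="davinci",
    prompt="The quick brown fox",
    max_tokens=20,
)

print(response.choices[0].text)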

The instruction-tuned LLMs are fine-tuned variations of the foundation model designed to follow instructions and generate an appropriate output. The instructions are typically in a format that describes a task or asks a question. OpenAI's gpt-3.5-turbo, Stanford's Alpaca, and Databricks' Dolly are some examples of instruction-tuned LLMs. The gpt-3.5-turbo model is based on the GPT-3 foundation model, while Alpaca and Dolly are fine-tuned variations of LLaMA.

These models are refined with a technique known as Reinforcement Learning from Human Feedback (RLHF), in which human feedback on the model's responses is used to align its behavior with the given instructions. The input prompts for these models are more descriptive and task-oriented than the prompts fed to foundation models.
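By contrast, an instruction-tuned model such as gpt-3.5-turbo takes a task description rather than a string to be continued. A minimal sketch, again using the v0.x OpenAI SDK with a placeholder API key:

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The prompt describes a task; the model follows the instruction
# instead of merely completing the text.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."}
    ],
)

print(response.choices[0].message.content)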

The Importance of Prompt Design

Prompt engineering is an essential skill for leveraging the full potential of LLMs. A well-designed prompt ensures clarity of intent, establishes context, controls output style, mitigates biases, and avoids harmful content. By carefully crafting prompts, users can improve the relevance and accuracy of LLM output and promote the responsible use of generative AI in various applications.

Two key aspects of prompt engineering are a thorough understanding of the LLM and a good command of English. A poorly crafted prompt generates a half-baked, inaccurate response that borders on hallucination. Using the right vocabulary to instruct the model in the most concise form is critical to exploiting the power of LLMs.

Since we will be dealing with multiple LLMs, it is also essential to understand the best practices and techniques specific to each model. This typically comes from experience with the model and careful analysis of the documentation and examples published by the model provider. LLMs are also limited by the number of tokens (the units into which text is broken up for processing) they can accept as input and produce as output, so prompts must adhere to the size restrictions imposed by the model.
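For OpenAI models, the tiktoken library can be used to check how many tokens a prompt will consume before sending it. A short sketch, assuming gpt-3.5-turbo as the target model:

import tiktoken  # pip install tiktoken

# Each model family uses a specific tokenizer; look it up by model name.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt = "Write a short story about a young girl who discovers a magical key."
tokens = encoding.encode(prompt)

# The prompt plus the expected completion must fit in the model's context window.
print(f"Prompt length: {len(tokens)} tokens")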

Types of Prompts

Prompt engineering is still a fuzzy domain with no specific guidelines or principles. As LLMs continue to evolve, so will prompt engineering.

Let’s take a look at some of the common types of prompts used with current LLMs.

Explicit prompts
Explicit prompts provide the LLM with a clear and precise direction. They are usually short and to the point, posing a simple task or a question to answer. Explicit prompts are helpful when you need short, factual answers or a specific task completed, such as summarizing a piece of writing or answering a multiple-choice question.

An example of an explicit prompt: "Write a short story about a young girl who discovers a magical key that unlocks a hidden door to another world."

This explicit prompt clearly outlines the story’s topic, setting, and main element, providing the LLM with specific instructions on what to generate. By providing such a prompt, the LLM can focus its response on fulfilling the given criteria and create a story that revolves around the provided concept.

Conversational prompts
Conversational prompts are meant to engage the LLM in a more natural dialogue. They are typically less structured than explicit prompts and give the LLM more freedom in terms of length and style. Conversational prompts are great for generating responses that feel natural and flow well, as in chatbots or virtual assistants. Let's take an example of a conversational prompt.

“Hey, Bard! Can you tell me a funny joke about cats?”

In this conversational prompt, the user initiates a conversation with the LLM and explicitly asks for a specific type of content: a funny joke about cats. The LLM can then generate a response that fulfills the user's request. This style of prompt allows for a more interactive and engaging exchange with the LLM.
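In practice, conversational use means carrying the chat history along with each request. A minimal sketch with the v0.x OpenAI SDK (the API key is a placeholder, and the history format follows the ChatCompletion messages convention):

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Keep the running conversation as a list of role-tagged messages.
messages = [
    {"role": "user", "content": "Hey! Can you tell me a funny joke about cats?"}
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
reply = response.choices[0].message.content
print(reply)

# Append the model's reply and the next user turn so the model keeps context.
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Got another one, about dogs this time?"})

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)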

Context-based prompts
Context-based prompts give the LLM additional information about the situation, which helps it produce more accurate and relevant answers. These prompts often include domain-specific terms or background information that helps the LLM understand the conversation or subject at hand. Context-based prompts are helpful in applications like content creation, where the output must be correct and make sense in the given context.

An example of a context-based prompt:

“I’m planning a trip to New York next month. Can you give me some recommendations for popular tourist attractions, local restaurants, and off-the-beaten-path spots to visit?”

In this context-based prompt, the user provides specific information about an upcoming trip to New York and asks for recommendations for tourist attractions, restaurants, and off-the-beaten-path spots. This context helps the LLM understand the user's situation and tailor its response with suggestions specific to the trip.
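In an application, this kind of context is often injected into a prompt template at runtime. A minimal sketch; the variables here are hypothetical stand-ins for real user input or application state:

# Hypothetical values that would normally come from user input or app state.
destination = "New York"
travel_month = "next month"

# Inject the situational context into a reusable prompt template.
prompt = (
    f"I'm planning a trip to {destination} {travel_month}. "
    "Can you give me some recommendations for popular tourist attractions, "
    "local restaurants, and off-the-beaten-path spots to visit?"
)

print(prompt)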

Open-ended prompts
The open-ended prompt is another type of prompt posed to the LLM. It encourages the model to come up with longer, more detailed answers. Open-ended prompts are useful for creative writing, storytelling, or brainstorming ideas for articles. They let the LLM give a more free-flowing answer and explore different ideas and points of view.

Consider the following open-ended prompt:

“Tell me about the impact of technology on society.”

In this open-ended prompt, the user raises a broad topic without specifying any particular aspect or angle. The LLM is free to explore various dimensions of the impact of technology on society, such as social interactions, the economy, education, and privacy, and to provide a more comprehensive response that covers multiple perspectives.

Bias-mitigating prompts
Prompts can be designed to steer the LLM away from possible biases in its output. For example, a prompt can ask for multiple points of view or encourage evidence-based reasoning. Such prompts help ensure that the output does not reflect hidden biases and that the results are fair and balanced.

Below is an example of a prompt asking the LLM to avoid bias.

“Please generate a response that presents a balanced and objective view of the following topic: caste-based reservations in India. Consider providing multiple perspectives and avoid favoring any particular group, ideology, or opinion. Focus on presenting factual information, supported by reliable sources, and strive for inclusivity and fairness in your response.”

This prompt encourages the LLM to approach the topic in a neutral and unbiased manner. It emphasizes the importance of presenting multiple perspectives, avoiding favoritism, and relying on factual information from reliable sources. It also emphasizes inclusivity and fairness, urging the LLM to consider various viewpoints without promoting discrimination or prejudice. Prompts like this aim to mitigate potential biases and produce a more balanced output.

Code-generation prompts
Since LLMs are trained on code repositories in the public domain, they can generate snippets in various languages. A code-generation prompt asks the LLM to produce code in a specific language. The prompt should be specific and clear and provide enough information for the LLM to generate a correct answer. The following is an example of a code-generation prompt:

“Write a Python function that takes in a list of integers as input and returns the sum of all the even numbers in the list.”

In this example, the prompt asks for a Python function that calculates the sum of all the even numbers in a given list. The generated code defines a function called sum_even_numbers that takes a list of integers as input, initializes a variable to hold the running total, and iterates over each number in the list. If a number is even (i.e., divisible by 2 with no remainder), it is added to the total. Finally, the function returns the total. The model also adds documentation and explains how it arrived at the solution.
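Reconstructed from the description above, a representative version of the generated function might look like this (the exact output will vary from run to run):

def sum_even_numbers(numbers):
    """Return the sum of all even numbers in a list of integers."""
    total = 0
    for number in numbers:
        if number % 2 == 0:  # even: divisible by 2 with no remainder
            total += number
    return total

# Example usage:
print(sum_even_numbers([1, 2, 3, 4, 5, 6]))  # 12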

In the next article of this series, we will explore the techniques used in prompt engineering with examples. Stay tuned.
