
Accessing Perplexity Online LLMs Programmatically Via API

Perplexity's online LLMs are groundbreaking, delivering new functionality that outperforms the best copilots and AI assistants today.
Jan 29th, 2024 8:20am by
Photo by David Clode on Unsplash

In my previous article, I discussed how Perplexity AI built online LLMs based on the approach explained in the FreshLLMs paper.

Let’s now see how we can build applications that consume the LLMs offered by Perplexity AI.

Step 1: Set up the Environment

If you have a pro account with Perplexity’s Copilot, you get $5 worth of credits every month. You can also sign up for the API separately by paying for the credits. Refer to the Perplexity Labs documentation for the details and rate limits of the API.

Once you have access to the API key, you can use the Python OpenAI module to access the models.

Create a Python 3.10 virtual environment and install the OpenAI and Jupyter Notebook modules.
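A minimal setup sketch, assuming Python 3.10 is available as python3 on your PATH (the environment name is arbitrary):

```shell
# Create and activate a Python 3.10 virtual environment.
python3 -m venv perplexity-env
source perplexity-env/bin/activate

# Install the OpenAI SDK and JupyterLab.
pip install openai jupyterlab
```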



Since we are accessing JupyterLab from our local workstation, we can launch it without a password or token.

Step 2: Accessing Llama 2 70B Through the API

Perplexity Labs offers various models, including its own online LLMs. As of this writing, these include the pplx-7b-online and pplx-70b-online models, alongside popular open models such as llama-2-70b-chat and mixtral-8x7b-instruct.

We will first access Llama 2 and query it about the upcoming 2024 ICC Men’s T20 World Cup. Obviously, we won’t get an accurate response, because Llama 2 is not a FreshLLM and has no access to recent information.

Let’s start by importing the OpenAI module.


We will then initialize the variables holding the API key, model, and prompt values.
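A sketch of these variables; the key value is a placeholder and the prompt is illustrative:

```python
# Placeholder: substitute the API key from your Perplexity account settings.
YOUR_API_KEY = "pplx-xxxxxxxxxxxx"

# The offline model we query first.
model = "llama-2-70b-chat"

# An illustrative prompt about a future event.
prompt = "Where will the 2024 ICC Men's T20 World Cup be played?"
```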


The next step is to construct the prompt template that contains the system and user roles. The user role carries the prompt that we initialized in the previous step.
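The template follows the standard chat completions message format; the prompt from the previous step is repeated here so the snippet is self-contained:

```python
prompt = "Where will the 2024 ICC Men's T20 World Cup be played?"

# The system role sets the assistant's behavior; the user role carries the question.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
```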


We are ready to invoke the API and inspect the response. You can use the same chat completions API as OpenAI’s to access the model by passing Perplexity’s URL as the base endpoint.

Notice how we initialize the client object with Perplexity’s base URL and pass the model identifier to the chat completions method.


The output is available in the response object, under response.choices[0].message.content. Refer to the OpenAI Python SDK documentation for details.


I got a response based on the World Cup held in 2020! This was expected, however, as Llama 2 cannot access real-time data.

Now, let’s try the same prompt with Perplexity AI’s online LLM, pplx-7b-online.

Step 3: Accessing the Online LLM Through the API

We will change the model from llama-2-70b-chat to pplx-7b-online.
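The only change from the previous steps is the model identifier:

```python
# Perplexity's 7B-parameter online model, grounded in fresh web data.
model = "pplx-7b-online"
```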


Since the model is an online LLM, it picks up the most recent data from the web and provides an accurate response.

A Few Caveats to Note

Based on my testing, the 7B parameter online LLM is very accurate but miserable at reasoning. If you need both, then pplx-70b-online is the ideal candidate.

The online LLMs cannot handle the multi-turn conversations we typically expect from capable models such as GPT-4. They are good for fire-and-forget, one-shot prompts that can replace search engine queries.

You can use pplx-api from Perplexity Labs to perform inference on popular open models such as Mistral and Llama. The platform also supports the latest from Mistral, the mixtral-8x7b-instruct model.

The API is extremely limited, with only one URI providing an OpenAI-compatible chat completions endpoint. The good news is that this is enough to build RAG-based applications designed for the OpenAI API: frameworks like LangChain and LlamaIndex work well with this endpoint.

The other disappointment is the lack of an API for Perplexity’s popular Copilot functionality. Copilot has the concepts of threads and libraries, which group conversations into logical units; you can continue a thread at any time by accessing the library. Copilot also interacts with the user to clarify the query and gather the additional information needed to complete the search. All of this is missing from the API.

Perplexity’s online LLMs are groundbreaking, delivering new functionality that outperforms the best copilots and AI assistants on the market today. If they bring the API in line with the product’s functionality, it will be a game changer for developers.
