
The AI Engineer Foundation: Open Source for the Future of AI

The AI Engineer Foundation, which is seeking to join the Linux Foundation, has begun its work by launching a tech-agnostic AI Agent Protocol.
Nov 2nd, 2023 10:14am
Sasha Sheng, founder of the AI Engineer Foundation, at the recent AI Engineer Summit; photo by Richard MacManus.

Both the European Commission and the U.S. government are scrambling to create AI regulations. Actors’ and writers’ unions are striking, in part, to protect their creative work against the threat of job-stealing AI. Almost everyone acknowledges the potential risk of unregulated AI at scale. But governments and industries trail behind technological advancements — and artificial intelligence is growing at a speed like nothing we’ve witnessed before.

While regulators and external entities work things out, it’s left to the tech industry to set its own standards. And, as Spotify’s Helen Greul reminded us recently, the path to standardization is via open source.

That’s where the AI Engineer Foundation, announced at this month’s AI Engineer Summit, comes in. It’s a nonprofit dedicated to open sourcing and standardizing AI advancements so anyone — from individual developers to large organizations — can understand and participate in building AI in a way that’s interoperable, inclusive and community-driven.

“The goal here is to elevate everybody in the industry that is working in this space,” Sasha Sheng, the AI Engineer Foundation’s founder, told The New Stack. But what is this organization? How can it remain vendor-neutral? And what work has it done so far? Read on to learn more.

The AI Agent Protocol

As the tech matures and use cases and adoption become more widespread, the logical next step will be AI standardization as a way to welcome people to build on top of it — just like what HTTP achieved early on in the web. Sheng said this will “allow people to have standards on […] how to communicate so that things can be built on top of those mutually agreed upon standards.”

With this in mind, the first project out of the foundation is the tech-agnostic Agent Protocol, which is an OpenAPI specification that aims to create a common interface for interacting with AI agents. Also called intelligent agents, these are pieces of software that act autonomously, intuiting the next step and then executing the task.

Agent Protocol “is very simply just a spec [specification] of what the data input looks like, what the data output looks like, [and] what the endpoint looks like,” Sheng explained.
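To make that concrete, here is a minimal sketch of the request/response shapes a spec like Agent Protocol describes: a task is created at one endpoint, then executed as a series of steps at another. The endpoint paths and field names below follow the published OpenAPI spec as best I understand it, but treat them as illustrative assumptions rather than a normative copy of the schema; the responses are simulated locally, not fetched from a real agent.

```python
# Hedged sketch of an Agent Protocol-style exchange. Field names and paths
# (/ap/v1/agent/tasks, .../steps) are assumptions based on the public spec.
import json
import uuid


def create_task_payload(user_input: str) -> dict:
    """Body for POST /ap/v1/agent/tasks -- kicks off a new task."""
    return {"input": user_input, "additional_input": {}}


def fake_task_response(payload: dict) -> dict:
    """The Task object an agent would return (simulated locally here)."""
    return {"task_id": str(uuid.uuid4()), "input": payload["input"], "artifacts": []}


def fake_step_response(task: dict, output: str, is_last: bool) -> dict:
    """A Step object, as from POST /ap/v1/agent/tasks/{task_id}/steps."""
    return {
        "task_id": task["task_id"],
        "step_id": str(uuid.uuid4()),
        "output": output,
        "is_last": is_last,
    }


task = fake_task_response(create_task_payload("Summarize this repo"))
step = fake_step_response(task, "Cloned repo, reading README...", is_last=False)
print(json.dumps(step, indent=2))
```

The point of the spec is exactly what this sketch implies: any client that knows these shapes can drive any compliant agent, regardless of how the agent is implemented internally.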

Early integration partners include LangChain and LlamaIndex, the popular platforms for developing with large language models, so that developers can use Agent Protocol out of the box. Other early adopters include product startups developing assistant-type workflows, like personal AI agents GPT Engineer, Smol Developer and AutoGPT.

The Path of Inclusive AI Is Open Source

Agent Protocol isn’t the only specification out there. It’s going up against the likes of SuperAgent, AIWaves Agents and Microsoft’s AutoGen, each a vertically integrated framework with its own communication protocol. However, it’s a highly fragmented marketplace, and those frameworks are “owned by individual companies that have a financial interest tied to it,” Sheng explained. “So, as a developer, when they build tooling, they can pick whatever they want to use, but the downside of using something that a company owns, that’s for profit, is potentially misaligned incentives.” She referenced how, earlier this year, the previously open source Terraform switched to the source-available Business Source License. “We want to avoid things like venture-backed lock-in.”

Rather than competing, she’d love for these vendor-backed frameworks to adopt Agent Protocol as well; it has recently been added to the AutoGen roadmap.

Adopting this vendor-neutral, open source strategy, Sheng explained, is also about giving developers the freedom to choose how to use and build their own AI agents — because, so far, each company has its own internal way that its agents communicate.

“I hope, by doing Agent Protocol and promoting it, these individual frameworks that people are using will slowly adopt Agent Protocol as well, so that we can actually have a unified interface,” she said. But, she also emphasized, “I am not in the position to say our Agent Protocol is the best out there. We want to be careful not to dictate what the industry should use.”

Instead, she continued, the AI Engineer Foundation aims to be “a house for standards and mutually agreed upon interfaces so we can collaborate more.” The foundation intends to observe and see what people actually start using.

Indeed, there is interest in using it. When we spoke to Sheng, the AI Engineer Foundation was just closing out its first hackathon (run by the team behind AutoGPT), which saw more than 600 agents competing to create the best generalized agent that can handle tasks through natural language input.

The hackathon acts as an evaluation benchmark. “The AutoGPT team built a benchmark evaluation tool that basically can call various agents, all implemented differently; but the reason it can call [is that] all the agents have a unified interface to receive requests,” Sheng explained. “And so the benchmark doesn’t need to know how exactly the internals of how an agent is implemented, but, because they know what language the agent speaks, they can ping the agent and get a response to evaluate how well it does on the task.”

How the Agent Engineer Fits into the AI Engineer Job Demand

The AI Engineer Foundation is also developing a series of courses dedicated to training traditional software engineers to help fill the 27 million AI engineer job gap.

One such course could be to help developers train to become agent engineers. This is a subset of the AI engineer role, Sheng explained, and an evolution of the very human agent role. From customer support to travel agents, there are many roles in which the first or second form of contact has been or will be automated very soon, with generative AI answering commonly asked questions.

“You can loosely think of a human being as an agent, right?” she said. “We think what the next step to do is then act kind of autonomously.”

It’s evolving into humans plus agents performing tasks, like call centers going hybrid. The agent engineer will help build the self-learning agents and then the prompt engineers — whose roles are grounded in empathy and domain knowledge — will point out gaps in the knowledge base.

This can also be applied to software development processes themselves, like Kubiya for platform engineering.

AI agents already have a lot of potential in the developer productivity space, particularly around documentation.

“For a developer, there are a lot of resources that people can use to do the same task,” Sheng said. “The simple act of calling LLM [large language model], there are multiple different interfaces that wraps the call. So to developers, if we can standardize the interface, it’s essentially helping them move faster.”
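Sheng’s point about wrapping LLM calls can be shown with a tiny adapter sketch. The two provider functions below are stand-ins I invented, deliberately given mismatched signatures and return shapes to mimic the fragmentation she describes; a single standardized entry point hides those differences from application code.

```python
# Hedged sketch of standardizing divergent LLM call interfaces.
# Both providers are fictional stand-ins with deliberately different shapes.


def provider_a_complete(prompt: str, max_tokens: int = 64) -> dict:
    # Returns a nested response object, one common real-world style.
    return {"choices": [{"text": f"A says: {prompt}"}]}


def provider_b_generate(inputs: str) -> str:
    # Returns a bare string, another common style.
    return f"B says: {inputs}"


def complete(prompt: str, provider: str = "a") -> str:
    """One standardized entry point; callers never see provider-specific shapes."""
    if provider == "a":
        return provider_a_complete(prompt)["choices"][0]["text"]
    if provider == "b":
        return provider_b_generate(prompt)
    raise ValueError(f"unknown provider: {provider}")


print(complete("hello", provider="b"))
```

If the interface itself were standardized across the ecosystem, as Sheng suggests, this adapter layer would shrink or disappear — which is exactly the “helping them move faster” she describes.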

But AI isn’t quite there yet. As we already know, generative AI will respond even if it doesn’t know the answer, so users need to be wary. In this transitory hybrid world, Sheng said, the human agents will take over the conversation when the agent doesn’t know the answer, while the prompt engineer will investigate and tune when the agent performs poorly.

Humans need to observe how the agent internals work, too, Sheng said, which is why there’s a demand for AI agent observability tooling, “when humans need to observe to see when an agent can give out an answer, and then act accordingly.”

Agent engineers and prompt engineers may just be stop-gap jobs for the near future, but they will help traditional software developers diversify their offerings and enter the AI space. As AI advances, it is the subject matter experts — who outside of software development may not need technical knowledge at all — who will be training these models.

Next Steps for the AI Engineer Foundation

The AI Engineer Foundation is looking for open source software projects to apply to join the foundation, which, like the Linux Foundation, will have three stages: sandbox, incubation and graduation. Sheng said they are especially eager to find a schema project that wants to be donated to foster standardization of data labeling. The foundation also has a project under development that looks to standardize storing and sharing model definitions, including context windows and cost per token. They’re also searching for open source projects around both vector database interfaces and inter-LLM interoperability. Agent observability is another open source solution area the foundation is eager to explore, as agents can get stuck in a loop with no way for developers to see when or why.
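The loop problem behind that observability interest is easy to illustrate. A trivial sketch, entirely invented here: if an observer records the states an agent passes through, the first repeated state is a cheap signal that the agent may be going in circles.

```python
# Minimal sketch of loop detection for agent observability.
# Real tooling would compare richer state (prompts, tool calls, outputs);
# this version just hashes step labels as a stand-in.


def detect_loop(states):
    """Return the first repeated state in an agent's trace, or None."""
    seen = set()
    for state in states:
        if state in seen:
            return state
        seen.add(state)
    return None


trace = ["plan", "search docs", "write code", "search docs"]
print(detect_loop(trace))  # a repeated step signals a possible loop
```

An observer flagging the repeat is the "when"; surfacing the trace around it is how a human starts to answer the "why."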

Sheng said her foundation has kicked off the process of joining the Linux Foundation to benefit from its reputation and its support for organizational structure, legal and back-office functions, as well as to gain access to other open source communities.

All AI Engineer Foundation projects will be open source and will continue to contribute back to AI research. The foundation is also in the process of looking for sponsorship.

As Sheng said, “Projects like this help the entire ecosystem.”

TNS owner Insight Partners is an investor in: The New Stack.