Tech Works: How to Fill the 27 Million AI Engineer Gap

Learn how to help your software engineers gain the technical skills, theory and empathy to pivot into AI developer and LLM engineer roles.
Aug 18th, 2023 5:00am by Jennifer Riggins
Image by Diana Gonçalves Osterfeld.
Editor’s note: Tech Works is a monthly column by longtime New Stack contributor Jennifer Riggins that explores workplace conditions, management ideas, career development and the tech job market as it affects the people who build and run the software the world relies on. We welcome your feedback and ideas for future columns.

Around the globe, there are only about 150,000 machine learning engineers — a small fraction of the world’s 29 million software engineers.

Yet AI is driving a growing demand for large language model (LLM) developers that's already tough to fulfill. External factors like the global chip shortage and the current limits of the technology mean the most in-demand skill sets will vary heavily between the short term and the long term, which in this new AI age can be just a few months apart. That's why U.S.-based AI engineering job listings boast six-figure salaries.

The best opportunity to start to close this gap quickly is in retraining technologists.

So how do organizations help turn software engineers into AI developers?

For this installment of Tech Works, I talked to Mikayel Harutyunyan, head of marketing at Activeloop, which helps connect data to machine learning models, about the impact of AI on developer experience and the journey of prompt engineers, data scientists and LLM developers.

Prompt Engineers: A Short-Term Solution

Since the engineering mindset is inherently scientific, it’s no surprise that most of your engineering team is already experimenting with AI. Whether you’ve asked them to or not, they’re likely pair programming with GitHub’s Copilot and ChatGPT. (It’s important to note the recent revelation that while they seem very convincing, ChatGPT’s code suggestions are wrong more than half the time.)

It’s only logical that the next step in the developing AI market is to become a prompt engineer.

In AI, a prompt is any information, like questions and answers, that communicates to the artificial intelligence what response you're looking for. Therefore, a prompt engineer is tasked with the following (a toy sketch of the loop appears after this list):

  • Understanding the limitations of a model.
  • Designing a prompt in natural language.
  • Evaluating performance.
  • Refining when necessary.
  • Deploying over internal data.
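
To make those steps concrete, here is a toy Python sketch of the design-evaluate-refine loop. Everything in it is hypothetical: ask_llm() stands in for whatever model API your team uses, and it returns a canned reply so the example runs without credentials.

```python
# Toy sketch of the prompt design-evaluate-refine loop. ask_llm() is a
# hypothetical stand-in for a real model call; it returns a canned reply
# so this example runs without any API key.
def ask_llm(prompt: str) -> str:
    return "You can reset your password from Settings, where you can also track your order."

PROMPT_TEMPLATE = (
    "You are a support agent for ACME Corp. Answer in two sentences or fewer.\n"
    "Customer question: {question}"
)

# Each test case pairs a realistic user question (slang, typos and all)
# with a keyword the answer should mention.
TEST_CASES = [
    ("how do i reset my password??", "password"),
    ("wheres my order", "order"),
]

def evaluate(template: str) -> float:
    """Return the fraction of test cases whose answer mentions the keyword."""
    hits = 0
    for question, keyword in TEST_CASES:
        answer = ask_llm(template.format(question=question))
        hits += keyword in answer.lower()
    return hits / len(TEST_CASES)

print(f"prompt pass rate: {evaluate(PROMPT_TEMPLATE):.0%}")  # refine the template if low
```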

A common current use case is a customer service chatbot. A prompt engineer needs an understanding not only of the model, but also of the end user or customer.

But Harutyunyan predicted that this prompt engineer is more of a stopgap role reflecting current AI limitations — soon, AI models will likely do this better than humans, even reading and reacting to emotions like frustration.

In the next year or so, generative AI will likely also be able to interpret prompts that combine images and text. Think of the opportunity to evaluate whether a car accident insurance claim is valid from a written description and a few photos of the damage.

As chatbot tooling becomes more autonomous and less technical, prompt engineers will become the subject matter experts. Once customer support representatives get a repeated query, they will feed the question and answer into a machine learning tool so the chatbot can answer that question the next time it comes up.

It makes sense to remove the developer from the loop in order to bring the machine-learning model closer to what a specific industry or organization requires. After all, a building manager knows their building better than an off-site developer and will soon be better equipped to tweak the model that’s communicating with the HVAC and security cameras.

But until that evolution of the prompt engineer role, Harutyunyan said, the job requires more empathy for how your users think and speak. “The people will be writing this or that, and I need to make sure my model expects them to write this,” he noted, including slang, abbreviations, emojis and more.

Improv classes and pairing up engineers with your customer support representatives are two ways to build this empathy and verbal versatility quickly. Or you could offer technical training to the customer reps, who likely have that empathy and client perspective already.

And don't worry: even if the prompt engineer role only lasts a year or two, empathy is a skill that will always be in demand for a software engineer.

Poll: What job role will grow the most in the near term due to increased use of LLMs and generative AI? AI engineering is being posited as a new profession that will surpass many existing job roles.

Results:

  • AI engineer: 35%
  • ML engineer: 13%
  • MLOps engineer: 12%
  • Data engineer: 10%
  • Full stack engineer: 19%
  • Other: 10%

The New Stack VoxPop results: 214 people responded from August 21 through August 29, 2023, on which technical role they think will be most in demand in the short term, as AI becomes an increasing part of software engineering workflows.

The Skills an AI Engineer Needs

It would be rare indeed to find AI engineering candidates who tick all the boxes, but there are certain technical and core skills that make you a better candidate than most. Harutyunyan grouped them into two buckets: machine learning engineering skills and LLM engineering skills.

Machine Learning Skills: Python and More

The open source programming language Python reigns supreme in machine learning, even more so now that Facebook has advocated for a technical change to the language that Harutyunyan said makes it much more suitable for LLM training. Python's global interpreter lock, or GIL, allows only one thread to execute Python code at a time; removing that lock enables truly multithreaded programs, which in turn speeds up training.
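
A quick way to see the GIL's effect is to time the same CPU-bound function run sequentially and then on two threads; under today's standard interpreter, the threaded version finishes in roughly the same time because only one thread executes Python code at once. A minimal sketch:

```python
# Minimal sketch: with the GIL, two threads doing CPU-bound work take about
# as long as running the same work sequentially, because only one thread
# executes Python bytecode at a time.
import threading
import time

def busy(n: int = 10_000_000) -> None:
    while n:
        n -= 1

start = time.perf_counter()
busy()
busy()
print(f"sequential:  {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=busy) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"two threads: {time.perf_counter() - start:.2f}s  # similar: GIL-bound")
```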

The vast majority of software engineers have at least some familiarity with Python, but many lack other machine learning fundamentals, including statistics. Developers need to brush up on basic statistics, Harutyunyan said, as well as machine learning fundamentals like:

  • The differences between supervised and unsupervised learning.
  • What bias in machine learning is and how to remove it, especially in private data.
  • How to evaluate machine learning models (see the sketch after this list).
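
On that last point, here is a minimal scikit-learn sketch of evaluating a supervised model on a held-out test split; the dataset and model are illustrative choices, not recommendations.

```python
# Minimal sketch of supervised model evaluation with a held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
# Hold out 20% of the data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
preds = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, preds))
print("confusion matrix:\n", confusion_matrix(y_test, preds))
```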

Alongside Python, be sure to learn about the LangChain framework for developing apps on top of large language models. Also, dive into vector databases, which serve as long-term memory for AI.
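
For a flavor of what that looks like, here is a minimal LangChain sketch, assuming an OpenAI API key is configured in the environment; the class names reflect the 2023-era LangChain API, which evolves quickly, so treat this as a sketch rather than a reference.

```python
# Minimal 2023-era LangChain sketch: a prompt template piped into an LLM.
# Assumes OPENAI_API_KEY is set in the environment; APIs change quickly,
# so check the current LangChain docs before copying this.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer concisely: {question}",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(question="What is a vector database?"))
```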

LLM Skills: The Transformer Model and More

Harutyunyan placed large language models more in the “deep learning skills” bucket, as it's a still-nascent topic that has largely been locked up in academia.

To kick off your LLM journey, he recommended learning about the Transformer machine-learning model. He compared it to a mystery novel where you collect clues page by page to identify the culprit.

“A Transformer model, it kind of takes a look at all the pages of the book at once and then cross-references the clues and says ‘OK, this is the probability of the next word,’ or whatever it is.”

This model, used predominantly for text data, Harutyunyan said, “helps to make sure that you understand some relationships and patterns that are spread out over very long distances within the data.”

Then, the Transformer's attention mechanism allows the model to assign greater importance to some parts of the input than to others.
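
For readers who want to see the mechanism itself, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer; the shapes and random values are purely illustrative.

```python
# Minimal NumPy sketch of scaled dot-product attention: each output is a
# blend of the values V, weighted by how well each key K matches the query Q.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # attention-weighted values

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))  # three tokens, four-dimensional embeddings
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```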

Harutyunyan and many data scientists also recommend reading the seminal paper by Google researchers, “Attention Is All You Need.”

If you’ve thus far missed reading the research paper, he added, that’s OK. “If you’re learning to drive a car, you don’t really need to read more about the first car ever made and how it was built,” he said. “This is what is so special about what’s happening right now.”

Many software engineers are simply jumping into the driver's seat and connecting an LLM API to their data stored in a database for AI, Harutyunyan noted, “and they are building a demo that actually works.”

But, he added, an understanding of the fundamentals will give you an advantage: “That layer will get commoditized very, very quickly because everybody will be able to connect an API for the large language model to their data and build a generic app with a simple UI for a certain use case.”

Throughout this learning process, continue to learn how the LLM was trained — think natural language processing — and why your model is not working.

Once you've taken these steps, Harutyunyan said, it's time to learn about the data flywheel, where you productize data, speeding up the path from private data to end-to-end value. The model runs in production on real-time data, constantly feeding changes and improvements back in, such as analysis of why a sale was or wasn't successful.

He recommended checking out the popular deep-dive, step-by-step explainer videos for AI beginners created by Andrej Karpathy, formerly of Tesla and OpenAI.

Once you're in production, you can then leverage a knowledge retriever architecture for LLMs. This takes data from across existing sources like Slack, email or customer chat, and stores it so that the responses to your questions will be relevant. That matters all the more when you don't want to pay to store less relevant data and responses.
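
A toy sketch of that retrieval step is below; embed() is a deliberately crude stand-in for a real embedding model, there only so the example runs on its own.

```python
# Toy sketch of the retrieval step in a knowledge-retriever setup: embed
# documents once, then fetch the most relevant ones for a question by
# cosine similarity. embed() is a crude stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding: normalized character-frequency vector."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1
    return vec / (np.linalg.norm(vec) or 1.0)

DOCS = [
    "Slack thread: the HVAC sensor on floor 3 is reporting bad data again",
    "Email: Q3 sales review notes and follow-up actions",
    "Support chat: customer asked how to reset their password",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def retrieve(question: str, k: int = 1) -> list[str]:
    sims = DOC_VECS @ embed(question)  # cosine similarity of unit vectors
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("how do I reset a password?"))
```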

Core Skills: Language Paired with Engineering

Just like a DevOps team with different skills is more set up for success than a single full-stack developer, pairing or teaming up engineers — from frontend to backend to machine learning — with subject matter experts will accelerate your organization's AI growth.

Contrary to the rumor that generative AI is stealing jobs from journalists, linguistic skills are more in demand than ever.

“What I’m seeing is that nontechnical [people] like myself can very often get better outputs from the LLM than technical people,” Harutyunyan said.

He’s found that pairing with his developer colleagues to create queries made for improved prompts and results.

“Engineers are known to be very object-oriented. So they’re like: X does Y, and then from Y goes Z,” he said. “Maybe what you also need to be is a bit more linguistically endowed and to be able to explain in better words — if you have this use case, you’re acting as this person.”

He noted that the University of California, Berkeley’s new College of Computing, Data Science, and Society was recently established, in part, to focus on the inclusion of human-centric skills in AI.

The Global Chip Shortage Demands Efficiency

All the money in the world can't buy what doesn't exist. Anyone who has tried recently to buy a car — or a cell phone or video game console — has been hit by the ongoing microchip supply chain crisis. There simply isn't enough compute to go around. And large language models devour hundreds of terabytes of data, a volume that only increases as a model grows.

“In our current paradigm, where computing is the constraint and not software talent, product leaders must redefine how they prioritize various products or features, bringing GPU limitations to the forefront of strategic decision-making,” Prerak Garg, a tech and strategy adviser, recently wrote in HackerNoon.

To help organizations make decisions about LLM training, he offered product leaders a GPU prioritization framework.

The first target audience to upskill for working with LLMs is the classic machine learning engineer, who can already train smaller models and can adapt those skills to the scale of large language models.

Such engineers need significantly more knowledge of data storage and databases for AI, Harutyunyan said, and an understanding of how to package data so that these exponentially larger models can be trained more efficiently, at a lower cost. This includes tabular, non-tabular and raw data, he said, like images that need to be labeled correctly.

Add to this a foundation of MLOps in order to train and deploy these models, and you've got the complex LLM developer job description.

LLM developers who can optimize for compute are in high demand. Harutyunyan and his colleagues contend that CPUs can be more cost-efficient than GPUs for fine-tuning LLMs, particularly when GPUs are scarce.

But if you can optimize for very domain-specific performance, Harutyunyan reckoned, you could cut that cost dramatically by fine-tuning models. It's also worth noting that an emphasis on compute efficiency translates to a smaller environmental impact.

Because the field of LLM development is just starting to gain momentum, training programs for technologists are relatively scarce. However, Activeloop launched Gen AI 360: Foundational Model Certification, a free program, in June, in collaboration with TowardsAI and the Intel Disruptor Initiative.

Its course on LangChain, vector databases and foundational models has already been taken by more than 10,000 senior-level developers and managers worldwide, according to Activeloop.

A subsequent certification program on training and fine-tuning LLMs will launch in September, with a program focused on deep learning across business verticals slated to start in October or November.


Got an idea for a topic that Tech Works should explore? Send a message to @TheNewStack or @JKRiggins.
