How LinkedIn Overcame Challenges of Building Generative AI
With more than a billion members, LinkedIn already has to do things at a global scale. But just over a year ago, when the company started building its newest public feature, collaborative articles, it needed to work out how to leverage then-nascent generative AI to accelerate advice sharing in the world’s largest professional network.
This four-month design and build process uncovered several sociotechnical challenges of building with GenAI, which two members of the team shared exclusively with The New Stack: Shweta Patira, LinkedIn’s director of creator engineering and the project’s lead, and Lakshman Somasundaram, director of product management for the Moonshots team. Their lessons can help you, too, prepare to build with generative AI at scale, with both the AI and the human users in mind.
LinkedIn Looks to Leverage GenAI to Spark Conversations
“It starts with the fact that everyone has problems at work every single day, right? It might be like, ‘Hey, I wish I could get promoted, and I don’t know how to get promoted,’ or ‘I need to conduct an interview and I don’t really know how to conduct an interview.’ Everyone’s got their own problems at work every single day,” Somasundaram told The New Stack. Until now, “the best way to get answers is to ask connections or ask people you know for their advice who have been there and done that, have solved those problems and solved them really well.”
But of course, these aren’t yes or no questions. There can be many right and wrong ways to conduct a tech interview or to get promoted — both of which have a lot of context and bias affecting results. Having more, varied perspectives increases the chance for better answers.
But not everyone has access to that rich of a network.
“What that means is that the people who often get access to the best answers to these questions are the people who have large networks. Because they can actually tap multiple people in their networks to get their advice and perspectives and opinions on how would you go about solving this problem.” Because, Somasundaram continued, “Most of the world does not have large networks. Most people don’t have a lot of people that they can tap into to ask those questions and [get] perspectives from multiple folks.”
LinkedIn’s now more than a billion members bring over 10 billion years of combined work experience, putting the company in a unique position to concentrate on the social network of it all and hive-mind its members, leveraging and sharing their expertise at scale.
On the other hand, it’s been famously claimed that it’s easier to critique something than to create something. This is especially true when workers have limited time to get things done — and when searching for questions and posting responses on LinkedIn is not your actual job.
Of course, if you’re an AI, you have the reverse problem. In the present world of artificial intelligence, there are fewer and fewer limitations on creating something — it’s more that we need a critical eye to review what it has created.
“If you can put something in front of someone, it’s much easier for them to say, ‘Hey, this doesn’t make sense. I would do this differently,’” Somasundaram explained. “That’s what generative AI really enables and has enabled for us. And that’s why within collaborative articles, what we’ve been doing is creating all these starter articles with the power of AI, but then, with those starter articles, actually inviting all the world’s experts on those topics [to comment] on LinkedIn.”
Part of it is that these are often questions you might ask a trusted peer but don’t necessarily want to post publicly — e.g. that you want to get paid more, or that you don’t know how best to conduct an interview. So instead, the engineering team trained the generative AI on the 40,000 skills within LinkedIn profiles, so that it can ask those questions and suggest subtopics within them, letting identified experts respond without you having to be the one who asked.
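To make that concrete, here is a minimal sketch of what a skill-driven prompt pipeline could look like. The template wording and the function names are illustrative assumptions, not LinkedIn’s actual prompts or code:

```python
# Sketch: turning a LinkedIn profile skill into a starter question.
# The template text and call_llm() are illustrative placeholders,
# not LinkedIn's real prompts or API.

QUESTION_PROMPT = (
    "You are helping professionals share advice.\n"
    "For the skill '{skill}', write one common workplace question "
    "and three subtopics an expert could address."
)

def build_prompt(skill: str) -> str:
    """Fill the template for a single skill taken from member profiles."""
    return QUESTION_PROMPT.format(skill=skill)

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion call to a GPT-3-era model."""
    raise NotImplementedError("wire up your LLM client here")

print(build_prompt("Public Speaking"))
```

Repeated over tens of thousands of skills, a template like this is how one feature can fan out into millions of candidate questions.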
But when they decided to build their first major generative AI feature for LinkedIn back in October 2022, there wasn’t exactly a playbook to make it all happen. They came across significant challenges that are unique to the GenAI space. Here’s what the engineering team learned from it.
GenAI Sociotechnical Challenge #1: Prompt Engineering
Yes, LinkedIn — like GitHub, the creator of Copilot — is owned by Microsoft, which is also the biggest backer of OpenAI, the creator of ChatGPT. But that doesn’t mean the team was getting insider access so quickly. Patira told The New Stack that they had only limited access to GPT-3.5, which launched around the same time this project kicked off, and no access to it at scale. They mainly relied on its predecessor, GPT-3, to build and release this feature.
“The infancy of generative AI at the time gave us a lot of realizations of the fact that we not only had to author LLM [large language model] prompts, but we also had to build a lot of the GenAI scaffolding, our program-to-workflow management, tooling from the ground up, [while] building the product,” she said.
They quickly realized that generative AI wasn’t — and still isn’t — where it needed to be to work completely autonomously. And certainly, GPT-3 wasn’t up to snuff. Human evaluation of generative AI was and is still necessary, which is exactly the pattern the LinkedIn stories team followed, Patira explained — “human evaluation of what the quality of these articles are.”
They’d kick off with a prompt like, What are common causes of fear of public speaking?
“Then, once we get responses from this prompt, we wanted to develop these responses for some of these questions, into these collaborative articles. And we wanted to do this in batches and at scale. We don’t want to just generate one, we want to generate a lot of them,” she continued. Then, “we give it to our amazing editorial team. They essentially go in and they look at the quality of each one of these and they say, ‘Okay, we’re going to score it on the basis of relevance, on the basis of accuracy, and on the basis of making sure there are no red flags’.”
At this point, they approve or reject the GenAI result, before moving on to honing, like asking the GenAI to make the writing more crisp.
“So we do this over and over again. And when you start doing something over and over again, you very quickly realize you need tools for it. It doesn’t make sense to do this in spreadsheets,” Patira said. “So then we went down the path of, while we are building the product, we’re also building this tooling to make sure that we can do this at scale, in batches, with human evaluation, with trust classifiers, all of this embedded in the workflow.”
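The loop Patira describes, generating drafts in batches and having editors score each on relevance, accuracy and red flags before approval, could be sketched like this. The score scale, thresholds and function names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    prompt: str
    text: str
    scores: dict = field(default_factory=dict)

def generate_batch(prompts, llm):
    """Generate one starter-article draft per prompt, in a batch."""
    return [Draft(prompt=p, text=llm(p)) for p in prompts]

def editorial_review(draft, relevance, accuracy, red_flags):
    """Record human editorial scores; approve only clean, high-scoring drafts.
    The 1-5 scale and thresholds are invented for this sketch."""
    draft.scores = {"relevance": relevance, "accuracy": accuracy,
                    "red_flags": red_flags}
    return relevance >= 4 and accuracy >= 4 and not red_flags

# A stub model stands in for GPT-3.
stub_llm = lambda p: f"Draft article for: {p}"
batch = generate_batch(
    ["What are common causes of fear of public speaking?"], stub_llm)
approved = [d for d in batch
            if editorial_review(d, relevance=5, accuracy=4, red_flags=False)]
```

The point of the tooling is exactly this shape: once scoring is a function call rather than a spreadsheet row, the same review loop runs over batches of thousands.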
GenAI Sociotechnical Challenge #2: Trust
But what does trust mean… with robots?
“Trust at LinkedIn, for us, is an embedded part of every product that we build,” Patira said. “So throughout the end-to-end flow of generating these articles, to the end of distributing your answers to someone, each step of the way, we use what we call trust classifiers, which are proactive defenses, and we use sort of a Swiss cheese model, because we know that one defense is not going to be enough. So every step of the way, we have these defenses that we put in place.”
This is really classic AI, not generative AI, she explained: being able, for example, to tell the difference between dissent and harassment.
“While we want to invite debate, we do not want to invite toxicity,” she continued. “We have classifiers that are essentially AI models that look at each of the human contributions. They also look at all of these generative AI-generated collaborative articles — both the AI content as well as the human content, at every step of the way — and say, Does this have toxic content? Is it harassment? Is it bullying?”
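The Swiss cheese model she describes amounts to layering several independent classifiers, any one of which can block a piece of content. A toy sketch, with invented stand-in rules in place of real trained classifiers:

```python
# Each "layer" is a stand-in for a trained trust classifier.
def no_harassment(text: str) -> bool:
    return "you people are idiots" not in text.lower()

def no_spam(text: str) -> bool:
    return "buy now" not in text.lower()

TRUST_LAYERS = [no_harassment, no_spam]

def passes_trust(text: str) -> bool:
    """Swiss cheese model: content must clear every defense layer,
    because one defense alone is not enough."""
    return all(layer(text) for layer in TRUST_LAYERS)

ok = passes_trust("I disagree; here is how I would run that interview.")
blocked = not passes_trust("Buy now!!!")  # the spam layer catches this
```

Running both AI-generated articles and human contributions through the same stack is what lets one set of defenses cover every step of the flow.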
This necessary human-in-the-loop intervention meant that the team had to grow quickly to support the collaborative articles project at scale. They ended up with about 12 sociotechnical teams of four or five people each, organized around solving the challenges and sub-challenges outlined here.
“Through the entire journey, we found that AI is still a lot more trustworthy than humans are,” Patira reflected, as they looked to root out hate speech and spam, and put more proactive defenses in place early on. “Humans actually can be quite unpredictable. AI is more predictable. And how this manifested in our product is we had a lot of trust guardrails put in place in order to check AI. And we realized that we needed more trust guardrails for humans than we needed for AI.”
She went on to offer guidance to those working with generative AI: “We needed to do a lot more in order to make sure that our conversations remain healthy on LinkedIn by humans, not via AI.”
GenAI Sociotechnical Challenge #3: Expert Identification
LinkedIn is the world’s largest professional network, so there’s no doubt that people want to seem like experts there, making it tricky to distinguish the real experts from the expert bullshitters. That made expert identification — among a billion members — a real problem, even before the feature launched in March of this year.
“LinkedIn is especially good at this because we have a very dense Cayley graph,” Patira said, which includes individual job histories, members’ skills, skill endorsements, and any skill-based proficiency tests. “Based on this, we actually have a unique advantage in being able to tell if you’re a genuine expert in an area or not.”
From all this publicly available profile data, LinkedIn then uses AI to identify the top 10% of contributors across the 40,000 skills, which have been grouped into, at the time of publishing, about 1,000 expert topics.
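At its core, picking the top 10% per skill is a ranking cutoff. A toy version follows; the composite score, its weights and the input fields are assumptions loosely based on the signals Patira lists, not LinkedIn’s actual model:

```python
def top_experts(members, skill, fraction=0.10):
    """Rank members by a composite score for one skill; keep the top fraction."""
    scored = sorted(
        members,
        key=lambda m: (m["endorsements"].get(skill, 0)
                       + 10 * m["assessments"].get(skill, 0)),
        reverse=True,
    )
    cutoff = max(1, int(len(scored) * fraction))
    return scored[:cutoff]

members = [
    {"name": "A", "endorsements": {"Podcasting": 40},
     "assessments": {"Podcasting": 1}},
    {"name": "B", "endorsements": {"Podcasting": 5}, "assessments": {}},
]
podcasting_experts = top_experts(members, "Podcasting")
```

The hard part at LinkedIn’s scale is not the cutoff but computing trustworthy scores for a billion members across 40,000 skills.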
“We are going to rank things for you based on what we think is most relevant,” she explained.
There is, of course, incentive to reply where you want to be branded an expert, because if you contribute to a number of articles in a topic area, you can get a sweet little “Top Voice” badge near the top of your profile. She clarified, “We are not going to hide things or hide contributions from you, unless we think that they are genuinely low quality or our professional community policies have been violated.”
But “experts” are also manually vetted, at least at this earlier stage of collaborative articles. And LinkedIn members interact with the articles using the same reactions as posts, which then feed back into both the AI suggestion algorithm and those LinkedIn humans in the loop. In addition, this feature has the same violation reporting system.
GenAI Sociotechnical Challenge #4: Distribution
First came the experts, then the eventual experts who learn from the collaborative articles.
“Say you’re an expert in podcasting. Once you have put in your collaboration, we want to deliver this to people who actually want to learn more about public speaking, or people who are seeking these answers. So we want to meet these Knowledge Seekers, as we call them, where they are at,” Patira said. “So distribution is another big tech challenge.”
Of course, LinkedIn members are human, so the first place we go to answer a question is Google, which happens to be set up with neat features like “People also ask” that LinkedIn is aiming to rank on. LinkedIn also recommends this expert collaboration on your feed and, if you’ve signed up for them, via email notifications. If you have asked for updates on an individual member, they’ll also ping you right away.
“We essentially want to change people’s habits, when and where they look for these answers,” she said. “We want to bring this content where they are already at.”
GenAI Sociotechnical Challenge #5: At Scale
“A lot of these problems are not rocket science at a smaller scale. They just get harder with large scale,” Patira said. The pair pegged LinkedIn as having flagged about 40 million prospective experts. Add to that millions of questions generated via a mix of prompt engineering and generative AI, millions of backend jobs, and expert evaluation for identification, all to be tackled at scale.
For just the prompt workflow, she explained, “We’re using workflows where we are dumping a lot of this data into queues, then picking them up from Kafka queues, dumping them in another part of the workflow, then making certain online calls to GPT, getting these responses, and storing them — all that end to end.”
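The shape of that workflow (enqueue prompts, consume them, call the model, store the responses) can be simulated end to end with an in-memory queue; Python’s queue.Queue stands in for Kafka here, and the GPT call is a stub:

```python
import queue

def produce(prompts, q):
    """Dump prompt work items into the queue (a Kafka topic, in production)."""
    for p in prompts:
        q.put(p)

def consume(q, call_gpt, store):
    """Pick items up, make the online model call, store the response."""
    while not q.empty():
        prompt = q.get()
        store[prompt] = call_gpt(prompt)
        q.task_done()

work_queue = queue.Queue()
responses = {}
produce(["What are common causes of fear of public speaking?"], work_queue)
consume(work_queue,
        call_gpt=lambda p: f"response to: {p}",  # stub for the real GPT call
        store=responses)
```

Decoupling producers from consumers this way is what lets the slow, rate-limited model calls be retried and scaled independently of the code generating the prompts.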
The next workflow is expert identification, where backend jobs run offline every few hours to crunch the data for expert prospecting.
“Let’s look at our current expert list. Who can now possibly be an expert? What has changed? And then returning all of that data again,” Patira said, most of which happens in the backend.
Expert identification and prompt workflow management, especially, are done mostly offline. Online is reserved for timely notifications, like on members’ feeds: “like you shared an answer that could be very useful to me. And so you want me to get a timely notification saying, ‘Jennifer just shared this, you might be interested in it’.”
The LinkedIn collaborative articles team is continuously working on optimizing and limiting the amount of data that needs to be crunched online versus offline or nearline, an intermediate tier that’s not as fast as online, real-time data but faster than retrieving offline data. All the while, feed and notifications are the two big online systems that need to be managed at scale.
“On both of these, we use our LinkedIn Graph quite heavily,” Patira said, referring to LinkedIn’s Economic Graph.
And that scale is expected to continue to ramp up. They are also starting to implement GPT-4, which is expected to hasten growth even more.
“It’s been eight months since we launched and since then, there’s been just a ton of acceleration, especially in the last month, month and a half,” Somasundaram said, including a global reach in English, as well as recent launches into French, Spanish, Portuguese and German. They’ve also recently released a massive desktop redesign.
Members will eventually be able to pose questions too, because, as both emphasized, the goal in these articles is not all-AI text but to leverage generative AI to jumpstart professional, human-led conversation.