
The Changing Role of Human Developers in an AI and LLM World

Senior developer David Eastman reflects on what AI and LLMs can do for software development, how jobs will be impacted, and how to prepare.
Apr 11th, 2023 10:22am by David Eastman

The problem with analyzing the place of “Artificial Intelligence” (AI) right now is that it isn’t purely an academic pursuit; it clearly has a strong financial dimension. This means that ChatGPT’s true measure of success is how much social media traction it gains for Microsoft, because beyond that it doesn’t have a specific purpose. So writing more on this subject feels partly like turning the hype flywheel. And yet people are already using code that Large Language Models (LLMs) have provided.

There are three separate areas that I want to look at: what LLMs can do for professional software development now; whether professional software development will be negatively impacted (i.e., will we lose our jobs?); and whether software development will now be opened up to many more people because of more democratic access.

But First Some Context

Pronouncements on the dangers of LLMs are largely overblown and are often made by the same people who said blockchain would run everything while we would all be living in the metaverse. Laugh, but not so long ago experts stated that chess computers would never beat a grandmaster at the game. Chess (like the game Go) is a rules-based domain, and as computing resources grow exponentially cheaper, these domains that were once thought of as unfathomably deep are now merely murky puddles.

When I first studied a few AI courses back in the day, the subject was receding due to lack of progress. John Searle’s classic Chinese Room [1] problem hadn’t truly been answered, and books were written about whether the mind was a special type of computer. But these were technically and literally academic issues. Today we have correctly stopped trying to “make brains” and are focused on intelligent outcomes. While we have bypassed expert systems, we have not approached Artificial General Intelligence (AGI). You will know when we have AGI: self-driving cars will work, for a start.

What we have now is transformer-based, self-learning systems, whose internal methods are totally opaque.

Technically, the answers to most questions you may want answered are probably somewhere on the internet by now. The data is there. Almost every task has been done before, and written about or filmed.

We learned quite an important lesson from the way Google corrects the queries it receives. While these corrections often appear in the guise of spelling fixes, we know that Google is just continually comparing your query against the bulk of similar queries. More to the point, this is much more useful than a spelling check; the power of this innovation might even have surprised Google.

This is enough context to start looking at the effect of LLMs, and feedback systems. As the purpose of computing is to convert problems in the real world into virtual mathematical models that can be manipulated in predictable ways, it stands to reason that computing is repetitive. It is putting known Lego blocks into original shapes — not creating new Lego blocks for each problem.

LLMs started off as impressive stochastic parrots. I think we should put aside the idea of whether the parrot is or isn’t “intelligent” and just accept that if a real parrot had absorbed 570GB of training data, its replies might also be quite good. But it is clear that some “emergent qualities” indicate that much more is to come.

LLMs have already earned the chance to intercede in many human tasks — at the moment, Copilot is the most obvious in the developer space. Here, it intercedes between intention and content within the coding domain. The theoretical continuation of this is “conversational programming,” in which structures could be created by non-technical description. More on that later.

Running Copilot

You can get Copilot up and running fairly quickly. It isn’t free for most private developers, although the trial period is 60 days. I say this just to remind you that this is a business proposition, one that earned $7M last year. Having said that, it is pretty good value, and it starts working immediately. It certainly isn’t too hard to run on Visual Studio, as you might imagine; it can be loaded in as a marketplace extension. You must then authenticate against your GitHub account (so you need to be a member of GitHub, which Microsoft is clearly using as a Trojan horse into the development community), and after some authorization tango it will be ready. Right now, Copilot doesn’t run on Visual Studio 2022 for Mac, although VS Code for Mac is fine.

I’ll show one simple example in C#, although video is a better medium for seeing Copilot in action. It is a very small example, but it nevertheless proves the point.

The FlagsAttribute in C# is used when you want to efficiently store a flag set, that is, a set of boolean values manipulated with bitwise arithmetic. I want one in my class to record a set of occurrences, so I define the enum and a variable for it.
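Something along these lines (a sketch: the enum name and its members here are illustrative, not the exact ones from my class):

[Flags]
public enum Occurrence
{
    None = 0,
    Started = 1,
    Paused = 2,
    Resumed = 4,
    Completed = 8
}

// The variable holding the current flag set
private Occurrence occurrences = Occurrence.None;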


Merely defining the names of the methods was enough for Copilot to complete them. In short, I only wrote the signatures of the two methods below; Copilot did the rest.
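The pair looked something like this (again a sketch; the exact bodies Copilot produced may differ):

// Record that an occurrence has happened
public void AddOccurrence(Occurrence occurrence)
{
    occurrences |= occurrence;
}

// Check whether a given occurrence has been recorded
public bool HasOccurred(Occurrence occurrence)
{
    return (occurrences & occurrence) == occurrence;
}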


It can do this because I’m using conventions (although it can also read comments), and what I’m doing is a standard use case that could probably be spotted in many code dojos and Stack Overflow examples, as well as in Microsoft’s own documentation. Now, as it happens, there is a fresher way to check on a flag, but that requires an assumption about which .NET version I’m using. So I’m happy with my wingman.
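That fresher way is the built-in Enum.HasFlag, which assumes .NET Framework 4 or later:

// Built in since .NET Framework 4; reads more clearly than the bitwise check
bool completed = occurrences.HasFlag(Occurrence.Completed);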

The reason I focus on this particular LLM’s use within an IDE is that I can immediately confirm that the code is valid, and I can try building it. Just asking ChatGPT to write something from scratch gives you no such assurances, although it can seem more impressive initially.

Does this point to the destruction of junior development jobs? No, not at all. It just means they will be pushed higher up the stack.

Copilot is a tool, and works with, or because of, the developer’s ability to define small bits of functional code, use convention, and work within the context familiar in any Microsoft solution. What it is effectively doing is removing the step of opening a tab on Stack Overflow. No LLM can take responsibility for its actions (none of which are transparent) or add to corporate knowledge. I would expect a junior developer to use tools like Copilot judiciously, to become more valuable in the market. In fact, the earlier they start using it, the more of an advantage they will have over older colleagues who are likely more resistant to change.

But as the AI gets more information about what developers are doing, it could get braver and more proactive. For now, Microsoft will restrict that direction.

I would expect that LLMs are already entering software development curriculums in college, in the same way that using IDEs is considered standard practice.

Senior Devs: Team Shapers Fighting Competing Objectives

The energy steering software development is nearly always political. When managers say “work with the business” they really mean “understand the internal politics.” Project managers can create a safe bubble to work within, but by the time senior developers become aware of how politics is impacting their projects, it is normally too late.

While their focus is within development teams, seniors can only understand the road ahead for their projects by understanding what the organization is doing.

In short, senior developers have to concern themselves with a little of the why as well as the how.

Deciding how many tests are written and of what type, pushing back on overly aggressive release procedures, reacting to service issues, making suggestions based on monitoring, mentoring new starters: these are not the types of areas that I see being filled by LLM-driven decision tools yet. This is because picking up on the business and political context within an organization of any size is hard to codify. We all know that a manager in a hurry will encourage staff not to bother with tests, but if asked explicitly, they will deny it. Normal everyday human behavior, but not within a stable computing domain.

Turning this on its head, it is quite possible that new businesses will appear where there is little external context and self-learning algorithms can be relied on to improve things unattended.

In my next post on this subject, I’ll look at the likely role of LLM tools further down the chain — where there may be more fertile ground for rapid change.

[1] If I am inside a closed room, receiving Chinese text on sheets of paper and using a book of rules to compose appropriate Chinese replies that I pass back out, does the room understand Chinese even though I don’t?
