OpenAI, ChatGPT and the Next Age of ‘Build’
OpenAI's work, and ChatGPT in particular, has dominated tech circles lately. The recent 3.5 release has brought the concept of “generative AI” into the mainstream by presenting an interface that lets users interact with the artificial intelligence conversationally.
Unlike many new technologies, this system’s approachability has quickly taken it beyond software development circles. For some, that means generating entire reports or analyses on a topic; for others, it might mean creating songs or other art. If you want a real laugh, ask it to write you a “snarky letter quitting your job.”
In the software development space, ChatGPT is ushering in the next age of building software. A quick scroll through any of the major social media platforms will show polarizing opinions.
Some approach the introduction of this conversational generative AI as revolutionary and exciting, completely changing their daily workflow. For others, it presents uncertainty and risk — risk of flawed and unchecked code making its way into production or even fear of job elimination.
The debate about whether to be excited or worried probably ends somewhere in the middle, and while both sides have strong claims to make, there’s a way we can approach ChatGPT that can truly shift the way developers build and ship code.
On one hand, ChatGPT drastically lowers the complexity bar to writing software in a way we have never seen before. I can’t help but think of the most common paths people take when learning new programming languages and how those have been transformed.
The old way of spending hours hammering away at a search engine looking for example code or browsing forums searching for answers might have gone the way of Blockbuster or landlines. The old way sometimes gave you answers and hopefully an explanation, but you could also end your search more confused than when you began.
In these early stages of the “build with ChatGPT” era, we ask the AI for a component, and it returns the example code. I was curious to see how this could net out and tested this idea by asking ChatGPT, “Show me how to write a feature flag in React with LaunchDarkly.” True to the hype, it surprisingly and quickly returned an example of switching between two software components with a feature flag.
When I fed it a second prompt asking, “Can you show me an example in Python?” it quickly pivoted to the new example. Both responses also included step-by-step explanations of what the code meant and implementation considerations specific to LaunchDarkly. It even included links to the documentation.
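To make that kind of answer concrete: LaunchDarkly’s real Python SDK needs a live connection and an SDK key, so the sketch below stubs the flag lookup in plain Python instead. The flag name, values and function names are all illustrative, not taken from ChatGPT’s actual output.

```python
# Sketch of the pattern ChatGPT returned: switching between two code
# paths behind a feature flag. The flag store is stubbed out in plain
# Python rather than calling LaunchDarkly; all names are illustrative.

FLAGS = {"new-checkout-flow": True}  # stand-in for a real flag service


def flag_enabled(key, default=False):
    """Look up a flag value, falling back to a default."""
    return FLAGS.get(key, default)


def checkout():
    """Route to one of two implementations based on the flag."""
    if flag_enabled("new-checkout-flow"):
        return "new checkout"
    return "old checkout"


print(checkout())  # "new checkout" while the flag is on
```

Toggling the flag in the stub store switches which path runs, which is the whole point of the feature-flag pattern: the routing decision lives in configuration, not in a code change.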
But was it “copy and pastable”? Not quite.
While this code was valid, critical parts were missing. LaunchDarkly requires its language SDKs to be initialized within the application code, and since the prompt didn’t ask for that, this part was omitted. When I augmented my query, it returned new code that included the initialization step as well.
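The shape of the corrected answer, initialize the client once and only then evaluate flags, can be sketched like this. A stub class stands in for the real LaunchDarkly SDK client; the SDK key and flag names are hypothetical.

```python
# Sketch of initialize-then-evaluate, the step my first prompt omitted.
# StubLDClient stands in for LaunchDarkly's real SDK client; the SDK key
# and flag names are hypothetical.

class StubLDClient:
    def __init__(self, sdk_key):
        # A real client would open its connection to the flag service
        # here, which is why initialization must happen before any
        # flag evaluation.
        self.sdk_key = sdk_key
        self._flags = {"show-new-banner": True}

    def variation(self, flag_key, default):
        """Evaluate a flag, falling back to the supplied default."""
        return self._flags.get(flag_key, default)


# Step 1: initialize once, at application startup.
client = StubLDClient("hypothetical-sdk-key")

# Step 2: evaluate flags anywhere in application code afterward.
banner = "new banner" if client.variation("show-new-banner", False) else "old banner"
```

The point of the two-step structure is exactly what my first prompt missed: evaluation code alone looks complete, but without the startup initialization it has no client to evaluate against.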
It’s a great example that even when it comes to AI, learning the right questions to ask is an art form of its own. And even with the right question finally asked, there’s still business logic to take into account. ChatGPT will likely never understand the specifics of your implementation, because the models it learns from may or may not include examples that match your needs.
So in this simple example, we asked ChatGPT for some code and got the results we asked for, alongside explanations and links to documentation. But it wasn’t quite enough, because it was missing requirements we didn’t think to ask for. We slightly modified our questions and, in return, received the additional details we needed. Even then, the output lacked enough definition to drop in and run, and it required a certain level of knowledge of the tooling to know the right question to ask.
What does this mean for the usefulness of ChatGPT in the real world?
After tinkering around with ChatGPT, I believe that the answer lies in improvements to the developer workflow. When thinking about ChatGPT and the developer experience, it’s hard to imagine a world where ChatGPT entirely replaces developers who are building software. The system is certainly adept at answering questions and giving information, but it’s always going to struggle to understand the complexities of specific software implementations within a given environment.
There will be moments in this new build era when a developer blindly copies code generated by ChatGPT into their environment and spends several hours debugging when something goes wrong. While this is certainly feasible, the speed gained from such a tool is likely outweighed by the risk of something like a site crash.
The more likely outcome of using ChatGPT is that a developer approaches it as a tool they can use to augment their existing development workflow to accelerate the earlier stages of their development process.
Tasks like quickly bootstrapping the start of a project or filling gaps in knowledge (from language syntax all the way to process knowledge) let developers accelerate how they learn and build.
If the software world were made of cookie-cutter code, then there is a chance that ChatGPT might be able to replace development teams. However, that is not our reality. The reality is that every business approaches its problem space in different ways.
Layers of process, business logic, technical debt, design decisions and architecture create a context that ChatGPT can’t address universally. It’s a tool we can use to build software faster, expand our own learning and grow. Those who ignore it will end up behind the curve, but those who rely on it alone will end up on more outage retrospective calls, filling out root cause analysis docs.