Programmers Share Tips and Tricks for Working with AI
Here’s a fresh take on AI programming from Rina Diane Caballar, a New Zealand-based software engineer turned tech journalist. Writing for IEEE Spectrum, Caballar offered four “tips and techniques for coders to survive and thrive in a generative AI world.”
“With the current hype around generative AI, we didn’t want to offer more fodder for AI doomscrolling,” Caballar told me in an email interview. While acknowledging the site “has been covering the threats to coding over the past few years” (such as no-code tools and AI), this time they found an optimistic angle.
“We wanted to provide as handy a reference as possible for the best tips and techniques coders can apply to make themselves more relevant in what appears to be a coming age of Large Language Modeling (LLM)-centered programming.”
It’s part of a larger discussion that’s spreading across the entire tech industry. With the arrival of powerful AI tools, are there ways to optimize the resulting code? As experimentation leads to both amazement and anxiety, several people are now ready to share their own real-world insights and experiences.
Let’s Talk about Hallucinations
There are some specific warnings in Caballar’s article — like don’t paste your company’s proprietary code into the window of an AI bot. But later Caballar issues this crucial caveat: be critical, since AI systems “tend to hallucinate and produce inaccurate or incorrect code.”
Fortunately, there are ways to address this, according to several experts cited in the article. Priyan Vaithilingam, a Ph.D. student at Harvard’s School of Engineering, recommends strong testing pipelines and code reviews.
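That safeguard can be sketched in a few lines. The helper below is a hypothetical stand-in for AI-suggested code; the point is that a handful of plain assertions, especially on edge cases, can catch a hallucinated or subtly wrong implementation before it ships.

```python
# Hypothetical example: treat AI-generated code as untrusted until it passes tests.
# Suppose an assistant suggested this helper for splitting a list into chunks.
def chunk(items, size):
    """Split items into consecutive sublists of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# A small review suite: assert on edge cases the model may have overlooked.
assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk([], 3) == []          # empty input
assert chunk([1], 10) == [[1]]     # chunk size larger than the list
```

The tests take minutes to write and, as Abraham notes below, verifying generated code this way is often faster than writing it from scratch.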
The article also cites Armando Solar-Lezama, COO of MIT’s Computer Science and Artificial Intelligence Laboratory, who notes that experienced programmers bring “intuition about what to pay attention to and what raises red flags.” And Tanishq Mathew Abraham, CEO of medical AI research center MedARC, believes that programmers can still come out ahead. “It’s easier to verify the code than it is to write it from scratch in some cases, and it’s a faster approach to generate and then verify…”
It’s a top issue among programmers working with AI. In a recent discussion on Hacker News, one commenter complained they’d wasted a few hours “trying to work on solutions with GPT where it just kept making up parameters and random functions.” And another commenter agreed. “The time I spend attempting to fix its output in unfamiliar territory makes it more of a pain than it’s worth for me.”
But that problem improves with better tools, according to a comment from Chris Esposito, who founded a company that makes a USB-connected electronics lab-on-a-board. His experience? “GPT-4 reduces hallucinations by at least an order of magnitude, and hasn’t failed me yet.”
That discussion also unearthed another important consideration: sometimes the hallucinated code still compiles. (Though Portland-based developer Justin George quipped “It’s nice that we’ve taught the robots to make off-by-one errors just like a real developer.”) SIP platform engineer Alex Balashov said the issue just further underscores the need for experienced coders to review AI-generated output. “You really need to be quite competent in the thing you’re asking it to do in order to ferret out the hallucinations, which greatly diminishes the potency of GPT in the hands of someone who has no knowledge of the relevant language/runtime/problem domain/etc.”
Caballar also advises lots of experimentation to assess the quality of various tools — and suggests asking some specific questions about your AI assistant. “What data was this model trained on? What was filtered out and not included in that data? How old is the training data, and what version of a programming language, software package, or library was the model trained on?”
But others are also contemplating “best practices” for the use of AI. Brian Sathianathan, co-founder of Iterate.ai, an enterprise platform for developing AI-powered low-code apps, recently shared their own best tips in an email interview. Sathianathan’s first suggestion? “As generative AI systems become mainstream, users need to develop good prompt engineering skills.”
One important technique is making sure your prompt includes all the necessary context and information. “Keywords can help the system provide more specific responses,” emphasizes Sathianathan. (This is especially important when the area you’re working in is a narrowly defined niche.) Sathianathan also recommends trying different prompts to assess the results and how they’re affected by changes in input.
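One way to make that concrete is a reusable prompt template that forces you to state the context up front. The field names below (language, versions, task, constraints) are an illustrative sketch, not any standard format:

```python
# Hypothetical sketch of a context-rich prompt template. The fields are
# illustrative: the point is that versions, task, and constraints travel
# with every request instead of being left for the model to guess.
PROMPT_TEMPLATE = """\
You are helping with a {language} project.
Environment: {versions}
Task: {task}
Constraints: {constraints}
Return only the code, with comments."""

prompt = PROMPT_TEMPLATE.format(
    language="Python 3.11",
    versions="pandas 2.x, no other third-party packages",
    task="deduplicate rows in a CSV by the 'email' column, keeping the first",
    constraints="must handle files larger than memory",
)
print(prompt)
```

Swapping a single field (say, the pandas version) and re-running is a cheap way to observe how the output shifts with changes in input, as Sathianathan suggests.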
Caballar agrees, recommending detailed, precise questions — and several iterations. Their article recommends reading up on prompts in tutorials like the official OpenAI Cookbook.
A recent comment at Hacker News put it more succinctly. “Programming is easy. Asking the right question is hard.” But the results are worth it, according to a comment from London-based James Padolsey — who has worked as a software engineer for Facebook, Twitter and Stripe. “I’ve been amazed at the things it can do if given nuanced and detailed enough prompts… if I prompt it well enough, and use my existing knowledge from those accrued 15 years, I can get awesome results.”
The CEO of AI research center MedARC also shared this tip in Caballar’s article: write the explanatory comments that would accompany your desired code snippet. And at least one programmer found that to be one of the unheralded benefits of working with an AI chatbot. “The biggest benefit, I’ve found, is it makes me comment my code,” they wrote on Hacker News. “If I can make the AI understand what I want, then it turns out that three months later I’ll also be able to understand the code.”
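A minimal sketch of that comment-first style, with a hypothetical function: the docstring is written exactly as it would accompany the finished snippet, and doubles as the prompt; the one-line body is what you would expect the assistant to fill in.

```python
# Hypothetical illustration of comment-first prompting: the docstring is
# written before any code and handed to the model; the body is the kind of
# implementation you'd expect back, and the comment survives for later readers.
def normalize_whitespace(text: str) -> str:
    """Collapse runs of spaces, tabs, and newlines in `text` into single
    spaces, and strip leading and trailing whitespace.
    """
    return " ".join(text.split())

assert normalize_whitespace("  a\tb\n c ") == "a b c"
```

Either way the comment gets written, which is exactly the side benefit the Hacker News commenter describes: the intent is recorded whether or not the AI needed it.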
Software engineer Robert Macrae, a founder at Summer.ai, is convinced that advanced tools like GPT-4 can code in any language when given the right prompts. “Just say what you want from it like you were interviewing a developer,” Macrae posted in the discussion.
But it’s also important to use the tools wisely. “Look for potential bugs in the output and ask it about them. Look for memory leaks and ask. Then when you can’t see anything else wrong with it ask it whether there are any bugs or edge cases that might cause problems.”
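That review loop can be written down as a reusable checklist of follow-up prompts. The code below is a hypothetical sketch; the questions simply paraphrase Macrae’s sequence of bugs, then leaks, then edge cases.

```python
# Hypothetical checklist of follow-up prompts for interrogating generated
# code, paraphrasing the review sequence described above.
REVIEW_PROMPTS = [
    "Are there any bugs in this code? Walk through it line by line.",
    "Could this code leak memory or other resources (files, sockets)?",
    "What edge cases might cause problems (empty input, huge input, "
    "concurrent access)?",
]

def review_session(code: str) -> list[str]:
    """Pair each review question with the code under inspection."""
    return [f"{question}\n\n{code}" for question in REVIEW_PROMPTS]

for message in review_session("def add(a, b): return a + b"):
    print(message)
```

Running the same checklist against every generated snippet keeps the interrogation habitual rather than ad hoc.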
The Human’s Role
Humans still have an important role in this process. Caballar spoke to Ines Montani, a Python Software Foundation Fellow and co-founder/CEO of Explosion, a software company specializing in developer tools for AI. Montani wanted to remind programmers that there’s a “creative aspect” in approaching problems. “Don’t fall into the trap of comparing yourself to the AI, which is more or less a statistical output of a large model… there’s more to being a developer than just writing arbitrary lines of code.”
MIT’s Armando Solar-Lezama pointed out that it’s humans who define the code’s structure and choose the specific abstractions to be implemented (along with requirements for its interfaces). And Caballar got a similar response from Harvard’s Priyan Vaithilingam. “There is a lot more to software engineering than just generating code — from eliciting user requirements to debugging, testing, and more.”
So Caballar argues that employers still value basic skills like problem-solving. “Analyzing a problem and finding an elegant solution for it is still a highly regarded coding expertise.” And Caballar ultimately believes that good software-engineering practices are “proving even more valuable than before,” like planning architectures and system designs, “which serves as a good context for AI-based tools to more effectively predict what code you need next.”
Caballar’s article began with a warning from the CEO of medical AI research center MedARC. You may not have to worry about AI replacing you, but “you will have to worry about people who are using AI replacing you.”
Caballar’s article ends by urging programmers to “embrace AI as a tool and incorporate AI into their workflow,” while recognizing both “opportunities and limitations” — and where their human faculties will still shine.