How Should Developers Respond to AI?

The tech-oriented podcast network Changelog released a special edition last month, focusing on how developers should respond to the arrival of AI. Jerod Santo, the producer of their “Practical AI” podcast, had moderated a panel at the All Things Open conference on “AI’s Impact on Developers.” And several times the panel had wondered whether some of the issues presented by AI require a collective response from the larger developer community…
The panelists exploring AI’s impact on developers were:
- Emily Freeman, Amazon Web Services head of community engagement (after running developer relations at Microsoft) and the author of DevOps for Dummies and 97 Things Every Cloud Engineer Should Know.
- Developer/content creator James Quick.
Speaking about recent high-profile strikes in other industries, Quick had lauded the general principle and “the power of community, and people being able to come together as a community to stand up for what they think they deserve. I don’t know that we’re, like, here right now, but I think it’s just an example of what people that come together with a common goal can do for an entire industry.”
And then his thoughts took an interesting turn. “And maybe we get to a point where we unionize against AI.
“I don’t know, that’s — maybe not. But the power of those connections, I think, can lead to being able to really make positive influence wherever we end up.”
“Unionize against AI. You heard it here first,” moderator Santo said wryly — then moved on to another topic. (When Freeman warned about prompts that trigger “hallucinations” of non-existent solutions, quipping that generative AI “is on drugs”, Santo joked the audience was hearing “lots of breaking news on this panel.”)
As the discussion moved to other areas, it reminded the audience that the issue is not just the arrival of powerful, code-capable AI. The real question is how the developer community will respond to the range of issues raised, from code licensing to the need for responsible guidelines for AI-developing companies. Beyond preserving their careers by adapting to the new technology, developers could help guide the arrival of tools alleviating their own pain points. They could preserve that fundamental satisfaction of helping others, while tackling increasingly complex problems.
But as developers find themselves adapting to the arrival of AI, the first question is whether they’ll have to mount a collective response.
The Influence of a Community
“Unionizing against AI” wasn’t a specific goal, Quick clarified in an email interview with The New Stack. He’d meant it as an example of just how much influence can come from a united community. “My main thought is around the power that comes with a group of people that are working together.” Quick noted what happened when the United Auto Workers went on strike. “We are seeing big changes happening because the people decided collectively they needed more money, benefits, etc. I can only begin to guess at what an AI-related scenario would be, but maybe in the future, it takes people coming together to push for change on regulation, laws, limitations, etc.”
Even this remains a concept more than any tangible movement, Quick stressed in his email. “Honestly, I don’t have much more specific actions or goals right now. We’re just so early on that all we can do is guess.” But there is another scenario where Quick thinks community action would be necessary to push for change: the hot-button issue of “who owns the code.” AI models have famously been trained by ingesting code from public repositories — and during the panel discussion, Quick worried developers might be tempted to abandon open source licenses.
He acknowledged to the audience that there are obviously much larger issues and that they can seem a little overwhelming. But he also believes there’s some evolution that needs to happen, and in a lot of areas — “legally, morally, ethically open sourcedly. There has to be things that catch up, and give some sort of guidelines to this stuff that we have going on.” Quick later argued it will follow the trajectory of other advancements that humanity has made — including the need for “acknowledging that there’s probably a point where we need to have limitations.”
Although he quickly added, “What that means and what that looks like, I don’t know.”
Standards and Guidelines?
But soon the discussion got down to specifics. Santo noted there are already ways that individual users can update a robots.txt file to block specific AI agents from crawling their site. Quick suggested flagging GitHub repositories in the same way as a “reasonable intermediary step,” though he later admitted it would be hard to prove after the fact where AI-generated code had taken its training data.
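Several AI companies publish the user-agent names their crawlers announce; OpenAI documents GPTBot, for example, and Google uses the Google-Extended token to control use of content for AI training. A robots.txt along these lines sketches the opt-out pattern Santo described (the specific tokens shown are examples, not an exhaustive list of AI crawlers):

```txt
# Ask OpenAI's documented crawler to skip the entire site
User-agent: GPTBot
Disallow: /

# Google's token for opting out of AI training
# (separate from normal Search indexing)
User-agent: Google-Extended
Disallow: /

# All other crawlers may proceed as usual
User-agent: *
Allow: /
```

Worth noting: robots.txt is purely advisory — it asks well-behaved crawlers to stay away, but nothing technically enforces compliance, which is part of why the panel framed this as an “intermediary step” rather than a solution.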
But Freeman returned to the role of communities of both developers and users in addressing companies with a “profit-only” mentality. “To some degree, between our work and also where we spend our money, we have to tell the market that that is not acceptable.”
So “I don’t want to live in a world where we’re trying to hide from crawlers. I want to live in a world where we have decided on standards and guidelines that lead toward responsible use of that information, so that we all have some compromise around how we’re proceeding with this.”
At one point Freeman seemed to suggest a cautious choosing-your-battles strategy, telling the audience to “make demands where you can.” But one area where she sees that as essential? Calling for responsible development of AI — again meaning guidelines and standards. “We are in the place where it is truly our responsibility to push for this, and push against the sort of market forces that would say, ‘We’re moving forward quickly with a profit-based approach to this — a profit-first approach.'”
It’s a topic she returned to throughout the panel, emphasizing the importance of developers “recognizing our own power and influence on pushing toward a holistic and appropriate approach to responsible AI.”
Thriving and Surviving
The panel kept returning to the needs of the community. Freeman also agreed with Quick that AI’s impact on developers will someday include tools designed to relieve their least-favorite chores like debugging strange code — though it may take a while to get there. “But I think — truly, I keep coming back to this — we have ownership and responsibility over this. And we can kind of determine what this actually looks like in usage.”
The biggest surprise came when Santo asked if they were “bearish” or “bullish” about the long-term impact of AI on developers. Santo admitted that he was “long-term positive” — and both his panelists took the same view.
Quick characterized his attitude as “a very super-positive thing,” with a goal of easing people’s fears about AI replacing their jobs. And Freeman also said with a laugh that she was bullish on AI — “because it’s happening, right? Like, this is happening. We have to kind of make it our own and lean into it, rather than try and fight it, in my opinion.”
Freeman’s advice for today’s developers? Learn as much as you can, whether it’s about designing prompts or understanding the models that you’re using, and “recognizing the strengths and the limitations — and being ready to adapt and change as we move forward…” Just as developers have in the past, it’s time to grow with a new technology.
And on the plus side, Freeman anticipates “a ton” of new AI tools being created as venture capitalists fund investment in the AI ecosystem.
The Thief of Joy
Toward the end, Santo asked a provocative question: since detail-oriented programmers take pride in their meticulous carefulness, is AI “stealing some of our joy?” And Freeman responded: “I think you have a point.” Maybe we humans glory in our ability to spot errors quickly, and “that pattern recognition is something that makes us really powerful.”
But a moment later Freeman conceded that “I think that’s the joy for some people — it’s not the joy for others.” Freeman described her own joy as “building tools that matter to people… I think the spark of joy is going to be different for all of us.” But Freeman emphasized that joy and personal growth are important to humans, and will remain so in the future.
And this led back to the larger theme of taking control of how AI arrives in the developer world. “We set the standards here. This is not happening to us. It is happening with us. It is happening by us.” Freeman urged developers to “take ownership of that” — to identify which areas they want to hand off to AI, and the areas where they want developers to remain, growing and evolving with the newly-arrived tools.
So instead of coding up yet another CREATE/READ/UPDATE/DELETE service for the thousandth time, “I want to solve the really complex problems.” The challenge of solving new problems at scale is interesting, Freeman argues. “And I think it’s that kind of problem-solving — and looking higher up in the stack, and having that holistic view — that will empower us along the way.”
In our email interview, we asked Quick if he’d gotten any reactions to the panel. His response? “I think we got an overwhelming response of ‘this is something I should be paying attention to’.”