
Why The New Stack Won’t Give You AI-Written Articles

There's a lot of pressure these days on journalists and content providers to use generative AI to create their work. We won't. Here's our AI policy.
Dec 12th, 2023 12:21pm
Image by Possessed Photography from Unsplash.

Alongside the rise of online content, and especially since the emergence of Facebook, YouTube and Twitter in the mid-aughts, the public has lost faith in the credibility of the news media.

At The New Stack, we are committed to pushing back against that trend. Credibility is our crown jewel. We know our readers depend on us to help them make sense of the technology they work with, and identify empty hype when we see it.

Lately, generative AI has presented us with a challenge. It's playing an increasing role in what we cover, and it has led us to create a policy governing its use at The New Stack.

It can be an uphill battle for a news outlet to maintain credibility with its readers. For instance, in 2022, only 34% of Americans said they put a great deal or a fair amount of trust in the news media, according to Gallup Poll figures. In 2003, just before the advent of social media, 54% of survey participants said the same.

[Chart: Gallup Poll data showing the decline in public trust in the media since the early 1970s. Source: Gallup]

In November 2022, a new twist emerged in the story: the introduction of OpenAI's ChatGPT, an advanced generative AI tool that can take written prompts and generate natural language answers.

The move sparked a mad scramble among tech giants like Google, Meta and Microsoft to compete, while companies of all stripes sought to incorporate the latest AI tech into their tools and workflows. On Tuesday, The New Stack is at the AI.dev North America conference, sponsored by The Linux Foundation, to learn how all that’s going.

And that brings us back to the news media. Just as developers worry that they could be rendered obsolete by AI and machine learning, those of us who work in journalism and content production fear the potential of our robot overlords.

I got a chill recently when I interviewed Heath Newburn, global field CTO of PagerDuty, for a New Stack Makers podcast. He told me about visiting a customer whose executives demanded to know how many people they could fire because of AI.

Those executives were saying the quiet part out loud. Though, according to the experts we talk to on a regular basis, at this stage it's more likely that AI will change developer jobs than eliminate them. At least for now. (In the meantime, that doesn't keep people from inventing tech conference speakers.)

Likewise, some news outlets are already using AI in place of real-life human workers. Sports Illustrated got caught in November employing the likes of “Drew Ortiz,” an entirely AI-generated reporter with a made-up headshot and bio, to create articles. The articles were product reviews, which SI said were provided by a third-party content provider.

The photo and bio of Sports Illustrated's "reporter" Drew Ortiz, which was removed from its website — along with his articles — after Futurism, an online magazine, discovered he didn't exist. (Image source: The Wayback Machine)

After Futurism broke the story (tipped by an anonymous insider at SI), the bio and articles were removed from the sports magazine’s website.

AI Tools and Experiments

Like the rest of the world, journalists and content creation practitioners are trying to figure out how to live with generative AI. The technology, even before the recent ChatGPT breakthroughs, was an undeniable time-saver.

For instance: Typing up recordings after an interview, the first step in turning research into an article, was for decades one of the worst, most tedious parts of a journalist’s job — like shoveling up horse poop in the wake of a parade. But thanks to AI, no more.

Like reporters around the world, most of us here at TNS have long used transcription software to process our audio and video interviews, drag-and-dropping a file into the application and marveling, minutes later, when it spits out a good enough text transcription. (Good enough, but not perfect. I love the app so much I want to marry it, but it still returns some oddities that need untangling, such as the time it heard “Kubernetes” as “Cornell West.”)

And after GPT-4 dropped, we did experiment a little with it this past spring. We get a tsunami of press releases every day; some of them warrant sharing with our audience, but there are limits to how much news our small, human staff can cover.

So we tried feeding a press release into a generative AI tool. We were fully transparent about the experiment, labeling our robot reporter, Stackie, as being an AI writer of AI-generated words, and linked to the original press release.

We did it exactly once. Turns out, Stackie needs as much, if not more, editing than a human reporter. And, anyway, the whole thing made us feel queasy. So Stackie has retired.

The New Stack’s AI Policy

It's a great time to be covering software and related technologies. But as the ecosystem expands, so will the pressure to produce more content, and more knowledgeable coverage — keeping up with the speed of innovation, and with our audience's need to keep pace so they can do their jobs better and keep moving up in their careers.

But we at TNS are not going to take shortcuts. We take the bond of trust we’ve built with you, our readers, very seriously. In a time when media outlets carry low credibility with the public, we cherish it.

Therefore, this month, my colleagues and I put together an AI policy, and we are sharing it with our contributors, reporters and sponsors alike.

The rapid adoption of generative AI tools prompted us to formalize what had always been understood amongst ourselves at TNS. But we want to make sure that the rules are clear to everyone whose work appears under our brand: no AI-generated content can appear in articles or other content submitted to TNS for publication.

The rest of the policy follows.

This means: 

We permit using AI for research: for example, to query interview recordings you upload, or to summarize and organize information. But your stories should never include AI-generated wording in the draft you submit to your editor.

All material you submit to TNS must be written in your own voice.

Do not rely on AI as a source of information; generative AI is prone to surfacing assertions that are simply not true. Independently verify any facts generated from an AI prompt, as you would facts derived from Wikipedia.

Why we make these rules:

The New Stack’s editorial integrity is paramount. We cannot present material under a reporter or contributor’s name that they did not write. Doing so breaks our bond of trust and authority with our readers, our sponsors, and the tech community we cover and serve.

We look forward to serving you in 2024 and beyond.
