Dr. Seth Dobrin: Building Practical AI Strategies Without the Hype
IBM’s former Global Chief AI Officer explains why the future of media AI lies in small, specialised models — and why control, privacy and trust matter more than chasing the biggest LLM.
The past two years have seen publishers rush towards large language models, often experimenting under pressure to “do something with AI”. But behind the headlines, a quieter, more pragmatic shift is under way.
Dr. Seth Dobrin has seen the cycle from the inside. As IBM’s first Global Chief AI Officer and now CEO of Qantm AI, he advises enterprises on how to make AI work in the real world. His message to media leaders is blunt: the era of “bigger is better” is over, and the publishers who succeed will be the ones who move towards smaller, specialised models that they control.
“Large language models make for great demos,” Dobrin told me in an exclusive interview. “But in an enterprise or a newsroom, what you need is predictability, privacy, and cost control. That’s where small, domain‑specific models win.”
The Case for Small Models
Dobrin’s argument rests on practical reality: the economics of large models do not add up for most organisations. “When you look at the full cost of running a large model at scale, you’re talking about infrastructure, API charges, and the operational burden of relying on a third‑party provider. For a typical enterprise deployment, a trained small model can deliver the same outcome at a fraction of the cost.”
That cost gap has direct implications for publishers working on tight margins. But the issue is not just financial. Smaller models can run locally, giving publishers full control over their tech stack and, critically, their data.
“In media, you are dealing with sensitive material—unpublished stories, source information, intellectual property. Running models locally or in a private cloud means that data stays inside your walls. You can’t get that level of sovereignty when you’re dependent on an external API.”
Bloomberg’s early move with BloombergGPT in 2023 showed how a domain‑specific, internally deployed model can deliver value without exposing data. That approach, specialised and local, is being replicated by newer projects such as the Associated Press’ Local Lede tool, which monitors more than 400 federal agencies, applies a custom model to identify local angles, and pushes story leads to reporters. Both examples show how specialised models can outperform general systems on well‑defined tasks.
Dobrin agrees: “Start with a defined use case, build a small model for that task, and keep control of your data. That’s how you get value without taking unnecessary risks.”
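What does “small and local” look like in practice? As a purely illustrative sketch, and not the tooling Bloomberg or the AP have disclosed, a newsroom could run a modest, publicly available model on its own hardware with the open‑source Hugging Face transformers library. The checkpoint and the “local angle” labels below are assumptions made for this example:

```python
# Illustrative only: triaging an incoming agency press release with a small,
# locally hosted model so the text never leaves the newsroom's own machines.
# The checkpoint and labels are assumptions for this sketch, not any
# publisher's actual configuration.
from transformers import pipeline

# facebook/bart-large-mnli is a publicly available ~400M-parameter model that
# supports zero-shot classification and runs on a single workstation.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

release = (
    "The Department of Transportation today announced new grant funding "
    "for rural bridge repairs across three counties."  # made-up sample text
)

# Hypothetical "local angle" categories an editor might care about.
labels = ["local infrastructure", "federal budget", "public safety", "not newsworthy"]

result = classifier(release, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```

Because both the model weights and the copy stay on local infrastructure, nothing is handed to a third‑party API, which is precisely the sovereignty point Dobrin keeps returning to.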
Lessons from the LLM Experiments
Some of the world’s biggest newsrooms have gone the other route, experimenting with large, general‑purpose models. This year, for example, News Corp Australia introduced in‑house AI tools such as “NewsGPT” and “Story Cutter” across major titles including The Australian, Courier‑Mail and Daily Telegraph. These tools mimic specific writing styles, generate editorial angles and even reduce the need for sub‑editors, a development Australia’s Media, Entertainment & Arts Alliance says “threatens to undermine accountable journalism”.
Dobrin sees value in these deployments but also sounds a warning. “You can make a big model do the job, but at what cost? And at what level of dependency on someone else’s system? Every time you send your content out to a black‑box API, you lose control over both your data and your costs. For most publishers, that’s not a sustainable position.”
For publishers, the past decade’s experience with Meta and Google, the pivot kings whose shifting priorities repeatedly upended publisher strategies, should serve as a further warning about that kind of dependency.
AI Without the Hype: Problem First
Dobrin also says the conversation about AI in media has been too focused on what the tech could do rather than what it should do. He stresses that AI must be treated as a source, not a replacement for human judgement.
“Journalism has clear standards for verifying information. AI should be subject to the same scrutiny. Treat it like you would any other source—check its output, challenge it, and never publish anything without human review.”
This approach aligns with what Dobrin calls “effective adoption” as opposed to performative experiments. “You don’t integrate AI because it looks good in a press release. You integrate it because it solves a problem, whether that’s speeding up research or improving audience insights. The value comes from alignment with your editorial and business priorities, not from using the shiniest tool on the market.”
Trust, Attribution and Control
Dobrin is also critical of the current AI economy’s approach to rights and attribution.
“Creators are finding their work inside training datasets without consent or compensation. That’s not sustainable, ethically or commercially. Media companies need to be proactive in setting their own terms for how their content is used in AI systems, otherwise someone else will set those terms for them.”
For publishers, that means both protecting their own assets and being careful with the models they deploy. “If you don’t know what data trained your model, you can’t guarantee the outputs. That’s a risk to your brand and your audience’s trust.”
Running small, inspectable models locally gives organisations a clearer chain of custody for both data and outputs. Dobrin notes that this approach is increasingly seen as a requirement rather than a luxury, particularly in regulated sectors.
Avoiding the Colonialism Trap
Finally, Dobrin warns about a less visible risk: cultural bias embedded in AI systems.
“Most of the large models are trained on datasets that skew heavily Western and English‑language. When you deploy them globally without adjustment, you are effectively exporting a narrow cultural perspective. That’s a form of technological colonialism.”
For publishers with international audiences, this is not an abstract concern. “Editorial diversity doesn’t survive if your tools flatten nuance out of the content pipeline. The responsibility is on publishers to ensure their AI systems reflect their audience, not just the dominant data source.”
One solution is to build small models for specific cultural and linguistic contexts. “A small model, well trained on local data, can outperform a massive general model because it understands the domain. That’s as true for a newsroom as it is for a bank or a hospital.”
Looking Ahead
As I wrap up the interview, I ask Dobrin what a healthy AI‑powered newsroom would look like in 2030, a forecast he is reluctant to make.
“Honestly, 2030 is way too far out,” he says. “AI is moving so fast that even three years is hard to call.”
“You’re going to see newsrooms that look pretty empty. More work will go to specialists and contributors. At the same time, you’ve got to figure out how to have completely AI-driven content and completely human-driven content running side by side—because people are going to want both.”
He sees small models as central to that future, especially as the industry faces what some are calling the “post‑website era”, in which AI agents mediate much of news consumption. “If you want to deliver content to human readers and machine intermediaries, you need precision, reliability, and control. That’s exactly what small, specialised models provide.”
For Dobrin, the message to media leaders is clear. “Stop chasing the biggest model. Start building the right model for your needs. And always keep the human in the loop.”
That’s truly sustainable journalism.