
Where are newsrooms and AI in 2025?

12 Mar 2025

Newsrooms initially approached AI with caution, fearing job displacement, misinformation, and the ethical dilemmas posed by automation. But as the technology matures and its applications become more refined, the conversation is shifting. Instead of fearing AI, many media organizations now embrace it as a tool to enhance journalism, improve efficiency, and unlock new creative possibilities. A few months into 2025, it’s time to assess where newsrooms stand with AI.

A recent survey by FT Strategies of over 1,900 employees across 19 EMEA organizations reveals growing optimism about AI in newsrooms. While 57% of senior leaders express enthusiasm for AI – likely due to its cost-saving potential – editorial teams remain the least optimistic, with only 36% showing positivity. Their concerns center on copyright infringement, AI hallucinations, and editorial control. Even so, an increasing number of journalists and editors recognize that generative AI can save time and effort when used effectively.

Structured AI training programs

In the early days of AI adoption, skepticism dominated newsrooms. Journalists engaged in cautious and unstructured experimentation, using AI primarily for transcription, data analysis, and automated news summaries.

Today, we see a marked shift. AI is increasingly viewed as an essential tool in a modern newsroom, and media organizations are transitioning from experimentation to structured AI training programs. For example, the Financial Times launched the AI Playground to enable staff to experiment with AI-related competencies. Similarly, The New York Times is greenlighting AI use for its product and editorial staff, introducing training initiatives and internal AI tools like Echo. This tool aids in summarization and content generation, allowing staff to condense articles, create SEO headlines, draft social copy, and analyze internal documents.

Media companies are also increasingly using AI-driven tools for fact-checking and misinformation detection, personalization and audience engagement, and video editing – for example, with tools like Flipatic.

One of the major turning points in AI adoption has been the realization that AI works best as an assistant, not an autonomous journalist. Newsrooms have implemented editorial oversight to ensure AI-generated content meets journalistic standards. The New York Times, for example, has established guidelines that prevent AI from publishing content without human review. Similarly, Reuters uses AI for speed and efficiency but ensures that all AI-assisted reporting goes through rigorous editorial processes.

Moreover, transparency has become a priority. Leading publishers now label AI-generated or AI-assisted content, ensuring that audiences are aware when technology has played a role in content creation.

Challenges that remain

While AI adoption is growing, several challenges persist. According to the NTS Network Report, key questions remain unanswered, such as:

  • What is the threshold of AI use in journalism that should be declared to audiences?
  • Do behind-the-scenes AI tasks (like brainstorming or pattern recognition in data sets) require disclosure?
  • Should AI usage in public-facing content (like removing watermarks or reflections) always be disclosed?
  • How can legacy processes (e.g., Photoshop tools or camera filters) be distinguished from AI automation?

Another persistent challenge is insufficient training and upskilling opportunities – only 33% of newsroom employees are satisfied with AI tools, while 34% report inadequate training, and 28% lack the time to develop AI-related skills (source: FT Strategies). According to the NTS Network Report, journalists feel that AI-generated content tends to be generic, lacks nuance, and requires substantial human oversight to ensure quality. Some newsroom employees still worry that increasing reliance on AI could lead to job reductions.

Survey participants were also more comfortable with AI being used in certain areas of the newsroom – such as features, design, and opinion – than in hard news. There is also a prevailing belief that smaller and less-resourced newsrooms may deploy AI in riskier or more ethically challenging ways than larger, well-funded organizations. In practice, however, both large and small news outlets are experimenting with AI in highly public, audience-facing ways.

Journalists' and audiences' comfort with AI in journalism

Both journalists and audience members share concerns about AI’s potential to mislead or deceive. This concern ranks at the top of AI-related challenges in newsrooms. Interviews with journalists across various newsroom sizes and locations over the past three years indicate that many journalists are poorly equipped to detect AI-generated or AI-edited content. Few have systematic processes for vetting user-generated or community-contributed visual material.

At the same time, many journalists are unaware that AI is increasingly embedded in cameras, video editing software, and image processing tools. This means AI is sometimes used in journalism without journalists or news outlets even realizing it.

With Flipatic, our tool that transforms written articles into videos, newsrooms have flexibility in labeling AI-generated content. Editors can choose to label entire videos or specific AI-generated slides according to newsroom policy.

See more about Flipatic here.

Additionally, news audiences' comfort levels vary greatly and hinge on several factors, including:

  • Where in the production process AI is used
  • The level of human oversight, if any, that is involved
  • The extent to which AI is being used to represent "real life" through photorealistic imagery and video
  • How transparent news outlets are with AI usage
  • Whether AI use results in accuracy or misleads or deceives the audience
  • Whether AI use supports or replaces human labor and creativity

For example:

  • 76% of audiences were comfortable with AI being used for feature recognition in news images. This number increased to 88.3% when human oversight was included. However, 10% had privacy or accuracy concerns, especially in cases where AI processed images of people.

  • 83.3% of respondents supported AI being used to brainstorm hard-to-visualize topics. This support increased to 88.3% when applied to non-sensitive topics.

  • Only slightly more than a quarter (28.3%) of audiences were unequivocally comfortable with journalists using AI to create b-roll (text-to-video). This proportion rose to 60.3% if it was disclosed that AI was used, if the b-roll depicted generic rather than specific content, or if the b-roll didn’t show people.

Source: News, Technology and Society (NTS) Network

The importance of transparency in AI use

Both news audiences and journalists emphasize the importance of transparency in AI use. Audiences want clear disclosures about when and how AI is used in journalism. Transparency expectations include:

  • Disclosure at the beginning of the content: Whether video, audio, or text, AI involvement should be stated upfront.
  • Quantification of AI involvement: Audiences want a sense of how much AI contributed to a given piece of content.
  • Consistent labeling: Labels indicating AI use should appear in the same place every time.
  • Embedded, not adjacent labels: Labels should appear directly on content rather than in separate metadata or adjacent descriptions.

Audiences also have clear expectations of newsrooms themselves: newsrooms should have guidelines governing AI use, and AI-generated content should be verified for accuracy and credibility.

The need for AI guidelines in newsrooms

However, many newsrooms lack formal AI policies. In a 2023 survey by INMA, only 20% of respondents said they had guidelines from management on when and how to use generative AI tools. In 2024, researcher Tomás Dodds analyzed 37 AI guidelines from newsrooms in 17 countries and found that most were strikingly similar. A major issue emerged, though: many of the guidelines were created top-down. “They were developed by an editor-in-chief or parent company without consulting journalists,” Dodds said.

Currently, it is difficult to determine how many newsrooms worldwide actually follow AI guidelines. While large global organizations (such as Reuters, The New York Times, and The Financial Times) have documented AI policies, smaller and mid-sized outlets often lack clear strategies. More research is needed to assess newsroom compliance levels and adoption rates.

How the EU AI Act will impact newsrooms

As the EU AI Act’s obligations take effect, newsroom compliance requirements will become more stringent. The Act introduces mandatory transparency rules for AI-generated or AI-edited content, including:

  • Clear labeling of AI-generated text, images, video, or audio.
  • Editorial accountability for AI-assisted journalism.
  • Disclosure obligations when AI is used in public-facing content.
  • Stronger oversight to prevent misinformation and AI-related ethical violations.

Failure to comply with the AI Act could result in significant penalties, including fines of up to €35 million or 7% of global annual revenue for serious breaches.

As enforcement of the AI Act begins, newsrooms must establish AI policies that align with regulatory expectations. This includes:

  • Developing AI policies with journalist input to ensure relevance and applicability.
  • Standardizing AI labeling practices to improve audience trust.
  • Conducting regular training to keep newsroom staff updated on compliance requirements.
  • Monitoring AI-assisted content to prevent misinformation and uphold editorial integrity.

What will happen next?

How will AI evolve, and how will journalism benefit from it? Subscribe to our newsletter, The Newsroom of Tomorrow, to keep up with the latest developments as the media industry shifts from fear and skepticism to an era of transformation. AI is not here to replace journalism – it is here to enhance it, and we’ll keep a close eye on it!

Also, if you’re interested in learning more about Flipatic and how you can turn your articles into videos – cost-effectively, at scale, and fast – contact us. We’ll be happy to test it with you and take your video production to the next level!

Want to stay up to date with the latest news trends? Subscribe to our CEO’s LinkedIn newsletter!
Let's talk
If you feel we are a good fit, email us today, or call Slawek at +48 603 440 039. You can also follow us on LinkedIn, Facebook or Dribbble.