
How AI can be used in news stories

We live in a digital age in which AI is revolutionizing journalism. But with great power comes great responsibility. Let’s explore how these tools can be used ethically in news stories.

One of the most common applications is transcribing interviews or translating them into other languages. Instead of spending hours listening to recorded interviews and typing them out, journalists can let an AI tool do the work in a fraction of the time. Tools like Otter or Trint can save hours or even days of work converting audio and video files to text: you upload the files to the platform, and it returns a transcript of the conversation. Just remember to double-check any material you plan to use in your news stories.
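
Otter and Trint are point-and-click services, but the same idea can be scripted. Below is a minimal transcription sketch using the open-source Whisper model, which is an assumption chosen for illustration rather than one of the tools named above; the file name is hypothetical.

```python
# Minimal transcription sketch using the open-source Whisper model.
# Assumes `pip install openai-whisper` and a local recording named
# "interview.mp3" (hypothetical); this is not the Otter/Trint workflow itself.
import whisper

model = whisper.load_model("base")          # small, general-purpose speech model
result = model.transcribe("interview.mp3")  # returns a dict with the full text and timestamped segments

print(result["text"])  # the transcript, ready for fact-checking
```

As with the commercial platforms, the transcript is a starting point and still needs a human read-through before anything is quoted.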

Tools like ChatGPT and Bard make it possible to get a quick overview of thousands of documents and databases. Without this kind of machine-assisted analysis, investigations like the “Panama Papers” would hardly have been possible. The technology can quickly analyse large volumes of documents and identify key topics, people, organizations, locations and the connections between them. Algorithms are also excellent at spotting anomalies in data. But here, too, journalists are crucial: they should do the final fact-check.
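
Named-entity extraction is one concrete building block of this kind of document analysis. Here is a minimal sketch using the open-source spaCy library, which is not mentioned in the article and is assumed purely for illustration; the sample text is invented.

```python
# Entity-extraction sketch using spaCy.
# Assumes `pip install spacy` and `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")

text = (
    "Example Holdings Ltd. transferred 2 million euros to an account "
    "in Panama controlled by Jane Doe in March 2015."  # hypothetical sample text
)

doc = nlp(text)
for ent in doc.ents:
    # Prints the people, organizations, locations, dates and money amounts it finds.
    print(ent.text, ent.label_)
```

Running the same step over thousands of files and mapping the connections between the extracted names is where the real investigative work, and the final fact-check, comes in.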

Algorithms also help tailor content to specific audiences. By analysing audience data, they can adjust tone and style accordingly. These tools can also suggest summaries, headlines, captions or meta descriptions for effective promotion on various platforms, maximizing the impact and reach of the journalistic product.
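
As a rough illustration of how such suggestions can be generated, here is a hedged sketch that asks a large language model for headline options through the OpenAI Python client; the model name, prompt and summary text are assumptions, and other providers expose similar interfaces.

```python
# Headline-suggestion sketch using the OpenAI Python client.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

article_summary = "City council approves new flood-protection plan after record rainfall."  # hypothetical

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works in principle
    messages=[
        {"role": "system", "content": "You draft news headlines for an editor to review."},
        {"role": "user", "content": f"Suggest three headlines for this story: {article_summary}"},
    ],
)

print(response.choices[0].message.content)  # draft headlines, not finished copy
```

The output is a set of drafts for a human editor to pick from or rewrite, not finished copy.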

For a better visual presentation of your stories, you can also use synthetic media such as videos or images generated with tools like Midjourney, DALL-E or Sora. However, this should be done sparingly and only in justified cases. For example, if using real photos or videos could threaten the rights and safety of children, synthetic visuals may be the safer choice. Likewise, if you are writing about historical events or reviewing a book, you can illustrate the text with AI-generated images or videos.
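
For completeness, here is a minimal sketch of producing such an illustration programmatically with the OpenAI images API; the model name, prompt and size are assumptions, and Midjourney and Sora have their own interfaces.

```python
# Image-generation sketch using the OpenAI Python client.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # assumed model name
    prompt="Stylized illustration of a 19th-century printing press, for a book review",  # hypothetical prompt
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # link to the generated image
```

Whatever tool generates the image, it must be labelled as synthetic when published, as discussed below.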

But what is considered ethical and what is unethical in the use of AI for news stories?

If you use these tools without supervision to create and distribute journalistic content, you run the risk of spreading disinformation and hate speech, infringing copyright and misusing data. That is why the use of AI should always be overseen by the appropriate people in the newsroom.

It is not ethical to list a journalist from the newsroom as the author of a story written by AI. Such stories should be clearly labelled as AI-generated. The same goes for videos and photos: they, too, must be identified as synthetic content.

That’s why Reporters Without Borders created the Journalism Trust Initiative, a tool to help you develop a sound editorial policy for the use of AI. To comply with this mechanism, you need to ensure that your audience knows when content has been created by AI. Transparency is key to earning their trust!