One of Australia's leading science magazines, Cosmos, faced criticism on Thursday for publishing AI-generated articles that experts deemed incorrect or overly simplified. Published by Australia's state-backed national science agency, Cosmos used OpenAI's GPT-4 to produce six articles last month. Although the magazine disclosed its use of artificial intelligence, the Science Journalists Association of Australia expressed serious concerns about the practice.
Association president Jackson Ryan highlighted inaccuracies in the AI-generated article 'What happens to our bodies after death?' published in Cosmos, noting that its descriptions of scientific processes were either incorrect or greatly simplified. For instance, the article stated that rigor mortis sets in 3 to 4 hours after death, whereas scientific research indicates the timing is less definitive. Another inaccuracy involved autolysis, the process by which cells are destroyed by their own enzymes, which the article poorly described as 'self-breaking'.
Ryan emphasized that such inaccuracies could damage public trust in the publication and harm its reputation. A spokesperson for the national science agency defended the AI-generated content, stating that it had been fact-checked by a trained science communicator and edited by the Cosmos publishing team. The agency said it would continue to review the use of the AI service as the experiment proceeds.
The magazine has also faced backlash for using a journalism grant to build up its artificial intelligence capabilities, potentially at the expense of journalists. Former editor Gail MacCallum said that, although she was a proponent of exploring AI, having it create articles made her uncomfortable. Another former editor, Ian Connellan, said he had not been informed of the AI project and, had he known, would have advised against it.
The use of AI has become an increasingly contentious issue for publishers and musicians. The New York Times recently filed a lawsuit against ChatGPT-maker OpenAI and Microsoft, accusing them of using millions of its articles to train AI models without permission. Emerging AI giants now face numerous lawsuits over their use of internet content to build systems that generate new material from simple prompts.