Monday, April 29, 2024

The Guardian accuses Microsoft’s AI of damaging its reputation


The Guardian has accused Microsoft of damaging its brand by attaching an offensive AI-generated poll to one of its articles.

In a story reporting on the death of 21-year-old Lilie James, whose body was found with serious head injuries in Australia, the tech giant’s AI asked readers to vote on the cause of her death, offering the options of murder, accident or suicide.

Furious readers described the poll as “disgusting” and called for the immediate dismissal of the journalist – who had nothing to do with the poll.

Why we care. Microsoft’s decision to use AI in place of human writers is causing problems again. It’s a clear reminder of why businesses should use AI to support human efforts rather than replace them. Neglecting this approach could hurt your brand’s reputation and adversely affect your search rankings.

How this happened. Microsoft has agreements with major news organizations around the world, such as The Guardian and CNN, under which it can republish their articles in return for a share of ad revenue. However, when the tech giant republished this story, its AI technology automatically added the offensive poll.

What The Guardian is saying. Anna Bateson, chief executive of the Guardian Media Group, wrote to Microsoft’s president, Brad Smith, accusing the company of upsetting James’ family, as well as causing “significant reputational damage” to both the newspaper and the journalist. She said:

  • “This is clearly an inappropriate use of genAI [generative AI] by Microsoft on a potentially distressing public interest story, originally written and published by Guardian journalists.”

Bateson then asked Smith to assure her that:

  • “Microsoft will not apply experimental AI technology on or alongside Guardian journalism without the news publisher’s approval; and Microsoft will always make it clear to users when AI tools are used to create additional units and features next to trusted news brands like the Guardian.”

What Microsoft is saying. A Microsoft spokesperson said:

  • “We have deactivated Microsoft-generated polls for all news articles and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature, and we are taking steps to help prevent this kind of error from recurring in the future.”



History repeating itself. This isn’t the first time Microsoft’s generative AI has landed the tech giant in trouble. In September, the company was heavily criticized after publishing an AI-generated obituary for NBA star Brandon Hunter.

The former Boston Celtics and Orlando Magic player passed away suddenly, aged 42, after collapsing during a hot yoga class in Orlando, Fla. Shortly after his passing, fans were shocked to see the father of three described as “useless” in an obituary published on MSN. The headline read:

  • “Brandon Hunter useless at 42.”

Readers reacted by branding Microsoft “lazy” for using AI to create articles and urged the company to rehire the editorial staff it had replaced with AI.

Deep dive. Read the Guardian’s response in full for more information.
