Can GPT-4 solve the content moderation dilemma?

GPT-4 tries to solve the content moderation dilemma: an OpenAI approach to automating content moderation

While OpenAI touts its approach as new and revolutionary, AI has been used for content moderation for years. Mark Zuckerberg’s vision of a perfect automated system hasn’t quite panned out yet, but Meta already uses algorithms to moderate the vast majority of harmful and illegal content. Smaller companies that lack the resources to develop their own technology could benefit from OpenAI’s.

OpenAI also claims GPT-4 can help develop a new policy within hours, whereas the usual process of drafting, labeling, gathering feedback, and refining takes weeks or even months. The company further points to the well-being of moderators who are continually exposed to harmful content.

It is not possible to perfect content moderation at scale. Both humans and machines make mistakes, and while the percentage might be low, there are still millions of harmful posts that slip through and as many pieces of harmless content that get hidden or deleted.

Source: OpenAI wants GPT-4 to solve the content moderation dilemma

OpenAI’s own reliance on human labor, and The Times weighs legal action against OpenAI

However, OpenAI itself relies heavily on clickworkers and human labor. Some of these workers, based in African countries, annotate content for the company. The pay is poor, the work is grueling, and the texts can be disturbing.

The grey area of misleading, incorrect, and aggressive content is a great challenge for automated systems. Even human experts struggle to label such posts, and machines frequently get it wrong. The same applies to satire or images and videos that document crimes or police brutality.

The Times is considering legal action against OpenAI because licensing talks have become so contentious that the paper no longer sees a path to a deal. The individuals who confirmed the potential lawsuit requested anonymity because they were not authorized to speak publicly about it.

The Times’s main concern is that ChatGPT will compete with the paper by generating text that answers questions based on its original reporting and writing.

If, when someone searches online, they are served a paragraph-long answer from an AI tool that relies on reporting from The Times, the need to visit the publisher’s site is greatly diminished, said one person involved in the talks.

If a judge finds that OpenAI illegally copied The Times’s articles, the court could order the company to destroy its dataset and rebuild it using only work it is authorized to use.

The paper cited “protecting our rights” among its chief fears: “How do we ensure that companies that use generative AI respect our intellectual property, brands, reader relationships and investments?”

Fair use on trial: The Times CEO, “The Bedwetter,” the Warhol ruling, and the Associated Press

At the festival, The Times CEO said it is time for tech companies to pay their fair share for tapping the paper’s archives.

The same month, Alex Hardiman, the paper’s chief product officer, and Sam Dolnick, a deputy managing editor, described in a memo to staff a new internal initiative designed to capture the potential benefits of artificial intelligence.

Comedian Sarah Silverman joined a class-action suit against the company, alleging that she never gave ChatGPT permission to ingest a digital version of her 2010 memoir “The Bedwetter,” which she says the company swallowed up from an illegal online “shadow library.”

The fair use doctrine allows the use of a work without permission in certain instances, such as teaching, research, and news reporting, which is why artificial intelligence companies are likely to invoke it as a defense.

In an earlier case, a court found that Google’s digital library of books did not create a “significant market substitute” for the books, meaning it did not compete with the original works.

In a recent decision, the Supreme Court found that Andy Warhol was not protected by the fair use doctrine when he altered a photograph of Prince taken by Lynn Goldsmith. Importantly, the court found that both Warhol and Goldsmith were selling the images to magazines.

Fair use is less likely where the original and the copied work share a similar purpose, or where the risk of substitution for the original or licensed derivatives of it is higher, the court wrote.

The Associated Press released guidelines around generative AI to its journalists as it and other news organizations look at ways to use the technology in news gathering.
