
The FTC is taking heed of the changing times: are artificial intelligence tools being used to deceive people?

The Federal Trade Commission said Tuesday that the US government has the authority to crack down on consumer harms associated with artificial intelligence, such as fraud and scams.

Addressing House lawmakers, FTC chair Lina Khan said the “turbocharging of fraud and scams that could be enabled by these tools” is a serious concern.

In recent months, a new crop of AI tools has gained attention for their ability to generate convincing emails, stories, and essays, as well as images, audio, and video. While these tools have the potential to change how people work and create, some observers have raised concerns about how they could be used to deceive, including by impersonating individuals.

The FTC has previously issued extensive public guidance to AI companies, and the agency last month received a request to investigate OpenAI over claims that the company behind ChatGPT has misled consumers about the tool’s capabilities and limitations.

The FTC has had to adapt to the changing technologies of the time, said Commissioner Rebecca Slaughter. “Our obligation is to do what we’ve always done, which is to apply the tools we have to these changing technologies … Don’t be scared off by the idea that this technology is new and revolutionary.”

“Our staff has been consistently saying our unfair and deceptive practices authority applies; our civil rights laws, fair credit, the Equal Credit Opportunity Act, those apply,” said Commissioner Alvaro Bedoya. Companies, he added, need to follow the law.

The Rise of Artificial Intelligence: Google, Bard, and the Firing of Timnit Gebru and Margaret Mitchell

In late 2020 and early 2021, Google fired two researchers, Timnit Gebru and Margaret Mitchell, after they co-authored a research paper exposing flaws in the same AI language systems that underpin chatbots like Bard. With those systems now threatening its core business model, the company appears more focused on business than on safety. According to one report, “the trusted internet-search giant is giving low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments.”

The report describes how ethical concerns that once would have stalled the project were set aside in order to keep up with the competition. Google has been criticized for prioritizing business over ethics in artificial intelligence.

Others at the company would disagree. Some argue that public testing is necessary to improve these systems, and that the harm caused by chatbots so far is minimal: toxic text and misleading information can already be found on the web from countless other sources. (To which others respond: yes, but directing a user to a bad source of information is different from giving them that information directly, with all the authority of an AI system.) Microsoft and OpenAI are arguably just as compromised as Google; the only difference is that they are not leaders in the search business and have less to lose.
