The nonprofit groups say AI companies need to prove their technology is safe
AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group
Nonprofits Accountable Tech, AI Now, and the Electronic Privacy Information Center (EPIC) released policy proposals that seek to limit how much influence big AI companies have over regulation and to expand the power of government agencies to act against some uses of generative AI.
This month, the groups sent the framework to politicians and government agencies, mainly in the US, asking them to consider it while crafting new laws and regulations around AI.
The Zero Trust AI framework also seeks to redefine the limits of digital shielding laws like Section 230 so that generative AI companies are held liable if their models spit out false or dangerous information.
“We wanted to get the framework out now because the technology is evolving quickly, but new laws can’t move at that speed,” Jesse Lehrich, co-founder of Accountable Tech, tells The Verge.
Lehrich pointed to the Federal Trade Commission’s investigation into OpenAI as an example of existing rules being used to discover potential consumer harm. Other government agencies have also warned AI companies that they will be closely monitoring the use of AI in their specific sectors.
Discrimination and bias in artificial intelligence have been warned about for years. According to a recent Rolling Stone article, experts such as Timnit Gebru sounded the alarm for years, only to be ignored by the companies that employed them.
Section 230 makes sense for platforms hosting third-party content, but a platform hosting a bad review of a restaurant is not the same as a GPT model generating a false and damaging claim itself. (Section 230 was passed in part precisely to shield online services from liability over defamatory content, but there’s little established precedent for whether platforms like ChatGPT can be held liable for generating false and damaging statements.)
The framework’s proposed prohibitions include AI use for emotion recognition, predictive policing, facial recognition used for mass surveillance in public places, social scoring, and fully automated hiring, firing, and HR management. The groups also want to ban collecting or processing unnecessary amounts of sensitive data for a given service, collecting biometric data in fields like education and hiring, and “surveillance advertising.”
Accountable Tech urged that big cloud providers be prevented from owning or having a beneficial interest in large commercial AI services, to limit Big Tech’s influence over the AI ecosystem. Cloud providers such as Microsoft and Google already have an outsize influence on generative AI: OpenAI, the best-known generative AI developer, works with Microsoft, which has also invested in the company, while Google released its large language model Bard and is developing other AI models for commercial use.
The group proposes a method similar to one used in the pharmaceutical industry: companies would submit to regulation even before deploying an AI model to the public and to ongoing monitoring after its commercial release.
The nonprofits do not call for a single government body to regulate AI. Lehrich says this is a question lawmakers must grapple with: whether splitting up the rules across agencies will make regulation more flexible or bog down enforcement.
Lehrich says it’s understandable that smaller companies might balk at the amount of regulation the groups seek, but he believes there is room to tailor policies to company size. “Realistically, we need to differentiate between the different stages of the AI supply chain and design requirements appropriate for each phase,” he says.
Source: AI companies must prove their AI is safe, says nonprofit group
OpenAI Wants GPT-4 to Solve the Content Moderation Dilemma
While OpenAI touts its approach as new and revolutionary, AI has been used for content moderation for years. Mark Zuckerberg’s vision of a perfect automated system hasn’t quite panned out yet, but Meta uses algorithms to moderate the vast majority of harmful and illegal content. Platforms like YouTube and TikTok count on similar systems, so OpenAI’s technology might appeal to smaller companies that don’t have the resources to develop their own technology.
GPT-4 can also allegedly help develop a new policy within hours; the process of drafting, labeling, gathering feedback, and refining usually takes weeks or several months. OpenAI also points to the well-being of the workers who are exposed to harmful content, such as child abuse or torture.
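In practice, the approach OpenAI describes amounts to handing the model a written policy plus a piece of content and asking for a label, then comparing its answers with a small set of human judgments to find where the policy wording is ambiguous. The sketch below shows that loop in rough form using the OpenAI Python client’s chat completions endpoint; the policy text, labels, and the classify helper are illustrative assumptions, not OpenAI’s actual moderation tooling.

# Illustrative sketch only: policy-as-prompt labeling with GPT-4, not OpenAI's
# real moderation pipeline. Assumes the `openai` Python package (v1+) and an
# OPENAI_API_KEY in the environment; the policy text and labels are made up.
from openai import OpenAI

client = OpenAI()

POLICY = (
    "You are a content moderator. Apply this policy and answer with one label.\n"
    "K0: no violation\n"
    "K1: harassment or threats against a person or group\n"
    "K2: graphic violence\n"
)

def classify(content: str) -> str:
    """Ask the model to apply the written policy to a single piece of content."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # keep labels stable so disagreements are easier to audit
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

# Policy iteration: compare the model's labels with a handful of human "gold"
# labels and use the disagreements to see where the policy wording is unclear.
gold_examples = [
    ("Lovely weather today.", "K0"),
    ("People like you should be hurt.", "K1"),
]
for text, human_label in gold_examples:
    model_label = classify(text)
    if model_label != human_label:
        print(f"Disagreement on {text!r}: human={human_label}, model={model_label}")

Iterating on the policy text wherever the model and human labels diverge is what would compress the weeks-long drafting cycle described above into hours.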
Every platform concedes that perfect moderation at scale is impossible. Both humans and machines make mistakes, and even if the error rate is low, millions of harmful posts still slip through, and just as many pieces of harmless content get hidden or deleted.
Source: OpenAI wants GPT-4 to solve the content moderation dilemma
The Harsh Reality of Human Annotation: How Click Workers Do the Work (and What They Are Paid)
OpenAI, like other AI companies, relies on human labor from click workers, many of them in African countries, who annotate and label content. The texts can be disturbing, the job is stressful, and the pay is poor.
In particular, the gray area of misleading, wrong, and aggressive content that isn’t necessarily illegal poses a great challenge for automated systems. Humans struggle to label such posts and machines get it wrong. The same applies to satire or images and videos that document crimes or police brutality.