People are shutting off TikTok’s infamous Algorithm

Zero Trust Artificial Intelligence Governance: Confronting Discrimination and Bias in 21st-Century AI

As lawmakers continue to meet with AI companies, fueling fears of regulatory capture, Accountable Tech and its partners suggested several bright-line rules: policies that are clearly defined and leave no room for subjectivity.

The group sent the framework to politicians and government officials in the US, asking them to consider it as they craft new regulations for artificial intelligence.

The framework, called Zero Trust AI Governance, rests on three principles: enforce existing laws; create bold, easily implemented rules; and place the burden on companies to prove their systems are not harmful at each phase of the AI lifecycle. Its definition of AI covers both generative AI and the foundation models that enable it, along with algorithmic decision-making.

“We wanted to get the framework out now because the technology is evolving quickly, but new laws can’t move at that speed,” Jesse Lehrich, co-founder of Accountable Tech, tells The Verge.

As the government continues to figure out how to regulate generative AI, the group said current laws around antidiscrimination, consumer protection, and competition help address present harms.

Experts have warned about discrimination and bias in technology for a long time. A recent Rolling Stone article charted how well-known experts such as Timnit Gebru sounded the alarm on this issue for years, only to be ignored by the companies that employed them.

AI Companies Must Prove Their AI Is Safe, Says Nonprofit Group

“The idea behind Section 230 makes sense in broad strokes, but there is a difference between a bad review on Yelp because someone hates the restaurant and GPT making up defamatory things,” Lehrich says. Section 230 was passed to protect internet services from liability over defamatory material posted by their users, but there is a lack of clarity as to whether platforms can be held liable for generating false and damaging statements themselves.

The framework calls for banning the use of artificial intelligence for emotion recognition, facial recognition for mass surveillance in public places, and hiring and firing decisions. The group also asks to ban collecting or processing unnecessary amounts of sensitive data for a given service, collecting biometric data in fields like education and hiring, and “surveillance advertising.”

Accountable Tech also urged lawmakers to prevent large cloud providers from owning or having a beneficial interest in large commercial AI services, in order to limit the impact of Big Tech companies on the AI ecosystem. Microsoft, for example, has invested in OpenAI, the best-known generative AI developer, while Google has released Bard, its own large language model.

If the group’s proposal is adopted, companies would have to submit to regulation before deploying a model to the public and would face continued monitoring after commercial release.

The nonprofits do not call for a single government regulatory body. Whether to split rulemaking across agencies is a question lawmakers will need to grapple with.

Lehrich acknowledges that smaller companies might object to too much regulation, but he believes there is room to tailor policies to company size.

Source: AI companies must prove their AI is safe, says nonprofit group

Designing the AI Supply Chain and Digital Platforms for User Self-Determination

“Realistically, we need to differentiate between the different stages of the AI supply chain and design requirements appropriate for each phase,” he says.

The TikTok policy change is a win, but not the final outcome. We urgently need to update our digital rulebook, implementing new laws, regulations, and incentives that safeguard users’ rights and hold platforms accountable. It is time for global action to prioritize cognitive liberty in the digital age, because we cannot leave control over our minds to technology companies alone.

Nita Farahany is the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology (St. Martin’s Press, 2023) and Robinson O. Everett Professor of Law and Philosophy at Duke University.

A well-structured plan requires a combination of regulations, incentives, and commercial redesigns focusing on cognitive liberty. User engagement models, information sharing, and data privacy must be regulated. Strong legal safeguards must be in place against interfering with mental privacy and manipulation. Companies must be transparent about how the algorithms they’re deploying work, and have a duty to assess, disclose, and adopt safeguards against undue influence.

Design principles that embody cognitive liberty should be adopted by technology companies. Options like adjustable settings on TikTok or greater control over notifications on Apple devices are steps in the right direction. Other features that enable self-determination—including labeling content with “badges” that specify content as human- or machine-generated, or asking users to engage critically with an article before resharing it—should become the norm across digital platforms.

WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints.
