Rage against profit-driven machine learning

The Role of Technology Companies in Responsible AI Research

Lucrative salaries and the promise of working at the cutting edge of AI technology allow companies to snap up much of the top talent from universities, while hiring inside academic computer-science departments has remained largely flat.

Major companies show little public engagement in responsible-AI research, suggesting that speed matters more to them than safety. Researchers have found a discrepancy between research and implementation: the products reaching the market show limited influence from responsible-AI research findings.

Companies that develop and deploy AI responsibly could face a lighter tax burden, Vallor suggests. “Those that don’t want to adopt responsible-AI standards should pay to compensate the public who they endanger and whose livelihoods are being sacrificed,” says Vallor.

For that scrutiny to happen, however, academics must have open access to the technology and code that underpin commercial AI models. “Nobody, not even the best experts, can just look at a complex neural network and figure out exactly how it works,” says Hoos. Because so little is known about these systems’ capabilities and limitations, it is essential to learn as much about them as possible.

Theis says that many companies are moving towards open access for their models to give more people a chance to work with them; industry wants people trained on its tools, he says. Meta, the parent company of Facebook, for example, has been pushing for more open models because it wants to compete better with the likes of OpenAI and Google. Giving people access to its models will allow an inflow of new, creative ideas, says Daniel Acuña, a computer scientist at the University of Colorado Boulder.

It is unrealistic, however, to expect companies to give away all of their secret sauce, which is one reason that academia needs to retain its own technology and talent.

Much of this work is not published in leading peer-reviewed scientific journals: research by corporations accounted for only 3.84% of the United States’ Nature Index output in artificial intelligence. But data from other sources show the increasingly influential role that companies play in research. In a paper published in Science last year, Nur Ahmed, who studies innovation and AI at the Massachusetts Institute of Technology in Cambridge, and his colleagues found that research articles with one or more industry co-authors grew from 22% of the presentations at leading AI conferences in 2000 to 38% in 2020. Industry’s share of the biggest and most capable models grew from 11% in 2010 to 98%. Alone or in collaboration with universities, industry has also produced the leading model on a set of 20 benchmarks used to evaluate the performance of AI models. “Industry is increasingly dominating the field,” says Ahmed.

To make the most of that freedom, however, academics will need support — most importantly in the form of funding. “A strong investment into basic research more broadly, so it is not just happening in a few eclectic places, would be useful,” says Theis.

Even though governments are unlikely to be able to match the huge sums being spent by industry, smaller, more focused investments can have an outsized influence. Canada’s strategy hasn’t cost a lot of money but has been very effective: the country has spent more than $1 billion on artificial intelligence since 2016, and plans to spend another $2 billion in the next few years. Much of that money is earmarked for giving university researchers access to the computing power they need for AI applications, supporting responsible-AI research, and recruiting and retaining top talent. Thanks to this strategy, Canada is near the top in both academic research and commercial development: it placed 7th in the world for Nature Index output in AI research in 2023, and 9th in the natural sciences overall.

An ambitious plan has been put forward by the Confederation of Laboratories for Artificial Intelligence Research in Europe, also known as CLAIRE. The plan was inspired by the idea of sharing large, expensive facilities across institutions and countries. “Our friends the particle physicists have the right idea,” says Hoos. “They build big machines funded by public money.”

Companies also have access to much larger data sets with which to train those models because their commercial platforms naturally produce that data as users interact with them. “We are going to be hard-pressed to keep up with the latest state-of-the-art models when it comes to natural-language processing,” says Theis.

The United States, the AI Act and the Challenges of Regulating Artificial Intelligence

Compliance with EU rules might make sense for US firms, but without rules of its own the United States will remain less regulated and offer little protection against abuses of artificial intelligence. The core of the Act remained intact despite numerous compromises and intense lobbying. Whether the US state laws stay the course remains to be seen.

The UN proposals reflect high interest among policymakers worldwide in regulating AI to mitigate these risks. But they also come as major powers—especially the United States and China—jostle to lead in a technology that promises to have huge economic, scientific, and military benefits, and as these nations stake out their own visions for how it should be used and controlled.

In contrast, the state bills are narrower. The Colorado legislation drew directly on the Connecticut bill, and both include a risk-based framework of more limited scope than the AI Act’s. Although the framework covers the same areas of education, employment and government services, only systems that make consequential decisions affecting consumers’ access to those services are deemed high risk. The Connecticut legislation would also ban the creation of political and explicit deepfakes. Additionally, definitions of AI vary between the US bills and the AI Act.

The scope of the state bills also differs from that of the AI Act. The risk-based system that the Act creates is designed to protect people’s fundamental rights in areas such as family life and education. High-risk AI applications, such as those used in law enforcement, are subject to the most stringent requirements, and lower-risk systems have fewer or no obligations.

The US introduced a resolution at the UN urging member states to embrace the development of safe, secure and trustworthy artificial intelligence. In July, China introduced a resolution of its own that emphasized cooperation in the development of AI and making the technology widely available. All UN member states backed both resolutions.

The remarkable abilities demonstrated by large language models and chatbots in recent years have sparked hopes of a revolution in economic productivity but have also prompted some experts to warn that AI may be developing too rapidly and could soon become difficult to control. A group of scientists signed a letter calling for a six-month pause on the technology’s development in order to assess the risks.

Artificial intelligence could automate disinformation, generate deepfake video and audio, replace workers and entrench societal bias on an industrial scale. “There is a sense of urgency, and people feel we need to work together,” Nelson says.

There is only so much that the US and China can agree on, says Joshua Meltzer, an expert at the Brookings Institution, a think tank in Washington DC. The key differences, he says, concern protections for privacy and personal data, and what norms and values should be embodied in artificial intelligence.
