Corporate lobbying is having an effect on artificial-intelligence laws in the US
Artificial Intelligence: An Empirical Investigation into Open Access and the Importance of Openness for Enterprises and Academic Scientists
Companies have access to much greater computing power than academic institutions, including the ability to buy the graphics-processing units (the most common chips used in AI) they need, or even to design and manufacture their own. Firms are therefore able to create models that are larger and more complex than academic ones. In 2021, industry AI models were 29 times bigger, on average, than academic models.
Vallor says that students can provide a critical and dispassionate view on the pros and cons of artificial intelligence and be an independent source of information on what works and what does not. Academics can help align research with what is in the public interest. “At the moment, there is a deficit of AI applications focused on the kinds of problems we most need to address,” says Vallor — including challenges such as climate change, health-care needs, and the social and democratic stresses that have been amplified by digital technologies.
Companies that develop and deploy AI responsibly could face a lighter tax burden, Vallor suggests, whereas those that refuse to adopt responsible-AI standards should pay to compensate the public they put at risk.
For that scrutiny to happen, however, it is imperative that academics have open access to the technology and code that underpin commercial AI models. Even the best experts cannot simply look at a complex neural network and work out how it functions. Because so little is known about these systems’ capabilities and limitations, it is vital to learn as much about them as possible.
Theis says many companies are moving towards open access for their models, to make it easier for more people to work with them. “It’s a core interest for industry to have people trained on their tools,” he says. Meta, the parent company of Facebook, has pushed for more open models in an effort to compete better with OpenAI and others. Daniel Acuña, a computer scientist at the University of Colorado Boulder, says that giving people access to its models allows an inflow of new ideas.
Although academics are free to pursue projects without commercial constraints, they can also gain support from industry to help solve interesting and tricky problems. Theis says it is common for his students to spend time at big tech companies to get a feel for the industry. “There’s actually a back and forth and diffusion between the two.”
Acuña and his colleagues have studied the different approaches of industry and academic researchers to AI3. They analysed papers presented at a variety of AI conferences between 1995 and 2020 to see how the composition of a research team was related to the novelty of their work, and its impact in terms of citations and models created.
The Confederation of Laboratories for Artificial Intelligence Research in Europe: Lessons from Particle Physics for Machine Learning and Natural-Language Processing
To make the most of that freedom, academics will need to receive funding. “A strong investment into basic research more broadly, so it is not just happening in a few eclectic places, would be useful,” says Theis.
Europe is also home to several initiatives to boost academic research in AI. Theis is the scientific director of Helmholtz AI, a unit of the Helmholtz Association of German Research Centres. The unit provides funding, computing access and consulting to help research labs apply AI tools to their work, such as finding new ways to use the large data sets they produce in areas such as drug discovery and climate modelling. The aim, Theis says, is to democratize access to AI and to accelerate research across the labs.
The Confederation of Laboratories for Artificial Intelligence Research in Europe, which was co-founded by Hoos, has an even more ambitious plan, inspired by the way researchers in the physical sciences share large, expensive facilities around the world. “Our friends the particle physicists have the right idea,” says Hoos. “They build big machines funded by public money.”
Companies also have access to much larger data sets with which to train those models because their commercial platforms naturally produce that data as users interact with them. “When it comes to training state-of-the-art large language models for natural-language processing, academia is going to be hard-pressed to keep up,” says Fabian Theis, a computational biologist at Helmholtz Munich, in Germany.
Environment and Climate: Shaping policies with government agencies to strengthen science, protect communities and preserve biodiversity
I started talking directly to the fishing crews and their families, and attended meetings to hear about the issues arising in the area. Listening to the community informed my research and led to me becoming a fierce advocate for scientists who do work that can be applied directly to society. I make sure that they engage with the intended beneficiaries of their research early on, and on a continuous basis.
Chief of staff, climate and environment, and assistant director for climate resilience in the White House Office of Science and Technology Policy in Washington DC.
We’re also working with government agencies to weave in nature-based solutions. I know the science behind it, having trained as an ecologist. For example, restoring a marsh would also buffer nearby communities, buildings or roads against sea-level rise or storm surges. Such initiatives both strengthen nature and provide protective benefits for people.
When I began my master’s degree in astronomy at the University of Victoria, Canada, I became interested in policy. A good friend encouraged me to become vice-president of the graduate student association, through which I eventually helped to negotiate dental coverage for members. Meeting with the university’s provosts during those negotiations, I realized how much needed to be done and how many problems could be solved through policy change. I also realized that I actually enjoyed the work.
The Natural Sciences and Engineering Research Council (NSERC) distributes funds to university researchers throughout Canada. My job is to collect data to see whether council policies are working.
The CEO of NSERC works to ensure that all of the organization’s data-related infrastructure is kept up to date, so that the public service stays focused on evidence.
For example, Canada’s government is ploughing money into electric vehicles, their batteries and the recycling of those batteries. When someone in Parliament says that electric vehicles are the future, we have to go through reports to find the associated NSERC-funded projects, which takes a lot of time. But we’re trying to incorporate machine learning into our processes so that we don’t have to read every report that merely contains a keyword such as ‘engineering’. With machine learning, we can look across a much larger set of reports, and using those data we can help officials to craft policy more efficiently.
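As a rough illustration of that idea, the sketch below uses a simple text classifier to flag reports relevant to a policy topic rather than relying on a single keyword. It is a minimal, hypothetical example in Python using scikit-learn; the article does not describe NSERC’s actual system, and the report summaries and labels are invented.

```python
# Illustrative sketch only: not NSERC's actual pipeline.
# Shows how a simple text classifier could flag project reports relevant to a
# policy topic (here, electric vehicles) instead of reading every report that
# merely contains a keyword such as "engineering".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-labelled report summaries (1 = relevant to EV policy).
reports = [
    "Developing lithium-ion battery recycling methods for electric vehicles",
    "Charging infrastructure and grid demand from electric vehicle adoption",
    "Protein folding simulations for drug discovery",
    "Bridge engineering under seismic load",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

# Score a new, unlabelled report instead of reading it manually.
new_report = ["Second-life applications for recycled EV battery packs"]
print(model.predict_proba(new_report))  # probability the report is EV-relevant
```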
Source: Science-policy advisers shape programmes that solve real-world problems
Surveying NSERC Research Grant Applications: Where Do We Stand on Equity, Diversity and Inclusion?
One of the areas we’re working on is equity, diversity and inclusion (EDI). Making EDI policies more effective is important to me because I’m mixed race (my father is Black Jamaican and my mother is white Canadian) and neurodivergent — I have attention deficit hyperactivity disorder, am on the autism spectrum and have mental-health challenges. All of these things made it harder for me to get into science education. But I know I’m lucky to have made it to where I am.
Our office put forward the idea that we needed to collect data on the people who apply for NSERC grants, to better understand EDI. Some data had been collected here and there, but in silos. A colleague and I analysed those data.
We showed not only the areas in which the council is doing really well, but also those where it has considerable gaps that it needs to focus on. For example, we found that the diversity of people applying for funding is not yet representative of the Canadian population as a whole. We used census data to show how many applications would be needed from specific demographic groups for award recipients to be represented equitably. So, for example, we would expect 50 Black people to apply for a student fellowship if the applicant pool were representative of the population, but instead we received only 30 applications.
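The comparison above amounts to simple arithmetic: multiply a group’s share of the population (from census data) by the total number of applications to get the number expected under equitable representation, then compare that with the number received. The snippet below is a minimal sketch of that calculation; the population share and total are invented placeholders chosen only to reproduce the 50-versus-30 example quoted in the text.

```python
# Minimal sketch of the expected-versus-observed comparison described above.
# All numbers are illustrative placeholders except the 50-vs-30 example in the text.
census_share = 0.036        # hypothetical share of the group in the Canadian population
total_applications = 1389   # hypothetical total applications to the fellowship

expected = round(census_share * total_applications)  # ~50 if applicants mirrored the census
observed = 30                                        # applications actually received

print(f"expected {expected}, received {observed}, shortfall {expected - observed}")
```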
For example, the NSERC can say to a university, “We see that you didn’t report having any Black applicants to your PhD programmes last year. Are you OK with that?” Or, “Why might that be?” We know about these gaps, because we can see the data.
Source: Science-policy advisers shape programmes that solve real-world problems
Science-Policy Advisers Shape Programmes That Solve Real-World Problems: Addressing India’s Carbon Footprint Without Compromising on Development
We put together science-based evidence to inform policymakers, whether they are government stakeholders at the national or the state level. The team I manage is made up of 30 people who work on issues such as long-term decarbonization, which aims to decrease the country’s carbon footprint without compromising on development. This is really important for India, a lower-middle-income country.
Getting an audience with policymakers is hard, and we really need to follow up several times. It’s not as if I say something once and they immediately take it up; that never happens. It takes many conversations.
To have the greatest impact, you have to speak policymakers’ language. For a member of the legislature, that might mean social impacts; for policymakers at the highest levels of government, it could be economic impacts and investments. These are the big-ticket items that resonate. At the end of the day, they want to make a difference.
Source: Science-policy advisers shape programmes that solve real-world problems
Science-Policy Advisers Shape Programmes That Solve Real-World Problems: Why Science Matters to Policymakers and World Leaders
I spent 25 years at the Indian Institute of Science in Bengaluru as a forest ecologist. India’s Western Ghats mountain range is a biodiversity hotspot and I did long-term monitoring there.
I had thought I would become a tenured faculty member at a university. As a graduate student at Florida State University, I was doing research that could be useful to the community, looking at declines in oyster populations caused by a local lack of fresh water.
In policymaking, deep knowledge doesn’t automatically make you an effective science adviser. It’s more about having a holistic understanding of an issue and the ability to build trust with policymakers.
Alongside running my research group — an international team working on nanotechnology for sustainable chemistry and clean energy solutions — I volunteer at organizations such as the World Economic Forum (WEF) and the International Union of Pure and Applied Chemistry (IUPAC), helping global leaders to create more effective public policies that are backed by sound evidence.
Since 2012, I’ve helped the WEF to put together a yearly report called the Top 10 Emerging Technologies. In this report, scientists around the world identify technologies that they think are going to transform industry and the economy. We explain these tools in simple terms and, every January, we communicate that information to policymakers and world leaders during the WEF’s annual summit in Davos, Switzerland.
We highlighted mRNA vaccines in the 2017 report. We didn’t know that COVID-19 was coming; we simply thought that governments should put more money into developing the technology, because people weren’t paying enough attention to it.
Another area in which science policy can help to steer research is making AI more useful for chemists. To do that, we need an innovative chemistry ‘language’ that machines can ‘read’. IUPAC is creating a standard way of storing the molecular information of chemical substances, which should speed up the use of artificial intelligence in scientific discovery.
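IUPAC already maintains one widely used machine-readable representation, the International Chemical Identifier (InChI). The sketch below, which assumes the open-source RDKit library is installed, shows how a structure can be converted from a human-oriented SMILES string into an InChI and InChIKey that software can store, index and compare. It illustrates the general idea of a machine-readable chemical ‘language’; it is not the specific new standard the text refers to.

```python
# Sketch of a machine-readable chemical 'language', assuming the open-source
# RDKit library is installed. Illustrative only: this shows the existing IUPAC
# InChI identifier, not the new standard discussed in the article.
from rdkit import Chem

smiles = "CC(=O)Oc1ccccc1C(=O)O"        # aspirin written as a SMILES string
mol = Chem.MolFromSmiles(smiles)         # parse into an in-memory molecule object

inchi = Chem.MolToInchi(mol)             # IUPAC InChI: a standard, machine-readable ID
inchi_key = Chem.InchiToInchiKey(inchi)  # fixed-length hash, handy as a database key

print(inchi)      # InChI=1S/C9H8O4/c1-6(10)13-8-5-3-2-4-7(8)9(11)12/h2-5H,1H3,(H,11,12)
print(inchi_key)  # BSYNRYMUTXBXSQ-UHFFFAOYSA-N
```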
Systems thinking also means connecting various disciplines. In chemistry, it means linking the reactivity of compounds with their roles in health, the economy and the environment, for example, always putting people at the centre.
Source: Science-policy advisers shape programmes that solve real-world problems
Scientific Advice, the EU’s Artificial Intelligence (AI) Act and State-Level Regulation in the United States
We provide guidelines, teaching tools and training workshops, and have run workshops in South Africa, the United States and Egypt to teach this approach to secondary-school teachers.
This requires producing high-quality reports that stand up to the most rigorous scrutiny. Beyond the report, you need to build trust by listening and empathizing, from the day you are asked to advise until the day the legislation is implemented.
Scientific advice is about current knowledge in context. Just as decision makers work with lawyers to ensure their decisions are constitutionally sound, they need to work closely with scientific advisers to incorporate the latest knowledge into policies.
The Artificial Intelligence Act has prompted speculation about how far EU regulation could have a global impact, because companies often find it easier to follow a single set of rules when they operate internationally. The way in which the EU’s rules on data privacy influenced state-level legislation and corporate self-governance in the United States is a prime example of how this can happen.
For these reasons, lobbying groups claim to prefer national, unified AI regulation over state-by-state fragmentation, a line that big tech companies have repeated in public. In private, however, some advocates of light-touch rules have shown their dislike of both state and national legislation. If neither kind of regulation emerges, AI companies will have preserved the status quo: a bet that divergent regulatory environments in the EU and the United States, with a light-touch regime in the latter, favour them more than would the benefits of a harmonized but heavily regulated system.
The wider tech industry’s power, however, can extend beyond this kind of passive inspiration. Connecticut’s draft bill did contain a section on generative AI inspired by part of the AI Act, but it was removed after concerted lobbying from industry. Although the bill received some support, the state’s governor threatened to veto it, echoing industry associations’ argument that it would stifle innovation, and its progress is now frozen, as is that of many of the other, more comprehensive AI bills being considered by various states. Colorado’s bill is set to be amended before it takes effect so that it does not hinder innovation.
A major difference between the state bills and the AI Act is their scope. The risk-based system that the AI Act establishes is designed to protect fundamental rights, such as those relating to education or family life. High-risk systems are subject to the strictest requirements, whereas low-risk systems have fewer or no obligations.