An artificial-intelligence company will help map methane pollution from space
Why the Environmental Costs of Generative AI Are Soaring – and What Can Be Done About It
Sam Altman, the head of OpenAI, has admitted that the artificial-intelligence industry is headed for an energy crisis. It is an unusual admission: at the World Economic Forum, he warned that the next wave of generative AI will consume vastly more power than expected. “There’s no way to get there without a breakthrough,” he said.
I was happy that he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.
So what breakthrough is Altman talking about? He means nuclear fusion, and he has skin in that game: in 2021, Altman started investing in the fusion company Helion Energy in Everett, Washington. But a nearer-term breakthrough is available: the design and deployment of more sustainable artificial-intelligence systems.
There is no reason this can’t be done. The industry could prioritize using less energy, build more efficient models and rethink how it designs and uses data centres. As the BigScience project in France demonstrated with its BLOOM model, it is possible to build a model of a similar size to OpenAI’s GPT-3 with a much lower carbon footprint. But this is not happening in the industry at large.
Source: Generative AI’s environmental costs are soaring — and mostly secret
The Artificial Intelligence Environmental Impacts Act and Other Government Measures: More Needs to Be Done
Voluntary measures rarely produce a lasting culture of accountability and consistent adoption, because they rely on goodwill. Given the urgency, more needs to be done.
Researchers could optimize neural network architectures for sustainability and collaborate with social and environmental scientists to guide technical designs towards greater ecological sustainability.
Finally, legislators should offer both carrots and sticks. At the outset, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act is a start, but stronger measures will be needed.
Data from a soon-to-be-launched satellite will help to create the most comprehensive global methane map. The European Union, meanwhile, is weighing a tough AI law that would crack down on deepfakes while leaving room for research.
An artificial-intelligence system can identify individual beavers from the pattern of scales on their tails. The algorithm, trained on hundreds of photos of around 100 hunted animals, identified individuals with 98% accuracy. The method could make it faster and easier to track beaver populations, which researchers usually do by capturing individual animals and fitting them with ear tags or radio collars.
AI-Generated Fakery, a Tidying Robot and the EU’s AI Act
OpenAI, creator of ChatGPT, has unveiled Sora, a system that can generate highly realistic videos from text prompts. Hany Farid says that this technology, combined with voice cloning, will open up an entirely new front when it comes to making deepfakes of people saying and doing things they never did. For now, Sora’s output can be detected thanks to mistakes such as swapping a walking person’s legs. In the future we will need to find other ways to adapt, says computer scientist Arvind Narayanan; this could include, for example, implementing watermarks for AI-generated videos.
Tidying a room is a surprisingly complicated task for a robot, with plenty that can go wrong. OK-Robot, which has a wheeled base, a tall pole and a retractable arm, runs on open-source models. When given tasks such as “move the soda can to the box”, it completed them almost 60% of the time, and more than 80% of the time in less cluttered rooms. (MIT Technology Review, 4 min read)
The European Union will exempt models developed solely for research from the strictest rules reserved for the riskiest AI systems. “I would be amazed if this is going to be a problem, since they don’t want to cut off innovation.” Some scientists think that the act could bolster open-source models, while others worry it could hurt the small companies that drive research. Powerful general-purpose models, such as the one behind ChatGPT, will face separate, stringent checks. Critics say that regulating AI models on the basis of their capability, rather than their use, has no scientific basis: smarter and more capable does not mean more harm, says Jenia Jitsev.
Scientific publishers have started to use AI tools such as ImageTwin, ImaCheck and Proofig to help them detect questionable images. The tools make it faster and easier to find rotated, stretched, cropped, spliced or duplicated images, but they are less astute at spotting more complex manipulations and AI-generated fakery. The cases caught so far are probably only the tip of a much larger problem, and current detection approaches will soon be obsolete, says the editor of EMBO Reports.
An AI system built by the start-up Quantiphi is used to analyse how drugs affect donated human cells. According to the company’s co-founder Asif Hasan, the method can achieve almost the same results as animal tests while reducing the time and cost of preclinical drug development by 45%. Another start-up, VeriSIM, uses artificial intelligence to predict how a drug would behave in the body. The use of animals in pharmaceutical experimentation raises ethical concerns as well as risks to humans, because animal physiology does not fully resemble human physiology, Hasan says. “And hence, when you go for clinical trials, there are many new adverse events or toxicology that get discovered.”