Massive lost mountain cities are revealed by lasers
Nature Podcast: artificial intelligence, and how one human brain responded to birth-control pills
What one researcher found after repeatedly scanning her own brain to see how it responded to birth-control pills, and how high-altitude tree planting could offer refuge to an imperilled butterfly species.
Generative artificial intelligence (AI) is a rare example of a technology leaping suddenly and widely from the research world into the public consciousness. Large language models (LLMs) can create text and images that are almost indistinguishable from those created by humans, and they are disrupting many fields of human activity. Yet the potential for misuse is already manifest, from academic plagiarism to the mass generation of misinformation. There is a fear that, without guard rails, the technology's rapid growth will make it hard to ensure accuracy and reduce harm.
The size of two ancient cities revealed by drone, and how watermarking generative AI systems could help protect us and our information environment.
Researchers have measured the size of two ancient cities buried in the mountains of Uzbekistan. Because the full extent of the cities was unknown, the teams used drone-mounted lasers to reveal what lay beneath the ground. The survey surprised researchers by showing that one of the cities was six times bigger than expected. The two cities, called Tashbulak and Tugunbulak, were nestled in the heart of Central Asia’s medieval Silk Road, suggesting that highland areas played an important part in the trade of the era.
In a welcome move, DeepMind has made the model and underlying code for SynthID-Text free for anyone to use. Although the technique itself is in its early stages, the work is an important step forward. We need it to grow up fast.
There is an urgent need for improved technological capabilities to combat the misuse of generative AI, and for research into how people interact with these tools: how malicious actors use AI, whether users trust watermarking, and what a trustworthy information environment looks like in the era of generative AI. These are all questions that researchers need to study.
Watermarking can be useful only if it is acceptable to companies and users alike. Regulation is likely, to some extent, to force companies to take action in the next few years, but whether users will trust watermarking and similar technologies is another matter.
Getting watermarking right matters because authorities are limbering up to regulate AI in ways that limit the harm it could cause, and watermarking is considered a linchpin technology. Last October, US President Joe Biden instructed the National Institute of Standards and Technology (NIST), based in Gaithersburg, Maryland, to set rigorous safety-testing standards for AI systems before they are released for public use. NIST is seeking public comments on its plans for reducing the risks of harm from AI, including the use of watermarking, which it says will need to be robust. It is not yet clear when those plans will be finalized.
The authors have been working on watermarking LLM outputs for some time, and a version of the technology is also being tested by OpenAI, the company in San Francisco, California, behind ChatGPT. But there is limited published literature on how the technology works and on its strengths and limitations. One of the most important contributions came in 2022, when Scott Aaronson, a computer scientist at the University of Texas at Austin, described in a much-discussed talk how watermarking could be achieved. Others have also made valuable contributions, among them John Kirchenbauer and his colleagues at the University of Maryland in College Park, who published a watermark-detection algorithm last year3.
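The published schemes differ in their details, but the broad idea behind statistical watermark detection of the kind Kirchenbauer and colleagues describe can be illustrated with a small sketch. The Python snippet below is a hypothetical, simplified illustration rather than any published algorithm: it assumes a scheme in which the generator nudges each token towards a 'green list' chosen by hashing the preceding token, so that a detector can count green tokens and compute a z-score against the fraction expected in unwatermarked text. The function names and the hashing rule are invented for this example.

```python
import hashlib
import math


def in_green_list(prev_token: str, token: str, gamma: float = 0.5) -> bool:
    """Pseudo-randomly assign `token` to the green list based on a hash of the
    preceding token. A real scheme would use a secret key and the model's own
    tokenizer; this is an illustrative stand-in."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode("utf-8")).digest()
    return digest[0] < gamma * 256


def watermark_z_score(tokens: list[str], gamma: float = 0.5) -> float:
    """Count how many tokens fall in the green list and compare that with the
    fraction (gamma) expected in unwatermarked text. A large positive z-score
    suggests the text was generated with the watermark applied."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    green = sum(in_green_list(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    expected = gamma * n
    return (green - expected) / math.sqrt(n * gamma * (1 - gamma))


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z-score: {watermark_z_score(sample):.2f}")
```

Even in this toy form, the sketch makes clear why robustness matters: paraphrasing or editing the text changes which tokens appear and can erode the statistical signal, which is one reason NIST says watermarking will need to be robust, and why open code such as SynthID-Text is valuable for public scrutiny.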