What science photos can do that AI-generated images can’t

People in low- and middle-income countries are the most affected by breast cancer. Plus, what science photography can do that artificial intelligence can’t.

People in low- and middle-income countries (LMICs) face higher death rates from breast cancer than do those in wealthier nations, owing to a lack of screening and treatment options. For example, on the basis of the most recent available data, from 2022, people under 50 in low-income countries are four times more likely to die from breast cancer than are those in high-income countries. Cases and deaths are expected to rise over the next 25 years because of the growing prevalence of risk factors, such as obesity, alcohol consumption and reduced breastfeeding.

The administration of US President Donald Trump has taken a number of actions that undermine the country’s efforts to reduce its contribution to global warming. As courts examine the legality of some of these policies, the uncertainty is slowing down programmes related to climate change. Atmospheric scientist Daniel Cohan says that there is a viciousness to the dismantling that he didn’t anticipate.

Meanwhile, a coalition of non-profit groups, archivists and researchers is working to ensure that the federal environmental data they rely on remain available to the public. Alejandro Paz and Eric Nost, members of the Public Environmental Data Partners network, wrote a book detailing how to find and save US government data.

Source: Daily briefing: What science photos do that AI-generated images can’t

How we think about climate change, and how much we really know

Can you tell your thief or your granny from your grief? Scouts and seafarers will know that these are similar-looking knots, alongside the reef, which is the strongest of them. “The grief knot, aptly named, is so weak you could sneeze on it and it would fall apart,” notes brain scientist Sholei Croom. But people aren’t very good at guessing which knot is stronger just by looking at them, even when they show a good understanding of the underlying structure. This blind spot in reasoning sheds light on how our brains see the world.

At times, the results of my experimentation are cartoon-like, and I won’t use those as documentation; often, though, they are not. In conversations with colleagues, we all agree that there should be clear standards for what is and isn’t allowed. In my opinion, a genAI visual should never be used as documentation.

According to Harini Nagendra, researchers often overlook communication as a tool for driving climate action. Conveying the joy of nature, rather than simply presenting scientific information, can make climate science more relevant to the public. And these stories must be made accessible to as many people as possible. “We must share the stage with others affected by climate change, to help us understand how it feels,” Nagendra writes.

Chatty crows: what can artificial intelligence (AI) tell us about their calls?

Researchers have been eavesdropping on unusually close-knit families of carrion crows (Corvus corone corone) in Spain, collecting recordings of hundreds of thousands of vocalizations. Small microphones captured a variety of soft calls, far quieter than the familiar ‘caws’. The team then used AI to analyse and group the sounds. The researchers want to understand how the crows cooperate, and hope eventually to experiment with human–crow ‘chats’.

In their book, Sarah Gabbott and Jan Zalasiewicz write that synthetic clothing will persist far into the future as a ‘technofossil’, because it degrades so slowly. The Guardian | 7 min read.

One of the privileges of being on the campus of the Massachusetts Institute of Technology (MIT) in Cambridge is seeing glimpses of the future, from advances in quantum computing and energy sustainability and production, to the design of new antibiotics. Do I understand it all deeply? No, but I manage to wrap my head around most of it once I am asked to create an image to document the research.

First, let’s remind ourselves of the difference between a photograph, in which each pixel corresponds to real-world photons, and a genAI visual created with a diffusion model: a complex computational process that generates something that seems real but might never have existed.

In 1997, Moungi Bawendi, a chemist at MIT who went on to share the 2023 Nobel Prize in Chemistry, asked me to take a picture of his nanocrystals (quantum dots). The crystals fluoresce at different wavelengths, depending on their size, when excited with UV light. He didn’t care for my first image, in which I had simply placed the vials on the bench. You can see air bubbles in the tubes because of how I positioned them; that was intentional, because I thought it made the image more interesting.

The second iteration appeared on the cover of The Journal of Physical Chemistry B. That photograph shows an important aspect of my process: collaboration with the scientist.

To generate a comparable image in DALL-E, I used the prompt “create a photo of Moungi Bawendi’s nanocrystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light”.

The visual incorrectly implies that each sample contains a mix of materials fluorescing at a range of wavelengths. Furthermore, some of the dots are shown lying on the surface of the table. Did the model make that decision? Still, I find the resulting visual fascinating.

Taking the point even further, the colours we see in all of those amazing images of the Universe are digitally enhanced, giving us yet more renditions of reality. Seen through this lens, we have been ‘generating’ images for many years without labelling them as such. But there is a crucial difference between using software to enhance a photograph and using training data to create a ‘reality’.

The intent of an illustration is to describe the work, and genAI visuals will probably excel at that task. The goal of a documentary photograph is to bring us as close to reality as possible. Both involve a form of manipulation, which needs to be defined and discussed before we embrace genAI tools.

Publishers now have software in place to identify various manipulations in existing images (see Nature 626, 697–698; 2024), but, frankly, AI programs will eventually be able to circumvent these fail-safes. There are, however, ways to establish the provenance of a photograph and to document any manipulation of the original. The Coalition for Content Provenance and Authenticity, for example, is developing standards that allow camera manufacturers to embed information for tracing a photograph’s provenance. Not all manufacturers are on board.

Two articles have raised an important issue by highlighting potential privacy and copyright violations in the use of diffusion models (N. Carlini et al. Preprint at arXiv https://doi.org/grqmsb (2023); and see go.nature.com/4jqyevn). Crediting sources is feasible only in a closed system (which diffusion models are not), in which the training data are known and fully documented. Springer Nature, which publishes Nature, recently added an exception to its policy to cover this sort of use of a specific set of scientific data. And remember that AlphaFold doesn’t generate images with a genAI tool: it generates structural models, which people then turn into images.

Happily, efforts are under way to address these issues. Adobe, for example, explains that creators can attach a kind of ‘tamper-evident’ content credential to their work, to help them get proper recognition and to promote transparency.
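The principle behind such tamper-evident credentials can be illustrated with a minimal sketch. This is an illustration of the general idea only, not Adobe’s or the coalition’s actual format (which uses public-key signatures rather than the shared key assumed here): a creator binds a cryptographic hash of the image bytes into a signed manifest, so any later edit to the pixels invalidates the credential.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"creator-signing-key"  # hypothetical; stands in for a real private key

def issue_credential(image_bytes: bytes, author: str) -> dict:
    """Create a minimal 'content credential' binding an author to image bytes."""
    manifest = {"author": author, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(image_bytes: bytes, manifest: dict) -> bool:
    """Return True only if neither the image nor the manifest was altered."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # the pixels were edited after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

original = b"\x89PNG...raw image bytes..."  # placeholder for a real image file
cred = issue_credential(original, "Felice Frankel")
print(verify_credential(original, cred))            # True: image is untouched
print(verify_credential(original + b"edit", cred))  # False: tampering detected
```

The key property is that the credential travels with the image: anyone holding the verification key can check that the pixels match what the creator signed, without trusting the channel the image arrived through.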

For example, I recall one experience with an engineer who altered a photograph that I had made of their research and wanted to publish it alongside the submitted article (see Supplementary Information). The researcher had never been taught the basic ethics of image manipulation or visual communication, and so did not consider that altering the image was the same as changing their data.
