1,600 researchers were asked what they think of artificial intelligence and science

What Do Scientists Think about Artificial Intelligence? An Empirical Study of the Challenges that AI Poses for Scientific Research in 2020

The survey shows that many scientists believe there are barriers preventing them from developing and using artificial intelligence, although the obstacles differ between groups. Researchers who directly studied AI were most concerned about a lack of computing resources, funding for their work and high-quality data to train AI on. Those who use AI in other fields tended to be more worried about a shortage of trained scientists and resources than were the researchers who develop it. Researchers who didn't use AI at all generally said that they didn't need it, or that they lacked the experience or time to investigate it.

AI is also being widely used in science education around the world. Teaching methods need to change to account for students' use of LLM tools at universities and schools.

Survey respondents told us, for example, that they are using AI to process data, write code and help them write papers. For many, help with English-language science communication is a key benefit. Generative-AI tools powered by large language models (LLMs), notably ChatGPT, help researchers whose first language is not English but who need to use English to publish their research. Scientists can use LLMs to improve their writing style and grammar, and to translate and summarize other people's work.

Meanwhile, AI has also been changing. Whereas the 2010s saw a boom in the development of machine-learning algorithms that can help to discern patterns in huge, complex scientific data sets, the 2020s have ushered in a new age of generative AI tools pre-trained on vast data sets that have much more transformative potential.

Many of the ways that AI tools help researchers are focused on machine learning. From a list of possible advantages, two-thirds of respondents noted that AI provides faster ways to process data, 58% said that it speeds up computations that were not previously feasible, and 55% mentioned that it saves scientists time and money.

A computational biologist at Duke University said that artificial intelligence had helped her answer biological questions that were previously out of reach.

“The main problem is that AI is challenging our existing standards for proof and truth,” said Jeffrey Chuang, who studies image analysis of cancer at the Jackson Laboratory in Farmington, Connecticut.

Respondents added that they were worried about faked studies, false information and the perpetuation of bias if AI tools for medical diagnostics were trained on historically biased data. A team based in the United States reported that when they asked GPT-4 to suggest diagnoses for a series of clinical case studies, the answers differed depending on the patient's race or gender (preprint at medRxiv) — a pattern likely to reflect the text that the chatbot was trained on.

“There is clearly misuse of large language models, inaccuracy and hollow but professional-sounding results that lack creativity,” said Isabella Degen, a software engineer and former entrepreneur who is now studying for a PhD in using AI in medicine at the University of Bristol, UK. “In my opinion, we don’t understand well where the border between good use and misuse is.”

Researchers thought that the most important benefit was that LLMs aid researchers whose first language is not English in improving the style and accuracy of their research papers. Even if a small number of players misuse the tools, the academic community can demonstrate how to use them for good.

Moreover, the most popular use among all groups was for creative fun unrelated to research (one respondent used ChatGPT to suggest recipes); a smaller share used the tools to write code, brainstorm research ideas and help write research papers.

Why Model Scale Matters for LLM-Style Tools in Science

The principles behind LLMs could be useful in building similar models in bioinformatics, but it is clear that these models must be large, says Morris, a chemist at the University of Oxford. "Only a very small number of entities on the planet has the ability to train large models, run them long term, and pay the electricity bill. That constraint is limiting science's ability to make these kinds of discoveries," he says.

Peer review can also be a problem, according to one respondent, a researcher in the Earth sciences in Japan: it is very difficult to find reviewers who are familiar with both machine-learning methods and the science they are applied to.

Many researchers, however, said AI and LLMs were here to stay. "We have to focus now on how to make sure it brings more benefit than issues," said Yury Popov, a specialist in the treatment of alcoholism at the Beth Israel Deaconess Medical Center in Boston, Massachusetts.
