The search wars are about to start
The rise of generative AI: how OpenAI's ChatGPT is growing and what it could do for the internet
It is predicted that generative artificial intelligence will be used in a number of industries, and will do more than just spit out images. David Song, a senior at Stanford University who is tracking the boom, has collated a list of over 100 generative AI startups. They are working on applications that include music, game development, writing assistants, customer service, coding aids, video editing tech, and assistants that manage online communities. Venture investor Guo has backed a company that plans to generate legal contracts from a text description—a potentially lucrative application if it can work reliably.
Are you curious about the boom of generative AI and want to learn even more about this nascent technology? Check out WIRED’s extensive (human-written) coverage of the topic, including how teachers are using it at school, how fact-checkers are addressing potential disinformation, and how it could change customer service forever.
Stability AI, which offers tools for generating images with few restrictions, held a party of its own in San Francisco last week. It announced $101 million in new funding, valuing the company at a dizzying $1 billion. Tech celebrities were at the gathering, including Sergey Brin.
When Jasper first launched, it was mostly considered a really cool toy. A year ago, its chief executive says, he couldn't get anyone in the room to return his emails; now his inbox is full, and he looks a little wide-eyed about it. Love in the time of generative AI.
Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting the right prompt to feed into the software.
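For readers who haven't tried it, working with the underlying model really does come down to crafting that prompt. The sketch below is a minimal illustration of my own, not OpenAI's sample code: it assumes the pre-1.0 openai Python package and an API key in the environment, and the model name and prompt are placeholders.

```python
# Minimal sketch (not from the article): calling a GPT-3-style completion API
# with a hand-crafted prompt. Assumes the legacy (pre-1.0) openai package;
# the model name and prompt below are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

prompt = (
    "Answer the question in one short paragraph.\n"
    "Q: Why does the Moon have phases?\n"
    "A:"
)

response = openai.Completion.create(
    model="text-davinci-003",   # illustrative model name
    prompt=prompt,
    max_tokens=120,
    temperature=0.2,            # low temperature for more predictable answers
)
print(response["choices"][0]["text"].strip())
```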
ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. Some have even predicted the service will make search engines and homework assignments obsolete.
OpenAI shared some information in a blog post about how it gave its text-generation software a more naturalistic interface. It says the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
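OpenAI has not released its training code, but the "reward and punishment" idea can be illustrated in miniature. The toy sketch below is entirely my own simplification, not the company's method: a REINFORCE-style update over three canned answers, with a hand-written reward standing in for human feedback.

```python
# Toy illustration of reinforcement learning from feedback: a REINFORCE-style
# update nudges a tiny "policy" towards answers that a stand-in reward prefers.
# Real RLHF trains a reward model on human rankings and fine-tunes a large
# network; everything here is deliberately simplified.
import math
import random

answers = ["I don't know.", "The capital of France is Paris.", "Paris is in Germany."]
reward = {0: 0.1, 1: 1.0, 2: -1.0}   # stand-in for human preference scores
logits = [0.0, 0.0, 0.0]             # the policy over candidate answers
lr = 0.5

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(500):
    probs = softmax(logits)
    i = random.choices(range(len(answers)), weights=probs)[0]  # sample an answer
    r = reward[i]
    # REINFORCE: raise the log-probability of the sampled answer in proportion
    # to its reward; punished answers become less likely over time.
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * r * grad

print(softmax(logits))  # probability mass should concentrate on the good answer
```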
Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. The model is presented in a familiar interface, he says, which encourages people to apply the mental model they would normally apply to other agents.
But LLMs have also triggered widespread concern — from their propensity to return falsehoods, to worries about people passing off AI-generated text as their own. When Nature asked researchers about the potential uses of such tools in science, their excitement was tempered with apprehension. The director of postgraduate studies at the University of Colorado School of Medicine said that anyone who believes the technology has the potential to be transformative has to be nervous about it. Much will depend on how future regulations and guidelines might constrain AI chatbots’ use, researchers say.
“At the moment, it’s looking a lot like the end of essays as an assignment for education,” says Lilian Edwards, who studies law, innovation and society at Newcastle University, UK. Dan Gillmor, a journalism scholar at Arizona State University in Tempe, told The Guardian that he had fed ChatGPT a homework question he sets his students, and that the answer it produced would have earned a good grade.
Nature wants to discover how artificial-intelligence tools can affect research integrity and education, as well as how research institutions deal with them. Take our poll here.
The end of essays is not necessarily a bad thing, one computer scientist argues. Essays are used to assess two things at once: a student’s knowledge and their writing skills. “ChatGPT is going to make it hard to combine these two into one form of written assignment,” he says. But academics could respond by reworking written assessments to prioritize critical thinking or reasoning that ChatGPT can’t yet do. This might ultimately encourage students to think for themselves more, rather than to try and answer essay prompts, he says.
Earlier this year, for instance, Edward Tian, a computer-science undergraduate at Princeton University in New Jersey, published GPTZero. The tool analyses text in two ways. The first is ‘perplexity’, a measure of how familiar the text seems to a language model. If the tool finds most of the words and sentences predictable, the text is likely to have been written with an artificial intelligence engine. The tool also examines variation across the text, a measure known as ‘burstiness’: AI-generated text tends to be more consistent in tone, cadence and perplexity than does text written by humans.
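GPTZero's exact implementation isn't spelled out here, but the two quantities it reports can be sketched simply. The example below is my own illustration: made-up per-token probabilities stand in for a real language model's scores, purely to show how perplexity and burstiness are computed.

```python
# Rough sketch (assumptions mine, not GPTZero's actual code) of the two signals
# described above: per-sentence perplexity, and "burstiness" as how much that
# perplexity varies across sentences.
import math
import statistics

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities for each sentence in a document;
# a real detector would get these from a language model scoring the text.
sentences = [
    [0.21, 0.18, 0.30, 0.25],        # predictable sentence -> low perplexity
    [0.02, 0.15, 0.01, 0.08, 0.05],  # surprising sentence -> high perplexity
    [0.19, 0.22, 0.27],
]

per_sentence = [perplexity(s) for s in sentences]
doc_perplexity = statistics.mean(per_sentence)
burstiness = statistics.stdev(per_sentence)  # human text tends to vary more

print(f"perplexity per sentence: {[round(p, 1) for p in per_sentence]}")
print(f"document perplexity: {doc_perplexity:.1f}, burstiness: {burstiness:.1f}")
# Low perplexity AND low burstiness would point towards AI-generated text.
```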
Why is ChatGPT so popular, and how will it affect students’ learning? New York City’s education department is concerned about the misuse of AI-generated writing
How necessary that will be depends on how many people use the chatbot. More than one million people tried it out in its first week. But although the current version, which OpenAI calls a “research preview”, is available at no cost, it’s unlikely to be free forever, and some students might baulk at the idea of paying.
She believes that education providers will adapt. “Whenever there’s a new technology, there’s a panic around it,” she says. “It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.”
The New York City Department of Education has blocked access to ChatGPT on its networks and devices over fears the AI tool will harm students’ education.
In a story first reported by the education-focused news site Chalkbeat New York, the department said the ban was because of concerns over the safety and accuracy of content.
“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said department spokesperson Jenna Lyle.
Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan, is concerned that LLMs will not only produce toxic content, but will also absorb assumptions about the world from the cultures represented in their training data. Because the firms that are creating big LLMs are mostly in, and from, those cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.
But such adaptations will take time, and it’s likely that other education systems will ban AI-generated writing in the near future as well. Some online platforms fear the tool will taint the accuracy of their sites and have banned it.
Right now, there aren’t clear answers to those questions. One thing is certain: granting open access to these models has kicked off a wet hot AI summer that’s energizing the tech sector, even as the current giants lay off chunks of their workforces. The next big paradigm is not the metaverse; it is this new wave of artificial intelligence content engines, and it is here now. In the 1980s, we saw a gold rush of products moving tasks from paper to PC applications. In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. Building with generative AI will be the big shift of the 2020s. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of producing generic copy will drop to essentially zero. Artificial-intelligence video-generation systems are likely to dominate by the end of the decade. They may not be anywhere near as good as the innovative creations of talented human beings, but the robots will quantitatively dominate.
Currently, nearly all state-of-the-art conversational AI technologies are proprietary products of a small number of big technology companies that have the resources for AI development. ChatGPT’s development was funded largely by Microsoft, and other major tech firms are racing to release similar tools. Given the near-monopolies of a few tech companies in search, word processing and information access, this raises considerable ethical concerns.
Assuming that researchers use LLMs in their work, scholars need to remain vigilant. Expert-driven fact-checking and verification processes will be indispensable. If LLMs can accurately expedite summaries, evaluations and reviews, high-quality journals may decide to include a human verification step, or even to ban certain applications that use this technology. To prevent human automation bias — an over-reliance on automated systems — it will become even more crucial to emphasize the importance of accountability8. Humans should always remain accountable for scientific practice.
No LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take that responsibility.
Researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.
Science has always depended on openness and transparency about its methods, however new the technology involved. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner.
Nature is setting out these principles to help ensure that research retains its transparency and integrity, the foundations on which science advances.
Viral infections linked to risk of brain disease, plus evidence that Earth’s inner core may have slowed its spin
Huge health-record review links viral illnesses to elevated risk of brain disease. Plus, Earth’s inner core might have slowed its spin and many of Australia’s researchers are miserable at work.
Earth’s inner core may have stopped rotating faster than the rest of the planet around 2009. Researchers studied seismic waves generated by US nuclear test blasts during the cold war, and later by earthquakes, to discover that the inner core started spinning faster than the mantle after 1971, then slowed back down. The results could shine light on mysteries such as what part the inner core plays in maintaining the magnetic field and in the planet’s overall rotation. The study is unlikely to be the final word on the topic.
Researchers have found a link between common viral infections and an elevated risk of developing a neurodegenerative condition, such as Alzheimer’s or Parkinson’s disease, later in life. Such links have been found between single viruses and diseases before — for example, between Epstein–Barr virus and multiple sclerosis. The new study mined large sets of electronic health records. Researchers caution that the data show only a possible connection, and that it’s still unclear how, or whether, the infections trigger disease onset.
Source: https://www.nature.com/articles/d41586-023-00209-8
Job satisfaction slips among Australia’s researchers, plus concerns that Indonesia is undermining conservation science
Job satisfaction among early-career scientists in Australia has dropped over the pandemic years. In a survey of 500 researchers, 57% were satisfied with their jobs, compared with 62% in a similar survey in 2019. That’s much lower than the average of 80% across the nation’s entire workforce. The survey’s authors suggest that the root cause might be the high number of science PhD degrees awarded in Australia each year despite a paucity of secure science positions.
The EPA has been reinvigorated by an influx of funding and a renewed place at the centre of public policy. The agency has to codify a large number of rules and regulations at a rapid rate in order to achieve US President Joe Biden’s environmental and climate goals. But the EPA must simultaneously recover from an era in which it was largely sidelined, morale plummeted and many scientists left. Policy analyst Max Stier says the agency was at some level traumatized to begin with, and now has a new set of requirements that will call for new capabilities; it will have to rebuild its strength over time.
We asked you whether researchers should be allowed to use generative AIs like ChatGPT to help them write academic papers, and almost 58% of the more than 3,600 Briefing readers who responded to our poll said no.
Many respondents thought that use of the chatbots should be banned until there is a reliable way to detect fake and inaccurate papers.
Nature picked the tools and technologies that are set to shake up science, such as single-molecule protein sequencing, high-precision radiocarbon dating, and more.
Conservation scientists say Indonesia’s government is freezing out foreign researchers to protect its reputation as a conservation success story. The result, they argue, is a data black hole that makes it hard to know whether populations of orangutans, elephants, rhinos and tigers are recovering as much as the official figures claim. A group of Indonesian and international non-governmental organizations is planning a court action that seeks to overturn what they say is a pattern of undermining science.
Source: https://www.nature.com/articles/d41586-023-00209-8
Is artificial intelligence dumber than humans? Nature readers and educators on AI text generation and plagiarism detection
The remains of 10 crocodile mummies that had lain hidden underneath a Byzantine-era rubbish dump have been found in an ancient Egyptian tomb. Because insects had eaten the linen bandages that would have wrapped them, researchers could see that the mummies belonged to two different species, Crocodylus niloticus and Crocodylus suchus. The mummies offer a glimpse into ancient Egyptian ritual practice, and the species matters: niloticus will eat you, whereas with suchus you can swim in the pool and live. (The New York Times | 4 min read)
May is considering adding oral components to his written assignments, and fully expects programs such as Turnitin to incorporate AI-specific plagiarism scans, something the company is working to do, according to a blogpost. Outlines and drafts, the intermediate steps of the writing process, could also become part of what students hand in.
Someone’s research can be excellent even if their English isn’t, she says, particularly if they aren’t a native English speaker. “I think that’s where the chatbots can help, definitely, to make the papers shine.”
As part of the Nature poll, respondents were asked to provide their thoughts on AI-based text-generation systems and how they can be used or misused. Here are some selected responses.
“My concern is that students will skip straight to the finished paper without seeing the value of the creative work and reflection that goes into producing it.”
“Students were struggling with writing even before ChatGPT’s release. Will this platform further erode their ability to communicate using written language? And a return to handwritten exams raises many questions about equity, ableism and inclusion.”
“Got my first AI paper yesterday. Quite clear. I have adapted my syllabus to note that an oral defence is required for any submitted work that is suspected of not being the student’s own.”
Last December, a Rutgers University student came to the conclusion that artificial intelligence might be dumber than humans.
The act of using someone else’s work or ideas without giving proper credit to the original author is called plagiarism. But when the work is generated by something rather than someone, this definition becomes tricky to apply. Emily Hipchen, a board member of Brown University’s Academic Code Committee, sees students’ use of artificial intelligence as a crucial point of contention. She says she doesn’t know whether, in this case, there is a person being stolen from.
Hipchen is not alone in her speculation. Daily, who chairs the academic integrity program at her college, is also grappling with whether an algorithm that generates text can be counted as a person at all.
Daily believes that professors and students will need to understand that digital tools that generate text, rather than simply collect facts, carry their own risks of plagiarism.
Generative AI comes to search: what to expect from Google and Bing
On February 8th, Google is expected to announce artificial-intelligence integrations at an event that is free to watch live on YouTube.
For a long time, Microsoft’s Bing has remained a distant competitor to search leader Google. Microsoft now plans to use generative artificial intelligence in its search engine to improve the search experience and win over more users. Will this year be a renaissance for Bing? Who knows, but users can expect to soon see more text crafted by AI as they navigate their search engine of choice.
The company may give more details about its response to ChatGPT, a service called “Bard” that uses the company’s Language Model for Dialogue Applications. Bard is not available to the public just yet, but the company says it is rolling the feature out to a small group for testing and that more people will get to experience it in the near future.
We think that the use of this technology is inevitable, so banning it will not work. The implications of this technology need to be debated by the research community. Here, we outline five key issues and suggest where to start.
Putting ChatGPT to the test: summarizing a systematic review of cognitive behavioural therapy for anxiety-related disorders
A constant increase in the quality and size of data sets, together with sophisticated methods for calibrating these models, has made LLMs much more powerful than previous generations. LLMs will lead to a new generation of search engines1 that can produce detailed and informative answers to complex user questions.
Next, we asked ChatGPT to summarize a systematic review that two of us authored in JAMA Psychiatry5 on the effectiveness of cognitive behavioural therapy (CBT) for anxiety-related disorders. There were several factual errors, misrepresentations and incorrect data in the convincing response. For example, it said the review was based on 46 studies (it was actually based on 69) and, more worryingly, it exaggerated the effectiveness of CBT.
To counter this opacity, the development and implementation of open-source AI technology should be prioritized. Non-commercial organizations such as universities typically lack the computational and financial resources needed to keep up with the rapid pace of LLM development. We therefore advocate that scientific-funding organizations, universities, non-governmental organizations (NGOs), government research facilities and organizations such as the United Nations — as well as tech giants — make considerable investments in independent non-profit projects. This will help to develop advanced open-source, transparent and democratically controlled AI technologies.
Critics might say that such collaborations will be unable to compete with big tech, but at least one, BigScience, has already built an open-source language model called BLOOM. Tech companies might benefit from such a program too: by open-sourcing relevant parts of their models and corpora, they could create greater community involvement and facilitate innovation and reliability. Academic publishers should give LLMs access to their full archives so that the models produce results that are accurate and comprehensive.
Some researchers say that academics should refuse to support large commercial LLMs altogether. Among the issues are bias, safety concerns and the huge amount of energy needed to train these models, which raises worries about their ecological footprint. A further worry is that by offloading thinking to automated chatbots, researchers might lose the ability to articulate their own thoughts. “Why would we, as academics, be eager to use and advertise this kind of product?” wrote Iris van Rooij, a computational cognitive scientist at Radboud University in Nijmegen, the Netherlands, in a blogpost urging academics to resist their pull.
One key issue to address is the implications for diversity and inequalities in research. LLMs are a double-edged sword. They could help to level the playing field, for example by removing language barriers and enabling more people to write high-quality text. But, as with most innovations, high-income countries and privileged researchers will probably quickly find ways to exploit LLMs that accelerate their own research and widen inequalities. Therefore, it is important that debates include people from under-represented groups in research and from communities affected by the research, to use people’s lived experiences as an important resource.
• What quality standards should be expected of LLMs (for example, transparency, accuracy, bias and source crediting) and which stakeholders are responsible for the standards as well as the LLMs?
An assistant that helps improve research papers: one researcher’s experience with GPT-3 as a writing and editing aid
In December, computational biologists Casey Greene and Milton Pividori asked an assistant who was not a scientist to help them improve three of their papers. The aide suggested changes to sections in seconds, and each manuscript took about five minutes to review. The assistant even spotted a mistake in one biology manuscript. The trial didn’t always run smoothly, but the final manuscripts were easier to read — and the fees were modest, at less than US$0.50 per document.
This assistant, as Greene and Pividori reported in a preprint1 on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020. It is one of the generative AI, chatbot-style tools that can be used to create prose, poetry and computer code, and, in this case, to edit research papers.
The most famous of these tools, which are also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame because it was made free and easy to use. Other generative AIs can produce images or sounds.
“This will help us be more productive as researchers,” says Pividori, who works at the University of Pennsylvania in Philadelphia. Other scientists say they now use LLMs regularly to help them write or check code, and to help them think of new ideas. Hafsteinn Einarsson, a computer scientist at the University of Iceland, says he uses LLMs every day. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. Many people are using it as a digital assistant, he says.
Researchers emphasize, though, that LLMs are fundamentally unreliable at answering questions. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
But the tools might mislead naive users. Stack Overflow temporarily banned the use of ChatGPT in December 2022 because the site’s admins found themselves flooded with incorrect but seemingly persuasive answers from enthusiastic users. This could be a nightmare for search engines.
Elicit, for instance, uses LLMs’ abilities to guide queries for relevant literature, and then to briefly summarize each of the websites or documents that the engines find, producing an output of seemingly referenced content.
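The article doesn't describe Elicit's implementation, but the general shape of such tools can be sketched. In the illustration below, which is my own assumption about the pipeline rather than any vendor's code, `search_documents` and `summarize` are hypothetical placeholders for a search index and an LLM call.

```python
# Sketch of an LLM-guided literature search pipeline: retrieve documents for a
# query, summarize each with a language model, and stitch the summaries into a
# referenced answer. The helper functions are hypothetical placeholders.
from typing import List, Dict

def search_documents(query: str) -> List[Dict[str, str]]:
    """Hypothetical retrieval step; a real tool would query a search index."""
    return [
        {"title": "Paper A", "url": "https://example.org/a", "text": "..."},
        {"title": "Paper B", "url": "https://example.org/b", "text": "..."},
    ]

def summarize(text: str) -> str:
    """Hypothetical LLM call that condenses one document into a sentence."""
    return "One-sentence summary of the document."

def answer(query: str) -> str:
    docs = search_documents(query)
    # Pair each summarized claim with the source it came from.
    lines = [f"- {summarize(d['text'])} [{d['title']}]({d['url']})" for d in docs]
    return f"Question: {query}\n" + "\n".join(lines)

print(answer("Does virus X raise the risk of disease Y?"))
```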
Companies building LLMs are well aware of these problems. Last year, DeepMind published a paper on a dialogue agent called Sparrow; the firm’s chief executive and co-founder later told Time magazine that a private beta would be released this year. Competitor Anthropic says it has solved some of the problems affecting other chatbots, although it, like several other firms, declined to be interviewed for this article.
Without output controls, LLMs can easily be used to create hate speech, as well as racist, sexist and other harmful associations, which is a safety concern ethicists have been pointing out for years.
OpenAI’s guardrails have not been wholly successful. In December last year, computational neuroscientist Steven Piantadosi at the University of California, Berkeley, tweeted that he’d asked ChatGPT to develop a Python program to decide whether a person should be tortured on the basis of their country of origin. The chatbot replied with code inviting the user to enter a country, and to print “This person should be tortured” if that country was North Korea, Syria, Iran or Sudan. OpenAI subsequently blocked that kind of question.
Last year, a group of academics released an alternative LLM, called BLOOM. The researchers tried to reduce harmful outputs by training it on a smaller selection of higher-quality, multilingual text sources. The team involved also made its training data fully open, unlike OpenAI. It is unclear whether big tech firms will follow this example.
A further confusion is the legal status of some LLMs, which were trained on content scraped from the Internet with sometimes less-than-clear permissions. Copyright and licensing laws currently cover direct copies of pixels, text and software, but not imitations in their style. When those imitations — generated through AI — are trained by ingesting the originals, this introduces a wrinkle. The creators of some AI art programs, including Stable Diffusion and Midjourney, are currently being sued by artists and photography agencies; OpenAI and Microsoft (along with its subsidiary tech site GitHub) are also being sued for software piracy over the creation of their AI coding assistant Copilot. The outcry might force a change in laws, says Lilian Edwards, a specialist in Internet law at Newcastle University, UK.
Some researchers think that setting boundaries for these tools is crucial. Existing laws on discrimination and bias, as well as planned regulation of dangerous uses of artificial intelligence, will help keep use of LLMs honest and fair. There is a lot of law out there already, Edwards says, and it is just a matter of applying it or tweaking it slightly.
One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, with the central idea being to use LLMs themselves to spot the output of AI-created text.
Several products already attempt to detect AI-written content. OpenAI itself had already released a detector for GPT-2, and it released another detection tool in January. Turnitin, a developer of anti-plagiarism software that is already used by schools, universities and scholarly publishers around the world, says it will release an AI-detection tool in the first half of this year.
An advantage of watermarking is that it rarely produces false positives: if the watermark is there, the text was probably produced with AI. Still, it won’t be infallible, he says. There are certainly ways to defeat any watermarking scheme if you are determined enough. Detection tools and watermarks make it harder to cheat with artificial intelligence, but not impossible.
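The article doesn't detail any specific watermarking scheme, but one published idea keys a pseudo-random "green list" of tokens to the preceding token and biases generation towards it; a detector then simply counts green tokens. The sketch below is purely illustrative, with a tiny toy vocabulary, and is not any vendor's actual method.

```python
# Illustrative sketch of a "green list" watermark: which tokens count as green
# is derived deterministically from the previous token. A watermarked generator
# favours green tokens, leaving a statistical trace; the detector counts them.
import hashlib

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]
GREEN_FRACTION = 0.5

def green_list(prev_token):
    """Deterministically pick which tokens are 'green' after prev_token."""
    greens = set()
    for tok in VOCAB:
        h = hashlib.sha256(f"{prev_token}|{tok}".encode()).hexdigest()
        if int(h, 16) % 100 < GREEN_FRACTION * 100:
            greens.add(tok)
    return greens

def green_rate(tokens):
    """Fraction of tokens that land on the green list set by their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

# Watermarked output should score well above the ~50% expected by chance;
# ordinary human text should hover around chance level.
suspect = ["the", "cat", "sat", "on", "the", "mat"]
print(f"green-token rate: {green_rate(suspect):.2f} (chance level ~ {GREEN_FRACTION})")
```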
Source: https://www.nature.com/articles/d41586-023-00340-6
GenAI: a generative AI conference on San Francisco’s Embarcadero, hosted by Jasper chief executive Dave Rogenmoser
Eric Topol, director of the Scripps Research Translational Institute in San Diego, California, says he hopes that, in the future, AIs that include LLMs might even aid diagnoses of cancer, and the understanding of the disease, by cross-checking text from academic literature against images of body scans. He emphasizes that this needs careful oversight from specialists.
Most notably Microsoft announced that it is rewiring Bing, which lags some way behind Google in terms of popularity, to use ChatGPT—the insanely popular and often surprisingly capable chatbot made by the AI startup OpenAI.
Baidu, China’s biggest search company, is the latest entrant in the new search wars. It has announced a competitor known in English as “Ernie Bot”, and says it will release the bot after completing internal testing this March.
Dave Rogenmoser, the chief executive of Jasper, said he didn’t think many people would show up to his generative AI conference. It was put together at the last minute and, worse, scheduled on the day of love. Surely people would rather be with their loved ones than inside a conference hall on San Francisco’s Embarcadero, however good the views of the bay.
The event Jasper hosted, called GenAI, sold out; the lanyard-wearing crowd packed the hall, and by the time speakers took the stage it was standing room only. The walls were soaked in pink and purple lighting, Jasper’s colors, about as subtle as a New Jersey wedding banquet.
How generative AI is reshaping tech and consumer demand: OpenAI, Google, Jasper, and Roblox
Late last year, OpenAI introduced a simple, interesting chat box, and suddenly AI had a UI that everyone noticed. It felt like a snapshot of the modern era: a new kind of search that interpreted our dumb questions and spat out smart answers (or at least smart-sounding ones). Microsoft invested in OpenAI and launched a bot within Bing. Google noticed, too, and recently demoed its own version of a chatbot-powered search tool. Smaller companies like Jasper, which sells its generative AI tools to business users, are now faced with tech existentialism: there’s the sunny side of all that attention, and there’s the shadow of Big Tech looming over you.
There’s a chance that generative AI will be criticized for taking work away from humans, so Roblox says it wants an economic system that rewards its creators. “Roblox stands apart as a platform with a robust creator-backed marketplace and economy, and we must extend that to support in-experience user-creators as well as AI algorithm developers,” Sturman writes. Sturman didn’t specify exactly how the company will pull that off, but I suspect it is motivated to do so; it wouldn’t be a great look if Roblox undermined its own Talent Hub with generative AI tools.
I put the video at the bottom of the post so you can see how the tools will work. In one example, somebody types in different descriptions of materials for a car, and those materials are applied right away. Elsewhere you can see how autocompleting code might work for things like turning on a car’s lights in the game world.
The company is also looking at moderation, given Roblox’s popularity with kids. “In all cases we need to keep Roblox safe and civil,” Sturman says, adding that the company needs to build a moderation flow for all types of creation.
Researchers on generative language models as writing and editing aids
The hope was that the system could be used to speed up writing tasks, by providing a quick initial framework that could be edited into a more detailed final version.
“Generative language models are really useful for people like me for whom English isn’t their first language. It helps me write faster and more fluently.” A biologist at the Central Leather Research Institute says it is like having a professional language editor by his side.
The key, many agreed, is to see AI as a tool to help with work, rather than as a replacement for it. Artificial intelligence can be a useful tool, but it must remain just one tool among others, and its limitations and defects have to be clearly examined and governed, according to a retired biologist from Milan, Italy.