How Does Generative AI Make Images, Text, or Code? Google Preaches Caution, Stability AI Raises $101 Million, and Sergey Brin Turns Up to the Party
Last year, I attended an event hosted by Google to celebrate its work in artificial intelligence. About a hundred people gathered in a pierside exhibition space on the Hudson River, where the company’s domain now extends, to watch scripted presentations from executives and see demos of the latest advances. Jeff Dean, the high priest of computation, spoke from the West Coast and offered a hopeful vision for the future.
Stability AI, which offers tools for generating images with few restrictions, held a party of its own in San Francisco last week. It announced $101 million in new funding, valuing the company at a dizzying $1 billion. The gathering attracted tech celebrities, including Google cofounder Sergey Brin.
Everyprompt, a startup that makes it simpler for companies to use text generation, is part of that buzz. Like many others, one of the people behind it says that testing generative AI tools that make images, text, or code has left him with a sense of wonder at the possibilities. It has been a long time, he says, since a website or piece of technology made him feel that way; using generative AI feels like using magic.
Past research suggests the approach works, and the sheer scale of the planned model could provide significant gains over earlier work. Tech companies that want to dominate AI research are leaning on their unique advantages in access to computing power and training data to do so. A comparable project is Facebook parent company Meta’s ongoing attempt to build a “universal speech translator.”
Google has already begun integrating these language models into products like Google Search, while fending off criticism about the systems’ functionality. Language models have a number of flaws, including a tendency to regurgitate harmful societal biases like racism and xenophobia, and an inability to parse language with human sensitivity. Google itself infamously fired its own researchers after they published papers outlining these problems.
The company believes that by building a single model spanning so many languages, it can make it easier to bring various AI features to languages that are poorly represented in online spaces.
“By having a single model that is exposed to and trained on many different languages, we get much better performance on our low resource languages,” says Ghahramani. “The way we get to 1,000 languages is not by building 1,000 different models. Languages are like organisms, they’ve evolved from one another and they have certain similarities. And we can find some pretty spectacular advances in what we call zero-shot learning when we incorporate data from a new language into our 1,000 language model and get the ability to translate [what it’s learned] from a high-resource language to a low-resource language.”
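Google’s 1,000-language model is not publicly available, but the “one set of weights, many languages” idea can be sketched with an openly released multilingual translation model. The example below uses Meta’s NLLB checkpoint via the Hugging Face transformers library purely as a stand-in; the model name and language codes are illustrative assumptions, not Google’s system.

```python
# Sketch: a single multilingual checkpoint translating into both a
# high-resource and a lower-resource language. Uses Meta's public NLLB
# model as a stand-in for the kind of system Google describes.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"  # one checkpoint, ~200 languages
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Languages are like organisms; they have evolved from one another."

# The same weights serve French (high-resource) and Scottish Gaelic (lower-resource).
for target in ["fra_Latn", "gla_Latn"]:
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(target),
        max_new_tokens=64,
    )
    print(target, tokenizer.decode(out[0], skip_special_tokens=True))
```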
Access to data is a problem when training across so many languages, though, and Google says that in order to support work on the 1,000-language model it will be funding the collection of data for low-resource languages, including audio recordings and written texts.
The company says it has no direct plans on where to apply the functionality of this model — only that it expects it will have a range of uses across Google’s products, from Google Translate to YouTube captions and more.
Part of what makes large language models interesting, Ghahramani says, is the range of things they can do. “The same language model can turn commands for a robot into code; it can solve maths problems; it can do translation. The really interesting thing about language models is they’re becoming repositories of a lot of knowledge, and by probing them in different ways you can get to different bits of useful functionality.”
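To make the “probing in different ways” point concrete, here is a small sketch using Google’s publicly released Flan-T5 checkpoint, chosen only because it is openly available; it is far smaller than the LaMDA-scale systems discussed here, so outputs will be rough. The point is that one model handles several tasks purely through how it is prompted.

```python
# Sketch: one instruction-tuned checkpoint steered toward translation,
# arithmetic, and a code-like task by changing only the prompt.
from transformers import pipeline

generate = pipeline("text2text-generation", model="google/flan-t5-base")

prompts = [
    "Translate English to German: The weather is nice today.",
    "Answer the question: If I have 3 apples and eat one, how many are left?",
    "Convert to a shell command: list all files in the current directory.",
]

for p in prompts:
    print(p, "->", generate(p, max_new_tokens=32)[0]["generated_text"])
```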
OpenAI, ChatGPT, and GPT-3.5: What Do LLMs Really Understand? Linguists Say the Models Learn the Form of English Without the Meaning
Over the past week, many people, like Bindu Reddy, have fallen under the spell of the free chatbot, which can answer all manner of questions with stunning and unprecedented eloquence.
Since its release last week, the internet has become enamored with the startup OpenAI’s creation. Early users have posted screenshots of their experiments, which include generating short essays on just about any theme, crafting literary parodies, answering complex coding questions, and much more. The results have inspired predictions that the service will make conventional search engines obsolete.
OpenAI shared some information in a blog post but did not release full details on how it gave its text-generation software a naturalistic new interface. It says the team fed human-written answers to GPT-3.5 as training data and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
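OpenAI has not published its training code, so any concrete example has to be a stand-in. The toy sketch below mimics the shape of the recipe described above: nudge a “model” toward human-written demonstration answers, then sample answers, score them with a stand-in reward model, and reinforce the higher-scoring ones. Every name and number here is invented for illustration; the real system fine-tunes a large neural network with far more sophisticated machinery.

```python
# Deliberately toy illustration of the two-stage recipe OpenAI describes.
# The "model" is just a probability table over canned answers.
import random
from collections import defaultdict

CANDIDATES = {
    "What is 2 + 2?": ["4", "5", "As an AI, I cannot do math."],
}

# Stage 1 stand-in: start from uniform preferences, then nudge probability
# mass toward the human-written demonstration answer.
policy = defaultdict(lambda: [1.0, 1.0, 1.0])
DEMONSTRATIONS = {"What is 2 + 2?": "4"}
for prompt, answer in DEMONSTRATIONS.items():
    policy[prompt][CANDIDATES[prompt].index(answer)] += 2.0

def reward_model(prompt, answer):
    # Stand-in for a reward model trained on human rankings of sampled answers.
    return 1.0 if answer == "4" else -0.5

# Stage 2 stand-in: sample answers, score them, and shift weight toward
# the higher-scoring ones (a crude reward-weighted update, not real PPO).
for _ in range(200):
    prompt = "What is 2 + 2?"
    weights = policy[prompt]
    idx = random.choices(range(len(weights)), weights=weights)[0]
    answer = CANDIDATES[prompt][idx]
    weights[idx] = max(0.01, weights[idx] + 0.1 * reward_model(prompt, answer))

print(policy["What is 2 + 2?"])  # probability mass should concentrate on "4"
```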
The system could widen the pool of people able to use AI language tools, says Jacob Andreas, an assistant professor at MIT. Because the technology is presented in a familiar chat interface, he says, people apply the mental model they are used to applying to other agents.
One of the known weaknesses of LLMs, for example, is their failure to deal with negation. A few years ago, Allyson Ettinger demonstrated this with a simple study: when asked to complete a short sentence, a model would answer 100 percent correctly for affirmative statements (“a robin is…”) and 100 percent incorrectly for negative statements (“a robin is not…”). It became clear that the models could not differentiate between the two scenarios and gave the same completions to both. This remains an issue with models today, and it is one of the rare linguistic skills models do not improve at as they increase in size and complexity. Such errors reflect broader concerns raised by linguists about how far these artificial language models operate via a trick mirror – learning the form of what the English language might look like without possessing any of the inherent linguistic capabilities that would demonstrate actual understanding.
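A quick way to see this behavior for yourself, in the spirit of Ettinger’s probe, is to ask a masked language model to complete affirmative and negated versions of the same sentence. The sketch below uses a publicly available BERT checkpoint via the Hugging Face transformers library as a stand-in; the specific model and its outputs are illustrative assumptions, not Ettinger’s original experimental setup.

```python
# Sketch: probe a masked language model with an affirmative and a negated
# prompt. Such models often return the same top completion for both,
# illustrating the negation blind spot described above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    top = fill(sentence)[0]
    print(f"{sentence!r} -> {top['token_str']} (p={top['score']:.2f})")
```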
Additionally, the creators of such models confess to the difficulty of addressing inappropriate responses that “do not accurately reflect the contents of authoritative external sources.” Galactica and ChatGPT have generated, for example, a “scientific paper” on the benefits of eating crushed glass (Galactica) and a text on “how crushed porcelain added to breast milk can support the infant digestive system” (ChatGPT). Stack Overflow had to temporarily ban ChatGPT-generated answers because it became clear that the LLM was generating wrong answers to coding questions.
Yet, in response to this work, there are ongoing asymmetries of blame and praise. Technologists and model builders alike hail a mythically self-sufficient model as a technological marvel. The decision-making involved in a model’s development is lost as its feats are seen as independent of the design choices of its engineers. Without naming and recognizing those engineering choices, it is almost impossible to acknowledge the related responsibilities. As a result, both functional failures and discriminatory outcomes are framed as devoid of engineering choices – blamed on society at large or on supposedly “naturally occurring” datasets, factors those developing these models claim they have little control over. But it’s undeniable that they do have control, and that none of the models we are seeing now are inevitable. It would have been entirely feasible for different choices to have been made, resulting in entirely different models being developed and released.
Google, LaMDA, and Bard: The Race to Bring Generative AI to Search
Satya Nadella, Microsoft’s CEO, claimed the new features signal a paradigm shift for search; a new race, he said, started today. Bard, a Google product that will not initially be folded into the company’s search engine, suggests that Google thinks Nadella is right.
OpenAI, for its part, seems to be trying to damp down expectations. As CEO Sam Altman put it, “ChatGPT is limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
Most of the toys Google demoed on the pier in New York showed the fruits of generative models like its flagship large language model, called LaMDA. It can answer questions and work with creative writers to make stories. Other projects can create 3D images from text prompts or crank out storyboard-like suggestions on a scene-by-scene basis. But a big piece of the program dealt with some of the ethical issues and potential dangers of unleashing robot content generators on the world. The company took pains to emphasize how it was proceeding cautiously in employing its powerful creations. The most telling statement came from Douglas Eck, a principal scientist at Google Research. “Generative AI models are powerful—there’s no doubt about that,” he said. “But we also have to acknowledge the real risks that this technology can pose if we don’t take care, which is why we’ve been slow to release them. I am proud that we have been slow to release them.”
Google is expected to announce artificial intelligence integrations for its search engine on February 8 at 8:30 am Eastern. The event is free to watch live on YouTube.
Among all these announcements, one core question persists: Is generative AI actually ready to help you surf the web? The models are expensive to keep updated and love to make shit up. Public engagement with the technology is rapidly shifting as more people test out the tools, but generative AI’s positive impact on the consumer search experience is still largely unproven.
Microsoft executives said that a limited version of the AI-enhanced Bing would roll out today, though some early testers will have access to a more powerful version in order to gather feedback. People are being asked to sign up for a broader launch, which will take place in the coming weeks.
The response also included a disclaimer: “However, this is not a definitive answer and you should always measure the actual items before attempting to transport them.” A box at the top of every response will allow people to respond with a thumbs-up or a thumbs-down, feedback that will help train Microsoft’s systems. The demo offered a first glimpse of how text generation could enhance search results.