Bard gave an incorrect response in its public demo

Google’s 1,000-language model, Bard’s stumbles, and why the public will have to bear the brunt of the mistakes made during generative AI’s breakout year

Google has announced an ambitious new project to develop a single AI language model that supports the world’s “1,000 most spoken languages.” As a first step toward that goal, the company is introducing a speech model with the broadest language coverage seen in such a model to date.

These language models will be integrated into products like Search, even as the company contends with criticism of the systems’ flaws. Those flaws include an inability to parse language with human sensitivity and a tendency to regurgitate harmful societal biases. Google notoriously fired its own researchers after they published papers detailing exactly these problems.

According to reports, a vice president of research at the company said the aim of building a model at this scale is to make it easier to bring AI capabilities to languages that are poorly represented in online spaces.

A model trained on many different languages also improves performance on low-resource languages. “The way we get to 1,000 languages is not by building 1,000 different models. Languages are like organisms, they’ve evolved from one another and they have certain similarities. And we can find some pretty spectacular advances in what we call zero-shot learning when we incorporate data from a new language into our 1,000 language model and get the ability to translate [what it’s learned] from a high-resource language to a low-resource language.”

Access to data is a problem when training across so many languages, though, and Google says that in order to support work on the 1,000-language model it will be funding the collection of data for low-resource languages, including audio recordings and written texts.

The company has not said where or when the model will show up directly in its products, only that it expects it to have a range of uses across them.

The release of ChatGPT and Galactica has prompted a new discussion about the use of large language models. At the peak of the hype, these models are said to contain humanity’s scientific knowledge and even to approach consciousness. But such hype is little more than a distraction from the actual harm perpetuated by these systems. People get hurt in the very practical ways such models fall short in deployment, and those failures are the result of choices made by the builders of these systems, choices we are obliged to critique and hold model builders accountable for.

Bard’s blunder highlights the challenge for Google as it races to integrate the same AI technology that underpins Microsoft-backed ChatGPT into its core search engine. In trying to keep pace with what some think could be a radical change spurred by conversational AI in how people search online, Google now risks upending its search engine’s reputation for surfacing reliable information.

Additionally, the creators of such models confess to the difficulty of addressing inappropriate responses that “do not accurately reflect the contents of authoritative external sources”. Galactica and ChatGPT have generated, for example, a “scientific paper” on the benefits of eating crushed glass (Galactica) and a text on “how crushed porcelain added to breast milk can support the infant digestive system” (ChatGPT). Stack Overflow had to temporarily ban the use of ChatGPT-generated answers because, although the LLM’s answers to coding questions looked plausible, they were too often wrong.

Yet, in response to this work, there are ongoing asymmetries of blame and praise. When a model’s output is good, model builders and tech experts alike declare it a technological marvel. The decision-making of human beings is written out of the development process, and model feats are treated as independent of the design and implementation choices of engineers. Without naming and recognizing those engineering choices, it becomes almost impossible to acknowledge the responsibilities that attach to them. As a result, functional failures and discriminatory outcomes are likewise framed as devoid of engineering choices, blamed on society at large or on supposedly “naturally occurring” datasets, factors the developers of these models claim to have little control over. But they do have control, and the models we see now are not inevitable. It would have been entirely feasible for different choices to be made, resulting in entirely different models being developed and released.

The launch of ChatGPT has prompted some to speculate that AI chatbots could soon take over from traditional search engines. But the technology remains too immature to be put in front of users, with problems including bias, toxicity, and a propensity for simply making information up.

OpenAI, too, was previously relatively cautious in developing its LLM technology, but changed tack with the launch of ChatGPT, throwing access wide open to the public. The result has been a storm of publicity and hype for OpenAI, even as the company absorbs huge costs to keep the system free to use.

In 2023, we will see Codex and other large AI models used to create new “copilots” for other kinds of intellectual labor. The potential applications are nearly endless: wherever there is complex, cognitive work, one can imagine a copilot assisting with it.

Historically, computer programming has been all about translation: humans must learn the language of machines to communicate with them. But now, Codex lets us use natural language to express our intentions, and the machine takes on the responsibility of translating those intentions into code. There is now a translator between our imagination and working software.

Codex has enabled the creation of a virtual programming partner, GitHub Copilot, which by GitHub’s estimate already writes around 40 percent of the code in projects where developers use it. As larger artificial intelligence models become more powerful, tools like GitHub Copilot will grow more capable, freeing up developers’ time for more creative work and enhancing their efficiency.

In and of itself, that’s a truly remarkable step forward in productivity for developers alone, a community of knowledge workers wrestling with extraordinary complexity and unprecedented demand for their talents. And it is the first of many steps that will follow in the decades ahead, as we see this pattern repeat across other types of knowledge work.

Every year our world gets more complicated and demands more from workers in every field. Copilots for Everything could offer a genuine revolution in types of work where productivity gains have barely budged since the invention of the computer and the internet.

Machine Learning and Syntactic Intelligence: The Rise of Neural Transformers in Natural Language Processing, Summarization, and Information Retrieval

Artificial intelligence has much to offer, but one problem has held it back from being put to good use by billions of people: the challenge of humans and machines understanding one another in natural language.

Transformers are neural networks designed to model sequential data and generate predictions about what should come next in a sequence. Core to their success is the idea of attention, which allows the model to focus on the most important elements of an input rather than treating everything with equal weight.

These models have delivered significant improvements in natural-language applications such as translation, summarization, information retrieval, and, most important, text generation. In the past, each task required a custom architecture; now all of these results are delivered by transformers.

Although Google pioneered the transformer architecture, OpenAI became the first to demonstrate its power at scale in 2020 with the launch of GPT-3 (Generative Pre-trained Transformer 3), at the time the largest language model ever created.

“Parameter count” is generally accepted as a rough proxy for a model’s capabilities, and so far we’ve seen models perform better on a wide range of tasks as the parameter count scales up. Models have been growing by roughly an order of magnitude per year for the past five years, and the results have been impressive. But these very large models are costly to serve in production.

Source: https://www.wired.com/story/artificial-intelligence-neural-networks/

The Boom of Generative AI: How Google Will Integrate AI Into Its Search Engine, and Why Bard’s Claim About JWST and the First Exoplanet Image Was Wrong

The area I am most excited about is language. Throughout the history of computing, humans have had to painstakingly input their thoughts using interfaces designed for technology, not for humans. With this wave of breakthroughs, we will begin chatting with machines quickly and comprehensively. Eventually, we will have truly fluent, conversational interactions with all our devices. This promises to fundamentally change human-machine interaction.

Over the past several decades, we have rightly focused on teaching people how to code, in effect teaching them the language of computers. That will still be important. But in 2023, we will start to flip that script, and computers will speak our language. That will make tools for learning, play, and creativity far easier to access.

Google is expected to announce artificial intelligence integrations for the company’s search engine on February 8 at 8:30 am Eastern. It’s free to watch live on YouTube.

For years, Microsoft was a distant competitor in the online search market while Google reigned as the undisputed leader. Now Microsoft, an OpenAI investor, hopes to improve its search engine’s experience by building in generative artificial intelligence. Will this year be a turning point for Bing? Who knows, but users can expect to soon see more text crafted by AI as they navigate their search engine of choice.

Are you curious about the boom of generative AI and want to learn even more about this nascent technology? Check out WIRED’s extensive (human-written) coverage of the topic, including how teachers are using it at school, how fact-checkers are addressing potential disinformation, and how it could change customer service forever.

The factual error highlights the importance of a rigorous testing process, a spokesperson said in a statement Wednesday, adding that external feedback will be combined with internal testing to make sure Bard’s responses meet a high bar for quality and safety.

In the demo, Bard was asked what new discoveries from the James Webb Space Telescope could be shared with a nine-year-old. One of Bard’s bullet points stated: “JWST took the very first pictures of a planet outside of our own solar system.”

The first image showing an exoplanet was taken nearly two decades ago by the European Southern Observatory, according to NASA.

The Competition Between Google, Bing, and Baidu: Implications of AI Search Engines, and What the New Technology Will Cost

Shares in Google’s parent company declined after Bard’s inaccurate response was first reported by the news agency Reuters.

In the presentation Wednesday, a Google executive teased plans to use this technology to offer more complex and conversational responses to queries, including providing bullet points ticking off the best times of year to see various constellations and also offering pros and cons for buying an electric vehicle.

Unless you’ve been living in outer space for the past few months, you’ll know that people are losing their minds over ChatGPT’s ability to answer questions in strikingly coherent and seemingly insightful and creative ways. Want to understand quantum computing? Need a recipe for whatever’s in the fridge? Can’t be bothered to write that high school essay? ChatGPT has your back.

The all-new Bing is a lot more capable. Demos that the company gave at its headquarters in Redmond, and a quick test drive by WIRED’s Aarian Marshall, who attended the event, show that it can effortlessly generate a vacation itinerary, summarize the key points of product reviews, and answer tricky questions, like whether an item of furniture will fit in a particular car. It’s a long way from Microsoft’s hapless and hopeless Office assistant Clippy, which some readers may recall bothering them every time they created a new document.

Last but by no means least in the new AI search wars is Baidu, China’s biggest search company. It joined the fray by announcing another competitor, dubbed “Ernie Bot” in English. Baidu says it will release the bot after completing internal testing in March.

The excitement over the new tools may be concealing a dirty secret. The race to build high-performance, AI-powered search engines is likely to require a dramatic rise in computing power, and with it a massive increase in the amount of energy that tech companies require and the amount of carbon they emit.

Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain, notes that training such systems is beyond the reach of all but the largest players: “Right now, only the Big Tech companies can train them.”

While neither OpenAI nor Google has said what the computing cost of their products is, third-party estimates by researchers suggest that the training of GPT-3, which ChatGPT is partly based on, consumed 1,287 MWh and led to emissions of more than 550 tons of carbon dioxide equivalent, the same amount as a single person taking 550 roundtrips between New York and San Francisco.

Those training numbers aren’t so bad on their own, but the bigger hurdle, Gómez-Rodríguez says, is that the model must not only be trained but also run in production to serve millions of users.

As a standalone product, ChatGPT has 13 million users a day, according to one investment bank’s analysis, while Bing handles half a billion searches every day.

ChatGPT will have to change to meet the demands of search engine users. “If they’re going to retrain the model often and add more parameters and stuff, it’s a totally different scale of things,” he says.
