If it cannot comply with future regulation, OpenAI could cease to operate in the EU

Open and Closed: The Impact of Openness on GPT-based Models, Midjourney, and Future AI Systems

There is no definitively safe release method or standardized set of release norms, and no established body for setting such standards. Before GPT-2’s staged release in 2019, generative systems such as ELMo and BERT were largely open, sparking discussion about how to release and publish increasingly powerful systems. Since then, systems from large organizations have shifted towards closedness, raising concern about the concentration of power in high-resource organizations capable of developing and deploying these systems.

In the middle of the openness gradient sit the systems casual users are most familiar with. ChatGPT and Midjourney, for instance, are publicly accessible hosted systems: the developer organization (OpenAI and Midjourney, respectively) shares the model through a platform so that the public can prompt it and generate outputs. Their broad reach and no-code interfaces make these systems both useful and risky. Because people outside the host organization can interact with the model, they allow for more feedback than a fully closed system, but those outsiders have limited information and cannot robustly research the system by, for example, evaluating the training data or the model itself.
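To make the hosted-access point concrete, here is a minimal sketch in Python of how an outsider typically interacts with such a system through the provider’s API (this uses OpenAI’s published Python client; the model name is illustrative, not a claim about any specific deployment):

```python
# A minimal sketch of "hosted access": outsiders can send prompts and
# receive outputs through the provider's API, but cannot inspect the
# model weights or training data behind it.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize the EU AI Act in one sentence."}
    ],
)

# All an outside researcher gets back is generated text -- the system
# itself stays behind the platform boundary.
print(response.choices[0].message.content)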

There are systems so closed that the public is not even aware of them; for obvious reasons, it is hard to cite concrete examples. A newer class of closed system is video generation. Because video generation is relatively recent, there is less research on the risks it poses and how best to mitigate them. When Meta announced its Make-A-Video model in September 2022, it cited concerns such as the ease with which anyone could make realistic, misleading content as reasons for not sharing the model. Instead, Meta stated that it would gradually allow access to researchers.

Openness is often framed as a binary: a system is either open source or closed source. Open development decentralizes power, letting many people collectively work on AI systems to make sure they reflect their needs and values, as seen with BigScience’s BLOOM. But while openness allows more people to contribute to research and development, the potential for harm and misuse by malicious actors increases with greater access. Closed systems, conversely, are better protected from outside actors but cannot be audited or evaluated by independent researchers.

Altman’s recent comments help fill out a more nuanced picture. He has told US politicians that regulation should mostly apply to future, more powerful AI systems; the EU AI Act, by contrast, is pegged to the current capabilities of existing software.

Beyond the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Much of the data used to train AI systems like DALL-E is copyrighted, and disclosing those sources leaves companies open to legal challenges. According to a report by the Associated Press, OpenAI rival Stability AI is being sued by a stock image company for using copyrighted data.

OpenAI used to share this type of information but has stopped as its tools have become more commercially valuable. In March of this year, OpenAI co-founder Ilya Sutskever said the company had been wrong to reveal so much in the past, and that keeping information secret was necessary to stop rival companies from copying its work.

In comments reported by Time, Altman said the concern was that systems like ChatGPT would be designated “high risk” under the EU legislation, which would require OpenAI to meet a number of safety requirements. “Either we’ll be able to solve those requirements or not,” said Altman, adding that there are limits to what is possible.

Sam Altman: OpenAI Could Cease Operating in Europe Under New Artificial Intelligence Rules

Sam Altman, the CEO of OpenAI, has warned that his company might pull its services from Europe in response to the EU’s proposed rules for artificial intelligence. Speaking to the Financial Times, Altman said the details of the legislation are what really matter: “We will try to comply, but if we can’t comply we will cease operating.”
