Google has cut back on AI-generated answers in search following the pizza glue fiasco

Security Issues of Proprietary AI: Insights from Prism Infosec on Microsoft Copilot and OpenAI's ChatGPT, Plus Data from BrightEdge

Consumer-grade AI tools carry obvious risks. But a growing number of potential issues are also arising with "proprietary" AI offerings broadly deemed safe for work, such as Microsoft Copilot, says Phil Robinson, principal consultant at security consultancy Prism Infosec.

There are also concerns about the use of AI tools to monitor staff, which could infringe on their privacy. Microsoft says of its Recall feature that the snapshots it captures are yours and that you remain in control of your privacy.

Concerns are mounting over OpenAI's ChatGPT as well: its soon-to-launch macOS app has demonstrated screenshotting abilities that privacy experts say could result in the capture of sensitive data.

The US House of Representatives has banned the use of Microsoft Copilot among staff members after its Office of Cybersecurity deemed the tool a risk to users because of the threat of leaking House data to non-House-approved cloud services.

Meanwhile, market analyst Gartner has cautioned that "using Copilot for Microsoft 365 exposes the risks of sensitive data and content exposure internally and externally."

Last month, Google was forced to make adjustments to its new AI Overviews search feature after screenshots of its bizarre answers to queries circulated online.

At the same time, there’s the threat of AI systems themselves being targeted by hackers. “Theoretically, if an attacker managed to gain access to the large language model (LLM) that powers a company’s AI tools, they could siphon off sensitive data, plant false or misleading outputs, or use the AI to spread malware,” says Woollven.

If access privileges have not been locked down, these tools could also be used to search for sensitive data. Curious employees might look up pay scales, M&A activity, or documents containing credentials, all of which could be leaked or sold.

Adriance said that people who opted into the test were shown AI answers for a wide range of queries, while those who did not opt in were not. According to BrightEdge, Google appears to be keeping AI Overviews in the areas where it believes they will be most useful. The majority of healthcare-related keyword searches still return an AI answer; sample queries in BrightEdge's data included "foot infection," "bleeding bowel," and "telehealth urgent care." AI Overviews also return for more than 22% of ecommerce queries, while restaurant and travel queries bring them up only around 2% of the time.

Jim Yu, BrightEdge’s founder and executive chairman, says the drop-off suggests that Google has decided to take an increasingly cautious approach to this rollout. “There’s obviously some risks they’re trying to tightly manage,” he says. These early problems may be a problem, but Yu is generally optimistic about how the tech titan is going to address them.
