Everything Google Announced at I/O
DJ Mode for MusicFX: From DeepMind Jams to AI Detection
Developer conferences aren’t exactly known for having an energetic, party-like atmosphere, but thankfully, that didn’t stop Google’s latest hype man. The company’s I/O event this year was kicked off by Marc Rebillet — an artist known in online spaces for pairing improvised electronic tracks with amusing (and typically loud) vocals. He also wears a lot of robes.
That made him the perfect person to demo DJ mode, a new feature in Google’s generative AI music tool, MusicFX. Adam Roberts of DeepMind described the feature as an “infinite AI jam that you can control.”
The tool spits out music based on text prompts, then layers those prompts together to make a track. The music can be changed in real time by adding more prompts to the mix. You can try it out in Google’s AI Test Kitchen; MusicFX is still in development a year after it was first introduced.
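MusicFX’s DJ mode has no public API, but the interaction model Roberts described, weighted prompt layers mixed in real time, is easy to sketch. Everything below is a hypothetical illustration, not Google’s implementation:

```python
# Illustrative only: models DJ mode's idea of a track built from weighted
# text prompts that can be faded in and out while the music plays.
# The DJMix class and its methods are hypothetical, not a real API.
from dataclasses import dataclass, field


@dataclass
class DJMix:
    """A live mix in which each text prompt contributes one weighted layer."""
    layers: dict[str, float] = field(default_factory=dict)

    def add_prompt(self, prompt: str, weight: float = 1.0) -> None:
        self.layers[prompt] = weight

    def set_weight(self, prompt: str, weight: float) -> None:
        # Fading a layer up or down is how the jam changes in real time.
        if prompt in self.layers:
            self.layers[prompt] = max(0.0, min(1.0, weight))

    def describe(self) -> str:
        return " + ".join(f"{p} ({w:.0%})" for p, w in self.layers.items())


mix = DJMix()
mix.add_prompt("lo-fi drum loop")
mix.add_prompt("funky synth bass", weight=0.5)
mix.set_weight("lo-fi drum loop", 0.8)
print(mix.describe())  # lo-fi drum loop (80%) + funky synth bass (50%)
```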
Rebillet is best known for his song and sound clip “Night Time Bitch,” and he has over 2 million followers on TikTok. He opened I/O by climbing out of a giant coffee mug, yelling at the nerds to wake up, then firing rainbow-colored robes that say “Loop Daddy” on the back into the crowd.

AI Overviews in Google Search, and Asking Questions With Google Lens
Even though some very good (and slightly more private) alternatives exist, Google is still the most prominent player in the search industry. Now Google’s newest artificial intelligence updates are reshaping its core product.
A new feature called multi-step reasoning lets Search dig through several layers of information about a topic when your query has some contextual depth. Google used trip planning as an example, showing how a search in Maps can surface hotels and set transit itineraries, then suggest restaurants and help with meal planning for the trip. Results can also be narrowed to specific cuisines or dietary needs, like vegetarian options. All of this info is presented to you in an organized way.
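Google didn’t explain the mechanics on stage, but the behavior resembles query decomposition: a broad request is split into ordered sub-queries whose answers feed the next step. The sketch below is a hypothetical illustration of that pattern, not anything Google has published:

```python
# Hypothetical sketch of multi-step reasoning as query decomposition.
# Neither the function nor the sub-query format comes from Google.
def plan_trip(destination: str, dietary_need: str = "") -> list[str]:
    steps = [
        f"hotels in {destination}",
        f"transit options between the airport and hotels in {destination}",
        "restaurants near those hotels",
    ]
    if dietary_need:
        # User context can refine later steps, e.g. vegetarian options.
        steps.append(f"{dietary_need} restaurants near those hotels")
    return steps


for query in plan_trip("Lisbon", dietary_need="vegetarian"):
    print(query)  # each sub-query's answer would be folded into one response
```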
We also got a look at AI Overviews, the short AI-generated summaries that answer the questions you type into the search box. These summaries appear at the top of the results, so you may not even need to visit a website to get the answers you’re seeking. Publishers and websites worry that if search answers questions directly, without users clicking through to links, their traffic and visibility in results will suffer. Nonetheless, the newly enhanced AI Overviews are rolling out to everyone in the US starting today.
Lastly, we saw a quick demo of how users can rely on Google Lens to answer questions about whatever they’re pointing their camera at. (Yes, this sounds similar to what Project Astra does, but these capabilities are being built into Lens in a slightly different way.) In the demo, a woman trying to get her turntable to work pointed her camera at it; Lens told her the tonearm needed adjusting and offered a few options for video and text instructions. It even correctly identified the make and model of the turntable through the camera.
One of the last noteworthy things we saw in the keynote was a new scam detection feature for Android, which can listen in on your phone calls and detect language that sounds like something a scammer would use, like asking you to move money into a different account. If it hears you getting duped, it’ll interrupt the call and give you an on-screen prompt suggesting that you hang up. Google says the feature runs entirely on-device, so your phone calls aren’t sent to the cloud for analysis, making it more private. (Also check out WIRED’s guide to protecting yourself and your loved ones from AI scam calls.)
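Google hasn’t detailed the detection model (the keynote said the feature runs locally on Gemini Nano), so the snippet below is only a toy stand-in: a phrase matcher over a transcribed call. The phrase list and function names are hypothetical:

```python
# Toy stand-in for on-device scam screening; not Google's actual model.
# Matching runs locally, mirroring the privacy claim: nothing leaves the phone.
SCAM_PHRASES = (
    "move your money",
    "a different account",
    "buy gift cards",
    "verify your social security number",
)


def looks_like_scam(snippet: str) -> bool:
    """Return True if a transcribed call snippet matches a known scam pattern."""
    text = snippet.lower()
    return any(phrase in text for phrase in SCAM_PHRASES)


if looks_like_scam("To keep it safe, move your money to a different account"):
    print("Likely scam: consider hanging up.")  # the on-screen prompt
```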
Google has also expanded SynthID, its watermarking tool for identifying media made with AI. SynthID leaves a watermark that’s imperceptible to the naked eye but can be detected by software that analyzes the pixel-level data in an image. The latest update extends that scanning to Veo-generated videos, and the tool will be released as open source later this summer.
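DeepMind hasn’t published SynthID’s full technique, so as a rough illustration of the pixel-level idea, here’s a toy least-significant-bit watermark. To be clear, this is not SynthID’s method; LSB marks break under compression or cropping, while SynthID is designed to survive such edits:

```python
# Toy pixel-level watermark: embed a bit pattern in each pixel's least
# significant bit, invisibly to the eye, then recover it from raw pixel data.
# Purely illustrative; SynthID's real scheme is far more robust.
import numpy as np

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical ID


def embed(image: np.ndarray) -> np.ndarray:
    flat = image.flatten()
    bits = np.resize(PATTERN, flat.size)  # tile the pattern across all pixels
    return ((flat & ~np.uint8(1)) | bits).reshape(image.shape)


def detect(image: np.ndarray) -> bool:
    bits = image.flatten() & 1
    expected = np.resize(PATTERN, bits.size)
    return np.mean(bits == expected) > 0.99  # tolerate a few noisy pixels


img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(detect(embed(img)), detect(img))  # True False (with near certainty)
```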