Nightshade — A tool to poison AI models trained on artists’ work without permission
One such tool, Nightshade, won’t help artists combat AI models that have already been trained on their creative works. But Ben Zhao, the leader of the University of Chicago team that built the tool, claims it will be the first to break future models.
“So it will change the image of a dog in such a way that it still looks like a dog to you and me, but to the AI model it looks like a cat,” Zhao says.
AI models like DALL-E or Stable Diffusion usually identify images through the words used to describe them in the metadata. For instance, a picture of a dog pairs with the word “dog.” Nightshade breaks that pairing, Zhao says.
“You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it’s literally trying to confuse the training model on what is actually in the image,” Zhao says.
So Casey, we’ve been talking a lot on the show about models and copyright, this issue of whether artists and writers and other people whose works are sort of ingested by large AI models have any recourse when it comes to getting paid or credited, or even potentially suing the companies that make these models.
There is also a tool called Glaze, which subtly alters an artwork’s pixels to make it hard for an artificial neural network to mimic a specific artist’s style.
A sign from Washington that we are watching you: what the executive order signals, and what it actually does
And then there’s Kudurru, created by the for-profit company Spawning.ai. The resource, currently in its early stages, tracks scrapers’ IP addresses. Artists can block them, or send them back unwanted content, such as a Rickroll — the internet prank of serving up British singer Rick Astley’s music video — or an image of an extended middle finger.
Spawning co-founder Jordan Meyer says the goal is to let artists communicate differently with the bots and scrapers used for AI training than they do with the scrapers that bring their work to fans.
After discovering that their name was being used as an invocation in AI prompts, McKernan found that more than fifty of their paintings had been scraped into LAION-5B, a massive image dataset used to train models.
The class-action lawsuit alleges the companies used online images to train their systems without compensation or consent. The case is ongoing.
In a world of slow-moving lawsuits and even slower-moving legislation, McKernan says, the new digital tools offer a defense that feels aggressive and immediate.
The big takeaway from this is that if you don’t know anything else about the executive order, it’s basically a sign from Washington that we are watching you. Right? This will not be like social media, where you have 10 years to build and spread your products all over the world before we start holding hearings and holding people accountable. We are looking at this in the early days of generative artificial intelligence.
Putting the tools to the test: how effective are the new defenses in artists’ arsenals?
“So, for now, this is kind of like, alright, my house keeps getting broken into, so I’m gonna protect myself with some, like, mace and an ax!” That, they say, is the kind of defense the new tools provide.
“These types of defenses seem to be effective against a lot of things right now,” said Gautam Kamath, who researches data privacy and artificial intelligence at Canada’s University of Waterloo. “But there’s no kind of guarantee that they’ll still be effective a year from now, ten years from now. Heck, even a week from now, we don’t know for sure.”
Social media platforms have also lit up lately with heated debates questioning how effective these tools really are. The tools’ creators often join those discussions themselves.
“This is not about writing a fun little tool that can exist in some isolated world where some people care, some people don’t, and the consequences are small and we can move on,” says the University of Chicago’s Zhao. “This involves real people, their livelihoods, and this actually matters. So, yeah, we will keep going as long as it takes.”
But Yacine Jernite, who leads the machine learning and society team at the AI developer platform Hugging Face, says that even if these tools work really well, that wouldn’t be such a bad thing.
Jernite says data should be accessible for research and development, but AI companies should also respect artists’ wishes to opt out of having their work scraped.
“Any tool that is going to allow artists to express their consent very much fits with our approach of trying to get as many perspectives into what makes a training data set,” he says.
Some models shared on the Hugging Face platform were trained on work by artists who spoke out against the practice and asked that the models be removed. The developers don’t have to comply.
Still, many artists, including McKernan, don’t trust AI companies’ opt-out programs. “They don’t all offer them,” the artist says. “Many of them don’t make the process easy.”
What we know, and don’t know, about GPT-4 and a future GPT-5
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript, and email [email protected] with any questions.
But do I want the government saying, oh, if you’re going to train the largest language model yet, we’d like you to tell us? I do lean on the side of, like, yes, like, let’s tell someone. I want someone paying attention to this. This is kind of where I am. Where are you?
So I don’t want the government to get so far out ahead of things that it is prevented from doing all the things that Ben Buchanan just talked about, like helping to address climate change, for example, using the power of AI. If the government could do it, that would be a great thing. I don’t think we need to slam on the brakes so hard that we don’t allow for the possibility of that.
That is the first part of it. The second part of it is the sort of mythical-future GPT 5 and all the equivalents where all the other companies — we just don’t know how good they’re going to be. We know there have been leaps in each version of these models.
What does the next massive leap look like? Humans are not good at anticipating exponential change. Our brains think linearly. And so if we’re one step away from an exponential change, I’m just telling you, it’s like my brain is not good about understanding all of what that is going to mean.
But then, a week will go by, and my everyday life looks the same as it has for a while, and I think, well, maybe society has actually just sort of adapted to this, and this isn’t quite the disruptive change that I was thinking. It is difficult to know, in the moment, what the future of all of this will be. And so I try to just keep my eyes focused on, well, what happened today?
Well, let me take the first question first: has something made me less nervous? I kind of go back and forth on this. It depends on the day. Sometimes, when I use GPT-4, it does something so great that it makes me think the future is going to look very different, and I wonder, what do we do now?
I’m hearing you discuss the need for balance and to try to find the green shoots of what AI could do. So has your view changed on AI, or has something in AI itself changed in a way that makes you less nervous? And do you actually think that more regulation is needed?
Absolutely. So let’s talk about some of the controversies around this executive order. You mentioned the computing threshold above which companies have to notify the government that they’re training an AI model; that provision has gotten a lot of blowback from the industry. So tell us what you’re hearing.
They still have a lot to do. Again, the policy reads as very sweeping; what it means in practice, I think we’ll have to see how it plays out. But there are good ideas here.
Yes. And I would say that honestly, this was a pleasant surprise. Right? Like, I write about technology policy and proposed regulations a lot, and I don’t like a lot of what I see. When he was campaigning to be president, President Biden said that we should get rid of Section 230 of the Communications Decency Act, which would mean, effectively, that Google and every technology platform was responsible for everything people posted on its platform, which I just think would be bad for a lot of reasons we don’t have to get into. But like, to me, that was the worst kind of tech policy, because you’re painting with the broadest possible brush, you’re ignoring any positive use cases, and you’re just sort of legislating with a giant hammer. This is not that approach. These are people who have done the homework, who have been very thoughtful.
They are angry that they might have to give the government information about what they are doing, which is similar to what companies have to do when they are developing new drugs: those have to be approved. Why is this different from that?
It was clear that there would be a point where the government stepped in. Now, that arrived quicker than I would have thought, right? Because the government is usually pretty sclerotic and slow-moving.
They want the government to help protect against some of the worst-case scenarios. The open-source people — I think I struggle to understand what they believe. I don’t think they’re saying that there’s no risk attached to it.
Some of them are. I am receiving text messages from people who say that you can find out how to create a bioweapon by searching on the internet, and that if you think artificial intelligence makes things any easier, then you are a fool. This is what people are talking —
Proliferation or democratization: the open-source debate, and a conversation at the White House about the new AI executive order
I think that having a discussion about safety with this stuff is really difficult, because I don’t use these tools for evil, and it is difficult for me to figure out what the worst case here really is.
Yeah, so you got a very exciting invitation this week to go to the White House to actually talk to some officials there about this new AI executive order. I want to know where my invite was. And I want to know what it was like.
So on this open-source point in particular, I talked to Arati Prabhakar, who directs the Office of Science and Technology Policy. I asked her: does the government want to see more open-source development or more closed development? Here’s what she told me.
If I were still in venture capital, I would say the technology is democratizing. If I were still in the Defense Department, I would say it’s proliferating. They are both true.
Casey Goes to the White House + The Copyright Battle Over Artificial Intelligence + HatGPT
According to White House deputy chief of staff Bruce Reed, Biden’s concerns about AI also grew after watching “Mission Impossible: Dead Reckoning, Part I” at Camp David, a movie where a sort of mysterious AI entity wreaks havoc on the world. Casey, what do you think about this? Did “Mission Impossible” come up in your conversations with President Biden’s advisors?
I think it’s even more than green shoots. I think we wouldn’t be so careful in calibrating the policy here if we didn’t think there was a lot of upside. So look at something like microclimate forecasting for weather prediction, reducing waste in the electricity grid, accelerating renewable energy development, and the like. There’s a lot of potential here, and we want to unlock that as much as we can.
And to be honest, that is just an issue where I am trying to learn and listen and read and talk to people. But I’m curious if you have a gut instinct on that.
But this is crazy to me. Because it’s not like these companies and the people running them started sort of hyping up the risks of AI recently, right? Some of them have been talking about this for a long time.
Why do you need to report to the government if you’re training a model bigger than a certain size? The open-source backlash, and where Anthropic, OpenAI and Google stand
They went to the government. They freaked them the hell out. They said, regulate us now, and here is exactly how to do it. And now they are starting to get what they want: they will be the winners who take all, and everyone else will be left out.
It’s when an industry sets out to ensure that regulations are passed on their own terms. And it sort of pulls the ladder up so that incumbents always maintain the power and challengers can never compete.
I agree, but let me just try to steelman their arguments. These folks in the open-source community believe that what we are seeing is the beginning of regulatory capture.
You have to tell the government if you find anything dangerous in the models you are testing. The people objecting to this are not objecting to anything specific that applies to existing models.
I would include Anthropic, OpenAI and Google in that group. And they’re saying, well, we do see a lot of potential avenues for harm here. So instead of letting anyone download it and go crazy, we are going to build it ourselves, because we want it to be great. We’re going to do a bunch of rigorous testing. Not everyone is going to be allowed to play with it.
With open-source technology, anyone can analyze it. You can look at the code. You can fork it, change it to your liking. And the people who love it say this is actually the safest way to do this.
If you get thousands of eyes on this, you are going to eventually build safer and better tech, and we are all going to be better off. Right? And then you have the people who favor a closed approach.
And this debate has been swirling in Silicon Valley for months now, but it really seems to have come to a head over this issue of having to report to the government if you are training a model larger than a certain size. So let’s just talk about that. I don’t get the backlash on this.
It is not telling developers that they are not allowed to make a very large model. It does not mean you cannot make a large open-source model. All it’s saying is, if you’re building a model that is bigger than a certain size, 10-to-the-26th-power FLOPS, or —
When do you have to tell the federal government? The compute threshold in the executive order
It’s so fun to say “FLOPS.” And the next time one of my friends has a huge failure, I’m going to say, it’s giving 10-to-the-26th-power FLOPS. I’m saying, you FLOPSed so hard, you’re going to have to tell the federal government, bitch.
And it turns out that one threshold for when these requirements kick in is when a model has been trained using an amount of computing power that is greater than 10-to-the-26th-power floating point operations, or FLOPS. I looked this up. The number is 100 septillion.
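For anyone who wants to check that conversion (in the short scale, a septillion is 10^24, so 10^26 is 100 septillion), here is a quick arithmetic sketch in Python:

```python
# The executive order's reporting threshold, as described above:
# 10^26 floating point operations.
threshold_flops = 10 ** 26

septillion = 10 ** 24  # short-scale "septillion"

print(threshold_flops // septillion)  # 100 -> "100 septillion"
print(f"{threshold_flops:,}")         # 100,000,000,000,000,000,000,000,000
```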
Absolutely. The industry, I would say, was surprised by this. The people I talked to at AI companies — they did not know that this exact thing was coming. They weren’t sure what the threshold would be. Would the requirements apply to all models, big or small?
I mean, from what I understand, they’re made out of recycled plastic, so I don’t know why they’re feeding them to children. Have you ever tasted those things? Good Lord.
Yes, literally. It was the only thing remaining at Target. So we bring home these Dots, and I’m testing the candy, as one does. So I bite into a Dot, and a tooth comes out.
You know, I could recommend a lot of good costumes for that and Phantom of the Opera comes to mind. Really, anything with a mask that covers at least half your face.
You know what’s so funny about this is that every year, there is a panic around Halloween candy. You know, it’s like, well, you’d better open up every single wrapper and make sure nobody’s stuck a razor blade in there. And we always laugh and say people need to calm down. But you actually had to go to the emergency room for dental work after you bit into some candy.
Yes. Yes. It was not good. These Dots are too sticky. I am calling for the Biden administration to outlaw Dots because we have to do something.
The President’s dog, and other things I knew about the White House before walking in
Because here’s the thing — I went to the White House once when I was a child, as part of a school tour. It was very exciting. I remember very little of it. Here are some things I know about the White House. It is where the president lives.
I know there’s something called the Oval Office and something called the West Wing. I also know that until recently, there was a dog at the White House, named Commander, who bit people.
[LAUGHS]: I was — let me tell you, that dog was on my mind when I walked onto the grounds. I’m saying, where is that dog? It was important for me to meet him and pet him. Honestly, I think it would have been even better if the President’s dog had bitten me.
No, that’s right. It is funny. You mentioned the treats. Because we went on the Monday before Halloween, so Monday of this week. I went down with our producer. We took in the sights and sounds. And as we walk onto the grounds of the White House, there are children in costumes everywhere.
I saw a lot of costumes, but no dog: a Cheeto, a Transformer, a lot of Barbies. And everywhere we went throughout the executive office building, the staffers’ offices had been transformed into some sort of, you know, Hollywood intellectual property, is I guess what I would say. There was a room for Barbie. There was a Harry Potter room.
The folks in the White House digital office had transformed their office into something called the Multiverse of Madness. And when you took a left, you were standing in Bikini Bottom from the SpongeBob SquarePants universe. There were bubbles all around.
And I’m setting this scene, because you have to understand, I am there to listen to the President talk about the most serious thing in the world. And while we were interviewing his officials about the executive order, we’re literally hearing children screaming about candy. So it was an absolute fever dream of a day at the White House.
So amid all of the shrieking children and the costumes and the Multiverse of Madness, there was actually, like, a signing ceremony with the President where he did put this executive order into place.
That’s right. Yeah. When we went over from the office building to the East Room, there were a lot of people there who advocate on these issues. The President came out, and so did the Vice President. Chuck Schumer, the Senate majority leader, was there.
Yeah. And we could dive in in any number of places. I think the part of the order that has gotten the most attention is the aspect that attempts to regulate the creation of next-generation models. So the stuff that we’re using every day — the Bards, the GPT 4s — those are mostly left out of this order.
If there is a Claude 3 or a GPT-5, it will fall under the rubric that the President has established here. And when it does, it will face some new requirements, starting with: the companies will have to inform the federal government that they have trained such a model, and they will also have to disclose what safety tests they have done on it to understand what capabilities it has. So to me, that is the big screaming bullet point that came out of this: OK, we actually are going to at least put some disclosure requirements around the Bay Area.
I am not a lawyer, but I feel like I have a good grasp on one of the issues at stake, which is: who does the liability fall on? So if I’m using Photoshop and I create a counterfeit picture of money, print it out, and try to use it in a store, the liability falls on me, not on Adobe.
Right, whereas a program built only for counterfeiting money would be harder to defend. Because Photoshop can do all these other things, courts are less likely to see it as an infringing tool. Is that what you’re saying?
So I see why you say that’s strange, but in fact, that’s exactly how you would make a general-purpose tool. A program that only draws Disney characters is one thing; a program that is useful for many other things is more obviously a neutral tool.
The larger your model is, and the more data it has trained on, the more potentially protected you are from some of these claims. It is sort of a strange incentive: if you want to win lawsuits brought by individual creators or publishers, you should make your model big and gather as much data as you can, because then they can’t say, hey, that looks a lot like what I made.
The court ordered that case to go to a jury. Westlaw owns the data set on which those things were trained. But that also makes my point that these licensing deals are not going to help individual authors: the people who wrote the summaries at Westlaw won’t see any more money even if Westlaw prevails.
There are situations where, for example, if you train entirely on one artist’s work, that might well be different. That is a design choice. Westlaw brought its case because another company trained on Westlaw’s own summaries of court decisions.
Would the money even buy you a Starbucks? How much artists can expect from models that compensate them
I suppose you could either distribute the money randomly, or distribute it based on the fraction of the time an output looks close to your image. And I would just ask: are you going to be able to go to Starbucks on that money? I wouldn’t place too many bets.
Here is the thing. I am skeptical of these models, because even if the big publishers run them, they are not able to deliver much of the money to the authors or the artists. Because the fact of the matter is, a lot of the time, the output will not look like anything in the data set.
You know, at the same time, we’re starting to see companies like Adobe put out models that do compensate artists. It does seem that the moral and ethical case for using a tool like that is strong even if there isn’t a strong legal requirement to. And so I wonder if maybe the long-term future here is just that we have to rely more on moral arguments and shame to get the world we want than on copyright laws that are less well suited to the purpose.
I think a development like that is powerful even though it is not based on any legal requirement. So I would say there are definitely things you can do in terms of getting paid. But the classic problem is that only publishers with big piles of works can ever hope to get paid; it is not worth the hassle to license on an individual basis.
Voluntary opt-outs, fair use, and the Associated Press licensing deal
Well, what I would say is, you’re seeing this rise of voluntary opt-outs, very similar to what developed with search engines. There’s a robots exclusion header that Google respects: although it’s probably fair use to scrape for many purposes, they still won’t do it if you opt out.
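That robots-exclusion mechanism is the robots.txt convention, and Python’s standard library can parse it. Here is a minimal sketch; the GPTBot rules below are illustrative, not taken from any real site:

```python
from urllib import robotparser

# A hypothetical robots.txt that opts one AI crawler out
# while leaving the site open to everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

url = "https://example.com/artwork.png"
print(parser.can_fetch("GPTBot", url))     # False: the AI crawler is opted out
print(parser.can_fetch("SearchBot", url))  # True: everyone else may crawl
```

As the discussion notes, compliance is voluntary: the parser only tells a well-behaved crawler what the site has asked for; nothing technically stops a scraper from ignoring it.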
People will say, “Oh, you licensed this, so you have to license everything.” The law has historically not been receptive to that argument. Litigation is expensive, and a lot of fair use cases say that being willing to negotiate to avoid a really expensive lawsuit doesn’t mean the use isn’t fair.
I was struck a few weeks ago, when OpenAI licensed articles from the Associated Press. Many of these articles were already online and could have been used to train future models, so it would make sense that OpenAI would have used some of them. If you’re a lawyer for OpenAI and they say, we want to license that data, as a lawyer, are you thinking, hmm, this could create a perception that this work has value and that we should be paying to license all of it? Or are the laws robust enough that it can do that as a goodwill gesture without incurring any more liability?
This is still very early in the game. The fair use analysis for the training part of the claim is different from the analysis for the other claims, which are about the outputs. And so I would say nobody should really rest on their laurels right now.
What is and isn’t a copyright violation, and how do you enforce your rights?
Can you enforce your rights over this? Well, it’s America. You can always file a lawsuit. Right? Can you win? That’s a very different question. Can you afford to go to court? A completely different question.
So before we get into talking about this specific case, I want to understand how a copyright law expert thinks about AI: these AI image generators, the language models we’ve been hearing so much about, and all the copyright questions that have come up around them. When things like Stable Diffusion and DALL-E rose to prominence last year, what did you think?
Right. We expanded it to cover things like derivative works: if you are the author of a book, you have the right to make a movie or a translation of that book.
Right. Copyright, at least when it was first conceived, was about making literal, identical copies of something you do not own and directly profiting from them.
We have been discussing what is not a violation of the Copyright Act. It might help me just to remind myself, what is a copyright violation? Give me some cut-and-dried cases of things that are against the law.
Are the images really inside the model? And who is responsible for an infringing output?
So I would say, in some sense, though, it doesn’t really matter in the traditional fair use analysis. Because courts have generally said that if you’re doing something internally that involves a lot of copying, but your output is non-infringing, that’s a strong case for fair use.
It’s a little perplexing. I am also not a programmer, but when you talk to them, it sounds fairly consistent: no, there aren’t pictures in the model. There’s a whole bunch of data. And there are these unusual cases, usually when the data set contains 500 versions of “Starry Night,” where the model might get pretty good at producing something that is a lot like “Starry Night.” But the average image is not in there and can’t be gotten out, no matter how hard you try.
The artists in this case are making that argument. Now, the companies and people who work in AI research have said, like, this is not actually how these models work. What do you think about that argument?
But we have a robust system for attributing responsibility to the person who tried really hard to find the infringing copy on Google. So there are definitely some principles of safe design. They aren’t perfect. But if it took you a 1,500-word prompt to make something that looked like Sarah Andersen’s cartoons, then saying “I tried really hard and I was able to make something that looked like Sarah Andersen’s cartoons” changes the question of who is responsible for it.
So there’s lots of circumstances where, for example, people can use Google and say, I want to watch “Barbie.” And although Google has made reasonable efforts to make that not the first thing that you get, it’s not impossible to figure out how to use Google to watch “Barbie” without authorization —
And so part of the answer is, well, is the output actually infringing? Right? So if it’s not, then no. And if it is, then actually, I want to start asking questions. Who is responsible for it?
I think the question for me is, is that truly analogous to a situation where I’m a very popular artist, people love to type my name into Stable Diffusion, you get images that look like my life’s work, and I get $0 for that?
Google as precedent: indexing the web, Google Books, and fair use
And in many cases, it is making copies of those pages. It is caching those pages, so that it can serve them up faster. That is all intellectual property of one sort or another. But when you enter a query into Google, that intellectual property is not reproduced exactly in the result that gets spit out.
So I think this is an interesting analogy to think about for a minute. If I am hearing you right, you say that when you think about what Google does, it creates an index of the web, right? It looks at every single page.
The idea of doing something new with existing works is not really new, I think. The question is, of course, whether we think that there’s something uniquely different about LLMs that justifies treating them differently. So that’s where I end up.
So they made a lot of copies of stuff that wasn’t on the internet. So that’s the Google Books project. This is basically all fair use according to the courts.
I think that we have a set of tools to deal with this. And of course, you can disagree with them. But the background is, of course, the rise of the internet and Google looming large over everything.
Prompts, serendipity, and who owns an AI output
I want to know what you think about that argument. Writers are mad that their books have been used to train artificial intelligence language models, and how these models are trained can have a big effect on the copyright questions around them.
There is room for accident and serendipity in human creation. But if the result wasn’t within your contemplation, there is a point at which the serendipity is no longer yours.
If they all look that different, did the prompt really specify enough to connect you to the output? The second question is: what about the outputs you reject? If the prompt alone were enough to get a copyright, then when you look at one and say, no, that’s not what I wanted — are those still yours?
So I guess what I would say is I’m still mostly of the opinion that the prompt alone shouldn’t count, although you can find people who disagree. My best pitch for why is that you often get multiple outputs that look quite different from each other. And that raises two questions I’m wondering about.
But these days, people are writing these meticulous prompts. It’s a banana that is dressed like a detective in a 1940s noir movie, but he’s at Disneyland, right? It appears as though the output of that does have more human authorship to it. But I’m not a lawyer. Like, in your view, is that all sort of the same thing?
At the risk of derailing us, I’m really interested in this question. I can see your point of view: if I type “banana” into DALL-E and it produces a picture of a banana, I can see the argument that I didn’t have much to do with any of that and shouldn’t have a copyright.
Is it the same as giving a copyright to a security camera that is running 24/7? And you know, although you sometimes do have to draw lines, that’s not unknown to the law, and we can just decide what our rules are going to be without really disrupting anything, in part because, most of the time, the question of whether a human is sufficiently involved doesn’t come up.
So I thought that copyright had the tools to handle this, that these are pretty conventional questions. If people decide that we need to change the laws, we have done it before, so it’s quite possible that we could fruitfully get a new law. But we have established principles right now, and I don’t think they break when confronted with AI.
So that totally surprises me, right? I feel like when we’ve talked about this on the show, it has been in the context of, wow, this seems really new. But what about it struck you as conventional?
[LAUGHS]: Yes. We brought in Rebecca Tushnet. She teaches at Harvard Law School. She specializes in the First Amendment, intellectual property, and copyright law. I also read, according to her bio, that she is an expert on the law of engagement rings —
Yes. On one hand, it’s true. But on the other, the core claim, the one that you mentioned at the top of this segment, is allowed to go forward. And so we are going to see these two sides hash it out, at least a little bit, about whether the artists have been wronged here in a way that can get them some money.
Yeah, this feels like one of the big questions in AI right now. We're using these tools, and we're thinking, hmm, on some level, I actually helped make this thing, without my consent. Where's my cut?
I will wear a tie next time. (LAUGHING) One of our minders asked one of the producers if most people in the White House wore a tie. And the man looked very uncomfortable, because I think he wanted to not embarrass me, but he was like, yeah, pretty much everybody wears a tie.
I mean, here is the thing. The government can be pretty awesome, even if you don't work for the federal government. As a kid, you ingest so much mythology about American history and democracy and everything, and then you are in the room and you can see it happen. So yes, at the risk of sounding cringe, yes, I did enjoy my trip to the White House and watching democracy in action.
I think that is true. But there is still reason to hope, I think, in this executive order. For example, it talks about using the Department of Commerce to try to develop content authenticity standards, for the very meaningful reason of wanting to ensure that when the government communicates with its citizens, those citizens know that the communication actually came from the government. That is a big problem for the government. It might not be a big problem today, but it could be in a few years, so the government is getting ahead of it. The hope is that by the time this stuff becomes more serious, we are prepared. It does similar stuff around the possibility of bioweapons. So I do think the smart thing here is, they're trying to identify what seems like it might become easy to do with a much more powerful version of this technology, and start developing mitigations today.
If we know one thing about the history of regulation, in this country at least, it is that often, the biggest regulations are passed in the wake of truly horrendous damage. Right? It took the financial markets collapsing in 2008 for Dodd-Frank to be passed to regulate the banking system. A lot of our labor laws and labor protections came after things like the Triangle Shirtwaist Factory fire, when people died because there were not adequate safety protections at their workplace.
"Hard Fork" is produced by Rachel Cohn and Davis Land. We had help this week from Emily Lang. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Rowan Niemisto. Original music by Elisheba Ittoop, Sophia Lanman, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at [email protected].
The ride was very smooth, so I was confused for a moment. I was like, should I be expecting turbulence? Is this the time to strap in extra tight? What's going on here? All right, that is the end of HatGPT.
This had become a beloved pastime for the citizens of this fair city. And now, well, if you can’t find a Waymo, you’re out of luck. It is true. Well, I did take a Waymo this week, and I noticed something new in the Waymo, which is that they now come with barf bags.
I don’t think it’s for turbulence. I think it is for people who have been drinking. There must be a story behind this. If you vomit in a regular ride-hailing vehicle, the driver has to clean it up and charges you a fee.
Oh, god, no. Here is the thing. I haven’t talked to the regulators, and I don’t know how they’re thinking about this. But I think it’s clear that they are applying much stricter scrutiny to self-driving cars than they ever would to the terrifying murder machines that everybody drives around in all day. I hope the situation gets resolved quickly, because where are San Franciscans supposed to have sex now?
The Safe Street Rebels have won. Like, this was the future liberals wanted, and now we are left without these cars. And this particular accident is very controversial. My understanding is that the victim of this incident was hit by another car first.
"The decision affects Cruise’s robotaxi services in Austin, Texas, and Phoenix. It’s also pausing non-commercial operations in Dallas, Houston, and Miami." Now, this came after Cruise’s license to operate driverless fleets was suspended by the California DMV, which cited an October 2 incident in which a Cruise vehicle dragged a San Francisco pedestrian 20 feet after a collision.
All right, and now, we actually want to poll our listeners. Who do you think is more at fault? Do you think it was the humans or the computer? Please vote on the AI-generated poll that will be underneath the article.
I love this story. Microsoft is being accused of damaging the reputation of The Guardian with an AI-generated poll that appeared next to a news article about a woman’s death. So this is very sad. "The Guardian" wrote a story about the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week. The story was picked up by Microsoft’s news aggregator, and because Microsoft has gone all in on artificial intelligence, it came up with a poll.
It is so out of place. Oh, my god. Imagine living a decent life. You accomplish some things, your obituary gets written up in a major newspaper, and then next to it there’s an AI-generated poll: Was Casey a good person? Sound off in the comments!
I have a theory about why the use of generative AI in news always ends badly. You know what I mean? You have this idea and you think, oh, this is so cheap, and it’s so futuristic; we will show how innovative we are if we put it into practice. And in practice, it always turns out to be crap. So it is here.
We know that Microsoft is all in on artificial intelligence right now. So maybe they’re slapping AI features onto the stories that they’re aggregating. But do not do this for stories about deaths. That should be a very easy no.
Oh, god, I am so angry. This sucks so much. I don’t know how this could have happened, right? Like, Microsoft runs msn.com and maybe some other news aggregators. It pulls in stories from all over the place.
And next to this article, it put a poll. The poll asked, what do you think is the reason behind the woman’s death? Readers were then asked to choose from three options: murder, accident, or suicide.
Yeah. I’m going to make a statement: I hope the next "Mission: Impossible" movie inspires Congress to pass a law and gets us to actually do something. It would be a great thing for this country.
He stopped and was like, forget your family, it can fool you. He says, I look at these things, and I think, when the hell did I say that? That’s actually a direct quote.
It’s one of the things that these companies argue: we just make the tools. Users can use them as they please, and even if what they do is illegal, we are protected either way. Is that a sound legal argument?
Yes, in general. And so some of my questions are about the tweaked models that people are making, say, to generate porn or other infringing material. But in general, if users are taking the models and then tweaking them themselves to do that, that is on them.
Well, artists and writers have been living on easy street for a long time. Now that new technologies have brought them down a peg, they are going to have to work for a living. And I would like to apologize to the artists and writers out there for that one.
You can’t solve a problem of economic structure by handing out rights to somebody who doesn’t actually have the market power to exercise them. Because the publisher is still going to say, well, if you want to publish with me, you’ve got to give me all the rights. I believe we need to talk about the way we pay artists, rather than thinking we can fix it with copyright.
Right. Well, interesting. I hope the courts do not change our fair use doctrine and push these companies out of business. But —
They had a jar of them in the old Facebook offices. And so whenever I would go down there, on the way in and out, I was always, like, grabbing a couple of Peppermint Patties.
Do you think they had a dossier on you, like, the "Platformer" guy is in love with Peppermint Patties? Get a big bowl out so he’ll be more favorable to us.
No, those places buy a lot of things; they don’t need to keep a paper trail. You walk in, they ask, what is your favorite food? Lobster bisque? Yeah, we have that.
There is a Hat GPT hat. We did also get some YouTube comments saying that it looked like a budget hat that was not professionally designed, and I would like to tell you that you are correct. It's something I made in about five minutes on vistaprint.com. I think I paid $22 for it. So if anyone wants to make us a better Hat GPT hat, our inboxes are open.
Absolutely. And hopefully, the hat will become more and more elaborate and ornate over time, and that’s how you’ll know that the show is healthy and thriving.
Hat GPT, of course, is the game where we draw news stories about technology out of a hat, and we generate plausible-sounding language about them until one of us gets sick of the other one talking and says, stop generating.
"AI-generated Seinfeld is broken, maybe forever." This one's from 404 Media. "Nothing, Forever" is the AI-generated "Seinfeld" parody that has been running on a live stream for many months.
There is something beautiful about a show that was famously about nothing being recreated as an artificial intelligence project, one that became more and more popular as it went.
[CLEARS THROAT]: All right, Kevin. This next one is about something called the Del Complex and its "BlueSea Frontier Compute Cluster." [LAUGHS]: Is it really a floating platform? Are you familiar with this?
[LAUGHS]: So I saw this going around on social media the other day. I think it's an alternate reality corporation; I think it's an art project. But what these people are saying is that they are so angry over the Biden administration's executive order that they are going to build a floating compute platform in international waters.
I mean, it would have been the thrill of a lifetime if this happened. Sadly, the project was canceled, and I never found out why. For a long time, I would smile whenever I heard the words "Google barge." It made me so happy.
[LAUGHS]: OK. It would be like a movie where people are waving at the ships as they arrive, but it would just be a giant store pulling up with new phones.
You should stop generating. All right. This one says, “Joe Biden grew more worried about AI after watching ‘Mission Impossible: Dead Reckoning,’ says White House Deputy.” This is from Variety.
And this is apparently from Bruce Reed, the White House deputy chief of staff, who told The Associated Press that Joe Biden had grown, quote, "impressed and alarmed," after seeing fake AI images of himself and learning about the terrifying technology of voice cloning.
And apparently he deviated from the script when he was giving his remarks, saying that with just a short clip of your voice, AI can produce something that could fool your family.