Strollers and the anxiety of automation are related
What artificial intelligence could mean for the end of the essay: a computer scientist and academic-integrity researcher weighs in
Students have long been able to hire third parties to write essays for them, which is why some researchers think the change is not such a game-changing one. “It doesn’t necessarily add much functionality that wasn’t available to students already if they knew where to look,” says Thomas Lancaster, a computer scientist and academic-integrity researcher at Imperial College London.
“At the moment, it’s looking a lot like the end of essays as an assignment for education,” says Lilian Edwards, who studies law, innovation and society at Newcastle University, UK. Dan Gillmor, a journalism scholar at Arizona State University, told The Guardian that he had fed ChatGPT a homework question he often assigns his students, and that the article it produced in response would have earned a student a good grade.
ChatGPT’s creator, the firm OpenAI, is based in San Francisco, California. The company earlier unleashed GPT-3, a large language model: an artificial intelligence that generates text after being trained on billions of words of data. GPT-3 is the embodiment of a revolution in artificial intelligence, provoking questions about its limits and prompting a host of potential applications, including assisting computer programmers. ChatGPT is fine-tuned from an advanced version of GPT-3 and is optimized to engage in dialogue with users.
Even if this is the end of essays as an assessment tool, that isn’t necessarily a bad thing, says a computer scientist. Essays, he notes, have served two purposes at once: testing a student’s knowledge of a topic and testing their writing skills. “ChatGPT is going to make it hard to combine these two into one form of written assignment,” he says. But academics could respond by reworking written assessments to prioritize the critical thinking or reasoning that ChatGPT can’t yet do. That, he thinks, could encourage students to think for themselves rather than simply responding to essay prompts.
He says that, despite the buzzwords being thrown around, these systems don’t have intelligence in the way humans think about it. “They’re trained to generate a pattern of words based on patterns of words they’ve seen before.”
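That claim is easy to make concrete. What follows is a deliberately tiny sketch, a toy bigram model written in Python purely for illustration; it is nothing like the transformer architecture behind ChatGPT, and every name in it is invented. It does literally what the quote describes: it tallies which words follow which in a scrap of training text, then chains together a statistically plausible continuation, with no understanding anywhere in the process.

import random
from collections import defaultdict

# Toy illustration, not ChatGPT: tally which word follows which
# in the training text, then sample continuations from those tallies.
corpus = ("the essay tests writing skill and the essay tests knowledge "
          "of a topic and the essay rewards clear thinking").split()

following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

def generate(start, length=10):
    # Build a sentence by repeatedly sampling an observed next word.
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no observed continuation; stop here
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))

The real systems replace word-pair tallies with patterns learned by a neural network over billions of words, but the principle the quote points to is the same: plausible continuation, not comprehension.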
How necessary that reworking will be depends on how many people use the chatbot: more than a million people tried it out in its first week. The current version of ChatGPT is free, but it might not remain so, and some students may be put off by having to pay.
She is hopeful that education providers will adapt. “Whenever there’s a new technology, there’s a panic around it,” she says. “It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.”
The friends and family who sent me pictures of themselves in front of the Supreme Court, or pushing strollers on the Brooklyn Bridge, thought that I was living an adventuresome life with my children right alongside me. In my inbox I had photos of a fleet of UppaBaby Vista strollers outside the 92nd Street Y, a suburban garage filled not with cars but with strollers, movie clips of runaway prams, and, more than once, stories about self-driving strollers. A video clip from my husband’s cousin showed a woman jogging, arms swinging freely, beside a stroller that matched her pace on its own. I replied quickly to say how much faster I could run without having to push my Double BOB.
That kind of casualness was a relic of a time before my inbox started to fill with another flurry of emails, this time about ChatGPT. I have been teaching freshman composition for many years, and the news about the new language models, and their role at the intersection of writing and teaching, often made friends and family think of me. Because everyone has a wealth of (often fraught) memories of their own high school years, and because many of my friends now have children around the age of the students my husband teaches, we sometimes find ourselves discussing work in social settings. Just how stressed out are the high school students enrolled in multiple AP classes? Are our students’ weekends anything like what our own teenage parties were in the late ’90s? What do we wish our students were better equipped to do? How do we keep them off their phones in class? And, most recently, as news about ChatGPT swept through increasingly wide rings of society, I began to get questions not so different from those that accompanied the emails about self-driving strollers: What are we going to do about automation changing life as we know it?
My initial response was to insist that there are important differences in how easily AI might produce work mimicking student code as opposed to student essays. But what I couldn’t dismiss was a concern much broader than the assignments either of us might give or the implications for our specific students: the ethical and philosophical implications of the program itself. Instead of being built around if-then commands, Nick explained, ChatGPT is a neural network. What is it, Nick asked, that makes those neural networks different from our biological networks of neurons? Is it simply that they are silicon-based instead of carbon-based? Why would a carbon-based network let consciousness develop while a silicon-based one wouldn’t? How, he asked, could eight extra protons make all the difference? Nick’s line of thinking was almost intolerable to me. Of course, I insisted, there is something beyond carbon, something we perhaps can’t put into words or even prove exists, that makes us human. And though I pointed to emotions and connections and relationships, I could not articulate quite what that human-making something is.
The student who didn’t really run 5 miles at a six-minute pace: a treadmill metaphor for ChatGPT
Unlike strollers, which I will happily discuss all day, I hate talking about ChatGPT, yet I find myself doing so all the time, often because I am the one who brought it up.
At the beginning of the spring semester, I posed a metaphor for my students to consider: Wasn’t using ChatGPT to complete a writing assignment (without acknowledging having done so) like going to the gym, setting the treadmill at 10 mph, letting it run for 30 minutes, taking a photograph of its display, and then claiming to have run 5 miles at a six-minute pace? A student might pull off the illusion, but he or she would be no faster and no more fit than when the treadmill started running.