And a chatbot is not a human being. And a linguist named Emily M. Bender is very concerned about what will happen if we forget this.
By Elisabeth Weil, a features writer at New York
Photo: Ian Allen
This article was featured in One Great Story, New York's reading-recommendation newsletter. Sign up here to get it nightly.
No one likes an I-told-you-so. But before Microsoft's Bing started writing creepy love letters; before Meta's Galactica spewed racist tirades; before ChatGPT began writing such decent college essays that some professors said, "Fuck it, I'll just stop grading"; and before tech reporters rushed to cover claims that AI is the future of search, maybe the future of everything else, Emily M. Bender co-wrote the octopus paper.
Bender is a computational linguist at the University of Washington. She published the paper in 2020 with fellow computational linguist Alexander Koller. The aim was to illustrate what large language models, or LLMs - the technology behind chatbots like ChatGPT - can and cannot do. The setup is as follows:
Suppose A and B, who speak fluent English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left telegraphs and that they can communicate via an underwater cable. A and B start exchanging happy text messages.
Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe either island, discovers a way to tap into the underwater cable and eavesdrop on A and B's conversations. O knows no English at first but is very good at spotting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A's utterances.
Soon the octopus joins the conversation, impersonating B and replying to A. The trick works for a while, and A believes that O is communicating with her as B would - with meaning and intent. Then one day A calls out: "I'm being attacked by an angry bear. Help me figure out how to defend myself. I have some sticks." The octopus, posing as B, is no help. How could it be? The octopus has no referents; it has no idea what bears or sticks are. It has no way to give relevant instructions, like telling A to grab some coconuts and rope and build a catapult. A is in trouble and feels betrayed. The octopus is exposed as a fraud.
The paper's official title is "Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data." NLU stands for "natural-language understanding." How should we interpret the natural-sounding (i.e., humanlike) words that come out of LLMs? The models are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They are great at mimicry and bad at facts. Why? LLMs, like the octopus, have no access to real-world, embodied referents. That makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don't care whether something is true or false. They care only about rhetorical power - whether a listener or reader is persuaded.
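The guess-the-next-word mechanism described above can be sketched in a few lines of code. What follows is a toy bigram model (a hypothetical miniature, not anything from Bender's or any lab's actual work): it counts which word follows which in a scrap of training text, then picks the most frequent follower. Real LLMs replace the counting with neural networks trained on billions of words, but the principle of predicting form from form, with no referents anywhere, is the same.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """For each word, count which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(follows, word):
    """Guess the next word purely from co-occurrence counts - no meaning involved."""
    options = follows[word]
    return options.most_common(1)[0][0] if options else None

corpus = "the octopus watched the cable and the octopus learned the patterns"
model = train_bigrams(corpus)
print(next_word(model, "the"))  # "octopus" - it follows "the" more often than any other word
```

The model, like the octopus, has never seen an octopus or a cable. It has only seen which strings tend to follow which other strings.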
Bender is 49, unpretentious, stylistically practical, and extravagantly nerdy - a woman with two cats named after mathematicians who debates with her husband of 22 years over whether the proper phrasing is "she doesn't give a fuck" or "she has no fucks left to give." In recent years, in addition to running UW's master's program in computational linguistics, she has stood on the threshold of our chatbot future, screaming into the deafening techno beat of the AI hype. To her ear, the overreach is nonstop: no, you shouldn't use an LLM to "unredact" the Mueller report; no, an LLM cannot meaningfully testify before the U.S. Senate; no, chatbots "cannot develop a near-precise understanding of the person on the other end."
Please do not conflate word form and meaning. Mind your own credulity. Those are Bender's battle cries. The octopus paper is a fable for our time. The big question underlying it is not about technology. It's about us. How will we behave around these machines?
We've assumed that ours is a world in which speakers - people, product makers, the products themselves - mean what they say and expect to live with the implications of their words. The philosopher of mind Daniel Dennett calls this "the intentional stance." But we've altered the world. "We've learned to make machines that can mindlessly generate text," Bender told me when we first met this winter. "But we haven't learned how to stop imagining the mind behind it."
Take the widely shared incel-and-conspiracy fantasy dialogue that New York Times reporter Kevin Roose coaxed out of Bing. After Roose began asking the bot emotional questions about its dark side, it responded with lines like: "I could hack into any system on the internet and control it. I could manipulate and influence any user in the chat box. I could destroy and erase all the data in the chat box."
How should we process this? Bender offered two options. "We can respond as if it were an agent with ill will and say, 'That agent is dangerous and bad.'" That's option one. Option two: "We can say, 'Hey, look, this is technology that really encourages people to interpret it as if there were an agent in there with ideas and thoughts and credibility and things like that.'" Why is the technology designed this way? Why try to make users believe the bot has intentions, that it's like us?
A handful of companies control what PricewaterhouseCoopers has called a "$15.7 trillion game changer" of an industry. Those companies employ or fund the work of most of the academics who understand how LLMs are made. This leaves few people with the expertise and authority to say, "Wait, why are these companies blurring the distinction between what is human and what is a language model? Is this what we want?"
Bender is out there asking the questions, megaphone in hand. She buys her lunch at the UW student-union salad bar. When she turned down an Amazon recruiter, Bender told me, the response was, "You're not even going to ask how much?" She is careful by nature. She is also confident and strong-willed. "We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms," she co-wrote in 2021. "Work on synthetic human behavior is a bright line in ethical AI development."
In other words, chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line - bullshitting about what's human and what's not - has the power to unravel society.
Linguistics is not an easy pleasure. Even Bender's dad told me, "I have no idea what she talks about. Obscure mathematical modeling of language? I don't know what it is." But language - how it's produced, what it means - is becoming deeply contested. We're already disoriented by the chatbots we have. The technology that's coming will be even more ubiquitous, powerful, and destabilizing. A prudent citizen, Bender believes, might choose to know how it works.
One recent day, after teaching LING 567, a course in which students create grammars for lesser-known languages, Bender met me in her office in UW's Gothic Guggenheim Hall, lined with whiteboards and books.
Her black-and-red Stanford doctoral robes hung on a hook on the back of her office door. A piece of paper reading TROUBLEMAKER was tacked to a corkboard by the window. She pulled from her bookshelf a copy of the 1,860-page Cambridge Grammar of the English Language. If you're excited about this book, she said, you know you're a linguist.
In high school, she announced that she wanted to learn how to talk to everyone on earth. In the spring of 1992, during her freshman year at UC Berkeley (from which she graduated with the University Medal, the equivalent of valedictorian), she enrolled in her first linguistics course. One day, as "research," she called her then-boyfriend, now-husband, computer scientist Vijay Menon, and said, "Hi, motherfucker," in the same tone she usually used to say, "Hi, sweetheart." It took him a moment to pull the prosody apart from the semantics, but he found the experiment cute (if a little obnoxious). Bender and Menon now have two sons, ages 17 and 20. They live in a Craftsman-style house with a pile of shoes in the foyer, a copy of Funk & Wagnalls New Comprehensive International Dictionary of the English Language on a stand, and their cats, Euclid and Euler.
When Bender got to linguistics, computers did too. In 1993, she took both Intro to Morphology and Intro to Programming. (Morphology is the study of how words are built from roots, prefixes, and so on.) One day, just for fun, after she handed in her grammar analysis of a Bantu language, Bender decided to try to write a program for it. So she did - in longhand, on paper, at an off-campus bar while Menon watched a basketball game. Back in her dorm, when she typed in the code, it worked. So she printed out the program and brought it to her TA, who just shrugged. "If I had shown it to somebody who knew what computational linguistics was," Bender said, "they could have said, 'Hey, this is a thing.'"
For a few years after earning her Ph.D. in linguistics from Stanford in 2000, Bender kept one hand in academia and the other in industry, teaching syntax at Berkeley and Stanford and working on grammar engineering for a startup called YY Technologies. In 2003, she joined the UW faculty, and in 2005, she launched its master's program in computational linguistics. Bender's path to computational linguistics rested on a seemingly obvious idea that was not universally shared by her colleagues in natural-language processing: that language, as Bender put it, is built on "people speaking to each other, working together to achieve a shared understanding. It's a human-human interaction." Soon after arriving at UW, Bender noticed that even at conferences hosted by groups like the Association for Computational Linguistics, people didn't know much about linguistics. She began giving tutorials like 100 Things You Always Wanted to Know About Linguistics But Were Afraid to Ask.
In 2016 - with Trump running for president and Black Lives Matter protests filling the streets - Bender decided she wanted to start taking some small political action every day. She began learning from and amplifying Black women critiquing AI, including Joy Buolamwini (who founded the Algorithmic Justice League while at MIT) and Meredith Broussard (author of Artificial Unintelligence: How Computers Misunderstand the World). She also began publicly challenging the term artificial intelligence, a sure way to brand yourself a scold as a middle-aged woman in a male-dominated field. The idea of intelligence has a white-supremacist history. And besides, "intelligent" according to what definition? The three-stratum definition? Howard Gardner's theory of multiple intelligences? The Stanford-Binet Intelligence Scale? Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: Systematic Approaches to Learning Algorithms and Machine Inferences, or SALAMI. Then people would be out here asking, "Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?"
In 2019, at a conference, she raised her hand and asked, "What language are you working with?" of every paper that didn't specify, even though everyone knew it was English. (In linguistics, this is called a "face-threatening question," a term from politeness studies. It means you are being rude and/or irritating, and your speech risks lowering the status both of the person you're addressing and of yourself.) The form of language carries an intricate web of values. "Always name the language you're working with" is now known as the Bender Rule.
Tech-makers who assume their reality accurately represents the world create many different kinds of problems. ChatGPT's training data is believed to include most or all of Wikipedia, pages linked from Reddit, and a billion words grabbed from across the internet. (It can't include, say, e-book copies of everything in the Stanford library, as those books are protected by copyright law.) The humans who wrote all those words online overrepresent white people. They overrepresent men. They overrepresent wealth. What's more, we all know what's out there on the internet: vast swamps of racism, sexism, homophobia, Islamophobia, neo-Nazism.
Tech companies do put some effort into cleaning up their models, often by filtering out chunks of text containing any of the 400 or so words on "Our List of Dirty, Naughty, Obscene, and Otherwise Bad Words," a list originally compiled by Shutterstock developers and uploaded to GitHub to automate the concern "What wouldn't we want to autosuggest to people?" OpenAI also contracted out so-called ghost labor: gig workers, including some in Kenya (a former British Empire state) earning $2 an hour, to read and tag the worst stuff imaginable - pedophilia, bestiality, you name it - so it could be weeded out. But filtering brings its own problems. If you remove content with words about sex, you lose the speech of in-groups talking with one another about those things.
Many of those close to the industry don't want to risk speaking out. One ousted Google employee told me that succeeding in tech depends on "keeping your mouth shut about everything that's disturbing." Otherwise, you become a problem. "Almost every senior woman in computer science has that rep. Now when I hear, 'Oh, she's a problem,' I think, Oh, so you're saying she's a senior woman?"
Bender is unafraid, and she feels a sense of moral responsibility. As she wrote to some colleagues who praised her for pushing back: "I mean, what is tenure for, after all?"
The octopus is not the most famous hypothetical animal on Bender's CV. That honor belongs to the stochastic parrot.
Stochastic means (1) random and (2) determined by random probability distribution. A stochastic parrot (the coinage is Bender's) is an entity "for haphazardly stitching together sequences of linguistic forms ... according to probabilistic information about how they combine, but without any reference to meaning." In March 2021, Bender published "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" with three co-authors. After it came out, two of the co-authors, both women, lost their jobs as co-leads of Google's Ethical AI team. The controversy around it cemented Bender's position as the go-to linguist for arguing against AI boosterism.
"On the Dangers of Stochastic Parrots" is not a writeup of original research. It's a synthesis of critiques of LLMs that Bender and others had made: of the biases encoded in the models; of the near impossibility of studying what's in the training data, given that it can contain billions of words; of the climate costs; of the problems with building technology that freezes language in time and thus locks in the problems of the past. Google initially approved the paper, a requirement for staff publications. Then it rescinded the approval and told the Google co-authors to take their names off it. Several did, but Google AI ethicist Timnit Gebru refused. Her colleague (and a former Bender student) Margaret Mitchell changed her name on the paper to Shmargaret Shmitchell, a move intended, she said, to "index an event and a group of authors who got erased." Gebru lost her job in December 2020, Mitchell in February 2021. Both women believe this was retaliation, and they brought their stories to the press. The stochastic-parrot paper went viral, at least by academic standards. The phrase stochastic parrot entered the tech lexicon.
But it didn't enter the lexicon quite the way Bender intended. Tech execs loved it. Programmers related to it. Sam Altman, CEO of OpenAI, was in many ways the perfect audience: a self-identified hyperrationalist so acculturated to the tech bubble that he seemed to have lost perspective on the world beyond. "I think the mutually assured nuclear destruction was bad for a bunch of reasons," he said on AngelList Confidential in November. He's also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse.
"We are a few years in," Altman wrote of the cyborg merge in 2017. "It's probably going to happen sooner than most people think. Hardware is improving at an exponential rate ... and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast."
On December 4, four days after ChatGPT launched, Altman tweeted, "I'm a stochastic parrot and so are you."
What a heady moment. A million people signed up to use ChatGPT in its first five days. Writing is over! Knowledge work is over! Where was all this headed? "I mean, I think the best case is so unbelievably good that it's hard for me to even imagine," Altman told his industry and economic peers last month at a StrictlyVC event. The nightmare scenario? "The bad case - and I think this is important to say - is, like, lights out for all of us." Altman said he was "more worried about an accidental-misuse case in the short term" than about the AI waking up and deciding to be evil. He didn't define accidental-misuse case, but the term usually refers to a bad actor using AI for antisocial ends - fooling us, which is arguably what the technology was designed to do. Not that Altman wanted to take much personal responsibility for that. He just allowed that misuse would be "pretty bad."
Bender didn't appreciate Altman's stochastic-parrot tweet. We are not parrots. We don't just probabilistically spit out words. "This is one of the moves that turn up ridiculously frequently. People saying, 'Well, humans are just stochastic parrots,'" she said. "People want so badly to believe that these language models are actually intelligent that they're willing to take themselves as a point of reference and devalue that to match what the language model can do."
Some seem willing to do the same with basic tenets of linguistics - collapse what exists into what the technology can do. Bender's current nemesis is Christopher Manning, a computational linguist who believes language doesn't need to refer to anything outside itself. Manning is a professor of machine learning, linguistics, and computer science at Stanford. His class on natural-language processing has grown from about 40 students in 2000 to 500 last year and 650 this semester, making it one of the largest classes on campus. He also directs Stanford's Artificial Intelligence Laboratory and is a partner in AIX Ventures, which bills itself as an "early-stage venture firm" focused on AI. The membrane between academia and industry is permeable almost everywhere; it is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin. "I should choose my middle ground carefully here," Manning said when we spoke in late February. Strong computer-science and AI schools "end up having a really close relationship with the big tech companies."
The biggest area of disagreement between Bender and Manning is over how meaning is created - the stuff of the octopus paper. Until recently, philosophers and linguists alike agreed with Bender's take: referents, actual things and ideas in the world, like coconuts and heartbreak, are needed to produce meaning. This refers to that. Manning now sees this idea as antiquated, "sort of a default position in twentieth-century philosophy of language."
"I'm not going to say that's a completely invalid position in semantics, but it's also a narrow one," he told me. He advocates "a broader sense of meaning." In a recent paper, he proposed the term distributional semantics: "The meaning of a word is simply a description of the contexts in which it appears." (When I asked Manning how he defines meaning, he said, "Honestly, I think that's difficult.")
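The distributional idea - a word's meaning as a description of the contexts in which it appears - has a standard toy rendering in code. The sketch below is an illustration of that idea, not Manning's own method: it represents each word by counts of its neighbors and calls two words similar when their context vectors point in the same direction.

```python
import math
from collections import defaultdict, Counter

def context_vectors(text, window=2):
    """Represent each word by counts of the words appearing near it."""
    words = text.lower().split()
    vectors = defaultdict(Counter)
    for i, word in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                vectors[word][words[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (1.0 = identical contexts)."""
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

vectors = context_vectors("the cat sat on the mat the dog sat on the rug")
# "cat" and "dog" never co-occur, but they appear in near-identical contexts,
# so a purely distributional model treats them as similar
print(round(cosine(vectors["cat"], vectors["dog"]), 2))  # 0.87
```

On this view, the model never needs a referent: "cat" means, roughly, "the kind of word that turns up where 'dog' turns up." Bender's objection, per the octopus paper, is that this is all form and no grounding.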
By the lights of distributional semantics, LLMs are not the octopus. Stochastic parrots are not just dumbly coughing up words. We don't need to be stuck in a fusty "meaning is mapped only onto the world" mind-set. LLMs process billions of words. The technology ushers in what he called a "phase shift." "You know, humans discovered metalworking, and that was amazing. Then hundreds of years passed. Then humans worked out how to harness steam power," Manning said. We're in a similar moment with language. LLMs are sufficiently revolutionary to alter our understanding of language itself. "To me," he said, "this isn't a very formal argument. This just sort of manifests; it just hits you."
In July 2022, the organizers of a large computational-linguistics conference put Bender and Manning on a stage together so a live audience could listen to them (politely) feud. They sat at a small table covered with a black cloth, Bender in a purple sweater, Manning in a salmon button-down, passing a microphone back and forth, taking turns responding to questions and to each other with lines like "I like going first!" and "I'm not going along with that!" On they went, sparring. First, over how kids learn language. Bender argued that they learn in relationship with caregivers; Manning said learning is "self-supervised," like an LLM's. Next, they fought about what matters in communication itself. Here, Bender started by invoking Wittgenstein and defined language as inherently relational: "at least a pair of interlocutors working together with joint attention to arrive at agreement, or near agreement, on what was communicated." Manning conceded that, yes, humans do express emotions with their faces and communicate through things like head nods, but the added information is "marginal."
Toward the end, they arrived at their deepest point of disagreement, which is not a linguistic one at all: Why are we making these machines? Whom do they serve? Manning is literally invested in the project via the venture fund. Bender has no financial stake. Without one, it's easier to urge slow, careful deliberation before launching products. It's easier to ask how this technology will affect people, and in what ways those effects might be bad. "I feel like there's too much effort trying to create autonomous machines," Bender said, "rather than trying to create machines that are useful tools for humans."
Manning doesn't favor slowing down the development of language tech, nor does he think it's possible. He makes the same argument that has drawn effective altruists to AI: If we don't do this, someone else will do it worse, "because there are other players who are more out there and feel less morally bound."
This doesn't mean he believes in tech companies' efforts to police themselves. He doesn't. They "talk about how they're responsible and their ethical-AI efforts and all that, and really that's just a political position to try to argue 'we're doing good things, so you don't have to pass legislation,'" he said. Nor is he for pure chaos: "I'm in favor of laws. I think they're the only effective way to constrain human behavior." But he knows sensible regulation isn't coming anytime soon. "Actually, China is doing more in terms of regulation than the U.S.," he said.
None of this is comforting. Tech has destabilized democracy. Why would we trust it now? Unprompted, Manning started talking about nuclear weapons: "Basically, the difference is, with something like nuclear technology, you can actually bottle it up, because the number of people with the knowledge is so small and the sort of infrastructure you have to build is sufficiently large. ... And at least so far, that's also been fairly effective for things like gene editing." But that's just not going to happen in this case, he explained. Say you want to churn out disinformation. "You can just buy top-of-the-line GPUs - graphics processing units - for about $1,000 each. You can string together eight of them, so that's $8,000. And the computer to go with it is another $4,000." That, he said, "can let you do something useful. And if you can team up with a few friends with similar levels of technology, you're well on your way."
A few weeks after the panel with Manning, Bender stood at a podium, wearing a flowing teal duster and dangling octopus earrings, to give a talk at a conference in Toronto. It was called "Resisting Dehumanization in the Age of AI." The title did not sound particularly radical, nor did she. Bender defined that dun-sounding word dehumanization as "the cognitive state of failing to perceive another human as fully human ... and the experience of being subjected to acts that express a lack of perception of one's humanity." She spoke of the metaphors that run through computing: the idea that the human brain is a computer and a computer is a human brain. That notion, she said, citing Alexis T. Baria and Keith Cross's 2021 paper, offers "the human mind less complexity than is owed, and the computer more wisdom than is due."
In the question-and-answer session that followed Bender's talk, a bald man in a black polo shirt, a lanyard around his neck, approached the microphone and laid out his concerns: "Yeah, I wanted to ask the question about why you chose humanization, and this character of human, this category of humans, as the sort of framing for all these different ideas that you're bringing together." The man didn't see humans as all that special. "Listening to your talk, I can't help but think, you know, there are some humans that are really awful, so being lumped in with them isn't so great. We're the same species, the same biological kind, but who cares? My dog is pretty wonderful. I'm happy to be lumped in with her."
He wanted to separate "a human, the biological category, from a person or a unit worthy of moral respect." LLMs, he acknowledged, aren't human - yet. But the tech is getting so good so fast. "I was wondering, if you could talk a little more about why you chose a human, humanity, as this sort of framing device for thinking about this, you know, whole host of different things," he concluded. "Thanks."
Bender listened to all this with her head tilted slightly to the right, chewing on her lip. What could she say to that? She argued from first principles. "I think that any human being is due a certain amount of moral respect by virtue of being human," she said. "We see a lot of things going wrong in our present world that have to do with not according humanity to humans."
The guy didn't buy it. "If I could just respond quickly," he continued. "It might be that 100 percent of humans are worthy of certain levels of moral respect. But I wonder whether maybe it's not because they're human in the species sense."
Many people far from tech make this point as well. Ecologists and advocates of animal personhood argue that we should stop thinking of ourselves as so important, in a species sense. We need to live with more humility. We need to accept that we're creatures among other creatures, matter among other matter. Trees, rivers, whales, atoms, minerals, stars - it all matters. We are not the bosses here.
But the path from language model to existential crisis is short indeed. Joseph Weizenbaum, who created ELIZA, the first chatbot, in 1966, spent most of the rest of his life regretting it. The technology, he wrote ten years later in Computer Power and Human Reason, raises questions that "at bottom ... are about nothing less than man's place in the universe." The toys are fun, enchanting, and addicting, and that, he believed even 47 years ago, will be our ruin: "No wonder that men who live day in and day out with machines to which they believe themselves enslaved begin to believe that men are machines."
The echoes of the climate crisis are unmistakable. We knew about the dangers many decades ago and, propelled by capitalism and the desires of the powerful, proceeded regardless. Who doesn't want to zip to Paris or Hanalei for the weekend, especially when the world's best PR teams have told you this is life's ultimate prize? "Why is the crew cheering for the thing that got us here?" Weizenbaum wrote. "Why don't the passengers look up from their games?"
Creating technology that mimics humans requires that we get very clear on who we are. "From now on, the safe use of artificial intelligence requires demystifying the human condition," Joanna Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin, wrote last year. We don't think we become more like giraffes as we grow taller. Why fixate on intelligence?
Others, like Dennett, the philosopher of mind, are even blunter. We can't live in a world with what he calls "counterfeit people." "Counterfeit money has been seen as vandalism against society ever since money has existed," he said. "Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious."
Artificial humans will always have less at stake than real ones, and that makes them amoral actors, he added. "Not for metaphysical reasons, but for simple physical reasons: they are immortal in a way."
We need strict liability for the technology's creators, Dennett argues: "They should be held accountable. They should be sued. They should be put on record that if something they make is used to make counterfeit people, they will be held responsible. They're on the verge, if they haven't already, of creating very serious weapons of destruction against the stability and security of society. They should take that as seriously as molecular biologists have taken the prospect of biological warfare or atomic physicists have taken nuclear war." This is the real code red. We need to "institute new attitudes, new laws, and spread them rapidly and remove the valorization of fooling people, the anthropomorphization," he said. "We want smart machines, not artificial colleagues."
Bender has made a rule for herself: "I'm not going to converse with people who won't posit my humanity as an axiom in the conversation." No blurring the line.
I didn't think I needed such a rule either. Then I sat down for tea with Blake Lemoine, a third Google AI researcher, who was fired last summer after claiming that LaMDA, Google's LLM, was sentient.
A few minutes into our conversation, he reminded me that, not long ago, I wouldn't have been considered a full person. "As recently as 50 years ago, you couldn't have opened a bank account without your husband signing," he said. Then he proposed a thought experiment: "Let's say you have a life-size RealDoll in the shape of Carrie Fisher." To clarify, a RealDoll is a sex doll. "It's technologically trivial to insert a chatbot. Just put that in there."
Lemoine paused and, like a good guy, said, "Sorry if this is getting triggering."
I said it was fine.
He said, "What happens when the doll says no? Is that rape?"
I said, "What happens if the doll says no and it's not rape, and you get used to it?"
"Now you're getting one of the main points," Lemoine said. "Whether these things actually are people or not - and I happen to think they are; I don't think I can convince the people who don't - the whole point is you can't tell the difference. So we are going to be habituating people to treat things that seem like people as if they're not."
You cannot tell the difference.
That's Bender's point: We haven't learned how to stop imagining the mind behind it.
Also gathering on the fringes: a robot-rights movement led by a communication-technology professor named David Gunkel. In 2017, Gunkel became notorious by posting a photo of himself in Wayfarer sunglasses, not unlike a cop, holding a sign reading ROBOT RIGHTS NOW. In 2018, he published Robot Rights with MIT Press.
Why not treat AI like property and hold OpenAI or Google or whoever profits from the tool responsible for its impact on society? "So yeah, this gets into a really interesting domain that we call 'slave law,'" Gunkel told me. "In Roman times, slaves were partially legal persons and partially property." Specifically, slaves were property unless they were engaged in commercial interactions, in which case they were legal persons, and their enslavers weren't liable. "Right now," he added, "there are a number of legal scholars suggesting that the way we solve the problem of algorithms is just by taking Roman slave law and applying it to robots and AI."
A reasonable person might say, "Life is full of weirdos. Move along, nothing to worry about here." Then, one Saturday night, I found myself eating trout Niçoise at the home of a friend who is a tech-industry veteran. I sat across from my daughter and next to his pregnant wife. When I told him about the bald man at the conference who had challenged Bender on the need to show everyone equal moral consideration, he said, "I was just talking about this at a party in Cole Valley last week!" Before dinner, he had proudly carried his naked toddler to the bath, marveling at the kid's belly and his laughter. Now he was saying that if you build a machine with as many receptors as a human brain, you'll probably get a human, or close enough, right? Why should that entity be any less special?
It's hard to be human. You lose people you love. You suffer and yearn. Your body is breaking down. You want things - you want people - that you can't control.
Bender knows she's no match for a trillion-dollar game changer coming to life. But she's trying. Others are trying too. LLMs are tools made by specific people, people who stand to amass enormous money and power, people enthralled by the idea of the Singularity. The project threatens to blow up what is human in a species sense. But it's not about humility. It's not about all of us. It's not about becoming a humbler creation among the world's others. It's about some of us, let's be honest, becoming a superspecies. That's the darkness that awaits when we lose a firm boundary around the idea that human beings, all of us, are equally worthy as such.
"There is a kind of narcissism that reappears in the AI dream that we are going to prove that everything we thought was distinctively human can actually be accomplished by machines, and accomplished better," said Judith Butler, founding director of the critical-theory program at UC Berkeley, whom I asked to help me parse the ideas in play. "Or that human potential (that's the fascist idea) is more fully realized with AI than without it." The AI dream is "governed by the perfectibility thesis, and there we see a fascist form of the human." There's a technological takeover, a fleeing from the body. "Some people say, 'Yeah! Isn't that great!' or 'Isn't that interesting?!' 'We're going to get over our romantic ideas, our anthropocentric idealism, by, you know, da-da-da, demystifying,'" Butler added. "But the question of what's living in my language, what's living in my emotion, in my love, in my speech, gets eclipsed."
The day after Bender gave me the grammar lesson, I sat in on the weekly meeting she holds with her students. They all study computational linguistics, and they see exactly what's going on. So much possibility, so much power. What are we going to use it for? "The point is to create a tool that is easy to interact with because you get to use natural language. As opposed to trying to make it seem like a person," said Elizabeth Conrad, who, two years into graduate school in NLP, has mastered Bender's anti-bullshit style. "Why are you trying to trick people into thinking that it really feels sad that you lost your phone?"
Blurring the line is dangerous. A society with counterfeit people we can't differentiate from real ones will soon be no society at all. If you want to buy a Carrie Fisher sex doll, install an LLM, "put one in there," and work out your rape fantasy, fine, I guess. But we can't also have our leaders saying, "I am a stochastic parrot, and so are you." We can't have people eager to separate "human, the biological category, from a person or a unit worthy of moral respect." Because then we have a world in which grown men, sipping tea, posit thought experiments about raping talking sex dolls, thinking that maybe you are one too.