What I enjoy most about machine learning is how neatly it illustrates that engineers don’t know how people work. Take, for example, the large language model. I am told that they will take my job and make me redundant; that they are intelligent; that they will plan the perfect itinerary for my trip to Paris, with highlights about bars and restaurants that are sure to be accurate and complete.
Inspired by a tweet about mayonnaise, I’m now ready to do a fun experiment with Google’s Bard.
I am choosing to do this for two reasons. First, this type of quiz is something you do with young children when you are teaching them to read: you ask them to identify letters and the sounds they make. Second, I strongly suspect that this sort of activity is not captured in whatever data Bard is drawing from, because it’s not the kind of thing anyone writes down.
It’s obviously absurd, but it’s absurd because we can look at the word “ketchup” and clearly see the “e.” Bard can’t do that. It lives in a completely closed world of training data.
This is the kind of problem LLMs run into. Language is a very old human technology, but our intelligence predates it. Like all social animals, we have to keep track of status relationships, which is why our brains are so big and weird. Language is a very useful tool – hi, I write for a living! – but it is not the same thing as knowledge. It floats on top of a bunch of other things that we take for granted.
I often think of Rodney Brooks’ 1987 paper, “Intelligence Without Representation,” which is more relevant than ever. I’m not going to deny that language use and intelligence are linked – but intelligence precedes language. If you work with language in the absence of intelligence, as we see with LLMs, you get strange results. Brooks compares what’s happening with LLMs to a group of early researchers trying to build an airplane by focusing on the seats and windows.
I’m pretty sure he’s still right about that.
I understand the temptation to try to have a complex conversation with an LLM. A lot of people want us to be able to build an intelligent computer. These fantasies appear often in science fiction, a genre widely read by nerds, and they suggest a longing to know that we are not alone in the universe. It’s the same impulse that drives our efforts to contact alien intelligence.
But pretending that LLMs can think is a fantasy. You can interrogate one if you want, but you’ll get nothing back; there’s nothing there. I mean, check out its efforts at ASCII art.
When you do something like this – a task at which your average five-year-old excels and a sophisticated LLM fails – you begin to see how intelligence actually works. Sure, there are people who believe that LLMs are conscious, but they strike me as tragically antisocial, unable to understand or appreciate just how gifted ordinary people are.
Yes, Bard can dazzle. In fact, like most chatbots, it excels at autocompleting marketing copy. This is probably a reflection of how much ad copy appears in its training data. Bard’s engineers might not see it that way, but what a devastating commentary on what our daily lives online amount to.
Advertising is one thing. But being able to produce advertising copy is not a sign of intelligence. There are so many things we don’t bother to write down because we have no need to, and other things that we know but can’t write down – like how to ride a bike. We take a lot of shortcuts in talking to each other because people operate from the same baseline of information about the world at large. There’s a reason for that: we are all in the world. The chatbot is not.
I’m sure someone will tell me that chatbots will improve and I’m just being mean. First of all: it’s vaporware until it ships, babe. But second, we don’t really know how smart we are or how we think. If there’s one real use for chatbots, it’s illuminating the things we take for granted about our own intelligence. Or, as someone wiser than me put it: the map is not the territory. Language is the map; knowledge is the territory.
There are many things that chatbots do not know and cannot know. The truth is, it doesn’t take much effort to get an LLM to fail the Turing test, as long as you’re asking the right questions.