Recently, I was listening to a podcast1 when David offered up this “hot take” on AI:2
I also think AI is kinda bullshit. I’ve been thinking about it; I think there’s some stuff that AI can do, but on the other hand it really is not ... we shouldn’t call it AI. Someone was making this point, that calling it “artificial intelligence” is kind of propaganda. It’s not really intelligent yet. It’s just like a word prediction algorithm, you know? You give it a topic— it doesn’t know what it’s saying. It’s ... it’s like an algorithm that predicts what the— given any word or paragraph, it predicts what the next most likely word is, I think. I don’t think it really thinks ... I don’t think it’s artificial intelligence.
Of course, I put “hot take” in quotes because it’s not particularly hot: as David himself notes, other people have been making this observation for a while now, especially in relation to ChatGPT. I gave my own opinions of ChatGPT several months ago, and it’s only become more pervasive, and more useful, since then. Now, David’s assessment is not wrong ... but it’s also not complete, either. David’s not a tech guy. But I am. So I want to share my opinion with you on this topic, but, be forewarned: I’m going to ask a lot of questions and not necessarily provide a lot of answers. This is one of those topics where there aren’t any clear answers, and asking the questions is really the point of the exercise.
So, first let’s get the one minor detail that David is wrong about out of the way. What David is referring to here are the LLMs, like ChatGPT. To be pedantic about it, LLMs are just one form of AI: they just happen to be the one that’s hot right now, because it’s the one that’s shown the most promise. If you’ve had the opportunity to interact with ChatGPT or any of its imitators, you know what I mean. If not ... well, just take my word for it. LLMs are extremely useful and extremely promising, and the closest we’ve come so far to being able to talk to a machine like a person.3 But they are not the totality of AI, and I’m sure there will be AI in the future that is not based on this technology, just as there was in the past.
But, forgiving that understandable conflation, what about this notion that an LLM is just a “predictive algorithm,” and it doesn’t actually think, and therefore it’s a misnomer to refer to it as “intelligence”? David goes on to cite (badly) the “Chinese room” thought experiment; if you’re unfamiliar, I encourage you to read the full Wikipedia article (or at least the first two sections), but the synopsis is, if a computer program could take in questions in Chinese and produce answers in Chinese, and do so sufficiently well to fool a native Chinese speaker, then a person who neither speaks, reads, nor understands Chinese could be operating that program, and taking in the questions, and passing back the answers. Obviously you would not say that the person could speak Chinese, and so therefore you can’t really say that the program speaks Chinese either. Analogously, a program which simulates intelligent thought isn’t actually intelligent ... right?
This immediately reminds me of another podcast that I listen to, Let’s Learn Everything. On their episode “Beaver Reintroductions, Solving Mazes, and ASMR,”4 Tom Lum asks the question “How does a slime mold solve a maze?” A slime mold is, after all, one of the lowest forms of life. It doesn’t even have any neurons, much less a brain. How could it possibly solve a maze? Well, it does so by extending its body down all possible pathways until it locates the food. Once it’s done that, it retracts all its pseudopods back into itself, leaving only the shortest path.
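If you have a programming bent, the slime mold’s strategy may sound familiar: it is, roughly, a flood-fill search. Here’s a small Python sketch of that analogue (the maze encoding, the function name, and the example grid are my own inventions for illustration, not anything from the episode): spread outward along every open corridor, then “retract” everything that isn’t on the shortest route.

```python
from collections import deque

def solve_maze(maze, start, goal):
    """Flood outward from the start, slime-mold style, remembering
    how each open cell was first reached."""
    rows, cols = len(maze), len(maze[0])
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == "." and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))

    if goal not in came_from:
        return None  # the food was unreachable

    # "Retract the pseudopods": walk back from the food, keeping
    # only the cells that lie on the shortest route.
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return list(reversed(path))

maze = [
    "....#",   # "." is open floor, "#" is a wall
    ".##.#",
    ".#...",
    "...#.",
]
print(solve_maze(maze, start=(0, 0), goal=(3, 4)))
```

Whether grinding through that loop counts as “solving” the maze is, of course, exactly the question.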
Now, the conclusion that Tom (as well as his cohosts Ella and Caroline) arrived at was that this isn’t really “solving” the maze. Tom also had some great points on whether using maze-solving as a measure of intelligence makes any sense at all (you should really check out the episode), but let’s set that aside for now. Presuming that being able to solve a maze does indicate something about the level of intelligence of a creature, isn’t it sort of sour grapes to claim that the slime mold did it the “wrong” way? We used our big brains to figure out the maze, but when a creature who doesn’t have our advantages figures out a way to complete the task anyway, we suddenly claim it doesn’t count?
Let’s go a step further. If I give the maze to a person to solve, and they laboriously try every possible pathway until they find the shortest one, then are they really doing anything differently than the slime mold? And does that mean that the person is not intelligent, because they didn’t solve the maze the way we thought they should? I mean, just keeping track of all the possible pathways, and what you’ve tried already ... that requires a certain amount of intelligence, no? Of course, we lack the advantages of the slime mold, which can send itself down every pathway at once; we have to try them one at a time.
Now let’s circle back to the LLMs. It is 100% true that all they’re doing is just predicting what the next word should be, and the next word after that, and so on. No one is denying that. But now we’re suddenly faced with deciding whether or not that counts as “intelligence.” Things that we’ve traditionally used to measure a person’s intelligence, such as SAT scores, are no problem for LLMs, which are now passing LSATs and bar exams in the top 10%. But that doesn’t “count,” right? Because it’s not really thinking. I dunno; kinda feels like we’re moving the goalposts a bit here.
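For the programmers in the audience, it may help to see just how simple “predicting the next word” is in its most stripped-down form. The sketch below is a toy, not how ChatGPT actually works: the vocabulary and probabilities are invented, and a real LLM uses a neural network conditioned on the entire preceding context rather than a lookup table keyed on the last word. But the generation loop has the same shape: predict a word, append it, repeat.

```python
# A toy "language model": for each word, which words tend to follow it,
# and how likely each one is. (These numbers are invented for illustration.)
FOLLOWERS = {
    "the": {"slime": 0.5, "maze": 0.3, "lawyer": 0.2},
    "slime": {"mold": 1.0},
    "mold": {"solved": 0.7, "ate": 0.3},
    "maze": {"was": 1.0},
    "solved": {"the": 0.6, "it": 0.4},
    "was": {"solved": 1.0},
}

def generate(prompt, max_words=8):
    """Repeatedly predict the most likely next word and tack it on."""
    words = prompt.split()
    while len(words) < max_words:
        options = FOLLOWERS.get(words[-1])
        if not options:          # nothing plausible to say next; stop talking
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))   # -> "the slime mold solved the slime mold solved"
```

The repetitive output is not a bug in the sketch, either; always grabbing the single most likely word tends to produce loops, which is one reason real LLMs sample from the probabilities instead of taking the top choice every time.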
Part of the issue, of course, is that we really don’t have the slightest idea how our brains work. Oh, sure, we can mumble on about electrical impulses and say that this part of the brain is responsible for this aspect of cognition based on what lights up during a brain scan, but, at the end of the day, we can’t really explain what’s going on in there when you can’t remember something today that you had no trouble with yesterday, or when you have a crazy idea out of nowhere, or when you just know that your friend is lying to you even though you can’t explain how you know. Imagine some day in the far future where scientists discover, finally, that the way most of our thinking works is that words are converted to symbols in our brains, and we primarily talk by deciding what the next logical symbol should be, given the current context of who we’re talking to and what we’re talking about. If that were to ever happen, seems like we’d owe these LLMs a bit of an apology. Or would we instead decide that that aspect of how we think isn’t “really” thinking, and that there must be something deeper?
Look, I’m not saying that ChatGPT (for example) actually is intelligent. I’m just pointing out that we don’t have a very clear idea, ourselves, what “intelligent” actually means. It’s like the infamous Supreme Court definition of obscenity: we can’t define intelligence, but we know it when we see it, and this ain’t it. But what I find to be a more interesting question is this: why does it matter?
An LLM like ChatGPT serves a purpose. Now, overreliance on it can be foolish; we’ve all seen the headlines about lawyers filing briefs full of citations that ChatGPT simply invented. But, as others have pointed out:
... the media has talked about how this is lawyers using ChatGPT and things going awry. But what it’s really revealing is that these lawyers just did an all around terrible job and it just happened to tangentially involve ChatGPT.
So you can talk to an LLM as if it were a person, it talks back to you as if it were a person, it can give you information like a person, and oftentimes more information than you can get from most of the persons you know, and you can rely on it exactly as much (or, more to the point, exactly as little) as you can rely on another person. But it’s not a person, and it’s not really “thinking” (whatever that means), so therefore it’s not “intelligent.” Is that all just semantics? And, even if it is, is this one of those cases where semantics is important?
I’ve got to say, I’m not sure it is. I think every person reading this has to decide that for themselves.
And, if it is the case that AI won’t take over the world and enslave or destroy us, then what difference does it really make whether or not it’s “technically” intelligent? If it’s being useful, and if we can learn how to use it effectively without shooting ourselves in the foot, that’s good enough for me. Perhaps it can be good enough for you as well.
[For complete transparency, I must say that, while ChatGPT did not write any of the words in this post, it did come up with the title. Took it six tries, but it finally came up with something I felt was at least moderately clever. So, if you like it, it’s because I’m very good at prompting LLMs, and, if you hate it, it’s because ChatGPT is not very smart. This is one of the primary advantages of having an LLM as a contributor: I can hog all the credit and it will never be offended.]
1 If you’re not familia
2 Approximately 40 minutes in, if you want to follow along at home.
3 “LLM” stands for “large language model,” by the way, although knowing that is really unnecessary to follow along on this topic.
4 Again, if you want to follow along at home, jump to about 44:45.