Sunday, October 12, 2025

Deeper Into the AI Wave


I watched two brilliant video podcasts this week about technology.  The first was all about LLMs (a.k.a. “AI”), and the second was about enshittification, but it did touch on AI a bit.  You should watch them both for yourself: Jon Stewart’s interview with Geoffrey Hinton, and Adam Conover’s interview with Cory Doctorow.  They’re long, but well worth your time.

Now, you might not know who Geoffrey Hinton is, so let me enlighten you: he’s a British computer scientist, now living in Canada, winner of the 2024 Nobel Prize in Physics for his work on neural networks, and commonly known as “the Godfather of AI.”  So, you know: a guy who actually knows what the fuck he’s talking about.  And, while Jon was desperately attempting to get him to talk about the dangers of AI—which he eventually does—Hinton seems determined to make Jon understand how LLMs work first.  And it’s utterly brilliant.  Because it takes forever, and you can see Jon champing at the bit to get to the more “interesting” part of the discussion, but, in his slow, deliberate, professorial way, he keeps circling back to building up Jon’s knowledge, brick by brick.  And, at the end, Jon really does understand a bunch of things about AI that he just didn’t before.  And, as a result, he has a much firmer grasp on both the positives and the dangers of AI.  That, to me, is valuable.

Of course, I also will admit to being thrilled that Hinton articulates (quite brilliantly) many of the same points I’ve tried to make in my posts about AI/LLMs.  In my more recent post on AI, I pointed out that we don’t really understand what “intelligence” means; Hinton goes further and says our concept of “sentience” is akin to that of someone who believes the Earth is flat.  I said that perhaps in the far future, we would discover that our brains work the same way that LLMs do; Hinton goes further and says we already know that to be true (and it’s useful to understand that he started out with a degree in experimental psychology before transitioning to artificial intelligence).  So he and I are on the same page, but of course he explains things much better than I can.  I won’t say he’s smarter than I am, but he does have nearly 20 extra years’ experience and a PhD on me.  Also, he’s probably smarter than I am.

So here’s how he explains to Jon that claiming that AI isn’t actually “intelligent” isn’t as smart an observation as you think it is:

Geoffrey: Now, you just said something that many people say: “This isn’t understanding.  This is just a statistical trick.”

Jon: Yes.

Geoffrey: That’s what Chomsky says, for example.

Jon: Yes.  Chomsky and I, we’re always stepping on each other’s sentences.

Geoffrey: Yeah.  So let me ask you the question, well, how do you decide what word to say next?

Jon: Me?

Geoffrey: You.

Jon: It’s interesting; I’m glad you brought this up.  So what I do is I look for sharp lines and then I try and predict— no, I have no idea how I do that.  Honestly, I wish I knew.  It would save me a great deal of embarrassment if I knew how to stop some of the things that I’m saying that come out next.  If I had a better predictor, boy, I could save myself quite a bit of trouble.

Geoffrey: So the way you do it is pretty much the same as the way these large language models do it.  You have the words you’ve said so far.  Those words are represented by sets of active features.  So the word symbols get turned into big patterns of activation of features, neurons going ping—

Jon: Different pings, different strengths.

Geoffrey: —and these neurons interact with each other to activate some neurons that go ping, that are representing the meaning of the next word, or possible meanings of the next word.  And from those, you pick a word that fits in with those features.  That’s how the large language models generate text, and that’s how you do it too.
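
To make that concrete, here’s a toy sketch of the loop Hinton is describing (entirely my own invention, in Python, with a made-up vocabulary and made-up weights, nothing like a real model): features in, a score for every candidate next word out, pick one.

```python
import math
import random

# Toy vocabulary and toy weights -- invented numbers, purely illustrative.
vocab = ["cat", "sat", "on", "the", "mat"]
weights = {
    "cat": [0.2, 0.1, 0.5],
    "sat": [0.9, 0.3, 0.1],
    "on":  [0.4, 0.8, 0.2],
    "the": [0.3, 0.5, 0.4],
    "mat": [0.1, 0.2, 0.9],
}

def features(context):
    """Stand-in for 'neurons going ping': turn the words so far into a
    pattern of feature activations.  A real model learns this; here it's faked."""
    return [len(context) % 3, sum(len(w) for w in context) % 5, 1.0]

def next_word(context):
    """Features in, a probability for every candidate word out, sample one."""
    feats = features(context)
    # Score every word in the vocabulary against the active features.
    scores = {w: sum(f * wt for f, wt in zip(feats, weights[w])) for w in vocab}
    # Softmax the scores into probabilities.
    exps = {w: math.exp(s) for w, s in scores.items()}
    total = sum(exps.values())
    probs = [exps[w] / total for w in vocab]
    # Pick a word that "fits in with those features."
    return random.choices(vocab, weights=probs)[0]

context = ["the", "cat"]
for _ in range(5):
    context.append(next_word(context))
print(" ".join(context))
```

A real LLM learns those features and weights from mountains of text, and has billions of them instead of three, but the shape of the loop is the same.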

Which makes sense: LLMs were based on neural networks, and neural networks, as their name implies, were based on the way our brains actually work.  We designed these things to mimic our brains, but then we decided that our brains were “special,” somehow.  As they say later in the discussion:

Geoffrey: This idea there’s a line between us and machines: we have this special thing called “subjective experience” and they don’t—it’s rubbish.

Jon: So you’re s—so the misunderstanding is, when I say “sentience,” it’s as though I have this special gift, that of a soul, or of an understanding of subjective realities, that a computer could never have, or an AI could never have.  But, in your mind, what you’re saying is: oh, no, they understand very well what’s subjective.

We’ve just pre-determined that humans are different, somehow; that machines can’t possibly be as smart as we are, as creative as we are, as special as we are.  The number of times I’ve heard people use the word “obviously” when talking about how AIs will never write a song as good as a human can, or a poem as touching, or an essay as convincing ... look, I’m not saying that AIs can do those things.  I’m just saying that the word “obviously” doesn’t really apply.  Maybe one day we’ll actually figure out what it is that our brains can do that AI brains just can’t, for deep structural reasons.  But I’m pretty sure it won’t be obvious.  (Though of course this won’t stop some people from claiming they knew it all along ...)

The best part of these interviews, however, is how the people who know what they’re talking about gently correct the AI misgivings of their interviewers.  Here’s Jon and Geoffrey again.

Jon: ... my guess is, like any technology, there’s going to be some incredible positives.

Geoffrey: Yes: in health care, in education, in designing new materials, there’s going to be wonderful positives.

Jon: And then the negatives will be, because people are going to want to monopolize it because of the wealth, I assume, that it can generate, it’s going to change.  It’s going to be a disruption in the workforce.  The Industrial Revolution was a disruption in the workforce.  Globalization is a disruption in the workforce.  But those occurred over decades.  This is a disruption that will occur in a really collapsed time frame.  Is that correct?

Geoffrey: That seems very probable, yes.  ... my belief is the possibilities of good are so great that we’re not going to stop the development.  But I also believe that the development is going to be very dangerous.  And so we should put huge effort into saying, “it is going to be developed, but we should try and do it safely.”  We may not be able to, but we should try.

Jon is typically someone who thinks the benefits of AI are overstated, and it’s good to hear someone with some knowledge temper that.  This exact dynamic is mirrored in the Cory Doctorow interview; Adam is, if anything, even more of the opinion that AI is useless, while Cory, like Geoffrey, has a far more informed (and therefore more balanced) view.  Here’s a typical exchange from their conversation:

Adam: And you know what’s funny is, I’ve mentioned in past episodes where we’re talking about AI, you know, that I find large language models pretty useless, but I’m like, “Oh, but I understand programmers find them useful.  It’s a labor-saving device for programmers.”  And I’ve had developers in my comments come in and say, “Actually, Adam, no, it’s useless to us, too.  Like, this is also a lie on the part of the companies that employ us.”

Cory: So, I got so fed up with having conversations about AI that went nowhere, that over the summer I wrote a book about AI called The Reverse Centaur’s Guide to AI that’s going to come out in 2026.  ... my thesis is that, so a centaur in automation theory is someone who’s assisted by a machine, okay?  And a reverse centaur is someone who’s conscripted to be a peripheral for a machine.  So, you know, like I Love Lucy, where she’s got to get the chocolates into the chocolate box and the conveyor belts?  She’s only in the factory because the conveyor belt doesn’t have hands, right?  She is the, like, inconvenient, inadequate hands for the conveyor belt, and it works—it uses her up, right?  And I think that, you know, there’s plenty of senior devs who are like: oh, this routine task I can tell right away if the AI does it wrong.  It’s sort of time-consuming.  Like, one of the canonical examples is, I have this, like, one data file that’s in a weird format and I need to convert it to another format and I could, you know, do some regular expressions in Python and whatever and make it happen, or I could just ask—I could one-shot it with a chatbot, and then I can validate it really quickly, because I can check if the tabular data adds up or whatever.  And I hear from devs all the time who say this is great, and the thing is: they’re in charge of their work, right?  And this was like the thing the Writers Guild won in the AI strike, right?  We don’t have to use AI.  We don’t have to not use AI.  You can’t pay us less for not using or for using AI ... and we’re in charge.  ... but, like, if there’s an artist out there who enjoys using AI in some part of their process, like, you do you, brother.  Like maybe it’ll be shitty art.  There’s lots of bad art out there.  It’s fine.  ... it’s the conscripting of people to assist a chatbot that I think is the thing that makes the difference.
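
Just to make Cory’s canonical example concrete, here’s the kind of thing he means (my own invented sketch, with a made-up file format and made-up numbers, not anything from his book): convert a weirdly delimited file to CSV, then do the quick “does the tabular data still add up?” check.

```python
import csv
import io

# Hypothetical "weird format": fields separated by '|', a unit comment after '#'.
# The names and numbers are invented for illustration.
weird = """\
alice|10 # widgets
bob|32 # widgets
carol|7 # widgets
"""

def convert(text):
    """Convert the weird format into (name, count) rows."""
    rows = []
    for line in text.strip().splitlines():
        name, rest = line.split("|")
        count = int(rest.split("#")[0].strip())
        rows.append((name.strip(), count))
    return rows

rows = convert(weird)

# The quick validation Cory describes: check the tabular data still adds up.
expected_total = 49  # total we already know from the original file
assert sum(count for _, count in rows) == expected_total, "totals don't match!"

# Write the converted data out as CSV.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["name", "count"])
writer.writerows(rows)
print(out.getvalue())
```

The conversion itself is boring; the point is that last assert.  Whether you write the script by hand or one-shot it with a chatbot, you’re the one validating the output, and that’s what keeps you a centaur rather than a reverse centaur.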

Because here’s the thing: anyone who tells you that AI is completely useless—just a party trick to amuse stoned people by making Abraham Lincoln rap or having MLK drop the beat—is full of shit.  But anyone who tells you that AI is the future and is going to make everyone’s lives better is also full of shit (and likely trying to sell you something).  Somehow we decided that either this AI thing is all smoke and mirrors and sooner or later the bubble will collapse, or it’s inevitable and it will change the world.  ¿Por qué no los dos?  Remember in the late 90s, when some people said that the Internet was inevitable, and sooner or later if you didn’t have a website your business was doomed?  And then other people said that all those Internet companies were losing money and they were all going to go bankrupt in spectacular fashion?  Now, with the benefit of hindsight, which camp was right?  Well, turns out both sides were right.  There was a bubble, and it burst, and a lot of people lost a lot of money.  Also, the Internet is now an integral part of everyone’s lives, and those companies who were slow to adopt—like Kodak, Toys “R” Us, and Borders—ended up filing for bankruptcy and/or getting gobbled up by their competitors.  And this is the lesson we need to internalize about AI as well.

The way that AI is currently expanding is absolutely unsustainable.  People are using it like a new buzzword and just jamming it into things where it can’t possibly be useful, or putting it into things where it might be useful, but doing so with such a poor understanding of how to use it that it will fail anyway.  None of the AI companies are making any money, and most have only the vaguest idea of how they will make money.  Like tulips or Beanie Babies, eventually the whole thing will come crashing down.  It’s inevitable.

But that doesn’t mean that AI isn’t actually useful, or that it won’t become an integral part of our lives.  Yes, I happen to be a senior developer, and, while I’m encouraged to use AI, I’m not required to by any means: I’m the “centaur” Cory was talking about, not the “reverse centaur” that has AI thrust upon them whether they like it or not.  So, since I get to decide whether to use it or not—and I get paid the same either way—I’m an AI proponent (mostly).  But this idea that people such as Adam are constantly espousing—that AI is only useful for developers—is just nonsense.  AI can help you choose the best air fryer to buy.  It can help you understand difficult concepts that you’re studying.  It can help you make the seating chart for your wedding.  Is it the case that you can’t always trust it?  Obviously.  You’re not trusting every web site that comes up in a Google search either, are you?  Hopefully not.  For that matter, you probably shouldn’t trust everything coming out of the mouths of all the actual human people in your life either.  Humans in the modern age have to become very good at sifting useful knowledge from bullshit, and nothing about that part changes with AI.  The big difference is, the AI can gather the data faster, can present it more relatably, and can help you integrate it into something approaching useful, correct information.  That’s not just for developers.  That’s for everyone.

So I’m quite pleased to have some real experts here that I can refer to when backing up my opinions.  Not that I felt like they needed backing up.  But it’s still nice to have some authoritative sources behind them.  And it’s especially nice to see Jon Stewart and Adam Conover, two people whose opinions I generally respect, learn some new perspectives in one of the few areas where they were annoyingly wrong.  Now let’s see if they can accept those perspectives and integrate them into their worldviews.
