Sunday, February 19, 2023

Getting Chatty

I’m probably not the first person to tell you this, but there’s a new AI wunderkind taking the Internet by storm, and it’s called ChatGPT.  Everyone’s buzzing about it, Microsoft is pumping money into it like crazy, and even boring old news outlets are starting to pick it up—heck, I just heard it mentioned on this week’s episode of Wait Wait Don’t Tell Me.  If you’re late to the party, or if you’ve been hearing all about it without really knowing what “it” is, perhaps I can provide some insight.*

AI has been undergoing a bit of a renaissance lately.  For a long time, AI development was focused on “state machines,” which are like really fancy flow charts.  You’ve probably seen one of these on the Internet at some point: you know those web pages that try to guess what animal you’re thinking of (or whatever), and, if they can’t guess it, ask you to teach them a question that will distinguish your animal from the last one they guessed, which they then add to their little database ... those amusing little things?  Well, those are very simple state machines.  If the answer is “yes,” the machine goes down one path, and if the answer is “no,” it goes down a different one, until it eventually hits a dead end.  State machines, as it turns out, are very useful in computer science ... but they don’t make good AI.  That’s just not the way humans think (unless you’re playing a game of 20 Questions, and even then a lot of people don’t approach it that logically).  So eventually computer scientists tried something else.
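If you like to see the gears turning, here’s a toy version of that guessing game in Python.  To be clear, this is just a sketch to show the shape of the thing; the questions and animals are invented:

```python
# A toy "state machine" guessing game: each yes/no answer walks you
# down one branch of a fixed tree until you hit a dead end (the guess).
# All questions and animals here are invented for illustration.

tree = {
    "question": "Does it live in the water?",
    "yes": {"guess": "a fish"},
    "no": {
        "question": "Does it have four legs?",
        "yes": {"guess": "a dog"},
        "no": {"guess": "a bird"},
    },
}

def play(node):
    while "question" in node:  # keep branching until the questions run out
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    print("Is it " + node["guess"] + "?")  # the dead end is the guess

play(tree)
```

Notice that there’s no thinking going on anywhere in there: just a fixed map of paths.  Which is exactly why this approach never scaled up to anything you’d call intelligent.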

One way you can make a better AI than a state machine is by doing something called “machine learning.”  With this, you take a bunch of data and feed it into an algorithm.  The algorithm is designed to analyze the data’s inputs and outputs: that is, if humans started with thing A (the input), then they might conclude thing B (the output).  If you have a decent enough algorithm, you can make a program that will conclude basically the same things that a human will, most of the time.  Of course, not all humans will come up with the same outputs given the same inputs, so your algorithm had better be able to handle contradictions.  And naturally the data you feed into it (its “training data”) will entirely determine how good it gets.  If you accidentally (or deliberately) give it data that’s skewed towards one way of thinking, your machine learning AI will be likewise skewed.  But these are surmountable issues.
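To make that concrete, here’s about the smallest “machine learning” sketch I can manage in Python.  The training data is invented, but it shows the input-to-output idea, including what happens when humans contradict each other:

```python
# A bare-bones "machine learning" sketch: tally which output humans
# produced for each input, then predict by majority vote.
# The training data is invented for illustration.
from collections import Counter, defaultdict

training_data = [
    ("cloudy sky", "bring umbrella"),
    ("cloudy sky", "bring umbrella"),
    ("cloudy sky", "leave umbrella"),  # humans contradict each other ...
    ("clear sky",  "leave umbrella"),
]

votes = defaultdict(Counter)
for inp, out in training_data:
    votes[inp][out] += 1  # count each human conclusion per input

def predict(inp):
    return votes[inp].most_common(1)[0][0]  # ... so go with the majority

print(predict("cloudy sky"))  # -> bring umbrella
```

And you can see the skew problem right there in the code: if most of your training examples lean one way, the majority vote leans right along with them.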

Another thing you could do is create a “language model.”  This also uses training data, but instead of examining the data for inputs and outputs, the algorithm examines the words that make up the data, looking for patterns and learning syntax.  Now, “chatbots” (computer programs designed to simulate a person’s speech patterns) have been around a long time; Eliza, a faux therapist, is actually a bit older than I am (and, trust me: that’s old).  But the thing about Eliza is, it’s not very good.  It only takes five or so exchanges before you start to butt up against its limitations; if you didn’t know it was an AI when you started, you’d probably figure it out in under a minute.  Of course, many people would say that Eliza and similar chatbots aren’t even AIs at all.  There’s no actual “intelligence” there, they’d point out.  It’s just making a more-or-less convincing attempt at conversation.
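For a crude taste of the “learn the patterns in the words” idea, here’s what’s called a bigram model in Python, trained on a sentence I made up:

```python
# A tiny language model: learn which words tend to follow which
# (a "bigram" model), then generate by chaining those patterns.
# The training sentence is invented for illustration.
import random
from collections import defaultdict

words = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)  # record every word seen right after word a

word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # pick a plausible next word
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```

Real language models learn patterns across far more than two words at a time, but the principle is the same: predict the next word from the words that came before.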

Still, the ability to hold a conversation does require some intelligence, and it’s difficult to converse with a thing without mentally assessing it as either smart, or dumb, or somewhere in between.  Think of Siri and other similar “personal assistants”: they’re not really AI, because they don’t really “know” anything.  They’re just capable of analyzing what you said and turning it into a search that Apple or Google or Amazon can use to return some (hopefully) useful results.  But everyone who’s interacted with Siri or her peers will tell you how dumb she is, because she often misunderstands what you’re saying: sometimes because she doesn’t hear the correct words, and sometimes because her algorithm got the words right but failed to tease out a reasonable meaning from them.  So, no, not a “real” AI ... but still something that we can think of as either intelligent or not.
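If you’re curious what “turning it into a search” can look like, here’s a deliberately dumb sketch in Python.  Real assistants are far more sophisticated, and this stop-word list is made up, but the basic trick is the same:

```python
# A crude "personal assistant" trick: don't understand the sentence,
# just strip the filler words and hand the rest to a search engine.
# This stop-word list is invented and woefully incomplete.
STOP_WORDS = {"hey", "can", "you", "tell", "me", "what", "is", "the", "a"}

def to_search_query(utterance):
    words = [w.strip(",.?!") for w in utterance.lower().split()]
    return "+".join(w for w in words if w not in STOP_WORDS)

print(to_search_query("Hey, can you tell me what is the capital of Peru?"))
# -> capital+of+peru  (good enough to feed a search engine)
```

Drop the filler, glue the rest together, run the search: no understanding required.  Which is also why it falls over the moment you phrase things a little oddly.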

Language models are sort of a step up from Siri et al.  Many folks will still claim they’re not AI, but their ability to figure out what you meant from what you said, and to respond like an actual human, certainly makes them sound smart.  And they’re typically built like machine learning models: you take a big ol’ set of training data, feed it in, and let it learn how to talk.

Of course the best AI of all would be a combination of both ...

And now we arrive at ChatGPT.  A company called OpenAI created a combined machine learning and language model program which they referred to as a “generative pre-trained transformer,” or GPT.  They’ve made 3 of these so far, so the newest one is called “GPT-3.”  And then they glued a chatbot-style language model on top of that, and there you have ChatGPT.  GPT-3 is actually rather amazing at answering questions, if they’re specific enough.  What ChatGPT adds is primarily context: when you’re talking to GPT-3, if it gives you an answer that isn’t helpful or doesn’t really get at your meaning, you have to start over and type your whole question in again, tweaking it slightly in hopes of conveying your meaning better.  But, with ChatGPT, you can just say something like “no, I didn’t mean X; please try again using Y.”  And it’ll do that, because it keeps track of what the general topic is, and it knows which tangents you’ve drifted down, and it’s even pretty damn good at guessing what “it” means in a given sentence if you start slinging pronouns at it.
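Here’s a sketch of what that context-keeping amounts to under the hood.  Fair warning: send_to_model below is a made-up stand-in, not OpenAI’s actual API; the point is just the growing transcript:

```python
# How a chat wrapper keeps context: it resends the whole conversation
# with every new message.  send_to_model() is a hypothetical stand-in
# for whatever actually answers the prompt.

def send_to_model(history):
    # Imagine this hands the full transcript to the real model ...
    return "(a reply informed by everything in history)"

history = []

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = send_to_model(history)  # the model sees every prior turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Explain state machines.")
chat("No, I didn't mean in electronics; try again with software examples.")
# The second request works because the first exchange rides along with it.
```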

Now, many news outlets have picked up on the fact that Microsoft is trying to integrate ChatGPT (or something based off of it) into their search engine Bing, and people are speculating that this could be the first serious contender to Google.  I think that’s both right and wrong: I personally have started using ChatGPT to answer questions that Google really sucks at answering, so I know it’s better in many situations; but that doesn’t mean Microsoft has the brains to monetize it sufficiently to threaten Google’s near-monopoly.  If you want a really good breakdown of this aspect of ChatGPT, there’s a YouTube video that explains it in just over 8 minutes.

But, the thing is, whether or not Microsoft successfully integrates a ChatGPT-adjacent AI into Bing, this level of useful AI is likely going to change the Internet as we know it.  ChatGPT is smarter than Eliza, or Siri, or Alexa, or “Hey Google.”  It’s friendlier and more polite, too.  It can not only regurgitate facts but also offer opinions and advice, and it’s even got a little bit of creativity.  Don’t get me wrong: ChatGPT is not perfect by any means.  It will quite confidently tell you things that are completely wrong, and, when you point out its mistake, completely reverse direction and claim that it was wrong, it was always wrong, and it has no idea why it said that.  It will give you answers that aren’t wrong but are incomplete.  If asked, it will produce arguments that may sound convincing but are based on faulty premises, or are supported by faulty evidence.  It’s not something you can rely on for 100% accuracy.

But, here’s the thing: if you’ve spent any time searching the Internet, you already know you can’t rely on everything you read.  Half of the shit is made up, and the other half may not mean what you think it means.  Finding information is a process, and you have to throw out as much as you keep, and at the end of it all you hope you got close to the truth ... if we can even really believe in “truth” any more at all.  So, having an assistant to help you out on that journey is not really a bad thing.  I find ChatGPT to be helpful when writing code, for instance: not to write code for me, but to suggest ideas and algorithms that I can then refine on my own.  See, ChatGPT is not a very good programmer, but it is a very knowledgeable one, and it might know a technique (or a whole language) that I never learned.  I would never use ChatGPT code as is ... but I sure do use it as a jumping-off point quite a bit.

And that’s just me being a programmer.  I’m also a D&D nerd, and ChatGPT can help me come up with character concepts or lay out what I need to do to build one.  If I can’t figure out how to do something on my Android phone, I just ask ChatGPT, and it (probably) knows how to do it.  Networking problem? ChatGPT.  Need to understand the difference between filtering water and distilling it? ChatGPT.  Need help choosing a brand of USB hub? ChatGPT.  Want to know what 1/112th the diameter of Mercury is? ChatGPT (it’s 43.39 km, by the way, which is 26.97 miles).
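Since we’re on the subject of not trusting anything 100%, here’s a quick sanity check of that last one in Python, using the commonly cited diameter of about 4,879 km:

```python
# Sanity-checking the Mercury answer: 1/112th of its diameter.
# Mercury's diameter is commonly cited as roughly 4,879 km.
KM_PER_MILE = 1.609344

slice_km = 4879 / 112
print(f"{slice_km:.2f} km = {slice_km / KM_PER_MILE:.2f} miles")
# -> 43.56 km = 27.07 miles -- same ballpark as ChatGPT's 43.39 km,
#    though it evidently worked from a slightly smaller diameter
```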

But you needn’t take my word for it.  The Atlantic has already published an article called “The College Essay Is Dead” (because, you know, students in the future will just get an AI to write their essays for them).  A Stanford professor gave an interview about how it will “change the way we think and work.”  YouTuber Tom Scott (normally quite a sober fellow) posted a video entitled “I tried using AI. It scared me.”  The technical term for what these folks are describing is “inflection point.”  Before Gutenberg’s printing press, the concept of sitting down of an evening with a book was unheard of.  Before Eli Whitney built a musket out of interchangeable parts, the concept of mass production was ludicrous.  Before Clarence Birdseye figured out how to flash-freeze peas, supermarkets weren’t even possible.  And there is a whole series of points, from the invention of the telephone to the earliest implementation of ARPANET to the first smartphone, that fairly boggles the mind when you try to imagine life before them.  My youngest child will not be able to conceive of life without a phone in her pocket; my eldest can’t comprehend life before the Internet; and even I cannot really fancy a time when you couldn’t just pick up the phone and call a person, even if they might not be home at the time.  Will my children’s children not be able to envision life before chatty AIs?  Perhaps not.  I can’t say that all those friendly, helpful robots that we’re so familiar with from sci-fi books and shows are definitely in our future ... but I’m no longer willing to say they definitely won’t be, either.

The future will be ... interesting.

__________

* Note: This is not meant to be a fully accurate technical explanation, but rather a deliberate oversimplification for lay people.  Please bear that in mind before you submit corrections.