This week, I really enjoyed The Weekly Show, where Jon Stewart interviewed billionaire Mark Cuban. Cuban is famous for supplanting Warren Buffett as “the good billionaire”: a billionaire who seems to want to do some good in the world instead of just screwing over everyone else. When Stewart talks about how people only rail against billionaires on the “other” side ...
I noted this one for a couple of places where he seemed to be agreeing with some of my prior posts. For instance, when talking about AI, he said this:
Cuban: Then there’s using AI. And so there are things like NEPA, which go into the environmental protection stuff to try to find out if there’s a little frog or whatever before something’s built. In my opinion, in the conversations I had with some of the Harris folks, is that’s where AI really, really can apply. Because there’s a process designed for the people in NEPA who go through and determine what should be approved and what data is required and what friction should be added or what friction should be removed. Artificial intelligence is great for that. All the rules that the individuals on those councils and boards that make those determinations, they have rules that they follow. They have guidebooks that they follow.
Stewart: It’s onerous.
Cuban: Yeah, it’s onerous. There’s tons of bureaucracy, but tons of data there. You put that into artificial intelligence,
into a large language model, and you use that to train the large language model. And then when a new project comes along, you set up agents which then feed the questions and the answers, and the answers to the responses to that new organization, whatever it is they may be building.
Which is exactly correct: this is using AI the way it’s meant to be used. There’s a process, and it takes a long time for humans to complete that process, but a computer can do it faster. Up until now, whenever that process involved people weighing different abstract factors and trying to figure out the best approach, you just couldn’t use a computer to speed things up, because computers couldn’t do that. But AI can.
But Jon comes back with this:
Stewart: Does that abdicate our autonomy? ...

Cuban: ... The challenge is, who makes that decision? When it’s obvious, it’s easy. When it’s not so obvious, it’s far more difficult. And so that’s where the AI comes in and large language models. Because across the breadth, however many instances of evaluations that need to take place across the country, you don’t want individuals having to make those decisions.

Stewart: But I thought that’s the whole point. I thought the whole point of people running for office is that they’ve got a vision and they earn our trust, as opposed to AI. And this, again, may be more of the Luddite’s view of not understanding ... AI. I’m nervous about abdicating that. At least with people, there is a certain regime of accountability that we can bring through. I can’t vote out a large language model.

And Cuban was, surprisingly, not able to mount a cogent response to this. I, however, am:
- You can’t vote out the dozens, sometimes hundreds, of people in the EPA or whichever bureaucracy we’re talking about who are making the decisions about how to navigate all those regulations either. You can vote out the guy at the top, maybe, but they’re just the person who either approved or rejected the work of all those faceless bureaucrats. How is that different from the AI example? Asking the AI to help make a decision doesn’t automatically mean that there’s not someone at the end of the day who will either approve or reject the AI’s plan.
- Jon says it would be better to just get rid of all the red tape. Well, duh. Of course that would be better. Sadly, the “science fiction” plan of replacing the work of all those bureaucrats with AI is more feasible (and likely) than any plan to reduce the current bureaucracy of our governmental agencies.
- Jon also says that people can cut through the red tape too, like Pennsylvania governor Josh Shapiro did when fixing the collapse of Interstate 95 in Philadelphia. Cuban points out that humans can do things quickly when the answer is easy, but not so much when the answer is harder. This is vaguely correct, but it doesn’t explain things well enough. He was closer when he talked about the “little frog.” There are always going to be cases where the “right” decision means weighing environmental factors vs economic ones (to take a simple example), and for the most part we have a tendency to devolve into camps. There are people who are always going to take the side of the environment, regardless of the cost, and there are people who are always going to take the side of the business, regardless of the impact on the planet. But an AI doesn’t have a predetermined agenda. It can weigh factors, given all the background context, and make a dispassionate decision on where to draw the balance.
- And, despite the fact that people are predisposed (by scifi thrillers, mostly) to believe that AIs are black boxes and we can never understand how they arrive at their decisions, the truth is that, at least for the LLMs that are currently what we mean when we say “AI,” we can actually review the reasoning they use to arrive at those decisions. LLMs use something called “chain of thought” reasoning (usually abbreviated in LLM literature as CoT), which basically means that LLMs “think out loud” so that humans can review the logic they lay out and make sure it’s sound.
- Which also knocks down Jon’s other objection (which is expanded upon in the show’s closing segments, where his producers talk about the very real cases of people losing their jobs to AI): that this process will eliminate people’s jobs. Sure, in business this is a very real problem: many businesses look at AI as a way to save money, and eliminating jobs is one way to do that. But that ain’t an AI problem. Why do so many companies lay off people right around Christmas? Because it makes their bottom line look better at end-of-year. Companies killing jobs to improve their bottom lines ain’t an AI problem: it’s a shitty company problem. But government is different. Most government workers have protection from being downsized in this fashion. Plus, the whole reason all this red tape takes forever is that the government is constantly understaffed. Having AI make decisions which are then reviewed by experts doesn’t in any way reduce how many people you need to get the thing accomplished: it only reduces the amount of time those people have to devote to the job.
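The CoT point above can be sketched in a few lines of code. This is a toy illustration, not a real LLM call: the reply format (numbered steps followed by an “Answer:” line) and the `split_cot_reply` helper are my own inventions, but they show the shape of the idea, that the model’s intermediate reasoning arrives as reviewable text alongside its recommendation.

```python
# Toy illustration of why chain-of-thought (CoT) output is reviewable:
# the model's reply contains its intermediate steps, not just a verdict.
# The reply format and helper below are hypothetical, for illustration.

def split_cot_reply(reply: str) -> tuple[list[str], str]:
    """Separate the numbered reasoning steps from the final answer."""
    steps, answer = [], ""
    for line in reply.strip().splitlines():
        line = line.strip()
        if line.lower().startswith("answer:"):
            answer = line.split(":", 1)[1].strip()
        elif line:
            steps.append(line)
    return steps, answer

# A made-up CoT reply for a NEPA-style permit review:
reply = """\
1. The permit is for a warehouse near a mapped wetland.
2. The agency guidebook requires a habitat survey near wetlands.
3. No habitat survey is on file for this application.
Answer: hold approval until a habitat survey is filed"""

steps, answer = split_cot_reply(reply)
# A human reviewer can now audit each numbered step before
# approving or rejecting the model's recommendation.
print(len(steps))  # 3
print(answer)      # hold approval until a habitat survey is filed
```

The point is the workflow: the expert at the end of the chain signs off on (or rejects) both the steps and the conclusion, which is exactly the accountability Stewart is worried about losing.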
Anyway, another place Cuban appeared to agree with me is on the topic of insurance, which I broached in a long tangent last week.
Cuban: But it gets worse. It gets worse. And so now these providers, the hospitals and doctors, they negotiate with the big insurance companies. And it’s fascinating. If you walk into a hospital to pay for an MRI, as an example, and you don’t mention your insurance, you just say, I want a cash price, they’ll probably say it’s $350 to $450, depending on where you live. That same hospital will negotiate with what they call the BUCAs, the big insurance companies. For that same thing, they’ll negotiate a price of $2,000.
Stewart: What?
Cuban: Yeah. So you would think that big insurance company negotiating with the hospital and that insurance company
covers millions of lives. They insure or deal with—
Stewart: Why wouldn’t they negotiate that if it’s a bulk thing to $100? Why would it be higher?
Cuban: Because the hospital needs the insurance company as a sales funnel to bring patients in so they can pay their bills. And the insurance company wants that price to be higher, particularly for things like the ACA, because the ACA requires for all the plans they cover that they spend up to 85%.
Again, I’m not sure Cuban is explaining it particularly well, but remember how I put it last week: “companies couldn’t charge that much for medical care if the insurance companies weren’t picking everyone’s pockets ... insurance is enabling the whole cycle.”
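Cuban’s ACA point becomes clearer with a little arithmetic. The sketch below assumes the requirement works as a floor (insurers must spend at least roughly 85% of premium dollars on actual care); the `insurer_margin` helper and the dollar figures are illustrative, taken from the MRI example in the interview.

```python
# Illustrative arithmetic for the ACA medical-loss-ratio (MLR) point:
# if insurers must spend at least ~85% of premiums on care, the
# absolute size of their ~15% cut grows as care prices grow.
# Numbers are from the interview's hypothetical MRI; the helper is mine.

def insurer_margin(care_cost: float, mlr_floor: float = 0.85) -> float:
    """Margin on the smallest premium that satisfies the MLR floor."""
    premium = care_cost / mlr_floor
    return premium - care_cost

print(round(insurer_margin(350), 2))    # 61.76  (at the cash price)
print(round(insurer_margin(2000), 2))   # 352.94 (at the negotiated price)
```

Under this model, a higher negotiated price means a bigger absolute margin for the insurer, which is why neither side of that negotiation is pushing prices down.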
Anyway, that’s my too-long review of the Mark Cuban interview. I’ll just ding Stewart one last time:
Well, you guys know this. I get annoyed at myself for being a little high-horsey. And you get a little of the sanctimony in there. So I try to relax sometimes on the certainty of my opinions.
Oh, you get a little sanctimonious, do you? You mean, like you did a few seconds ago? Work harder on the relaxation part, my man.
But, all that aside, still a great host, very incisive, very trenchant. Looking forward to more shows next year.