Sunday, December 15, 2024

Doom Report (Week -6)


This week, I really enjoyed The Weekly Show, where Jon Stewart interviewed billionaire Mark Cuban.  Cuban is famous for supplanting Warren Buffett as “the good billionaire”: a billionaire who seems to want to do some good in the world instead of just screwing over everyone else.  When Stewart talks about how people only rail against billionaires on the “other” side—basically, that whatever billionaires do is fine, but only if they’re “our” billionaires—the billionaires he’s talking about as being “ours” are Cuban, Buffett, and perhaps George Soros, who of course has long been the boogeyman billionaire of Fox “News,” where they constantly trot him out to cover for the much more sinister billionaires behind the curtain who are propping them up, like Murdoch and the Koch Brothers and Harlan Crow (most notable for being the “emotional support billionaire,” as the ladies of Strict Scrutiny put it, of at least one Supreme Court justice).  One gets the idea that Stewart is vaguely saying that all billionaires are probably bad, though he typically has a good enough time talking to Cuban that he doesn’t want to go quite that far explicitly.  And, if you’d like to hear a well-reasoned rant on why all billionaires are bad, Adam Conover has you covered; if you’d like to hear specifically why George Soros is the target of so many right-wing conspiracies, and which ones may actually have some foundation in reality, the Some More News team has got you covered there too.  But, in general, I think that people like Cuban—and maybe even only Cuban, since I’ve never heard Buffett or Soros talk as openly about their philosophies—are a fascinating mix of good billionaire and bad billionaire.  Many people (such as Ingrid Robeyns and, to a lesser extent, Bernie Sanders) have argued that you can’t be both a billionaire and a good person, and I think there’s a grain of truth to that.  Certainly there are times when I’ve listened to Cuban and thought, ah, there’s the coldness and moral apathy that earned him those billions.  But there are also times when he says things that are both articulate and progressive.  So I always have a fun time listening to his interviews.

I noted this one for a couple of places where he seemed to be agreeing with some of my prior posts.  For instance, when talking about AI, he said this:

Cuban: Then there’s using AI.  And so there are things like NEPA, which go into the environmental protection stuff to try to find out if there’s a little frog or whatever before something’s built.  In my opinion, in the conversations I had with some of the Harris folks, is that’s where AI really, really can apply.  Because there’s a process designed for the people in NEPA who go through and determine what should be approved and what data is required and what friction should be added or what friction should be removed.  Artificial intelligence is great for that. All the rules that the individuals on those councils and boards that make those determinations, they have rules that they follow. They have guidebooks that they follow.

Stewart: It’s onerous.

Cuban: Yeah, it’s onerous.  There’s tons of bureaucracy, but tons of data there.  You put that into artificial intelligence, into a large language model, and you use that to train the large language model.  And then when a new project comes along, you set up agents which then feed the questions and the answers, and the answers to the responses to that new organization, whatever it is they may be building.

Which is exactly correct: this is using AI the way it’s meant to be used.  There’s a process, and it takes humans a long time to complete that process, but a computer can do it faster.  Up until now, whenever that process involved people weighing different abstract factors and trying to figure out the best approach, you just couldn’t use a computer to speed things up, because computers couldn’t do that.  But AIs—more specifically, LLMs—can.  (You can read more of my thoughts on the current crop of AIs in my post asking whether AI is intelligent, and in several other posts: just click “technology” in the “Things about Things” box over there to the left.)
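To make that concrete, here’s roughly what Cuban’s pipeline might look like.  This is a minimal sketch under a pile of assumptions: call_llm is a hypothetical stand-in for whatever model API you’d actually use, and the rules and the project are invented for the example.

```python
# A rough sketch of the workflow Cuban describes: feed the agency's written
# rules plus a new project to an LLM and ask for a first-pass review.
# NOTE: call_llm is a hypothetical stand-in, not a real library function.

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, Anthropic, a local model...).
    # Returns a canned reply so the sketch runs as-is.
    return "(model output would appear here)"

def review_project(guidebook_rules: list[str], project_description: str) -> str:
    """Ask the model to apply the agency's own rulebook to a new project."""
    prompt = (
        "You are assisting an environmental-review board.  Apply each rule "
        "below to the proposed project: say what data is required, what can "
        "be approved as-is, and what needs human follow-up.\n\n"
        "Rules:\n"
        + "\n".join(f"- {rule}" for rule in guidebook_rules)
        + f"\n\nProposed project:\n{project_description}"
    )
    return call_llm(prompt)

# Both the rules and the project are invented for the example.
draft_review = review_project(
    ["A habitat survey is required within one mile of protected wetlands.",
     "Projects over five acres must file a stormwater runoff plan."],
    "A twelve-acre warehouse site bordering a wetland preserve.",
)
# draft_review is a first pass for a human reviewer, not a final ruling.
```

The key point is that the guidebooks already exist; the model is just the thing that can read all of them at once.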

But Jon comes back with this:

Stewart: Does that abdicate our autonomy?  ...

Cuban: ...  The challenge is, who makes that decision?  When it’s obvious, it’s easy.  When it’s not so obvious, it’s far more difficult.  And so that’s where the AI comes in and large language models.  Because across the breadth, however many instances of evaluations that need to take place across the country, you don’t want individuals having to make those decisions.

Stewart: But I thought that’s the whole point.  I thought the whole point of people running for office is that they’ve got a vision and they earn our trust, as opposed to AI. And this, again, may be more of the Luddite’s view of not understanding ... AI. I’m nervous about abdicating that.  At least with people, there is a certain regime of accountability that we can bring through.  I can’t vote out a large language model.

And Cuban was, surprisingly, not able to mount a cogent response to this.  I, however, am.
  • You can’t vote out the dozens—sometimes hundreds—of people in the EPA or whichever bureaucracy we’re talking about who are making the decisions about how to navigate all those regulations either.  You can vote out the guy at the top, maybe, but they’re just the person who approves or rejects the work of all those faceless bureaucrats.  How is that different from the AI example?  Asking the AI to help make a decision doesn’t automatically mean that there’s not someone at the end of the day who will either approve or reject the AI’s plan.
  • Jon says it would be better to just get rid of all the red tape.  Well, duh.  Of course that would be better.  Sadly, the “science fiction” plan of replacing the work of all those bureaucrats with AI is more feasible (and likely) than any plan to reduce the current bureaucracy of our governmental agencies.
  • Jon also says that people can cut through the red tape too, like Pennsylvania governor Josh Shapiro did when fixing the collapse of Interstate 95 in Philadelphia.  Cuban points out that humans can do things quickly when the answer is easy, but not so much when the answer is harder.  This is vaguely correct, but it doesn’t explain things well enough.  He was closer when he talked about the “little frog.”  There are always going to be cases where the “right” decision means weighing environmental factors vs. economic ones (to take a simple example), and for the most part we have a tendency to devolve into camps.  There are people who are always going to take the side of the environment, regardless of the cost, and there are people who are always going to take the side of the business, regardless of the impact on the planet.  But an AI doesn’t have a predetermined agenda.  It can weigh factors, given all the background context, and make a dispassionate decision on where to draw the balance.
  • And, despite the fact that people are predisposed (by scifi thrillers, mostly) to believe that AIs are black boxes and we can never understand how they arrive at their decisions, the truth is that, at least for the LLMs that are currently what we mean when we say “AI,” we can actually follow the reasoning they lay out on the way to those decisions (even if the math going on under the hood stays opaque).  LLMs use something called “chain of thought” reasoning (usually abbreviated in LLM literature as CoT), which basically means that LLMs “think out loud” so that humans can review all their logic and make sure it’s sound.  (There’s a minimal sketch of what that review loop looks like just after this list.)
  • Which also knocks down Jon’s other objection (which is expanded upon in the show’s closing segments, where his producers talk about the very real cases of people losing their jobs to AI): that this process will eliminate people’s jobs.  Sure, in business this is a very real problem: many businesses look at AI as a way to save money, and eliminating jobs is one way to do that.  But that ain’t an AI problem.  Why do so many companies lay off people right around Christmas?  Because it makes their bottom line look better at end-of-year.  Companies killing jobs to improve their bottom lines ain’t an AI problem: it’s a shitty company problem.  But government is different.  Most government workers have protection from being downsized in this fashion.  Plus, the whole reason all this red tape takes forever is that the government is constantly understaffed.  Having AI make decisions which are then reviewed by experts doesn’t in any way reduce how many people you need to get the thing accomplished: it only reduces the amount of time those people have to devote to the job.
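Since a couple of those bullets are easier to show than to describe, here’s the sketch I promised: the model numbers its reasoning steps so a reviewer can audit them, and a person, not the model, makes the call that actually counts.  As before, this is illustrative only: call_llm is a hypothetical stand-in, and the canned reply and the RECOMMEND: convention are inventions for the example.

```python
# Chain-of-thought plus human sign-off: the model numbers its reasoning
# steps so a reviewer can audit them, and a human makes the final call.
# call_llm and the RECOMMEND: convention are inventions for this sketch.

def call_llm(prompt: str) -> str:
    # Canned reply so the sketch runs; swap in a real model API here.
    return ("1. The site is over five acres.\n"
            "2. A stormwater runoff plan is therefore required.\n"
            "RECOMMEND: request a stormwater plan before approval")

def reviewed_decision(question: str) -> str:
    prompt = (
        "Reason step by step, numbering each step.  Put your final "
        "recommendation on its own last line, prefixed with RECOMMEND:.\n\n"
        f"Question: {question}"
    )
    answer = call_llm(prompt)
    reasoning, recommendation = answer.rsplit("\n", 1)

    # Surface the chain of thought for audit...
    print("Model's reasoning:")
    print(reasoning)
    # ...and leave the approve/reject decision to a person.
    if input(f'Approve "{recommendation}"? [y/N] ').strip().lower() == "y":
        return recommendation
    return "escalated to full human review"

print(reviewed_decision("Can the twelve-acre warehouse site be approved?"))
```

Note that nothing here eliminates the reviewer; it just hands them a draft and an audit trail instead of a blank page, which is the time savings I was talking about in that last bullet.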

Anyway, another place Cuban appeared to agree with me is on the topic of insurance, which I broached in a long tangent last week.

Cuban: But it gets worse.  It gets worse.  And so now these providers, the hospitals and doctors, they negotiate with the big insurance companies.  And it’s fascinating.  If you walk into a hospital to pay for an MRI, as an example, and you don’t mention your insurance, you just say, I want a cash price, they’ll probably say it’s $350 to $450, depending on where you live.  That same hospital will negotiate with what they call the BUCAs, the big insurance companies.  For that same thing, they’ll negotiate a price of $2,000.

Stewart: What?

Cuban: Yeah.  So you would think that big insurance company negotiating with the hospital and that insurance company covers millions of lives.  They insure or deal with—

Stewart: Why wouldn’t they negotiate that if it’s a bulk thing to $100?  Why would it be higher?

Cuban: Because the hospital needs the insurance company as a sales funnel to bring patients in so they can pay their bills. And the insurance company wants that price to be higher, particularly for things like the ACA, because the ACA requires for all the plans they cover that they spend up to 85%.

Again, I’m not sure Cuban is explaining it particularly well, but remember how I put it last week: “companies couldn’t charge that much for medical care if the insurance companies weren’t picking everyone’s pockets ...  insurance is enabling the whole cycle.”
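For what it’s worth, I think what Cuban is gesturing at is the ACA’s medical-loss-ratio rule: large-group insurers have to spend at least 85% of premium dollars on actual care, which caps overhead and profit at 15% of revenue.  The perverse consequence is that the only way to grow that 15% slice in absolute dollars is for the underlying prices to go up.  A quick back-of-the-envelope version (the MRI prices are Cuban’s; the rest is just arithmetic):

```python
# Why a percentage cap on overhead rewards higher prices.  Under the ACA's
# medical-loss-ratio rule, a large-group insurer must spend at least 85% of
# premium revenue on care, so overhead + profit is capped at 15%.

def max_take(claims_cost: float, mlr: float = 0.85) -> float:
    """Most the insurer can keep if claims cost this much and premiums
    are set so claims land exactly at the MLR floor."""
    premiums = claims_cost / mlr       # revenue needed to hit the 85% floor
    return premiums - claims_cost      # the 15% slice, in dollars

for mri_price in (400, 2_000):         # cash price vs. negotiated price
    print(f"${mri_price:>5} MRI -> insurer keeps up to ${max_take(mri_price):,.2f}")

# $  400 MRI -> insurer keeps up to $70.59
# $ 2000 MRI -> insurer keeps up to $352.94
```

Same MRI, five times the price, five times the take.  Which is exactly the cycle I was complaining about.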

Anyway, that’s my too-long review of the Mark Cuban interview.  I’ll just ding Stewart one last time—and I really do love Jon Stewart, don’t get me wrong, but he’s not always right, and I’m not afraid to call him out on it—on some lack of self-awareness.  In the wrap-up with his producers, he reiterates his skepticism about using AI that I talked about above: he refers to it as “dystopian” and then extrapolates to “hey, just so we’re clear here, you’re saying that the computer controls the entire hospital and decides what oxygen to turn on and turn off through analytics?”  Then, less than a minute later, he answers a listener question about what he thinks his mistakes were for the year.

Well, you guys know this.  I get annoyed at myself for being a little high-horsey.  And you get a little of the sanctimony in there.  So I try to relax sometimes on the certainty of my opinions.

Oh, you get a little sanctimonious, do you?  You mean, like you did a few seconds ago?  Work harder on the relaxation part, my man.

But, all that aside, still a great host, very incisive, very trenchant.  Looking forward to more shows next year.
