
Sunday, January 14, 2024

GPT FTW

This week I’ve been fighting my computer curse again.  Still, even though the computer gods really do hate me, I’ve managed to accomplish a few things: I’ve gotten the version history from my Syncthing setup replicating to my Backblaze B2 account, I’ve updated the OS and a bunch of packages on my Synology NAS, I’ve fixed a long-standing annoyance with my use of NoMachine, and I’ve started building my first custom GPT.  And all of that was made much easier with the use of ChatGPT.

Perhaps this deserves a longer post—and perhaps that’ll be what I put up next week—but I’m still seeing a lot of AI skepticism out there.  Last night I saw an interview with a tech reporter who agreed that, yeah, AI might be useful for helping developers with their coding, but beyond that it wasn’t good for much.  And, hey: it’s true that trying to make it useful for searching the Internet is tough (though not impossible), and trying to make it work for handling things like customer service is just a horrifyingly bad idea.  But that doesn’t make it useless.  In point of fact, for things like helping you integrate different software packages, configure your hardware, or design a solution to an ongoing problem, things like ChatGPT are actually pretty useful.  And I think it’s only going to get more useful as time goes on.  Once they figure out how to integrate ChatGPT (or one of its competitors) into something like Alexa or “Hey Google” (as it’s called in our house), the utility of “smart devices” is going to go way up.  Because our smart devices are actually kinda stupid right now, so they could really use that AI boost.

Anyhow, I don’t think I want to turn this blog into an AI evangelism vehicle or anything, but ... damn, ChatGPT shore is useful.

That’s all I really wanted to say.









Sunday, January 7, 2024

Discordia, discordiae [f.]: A Misunderstanding


I don’t understand the appeal of Discord.

Oh, sure: I understand it for things like gaming.  The few times that I’ve run D&D games with remote participants, I happily used Discord, and found it to be excellent for that purpose.  Nowadays, there are fancier platforms for such purposes—Alchemy, Owlbear Rodeo, or even things like Roll20 and Fantasy Grounds, which have been around so long they’re starting to show their age—but honestly I might just stick to something like Discord for its simplicity.

The thing I don’t understand is that it seems to have become the flavor of the decade for hosting online communities.  Web forums are considered passé nowadays: downright old-fashioned, some would even say.  How many times have I heard lately “if you have a question, just pop into our Discord”?  People are actually using it for product support, and it just makes no sense to me.

Now, on the one hand, you might say: well, that makes perfect sense—Discord is primarily popular among Zoomers, while you are very old.  And, sure, I can’t argue the first part, and, while I might protest the second one a bit—I’m not a freakin’ Boomer (I am, in fact, an elder Gen-Xer, if one believes in those sorts of things*)—I’m not going to deny that it’s a fair observation.  But I have one foolproof argument that absolutely proves that this has nothing to do with my age: IRC.

Because, in exactly the same way that Reddit is just Usenet reborn, Discord is 100% just the second coming of IRC.  And IRC was invented in 1988, and by the time I was in the age range that Zoomers occupy now—the upper age range, granted, but still: within the range—it was the way that cool tech people communicated.  And I didn’t understand the appeal of it then either.

See, Discord (just like IRC before it) has several fundamental problems that make it really bad for online support in particular, and long-lived online communities in general.  And please don’t think I’m trying to bring back webforums here: I always thought they were pretty awful too, at least compared to the interface of something like Usenet.  But it’s pretty easy to look good when you’re put up against something as terrible as Discord.  And, as much as I’ve always hated webforums, I’ve had some experience with them: I’ve been the moderator of a popular Heroscape website for coming up on two decades now.  Of course, most of the younger fans (such as they are for a game that’s been discontinued for years now**) have moved to YouTube and, I suppose, Discord, but please don’t imagine that I’m upset about that.  Being a moderator of a forum whose traffic is declining means I have less work to do, so I’m all for everyone moving on to other venues.  But my point is, I have a bit of experience not only participating in, but even managing, a long-running online community.  So I’m not just talking out of my ass here.

So, what can a webforum do that Discord can’t?  Well, first off, the organization is just better.  A webforum has forums, which have threads.  The vast majority of them also have dedicated areas for file uploads, and often a separate one for images.  Many have blogs or something similar attached to them.  Threads can be moved to another forum when they’re posted in the wrong place by a clueless user, or split apart when they get too crowded, or merged when people are trying to have the same conversation in multiple places at once.  Discord has ... channels.  That’s pretty much it.  There are a couple of different types of channels, but (as near as I can tell, in any event) that has more to do with the method of communication than anything else (e.g. text channels, voice channels, video channels, etc).  Since channels are the only way to organize things, everything is sort of forced uncomfortably into that model.

A bigger problem, which Discord shares with IRC, is that it’s all real-time.  If I show up on a webforum, I can post a question, then sign off and check back in a few hours (or the next day) for an answer.  On Discord, I post a question, and if someone is there who can answer the question, I get the answer instantly, which is certainly nice.  But if there isn’t anyone there at that exact moment, I just don’t get an answer at all.  I guess some people do go back in time to read all the messages that came in since the last time they were online, but that’s not easy to do, and it might be way too many messages anyway, if the community is large, and even if the person sees the question and knows the answer, they’re probably not going to post it because the conversation has moved on since then so now their answer has no context, and even if the person makes it through all that and actually posts the answer, then I very well might not be online to receive it.  It is quite possibly the worst possible model for customer support that could be imagined in this reality or any other.

But the biggest problem with Discord is that it’s very difficult to search.  At least IRC had logging: most IRC chats were saved and posted to web pages, where you could do minimal, primitive, Ctrl-F-type searches.  A webforum, on the other hand, typically has sophisticated searching: I can find all threads in a certain group of forums that have posts from a given user that contain 2 or more words, not necessarily adjacent.  Not to mention I can use Google to search instead if that’s somehow advantageous.  Meanwhile, searching in Discord is a miserable affair, and can only be done on Discord.  I can set up my own Discord server, but I can’t log those messages to a separate location, because it’s not really my server: it’s just a virtual server controlled by Discord.  And the inability to locate old messages easily means that people just ask the same questions over and over, and people have to spew out the same answers over and over, which everyone no doubt gets sick of doing, and I can tell you from experience that everyone definitely gets sick of reading them.  Lack of easy and versatile search means that the community has no history ... no memory.  And a community with no memory is cursed to just do the same things over and over, not even expecting a different result: just expecting no result whatsoever.  Which is exactly what it gets.

So I don’t see the appeal of Discord, just as I didn’t see the appeal of IRC.  Personally, I was happy to see the latter fade in popularity, though of course there are still corners of the Internet where you can find IRC communities, presumably inhabited by gray-bearded programmers of COBOL and Ada reminiscing about the good ol’ days of JCL and PDP-11s.  But everything that fades comes around again.  AIM is gone, but now we have WhatsApp.  Usenet is (mostly) gone, but now we have Reddit.  And here’s Discord, with the exact same interface that didn’t work for IRC, trying to make it work again.  Honestly, Reddit has the best user interface, I think: subreddits are like forums, threads are threads, and the conversations are displayed hierarchically, so that a response to a given message goes with that message rather than just being tacked on at the end (as it would be in a webforum thread).  This is exactly how Usenet worked (and Slashdot, for that matter), and I still think it’s the superior way to display and store community conversations.  But Reddit has its own issues, which are eerily similar to Usenet’s: it has a reputation for being a cesspool, which certain parts of it deserve, and it often makes it easy for misinformation to thrive and multiply.  Perhaps that’s because the moderation tools for webforums are better ...

Or perhaps it’s because each webforum was run by its own community.  They owned the servers and they set the rules.  Usenet and IRC were like that too: very decentralized, with each community having near complete autonomy.  But Reddit is a company, as is Discord; in fact, it’s very rare these days for a community of any type to set up its own servers and run its own software.  You set up virtual servers at Amazon or Microsoft and web sites at Squarespace or WordPress; you put your photos on Instagram and your blogs on Tumblr.  Well, assuming you even bother with blogs at all: these days, it’s more common to just tweet, which of course means you’re using Elon Musk’s personal dumpster fire.  Each one is its own company, with its own goals, and none of those goals are to help your online community thrive, unless of course your thriving can line their pockets in the process.  And obviously the un-decentralization of the Internet is a much broader topic than this meager blog post can address, but I do think Discord is symptomatic of that issue.

So I continue not to “get” Discord, even though I occasionally use it, because often there just isn’t another option.  But it’s always an option of last resort.  Unless, as I noted initially, I’m gaming online.  It’s still pretty good at what it was originally intended for.  I just feel like, somewhere along the way, they got a bit lost trying to be everything to all people.  That hardly ever works.



__________

* And one mostly shouldn’t.  Personally, while I think it is bullshit to imagine you know what any given person is going to do or say based on an arbitrary “generation” label assigned by the Pew Research Center, I do think it’s okay to use the labels as a convenient shorthand for talking about demographic differences between age groups, which are absolutely a thing that exists.

** But is now officially making a comeback, for what it’s worth.











Sunday, July 23, 2023

CollabGPT

This week I’ve been concentrating on setting up my file synchronization and versioning system.  For this, I’ve mainly been consulting with ChatGPT.  I originally wanted to hire an actual person to help me design and set this up, but I couldn’t find anyone who was both willing to work on my home system—what with me not being a business—and who seemed trustworthy (no shortage of shady characters, of course, but working on your network necessarily involves giving out your password, which is probably a Bad Thing to do in the case of randos on the Internet).  So I eventually decided to just ask ChatGPT and do whatever it said.

Well, perhaps not whatever it said, but, if you’re willing to put in a bit of effort to chastise it when it says something stupid and challenge it when it says something unlikely, you can actually get quite a lot out of it.  And it’s useful both in the design phase and in the implementation phase.  Just about the only downside is that you have to start every chat fresh from ground zero (though there’s a new experimental feature which helps with that, a little).  And you can’t get around that by just staying in the same chat forever, because ChatGPT has a limited number of “tokens” (roughly equivalent to words) that it can remember before it starts forgetting the earliest parts of the conversation.
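
How ChatGPT actually manages its context is OpenAI’s business, but the mechanic I’m describing is easy enough to sketch in a few lines of Python.  Everything here is a stand-in I made up (the budget number, the word-count “tokenizer”); the point is just that once the transcript outgrows the budget, the oldest messages have to go:

    # Why long chats "forget": a rolling transcript with a fixed token budget.
    # The budget number and the word-count "tokenizer" are rough stand-ins.
    TOKEN_BUDGET = 4096

    def rough_token_count(message: str) -> int:
        return len(message.split())  # crude: treat each word as one token

    def fit_to_budget(transcript: list[str]) -> list[str]:
        """Drop the oldest messages until the whole transcript fits the budget."""
        transcript = list(transcript)
        while sum(rough_token_count(m) for m in transcript) > TOKEN_BUDGET:
            transcript.pop(0)  # the earliest part of the conversation is lost first
        return transcript

    # A long-running chat eventually pushes its own beginning out of view.
    chat = [f"message {i}: " + "blah " * 200 for i in range(40)]
    print(len(chat), "messages total,", len(fit_to_budget(chat)), "still remembered")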

Still, you can get quite a lot accomplished even so.  Thanks to ChatGPT, I now have a system whereby I use Syncthing to handle synchronization across computers, and also provide versioning so that I can go back to the previous version of any file.  Now I’m working on getting that backed up to the cloud.
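
I haven’t settled on how the cloud piece will work yet, but one plausible sketch (purely hypothetical: it assumes rclone is installed, a Backblaze B2 remote named “b2” is already configured, and the shares live under a made-up path) would be a little script that copies each share’s .stversions folder, which is where Syncthing keeps archived versions by default, up to a bucket:

    # Hypothetical sketch: push Syncthing's archived versions to Backblaze B2 with rclone.
    # Assumes rclone is installed and "b2:" is a configured remote; the paths are made up.
    import subprocess
    from pathlib import Path

    SHARES_ROOT = Path("/srv/shares")           # made-up location of the synced shares
    BUCKET = "b2:my-backup-bucket/stversions"   # made-up bucket path

    def backup_versions() -> None:
        for share in SHARES_ROOT.iterdir():
            versions = share / ".stversions"    # Syncthing's default versions folder
            if versions.is_dir():
                # "copy" (rather than "sync") never deletes anything on the B2 side,
                # which is what you want for a keep-everything archive.
                subprocess.run(
                    ["rclone", "copy", str(versions), f"{BUCKET}/{share.name}"],
                    check=True,
                )

    if __name__ == "__main__":
        backup_versions()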

One of the fun things about ChatGPT’s limited memory is that sometimes you have to explain to it what it already figured out.  Today I started a new chat on this topic, as the old one was getting pretty full.  I told it:

I’ve designed a file synchronization and versioning system that works like this:
* There are seven shares, each of which is a filesystem with gigabytes of files.
* Haven and Avalir each have copies of six of the seven shares (there is one share that is not needed on each of those machines).
* Files are bidirectionally synchronized between Haven and Avalir using Syncthing.  No file versioning is used for these two servers.
* Zadash has a copy of all seven shares, with full file versioning (by which I mean keeping a copy of all versions forever).
* Editing of the files is done on Haven or Avalir only.  Since Syncthing only stores versions for remote edits, not local edits, if I edited files on Zadash, it wouldn’t keep the previous version.  I do mount the shares from Haven to Zadash JIC I need to edit on Zadash, but the Syncthing copy is for archival purposes only.

To which ChatGPT responded:

You’ve outlined quite an advanced file synchronization system, ...

And I’m like, bitch, this was all your idea!

Anyways, that’s how my week went.  Next week, a longer topic, hopefully.









Sunday, July 16, 2023

Of Waterfalls, Pigs, and Red Red Tape

Once upon a time we used to develop software via something known as the “waterfall model.” When trying to envision this, don’t think about something like Angel Falls, where the water just falls off a cliff.  Picture instead something along the lines of Detian Falls in Vietnam: a series of steps as the water drops, level by level, to its final destination.  See, back in those olden times (which, honestly, were mostly before I got into the industry, though there was some residual inertia even in the mid-80s, when I came along), back then, as I say, the specifications (or “specs”) were developed by one team, who then passed them along to the design team, who designed the whole system in a very abstract way, and then passed it along to the programming team, who did all the hard coding work, and then passed it along to the QA team, who verified that the functionality matched the original specs, and then they passed it on to the customer, and then it was done.  The primary analogy to a waterfall for this development model is that the water only flows one way: for the real-world waterfall, that’s due to gravity, and for the waterfall model, it’s because going backwards—“upstream,” if you will—is expensive.  You really want to get each phase just perfect, because if you find a mistake, you essentially have to start over ... and that costs the company money.  Sometimes, with this model, starting over was so expensive that they just didn’t.  The greatest stories of software development debacles were due to the sunk cost fallacy: too expensive to try to get back up to the top of the waterfall, so we just gotta make do with whatever horror show we’ve ended up with.

So throughout the 80s and 90s software developers started saying there had to be a better way.  In 1986 the “spiral model” was proposed: it built right into the process the idea that, instead of planning out the whole system at the beginning, you’d spec out just an initial prototype, then design that, code it, test it, then go back to the spec stage and tack on more features.  Starting over was no longer a bug, but a feature.  Instead of losing a bunch of money because we had to start everything from scratch, we were only starting the next thing from scratch ... and, if we needed to tweak some stuff from the first iteration, well, we already had the mechanisms in place for specifying, designing, coding, and testing.  Those phases were in our past, true: but they were also in our future.

Of course, the spiral model is a very abstract concept.  How do you actually implement such a thing?  That is, what are the actual processes that you put into place to make sure the company and its employees follow the model and achieve the goals of iterative design?  For that, we needed to move beyond models and into methodologies.  Enter Agile.

Agile software development practices, usually just referred to as “Agile,” were a way to concretize the spiral model abstraction.  Sometimes they would propose tweaks to the model, sure, but the main thing was, no going back to waterfall.  And, to distance themselves from those crusty old waterfall methodologies—many of which were by this point formalized as standards, such as the DOD’s 2167A—they all had cool new names: RAD (“Rapid Application Development”) and Scrum and Crystal Clear and Extreme Programming (if you didn’t just hear a Bill and Ted’s-style guitar lick, you’re doing it wrong).  This last one, usually abbreviated to “XP” (no relation to the Windows version), was not the first agile methodology to come along ... but it was the first one I was ever exposed to, and they say you never forget your first.

Kent Beck, author of “Extreme Programming Explained,” presented to me a perspective that literally changed my (software development) life.  He pointed out that, in order for the waterfall model to work, you have to be able to predict the future.  The whole thing is predicated on predicting what problems will happen, anticipating them, and building them into the plan.  If you fail to predict something, then everything falls apart.  Except ... humans really suck at predicting the future.  When we say “predict,” what we really mean is “guess.” And we usually guess wrong.  As Kent so succinctly put it:

The problem isn’t change, per se, because change is going to happen; the problem, rather, is the inability to cope with change when it comes.

Stop trying to keep change from happening: it’s a fool’s errand.  Rather, create a better methodology which says “yeah, things change: so what? we got that covered.”

Agile is all about being flexible.  Hell, the reason it’s called “agile” is because the old waterfall methodologies were ponderous and slow to course-correct.  It’s common for business people to talk about agility in terms of responding to changes in the market: the creators of the Agile Manifesto (one of whom was Beck himself) wanted to capitalize on that perception.  Our development practices can make your company more agile, and that makes you quicker to respond, and that helps you beat your competitors.

And yet ... it’s kind of strange that we need all these procedures and guidelines and principles and a whole friggin’ manifesto to perform something for which the entire purpose is to be flexible.  The thing I never liked about XP, despite all its merits (and the aforementioned life-changing-ness), was that it had all these notes about how, if you’re not following every single rule, then you’re not “doing” XP.  You’re just playing at it.  I always found that inherent dichotomy cognitively dissonant: so I have to do things exactly according to these rules so that I can break the rules? I have to rigidly fit into the straitjacket so that I can have the flexibility to move freely? I have to precisely walk the straight line so that I have the freedom to jump in any direction?  Surely you see the contradiction.

And XP is certainly not alone in this strange philosophy.  I’m not sure we can claim any of the Agile methodologies to have “won,” but in my experience Scrum has made the most extensive inroads into corporate culture.  And it is chock full of prescriptive little strictures: mandatory stand-up meetings with strict time limits and precisely defined cycles called “sprints” and detailed reporting pathways between developers and business owners.  Maybe all this red tape is why business people have embraced it more than the other Agile practices.  But it presents a weird, oxymoronic message to the developers: we want you to be free, we want you to have flexibility, but you have to do all these things, just so.  And sometimes the business owners can get very upset if you question this.  Because they’ve been trained, you see?  They’ve taken courses in “how to do Agile” and “how to run Scrum” and all that, and (of course) all those courses stressed that you have to do everything perfectly or else it will all fall apart, so as soon as the developer suggests that maybe we should change this one thing because it’s actually making our lives harder ... well, it won’t be pretty, let me tell you.

One of the things I always liked about Scrum was that they made clear the difference between involvement and commitment.  The traditional explanation for this is via the fable of the pig and the chicken.  Now, these days Agile folks will tell you not to use that story to explain things any more.  The first reason they cite is that people will take offense: calling someone a pig implies they’re greedy, or dirty; calling them a chicken implies that they’re cowardly.  These are, of course, human metaphors that we’ve placed on those particular animals, and also they have nothing to do with the actual story.  But people won’t hear the message, they point out, if they’re hung up on the words used to deliver it.  I would probably say that people will look to any excuse to get offended, especially if it gets them out of following rules, but I’m a bit more of a cynic.

The story points out that, in the context of preparing a breakfast of eggs and bacon, the chicken is involved, but the pig is committed.  This is a very simple concept to grasp, and the analogy illustrates it perfectly, but, yes, yes: let us not offend anyone.  I would be fine if this first reason were the only reason that modern Agile advocates had dropped the pig and chicken story: that would just mean that they had replaced it with a different analogy that perhaps involved more noble animals, or fruits, or something.  But, no: they’ve started to question the whole concept.  See, the original point of pigs and chickens was to point out to the business people that it wasn’t particularly fair (or, you know, sensible) for them to set deadlines for how long something would take.  They weren’t the ones doing it.  The developers have to actually accomplish the thing, and they know how long it should take (even if they’re bad at estimating that for other reasons, which are happily addressed by other Agile practices).  The business owners are involved, but the developers are committed.  This not only stresses to the business folks that they don’t get to say how long something takes, but it also stresses to the developers that, once they say how long it will take, they’ve made a commitment to getting it done in that timeframe.  These are all good things.

But not so, says the Updated Scrum Guide.  Those poor business people shouldn’t be made to feel like they can’t dictate timelines.  “In some cases these people were key project sponsors or critical subject matter experts. These are individuals who, while possibly needing some education and guidance from a Scrum Master, can be critical to the success of a project.” If you’re not familiar with how to translate business bullshit back into English, this means “we want the business people to feel important, and they don’t like it when we try to put restrictions on them, and if I say it this way it’ll make you think that you developers are actually gaining something, rather than that we’re just caving in and letting the business people run roughshod over all we’ve built.” The thing I always liked about the Agile practices was that they were pretty balanced in terms of business vs development.  They said to the developers “we want you to feel respected and like your creativity is valued, and you should be in control of what you produce and the quality of your work.” But they also said to the business side “we want you to feel respected and like your market acumen is valued, and you should be in control of what gets produced and how viable it is as a product.” See? everybody is respected equally.  When you start breaking down that balance, bad things happen.

And maybe business people feel empowered to just set some processes up because it sounds good, or because it looks official, or maybe, like most other humans in workplaces, they just like telling other people what to do.  And maybe those processes actually slow things down instead of making them more efficient.  And maybe the developers feel like they can’t speak up any more—it’s no longer the case that they’re the ones who are committed because everyone’s committed now—or maybe they do speak up but they’re ignored because setting up workflow processes ... that’s above your paygrade, don’t you know.  And gradually, slowly, everything goes back to the bad old ways when we spent way more time talking about getting things done than actually doing things.

Doesn’t feel very agile, does it?









Sunday, July 2, 2023

Springs Eternal

Several months ago, my work machine started flaking out, so I got the folks at work to order me a new one.  It came, and I was able to use it to work around my immediate problems, but I never got the chance to completely switch over to using it—too much work shit going on.  Well, a few weeks ago, my laptop started flaking out.  And, as of Friday, my new laptop has arrived, and now I have two machines to set up and get configured and transfer hundreds of thousands of files to.  Lucky me.

So, even if I had planned to do a full post this week (which I had not), I simply wouldn’t have the time: I’ve got a laptop to configure, a desktop to configure, gigabytes to copy, filesystems to share, my wife to murder, and Guilder to frame for it—I’m swamped.  Still, there’s some hope that, after this process (as difficult as it may be), things will be better.  Honestly, I’d be happy if they were just not worse than they were before all this madness started, but I suppose I can’t help but hope for better.  Call me foolish if you must: I won’t deny it.  We’ll see how it goes.









Sunday, June 11, 2023

Do Androids Dream of IQ Tests?

Recently, I was listening to a podcast—it happened to be Election Profit Makers, with the lovely and talented David Rees.1  In this particular episode,2 David offers this “hot take”:

I also think AI is kinda bullshit.  I’ve been thinking about it; I think there’s some stuff that AI can do, but on the other hand it really is not ... we shouldn’t call it AI.  Someone was making this point, that calling it “artificial intelligence” is kind of propaganda.  It’s not really intelligent yet.  It’s just like a word prediction algorithm, you know?  You give it a topic—it doesn’t know what it’s saying.  It’s ... it’s like an algorithm that predicts what the—given any word or paragraph, it predicts what the next most likely word is, I think.  I don’t think it really thinks ... I don’t think it’s artificial intelligence.

Of course, I put “hot take” in quotes because it’s not particularly hot: as David himself notes, other people have been making this observation for a while now, especially in relation to ChatGPT.  I gave my own opinions of ChatGPT several months ago, and it’s only become more pervasive, and more useful, since then.  Now, David’s assessment is not wrong ... but it’s also not complete, either.  David’s not a tech guy.  But I am.  So I want to share my opinion with you on this topic, but, be forewarned: I’m going to ask a lot of questions and not necessarily provide a lot of answers.  This is one of those topics where there aren’t any clear answers, and asking the questions is really the point of the exercise.

So, first let’s get the one minor detail that David is wrong about out of the way.  What David is referring to here are the LLMs, like ChatGPT.  To be pedantic about it, LLMs are just one form of AI: they just happen to be the one that’s hot right now, because it’s the one that’s shown the most promise.  If you’ve had the opportunity to interact with ChatGPT or any of its imitators, you know what I mean.  If not ... well, just take my word for it.  LLMs are extremely useful and extremely promising, and the closest we’ve come so far to being able to talk to a machine like a person.3  But they are not the totality of AI, and I’m sure there will be AI in the future that is not based on this technology, just as there was in the past.

But, forgiving that understandable conflation, what about this notion that an LLM is just a “predictive algorithm,” and it doesn’t actually think, and therefore it’s a misnomer to refer to it as “intelligence”?  David goes on to cite (badly) the “Chinese room” thought experiment; if you’re unfamiliar, I encourage you to read the full Wikipedia article (or at least the first two sections), but the synopsis is, if a computer program could take in questions in Chinese and produce answers in Chinese, and do so sufficiently well to fool a native Chinese speaker, then a person who neither speaks, reads, nor understands Chinese could be operating that program, and taking in the questions, and passing back the answers.  Obviously you would not say that the person could speak Chinese, and so therefore you can’t really say that the program speaks Chinese either.  Analogously, a program which simulates intelligent thought isn’t actually intelligent ... right?

This immediately reminds me of another podcast that I listen to, Let’s Learn Everything.  On their episode “Beaver Reintroductions, Solving Mazes, and ASMR,”4 Tom Lum asks the question “How does a slime mold solve a maze?” A slime mold is, after all, one of the lowest forms of life.  It doesn’t even have any neurons, much less a brain.  How could it possibly solve a maze?  Well, it does so by extending its body down all possible pathways until it locates the food.  Once it’s done that, it retracts all its pseudopods back into itself, leaving only the shortest path.

Now, the conclusion that Tom (as well as his cohosts Ella and Caroline) arrived at was that this isn’t really “solving” the maze.  Tom also had some great points on whether using maze-solving as a measure of intelligence makes any sense at all (you should really check out the episode), but let’s set that aside for now.  Presuming that being able to solve a maze does indicate something about the level of intelligence of a creature, isn’t it sort of sour grapes to claim that the slime mold did it the “wrong” way?  We used our big brains to figure out the maze, but when a creature who doesn’t have our advantages figures out a way to complete the task anyway, we suddenly claim it doesn’t count?

Let’s go a step further.  If I give the maze to a person to solve, and they laboriously try every possible pathway until they find the shortest one, then are they really doing anything differently than the slime mold?  And does that mean that the person is not intelligent, because they didn’t solve the maze the way we thought they should?  I mean, just keeping track of all the possible pathways, and what you’ve tried already ... that requires a certain amount of intelligence, no?  Of course we lack the advantages of the slime mold—being able to stretch our bodies in such a way as to try all the pathways at once—but we figured out a way to use our brains to solve the problem anyhow.  I wonder if the slime mold would snort derisively and say “that doesn’t count!”

Now let’s circle back to the LLMs.  It is 100% true that all they’re doing is just predicting what the next word should be, and the next word after that, and so on.  No one is denying that.  But now we’re suddenly faced with deciding whether or not that counts as “intelligence.” Things that we’ve traditionally used to measure a person’s intelligence, such as SAT scores, are no problem for LLMs, which are now passing LSATs and bar exams in the top 10%.  But that doesn’t “count,” right?  Because it’s not really thinking.  I dunno; kinda feels like we’re moving the goalposts a bit here.
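
If you want to see just how unimpressive “predicting the next word” sounds when you strip away the scale, here’s a toy sketch in Python, trained on a few sentences I made up on the spot: count which word tends to follow which, then generate by repeatedly picking a plausible successor.  A real LLM is this idea plus an enormous pile of parameters and a great deal of cleverness, but the core loop is the same shape:

    # Toy next-word predictor: a bigram model, the cartoon version of what an LLM does.
    import random
    from collections import defaultdict

    corpus = (
        "the cat sat on the mat "
        "the dog sat on the rug "
        "the cat chased the dog"
    ).split()

    # Count which words follow which.
    successors = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        words = [start]
        for _ in range(length):
            options = successors.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))  # "predict" the next word
        return " ".join(words)

    print(generate("the"))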

Part of the issue, of course, is that we really don’t have the slightest idea how our brains work.  Oh, sure, we can mumble on about electrical impulses and say that this part of the brain is responsible for this aspect of cognition based on what lights up during a brain scan, but, at the end of the day, we can’t really explain what’s going on in there when you can’t remember something today that you had no trouble with yesterday, or when you have a crazy idea out of nowhere, or when you just know that your friend is lying to you even though you can’t explain how you know.  Imagine some day in the far future where scientists discover, finally, that the way most of our thinking works is that words are converted to symbols in our brains, and we primarily talk by deciding what the next logical symbol should be, given the current context of who we’re talking to and what we’re talking about.  If that were to ever happen, seems like we’d owe these LLMs a bit of an apology.  Or would we instead decide that that aspect of how we think isn’t “really” thinking, and that there must be something deeper?

Look, I’m not saying that ChatGPT (for example) actually is intelligent.  I’m just pointing out that we don’t have a very clear idea, ourselves, what “intelligent” actually means.  It’s like the infamous Supreme Court definition of obscenity: we can’t define intelligence, but we know it when we see it, and this ain’t it.  But what I find to be a more interesting question is this: why does it matter?

An LLM like ChatGPT serves a purpose.  Now, overreliance on it can be foolish—just check out the case of the lawyers who tried to use ChatGPT to write their legal briefs for them.  As the Legal Eagle points out in that video, their idiocy was not so much the use of an LLM in the first place, but rather the fact that they never bothered to double check its work.  So you can’t always rely on it 100% ... but isn’t that true of people as well?  Honestly, if you’re a lawyer and you get a person to do your work, you’re still responsible for their mistakes if you sign your name at the bottom and submit it to a judge.  An incisive quote from the video:

... the media has talked about how this is lawyers using ChatGPT and things going awry.  But what it’s really revealing is that these lawyers just did an all around terrible job and it just happened to tangentially involve ChatGPT.

So you can talk to an LLM as if it were a person, it talks back to you as if it were a person, it can give you information like a person, and oftentimes more information than you can get from most of the persons you know, and you can rely on it exactly as much (or, more to the point, exactly as little) as you can rely on another person.  But it’s not a person, and it’s not really “thinking” (whatever that means), so therefore it’s not “intelligent.” Is that all just semantics?  And, even if it is, is this one of those cases where semantics is important?

I’ve got to say, I’m not sure it is.  I think every person reading this has to decide that for themselves—I’m not here to provide pat answers—but I think it’s worth considering why we’re so invested in things like LLMs not being considered intelligent.  Does it threaten our place up here at the top of the food chain?  (Or perhaps that should be “the top of the brain chain” ...)  Should we seriously worry that, if an AI is intelligent, that it poses a threat to the existence of humanity?  Many of the big tech folks seem to think so.  I personally remain unconvinced.  The Internet was proclaimed to be dangerous to humanity, as were videogames, television, rock-and-roll ... hell, even books were once considered to be evil things that tempted our children into avoiding reality and made them soft by preventing them from playing outside.  Yet, thus far, we’ve survived all these existential threats.  Maybe AI is The One which will turn out to be just as serious as people claim.  But probably not.

And, if it is the case that AI won’t take over the world and enslave or destroy us, then what difference does it really make whether or not it’s “technically” intelligent?  If it’s being useful, and if we can learn how to use it effectively without shooting ourselves in the foot, that’s good enough for me.  Perhaps it can be good enough for you as well.




[For complete transparency, I must say that, while ChatGPT did not write any of the words in this post, it did come up with the title.  Took it six tries, but it finally came up with something I felt was at least moderately clever.  So, if you like it, it’s because I’m very good at prompting LLMs, and, if you hate it, it’s because ChatGPT is not very smart.  This is one of the primary advantages of having an LLM as a contributor: I can hog all the credit and it will never be offended.]



__________

1 If you’re not familiar—and can figure out where to stream it—you should check out his Going Deep series.  It’s excellent.

2 Approximately 40 minutes in, if you want to follow along at home.

3 “LLM” stands for “large language model,” by the way, although knowing that is really unnecessary to follow along on this topic.

4 Again, if you want to follow along at home, jump to about 44:45.











Sunday, February 19, 2023

Getting Chatty

I’m probably not the first person to tell you this, but there’s a new AI wunderkind taking the Internet by storm, and it’s called ChatGPT.  Everyone’s buzzing about it, and Microsoft is pumping money into it like crazy, and even boring old news outlets are starting to pick it up—heck, I just heard them mention it on this week’s episode of Wait Wait Don’t Tell Me.  If you’re late to the party, perhaps I can catch you up on what’s going on; and, if you’ve been hearing all about it but not really knowing what “it” is, then perhaps I can provide some insight.*

AI has been undergoing a bit of a Renaissance here lately.  For a long time, AI development was focussed on “state machines,” which are like really fancy flow charts.  You’ve probably seen one of these on the Internet at some point: you know those web pages that try to guess what animal you’re thinking of (or whatever), and, if they can’t guess it, then they ask you to teach it a question that will distinguish your animal from the last animal it guessed, and then it adds that to its little database ... those amusing little things?  Well, those are very simple state machines.  If the answer is “yes,” it goes down one path, and if the answer is “no,” it goes down a different one, until it eventually hits a dead end.  State machines, as it turns out, are very useful in computer science ... but they don’t make good AI.  That’s just not the way humans think (unless you’re playing a game of 20 Questions, and even then a lot of people don’t approach it that logically).  So eventually computer scientists tried something else.
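
To make the “fancy flow chart” idea concrete, here’s a toy version of that animal-guessing game in Python (every question and animal in it is made up for illustration).  Each node is either a yes/no question or a guess, and a wrong guess grows the tree by exactly one new question:

    # Toy "guess the animal" game: a yes/no decision tree that learns from wrong guesses.
    class Node:
        def __init__(self, text, yes=None, no=None):
            self.text = text  # a question if the node has children, otherwise a guess
            self.yes = yes
            self.no = no

    def play(node):
        if node.yes is None:  # leaf node: make a guess
            if input(f"Is it a {node.text}? ").lower().startswith("y"):
                print("Got it!")
            else:
                # Learn: graft in a new question that distinguishes the right answer.
                animal = input("I give up.  What was it? ")
                question = input(f"What yes/no question is true for a {animal} but not a {node.text}? ")
                node.yes, node.no = Node(animal), Node(node.text)
                node.text = question
            return
        # Internal node: ask the question and walk down the matching branch.
        branch = node.yes if input(node.text + " ").lower().startswith("y") else node.no
        play(branch)

    root = Node("Does it live in the water?", yes=Node("fish"), no=Node("dog"))
    while True:
        play(root)
        if not input("Play again? ").lower().startswith("y"):
            break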

One way you can make a better AI than a state machine is doing something called “machine learning.” With this, you take a bunch of data, and you feed it into an algorithm.  The algorithm is designed to analyze the data’s inputs and outputs: that is, if humans started with thing A (the input), then they might conclude thing B (the output).  If you have a decent enough algorithm, you can make a program that will conclude basically the same things that a human will, most of the time.  Of course, not all humans will come up with the same outputs given the same inputs, so your algorithm better be able to handle contradictions.  And naturally the data you feed into it (its “training data”) will determine entirely how good it gets.  If you accidentally (or deliberately) give it data that’s skewed towards one way of thinking, your machine learning AI will be likewise skewed.  But these are surmountable issues.
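
For a deliberately cartoonish picture of what “feed the inputs and outputs into an algorithm” means, here’s a tiny sketch: a nearest-example “learner” in plain Python, trained on a handful of made-up (input, output) pairs.  Real machine learning is enormously more sophisticated than this, but the shape is the same: examples go in, and a function that imitates them comes out.

    # Bare-bones "machine learning": memorize examples, then answer new inputs by
    # copying the output of the most similar example (a one-nearest-neighbor toy).
    training_data = [
        # ([weight in kg, purrs? 1/0], label) -- all made up
        ([4, 1], "cat"),
        ([30, 0], "dog"),
        ([5, 1], "cat"),
        ([25, 0], "dog"),
    ]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def predict(features):
        _, label = min(training_data, key=lambda example: distance(example[0], features))
        return label

    print(predict([6, 1]))   # -> cat
    print(predict([28, 0]))  # -> dog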

Another thing you could do is to create a “language model.” This also uses training data, but instead of examining the data for inputs and outputs, the algorithm examines the words that comprise the data, looking for patterns and learning syntax.  Now, “chatbots” (or computer programs designed to simulate a person’s speech patterns) have been around a long time; Eliza, a faux therapist, is actually a bit older than I am (and, trust me: that’s old).  But the thing about Eliza is, it’s not very good.  It only takes about 5 or so exchanges before you start to butt up against its limitations; if you didn’t know it was an AI when you first started, you’d probably figure it out in under a minute.  Of course, many people would say that Eliza and similar chatbots aren’t even AIs at all.  There’s no actual “intelligence” there, they’d point out.  It’s just making a more-or-less convincing attempt at conversation.
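
If you’ve never poked at an Eliza-style chatbot, the entire trick fits in a few lines: match the input against canned patterns and echo back a templated response.  Here’s a minimal sketch (the patterns are mine, not Eliza’s actual script), which also makes it pretty obvious why you butt up against the limitations so fast:

    # A minimal Eliza-flavored chatbot: regex patterns mapped to canned responses.
    import re

    rules = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (\w+) (.*)", "Tell me more about your {0}."),
        (r"(.*)", "Please, go on."),  # catch-all keeps the conversation moving
    ]

    def respond(text: str) -> str:
        for pattern, template in rules:
            match = re.match(pattern, text.lower().strip())
            if match:
                return template.format(*match.groups())
        return "I see."

    while True:
        line = input("> ")
        if line.lower() in ("quit", "bye"):
            break
        print(respond(line))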

Still, the ability to hold a conversation does require some intelligence, and it’s difficult to converse with a thing without mentally assessing it as either smart, or dumb, or somewhere in between.  Think of Siri and other similar “personal assistants”: they’re not really AI, because they don’t really “know” anything.  They’re just capable of analyzing what you said and turning it into a search that Apple or Google or Amazon can use to return some (hopefully) useful results.  But everyone who’s interacted with Siri or her peers will tell you how dumb she is.  Because she often misunderstands what you’re saying: sometimes because she doesn’t hear the correct words, and sometimes because her algorithm got the words right but failed to tease out a reasonable meaning from them.  So, no, not a “real” AI ... but still something that we can think of as either intelligent or not.

Language models are sort of a step up from Siri et al.  Many folks are still going to claim they’re not AI, but the ability they have to figure out what you meant from what you said and respond like an actual human certainly makes them sound smart.  And they’re typically built like machine learning models: you take a big ol’ set of training data, feed it in, and let it learn how to talk.

Of course the best AI of all would be a combination of both ...

And now we arrive at ChatGPT.  A company called OpenAI created a combined machine learning and language model program which they referred to as a “generative pre-trained transformer,” or GPT.  They’ve made 3 of these so far, so the newest one is called “GPT-3.” And then they glued a chatbot-style language model on top of that, and there you have ChatGPT.  GPT-3 is actually rather amazing at answering questions, if they’re specific enough.  What ChatGPT adds is primarily context: when you’re talking to GPT-3, if it gives you an answer that isn’t helpful or doesn’t really get at the meaning, you have to start over and type your whole question in again, tweaking it slightly to hopefully get a better shot at conveying your meaning.  But, with ChatGPT, you can just say something like “no, I didn’t mean X; please try again using Y.” And it’ll do that, because it keeps track of what the general topic is, and it knows which tangents you’ve drifted down, and it’s even pretty damn good at guessing what “it” means in a given sentence if you start slinging pronouns at it.
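
Here’s a rough sketch of that difference.  None of this is OpenAI’s actual API; the model_call function below is a made-up stand-in.  The practical distinction is just whether each request stands alone or carries the whole conversation along with it:

    # Made-up stand-in for a model call; the point is what gets sent, not how.
    def model_call(messages: list[str]) -> str:
        return f"(a reply based on {len(messages)} message(s) of context)"

    # GPT-3 style: every request starts from scratch.
    print(model_call(["Suggest a name for a tabby cat."]))
    print(model_call(["No, I meant a name from mythology."]))  # "I meant" refers to nothing

    # ChatGPT style: the running conversation goes along with every request,
    # so a follow-up like "no, I meant ..." still has something to point back to.
    conversation = []
    for user_text in ["Suggest a name for a tabby cat.",
                      "No, I meant a name from mythology."]:
        conversation.append("User: " + user_text)
        reply = model_call(conversation)
        conversation.append("Assistant: " + reply)
        print(reply)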

Now, many news outlets have picked up on the fact that Microsoft is trying to integrate ChatGPT (or something based off of it) into their search engine Bing, and people are speculating that this could be the first serious contender to Google.  I think that’s both wrong and right: I personally have started to use ChatGPT to answer questions that Google really sucks at answering, so I know it’s better in many situations; but that doesn’t mean that Microsoft has the brains to be able to monetize it sufficiently to be a threat to Google’s near-monopoly.  If you want a really good breakdown of this aspect of ChatGPT, there’s a YouTube video which will explain it in just over 8 minutes.

But, the thing is, whether or not Microsoft successfully integrates a ChatGPT-adjacent AI into Bing, this level of useful AI is likely going to change the Internet as we know it.  ChatGPT is smarter than Eliza, or Siri, or Alexa, or “Hey Google.” It’s more friendly and polite, too.  It can not only regurgitate facts, but also offer opinions and advice, and it’s even got a little bit of creativity.  Don’t get me wrong: ChatGPT is not perfect by any means.  It will quite confidently tell you things that are completely wrong, and, when you point out its mistake, completely reverse direction and claim that it was wrong, it was always wrong, and it has no idea why it said that.  It will give you answers that aren’t wrong but are incomplete.  If asked, it will produce arguments that may sound convincing, but are based on faulty premises, or are supported by faulty evidence.  It’s not something you can rely on for 100% accuracy.

But, here’s the thing: if you’ve spent any time searching the Internet, you already know you can’t rely on everything you read.  Half of the shit is made up, and the other half may not mean what you think it means.  Finding information is a process, and you have to throw out as much as you keep, and at the end of it all you hope you got close to the truth ... if we can even really believe in “truth” any more at all.  So, having an assistant to help you out on that journey is not really a bad thing.  I find ChatGPT to be helpful when writing code, for instance: not to write code for me, but to suggest ideas and algorithms which I can then refine on my own.  The thing is, ChatGPT is not a very good programmer, but it is a very knowledgeable one, and it might know a technique (or a whole language) that I never learned.  I would never use ChatGPT code as is ... but I sure do use it as a jumping-off point quite a bit.

And that’s just me being a programmer.  I’m also a D&D nerd, and ChatGPT can help me come up with character concepts or lay out what I need to do to build one.  If I can’t figure out how to do something on my Android phone, I just ask ChatGPT, and it (probably) knows how to do it.  Networking problem? ChatGPT.  Need to understand the difference between filtering water and distilling it? ChatGPT.  Need help choosing a brand of USB hub? ChatGPT.  Want to know what 1/112th the diameter of Mercury is? ChatGPT (it’s 43.39km, by the way, which is 26.97 miles).

But you needn’t take my word for it.  The Atlantic has already published an article called “The College Essay Is Dead” (because, you know, students in the future will just get an AI to write their essays for them).  A Stanford professor gave an interview about how it will “change the way we think and work.” YouTuber Tom Scott (normally quite a sober fellow) posted a video entitled “I tried using AI. It scared me.” The technical term for what these folks are describing is “inflection point.” Before Gutenberg’s printing press, the concept of sitting down of an evening with a book was unheard of.  Before Eli Whitney built a musket out of interchangeable parts, the concept of mass production was ludicrous.  Before Clarence Birdseye figured out how to flash-freeze peas, supermarkets weren’t even possible.  And there is an inevitable series of points, from the invention of the telephone to the earliest implementation of ARPANET to the first smartphone, that fairly boggles the mind when you try to imagine life before it.  My youngest child will not be able to conceive of life without a phone in her pocket; my eldest can’t comprehend life before the Internet; and even I cannot really fancy a time when you couldn’t just pick up the phone and call a person, even if they might not be home at the time.  Will my children’s children not be able to envision life before chatty AIs?  Perhaps not.  I can’t say that all those friendly, helpful robots that we’re so familiar with from sci-fi books and shows are definitely in our future ... but I’m no longer willing to say they definitely won’t be, either.

The future will be ... interesting.



__________

* Note: This is not designed to be a fully, technically correct explanation, but rather a deliberate oversimplification for lay people.  Please bear that in mind before you submit corrections.











Sunday, October 16, 2022

That's a big pile of ...

To say that the computer gods have shat on me would only be an accurate assessment if the pile of shit you’re imagining is the one from Jurassic Park.  There was a point last night when I was pretty sure my $work computer was complete toast and would have to be reinstalled from scratch.  But I managed to find some advice on the Internet that helped me figure out how to fix it.

So now I’m mostly back to normal, but there are still several lingering issues that I’m going to have to deal with over the next few days.  On the plus side, I jumped my operating system forward not one, but two full versions.  Which should eliminate several of the problems I’ve been experiencing lately (and, to be fair, will definitely introduce a few more).  It remains to be seen if, on balance, I come out ahead.  Given my history, it seems unlikely, but I remain ever optimistic.  Foolishly.









Sunday, October 2, 2022

The light at the end of the tunnel (which is hopefully not an oncoming train)

This weekend I’ve been taking advantage of the lack of a long post and being mostly caught up on $work to finally make some headway on that computer issue I was bitching about ever so long ago.  I really don’t seem to be able to actually fix it, so I’ve been reduced to coming up with a workaround.  And even that is somewhat sticky, but I’ve been making progress, actually, which is more than I’ve been able to say over the past few months.  So that’s nice.  I’m not out of the woods yet, mind you, but moving forward is better than standing still, I think.  Progress, not perfection, as they say!

So I think I shall get back to my computer work and leave you, dear reader, until next week, when there will most likely be a much longer post than this meager fare.  Till then.









Sunday, June 26, 2022

Cursed of the Gods

This week was another of those “the computer gods hate me” weeks.  I found a corrupted file, so I went to look at my backups, only to find that things aren’t really set up the way I thought they were.  So I have three recent versions (all of which were corrupted), and a version from January, and another from March.  So I restored it as best I could, sort of merging the newer parts that weren’t corrupted with the older parts that were outdated, but at least it gave me a full set of data.  Then I went trolling through scrollback buffers looking for any bits that I could use to update the old data to get it as close to what I had before as possible.

And, of course, after all that, I’m still going to have to fix my backups so they make this easier next time it happens.  I’m still not entirely sure how I’m going to do that, but I can’t even deal with it right now.  You ever have one of those weeks where everything you try to do just leads you to another thing you have to do first?  Yeah, that.

Anyway, enough bitching.  Next week there should be a longer post.  Tune in then!









Sunday, October 31, 2021

Kickstarter: To Be or Not to Be (Over It All)

Lately I’ve been reexamining my relationship to Kickstarter.  As with all websites that suddenly take off, how you used it when it first arrived on the scene will inevitably give way over time to something new and different.  Remember how you used to interact with Amazon when it first appeared? or Google?  Things are very different now.  But I think the most relevant analog to Kickstarter may be eBay.

Once upon a time, I used to enjoy going to flea markets with my parents.  There would be aisles and aisles of complete junk, and, every once in a great while, you’d find a treasure.  Only it was a treasure that nobody but you realized was a treasure: everyone else thought it was just more junk.  And so the price was ridiculously low.  And you would buy the treasure, and take it home, and marvel at how you got such a magnificent thing for such an amazing price.  It was fun.

In the early days of eBay, that’s what it was like: an online flea market that served up treasures disguised as junk, but from the whole world.  What could possibly be better?  But, over time, things changed.  Once upon a time eBay was where you went to find great prices on stuff you couldn’t even find anywhere else.  Now it’s where you go to see what ridiculous price some idiot might give you for the junk you have in your attic.  Once a few years back I asked my father if he was still going to flea markets, and, if so, could he look out for something for me.  He said, “son, you can just get all that on eBay.” I said, sure, but who can afford that?

And so my relationship with eBay changed.  There’s so much stuff that I can’t possibly spend time just browsing.  And the stuff that I do want, everyone else now knows it’s not junk, and I can’t afford to pay those prices.  Basically, the only times I’ve been on eBay in the past ten years, I’d say, was when I was doing image searches to see what something looked like so I could decide if I wanted to buy it from some other site with much better prices.

And I think I’m reaching a similar point with Kickstarter.  When it first started up, you could go there and find artists and inventors who were creating things and needed your help to fund their ideas.  If you didn’t pledge, that creation would most likely never get made.  Of course, even if you did pledge, there was no guarantee: those early days are full of stories of creators whose ideas were much bigger than their ability to deliver, or who simply misjudged how much things were going to cost.  But you took your chances, and every now and again you got screwed, but mostly you got what you were buying into, even if you often had to wait an inordinately long time to receive it.

But things are different now.  The financial stuff is much clearer these days: people understand that Kickstarter is going to take its cut, and that the tax man is going to get his, and they understand not to include shipping in the pledge amount.  They’ve also figured out that you need to ask for way less than you really want so that you can guarantee you’ll hit the goal; what used to be considered core deliverables are now “stretch goals.” The initial funding goals of most Kickstarters are so low that they often fund within the first day—sometimes even within the first hour or two—and then the creators proudly put up some flashy graphic (“funded in 2.5 hours!!”) and you look at it and go: yeah, yeah ... in other news, water is wet.

There are now “Kickstarter consultants” to help you run a smooth campaign, and even multiple different companies whose entire raison d’être is to help you fulfill the rewards.  There’s a site (Kicktraq) to graph your campaign’s progress and project your final numbers.  There are people who treat Kickstarter just like Amazon: they don’t actually need your money to pay for their product, because the product is already completed; they just want the Kickstarter buzz, and they know they can make more money this way.  As Kickstarter creators get more and more savvy, Kickstarter consumers get more and more demanding.  I read a post from someone on an Internet forum recently saying that they wouldn’t even consider backing a project unless the product were already ready to deliver.  And I thought: doesn’t that defeat the whole purpose?

But things are different now, I have to admit.  Maybe it’s only in the few industries I tend to follow, but I suspect this is a general trend.  And the end result is, I often find myself wondering why I should bother to back a Kickstarter campaign at all.

Certainly not because the creator needs my support: it’s super rare these days for me to find any Kickstarter that hasn’t already met its goal.  Once the campaign ends, they will either struggle to get the physical product put together, or they’ll deliver it quickly because it was mostly (or even completely) done.  Either way, why wouldn’t I just wait and see how the thing comes out before committing to buy it?  It’s possible that they might charge more for it after the campaign is over, but that actually hasn’t been my experience: it seems to happen far more often that, in the absence of the exposure provided by Kickstarter, they drop the prices to attract those people who thought the Kickstarter pledges were too high.  So, for a chance at a lower price, I’m locking in my money for anywhere from months to years, and risking getting nothing at all at the end of the day?  What sense does that make?

Okay, fine: I miss out on some “Kickstarter exclusives” for many campaigns.  But, in exchange for that, I get to see whether the final product will materialize at all, and, if it does, if it will live up to the hype.  Once the product is done, I can actually read a review of it and decide if it really is worth the price.  If it is, I can buy it then.  If not ... then I just saved myself some money, and also however much time I would have spent fuming that the project was behind schedule.

For the past several years, when a campaign comes along that in years past I would have backed immediately, about half the time I’ve just been making a note of the Kickstarter URL and setting myself a reminder to check up on it on whatever date it claims it will be ready.  I almost always have to give it a few more months after that date comes around.  Eventually, though, the product will come out (usually), and I can read a few reviews and decide if I want to get it.  I’ve done this a couple of dozen times now, and so far I’ve only found one product that I decided to purchase.  In every other case, I’ve said to myself, whew! dodged a bullet there.

And the other half of the time?  Well, I’ve had a lot of disappointments.  A lot of waiting around and wondering if my rewards will ever get fulfilled.  I have one project that funded over two years ago, and I did get the main product, and it was awesome, but I’m still waiting on one final “Kickstarter exclusive” to get done.  That company has done another five campaigns since then (and I’ve been waiting a year for the main product of one of those too), and I’m starting to just accept the fact that I’m never going to see that last item.  So even the promise of getting extra bits for backing is starting to lose its luster.  I keep thinking, if I hadn’t bothered to back the Kickstarter, but just waited for the damn thing to be orderable online, I wouldn’t have spent any more money, I still wouldn’t have the thing I don’t have now, and I wouldn’t have had to constantly whine to the company about not having it.  Just seems like it would have been better for everyone that way.

So lately I’ve been wondering: am I “over” Kickstarter?  Not exactly.  I think Kickstarter will continue to prove to be a valuable future-product discovery service.  Which is quite different from how it started out, but no less useful.  Well ... perhaps a little less useful.  But still handy.  I just think that my days of excitedly backing creators and looking forward to their creations are mostly over.  Perhaps a very few known, trusted creators may still get my dollars at campaign time.  Perhaps some will win me over with their exclusive rewards.  Perhaps I’ll still find the occasional campaign that seems like it might not make its goal if I don’t pitch in.  But I think I’m taking all that with a grain of salt these days, and there will be a lot less of my money ending up in Kickstarter’s pocket, because that post-Kickstarter product’s price will go straight to the creator.  And, at the end of the day, I think we’ll all be happier about it.

Except Kickstarter.  But I suspect they’ll be okay.









Sunday, March 7, 2021

A Spreadsheet Story

The main reason you won’t get a proper blog post this week is that it’s my middle child’s birthday weekend, and I’m at their beck and call.  But there’s another possibly vaguely (probably not really) interesting reason as well, so I thought I’d share it with you.

For most of my life, I’ve been one of those annoying OCD-but-disorganized people.  All my CDs had to be alphabetized just so, and the bills in my money clip had to be facing the same way, but all my workspaces were a horrible mess and I rarely had any firm concept of what I was supposed to be working on next.  A few years back I made a conscious decision to get myself organized: as we get older, it’s not so much that our brains lose the ability to juggle all those myriad things we’re supposed to be remembering that we have to do, it’s more that we finally realize how terrible we were at it all along and that it’s only getting worse with age.  So I settled on a Method™ and ran with it.

The one I chose was Getting Things Done (sometimes referred to by its fans as GTD), and I learned a lot from it.  Which is not to say that I embraced it fully: the biggest issue I have with it is that David Allen, being about 20 years older than me (he’s actually about halfway between the ages of my mother and father), loves paper.  There’s lots of writing things on paper and filing paper and moving paper around.  I don’t do paper.  But of course the system can be adapted to computer software, and there are many GTD programs out there.  But part of the issue with being all OCD-y and a programmer is that I can’t adapt my way of working to someone else’s software: I gotta write my own.

So I created a massive Google Sheets spreadsheet with oodles of code macros (in Javascript, which I really don’t like to program in) and, whenever it does something I don’t like, I change it.  I can’t really say that it’s a proper implementation of GTD, but I’m sure that anyone familiar with GTD would recognize most of what’s going on in there.  I didn’t take GTD as a blueprint for exactly how to organize my shit, but I absorbed many of its lessons ... maybe that should be a whole blog post on its own.  But for now, I have to admit one thing.  A fuck up I made.

Back when I was originally designing my GTD-spreadsheet-monstrosity, I made a fateful decision.  When I complete a task, I don’t actually delete it ... I just mark it completed (by adding a date in the “Completed” column) and then it disappears from my “shit you need to do today” view.  But it’s still there.  Partially I did this because, as a programmer who mainly works with databases, I’ve had many years of conditioning that you never delete data because you always regret it later, and partially because I thought it would be cool to have a record of everything I’d accomplished (so now my todo list is also my diary).  Sounded perfectly rational at the time.
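
For the curious, the mechanics of that are dead simple.  Here’s a minimal Google Apps Script sketch of the idea (just an illustration, not my actual macro; the column position is an assumption): stamp today’s date into the “Completed” column of the selected row, and let the “today” view filter out anything that has a date there (a FILTER formula keyed on that column does the trick).

  // Illustrative sketch only: assumes the active sheet is the task list and
  // its fifth column is "Completed".  Nothing ever gets deleted; completing a
  // task just means stamping a date, and the "today" view filters those rows out.
  const COMPLETED_COLUMN = 5;

  function markTaskCompleted() {
    const sheet = SpreadsheetApp.getActiveSheet();
    const row = sheet.getActiveCell().getRow();
    if (row === 1) return;               // don't stamp the header row
    sheet.getRange(row, COMPLETED_COLUMN).setValue(new Date());
  }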

Now, I’m not going to go into all the details of how GTD works, but one of its main concepts is that you track everything. EVERYTHING.  This gives you a lot of confidence that you haven’t forgotten anything, because, you know ... you track everything.  I’m coming up on my 4-year anniversary of tracking everything in my spreadsheet and I’ve accumulated over 15 thousand items: tasks, longer blocks of time for projects, things I was waiting on other people to get back to me on, etc etc etc.  It works out to about 4 thousand a year, and I shouldn’t be surprised if it’s actually increasing over time and I’m soon to hit 5 grand a year.  Now, if you’re a big spreadsheet person (as many people are these days, in many different areas of business) you may have heard technogeeks tell you not to use a spreadsheet as a database.  Being a technogeek myself, I knew this perfectly well ... and I did it anyway.  I did it advisedly, for reasons of expediency: I didn’t want to spend months trying to develop my own application from scratch, putting me even further behind on getting organized.  The point was to get up and running quickly, which I did.  But now I’m paying the price.

This weekend, while sitting around waiting for my child to inform me of the next videogame I was drafted into playing or the next meal I was conscripted into obtaining, I had a brainstorm about how to make this system way more efficient.  It’s not a proper fix, but it would radically decrease the time I currently spend sitting around waiting for my spreadsheet to respond, so I figured I better do it.  I thought: this won’t be too hard to do.  Of course, it was harder than I thought—it’s always harder than you think—and I haven’t gotten things completely back to normal yet (and I stayed up way too late last night), but I made some really great strides, and I’m seeing an even bigger speed-up than I thought.  So I’m pretty pleased.  Even though I’ll probably be fucking with it for the next several weeks.

So that’s why I have no time to make a proper post.  Except mainly the birthday thing.  Next week will be better, I’m sure.









Sunday, September 22, 2019

That fresh new operating system smell ...


So, this weekend, I finally upgraded my laptop’s operating system, a disagreeable task that I’ve been putting off for about 4 months now.  Many of my friends and coworkers are no doubt wondering what the big deal is: just do it already.  Some of you may even be thinking that I was avoiding it just because it would involve rebooting my computer.  But my computer was crashing every few weeks anyway, which is why I agreed to this unpleasantness in the first place.  No, it’s not the pain of rebooting (don’t get me wrong: that’s a very real pain); it’s the massive time suck.  For the past several months, I’ve been working on some tricky stuff at $work, and the thought of being without a computer for a big chunk of the weekend was just a non-starter.

And, in case you’re thinking that my assessment of the amount of time it would take to upgrade my OS as “a big chunk of the weekend” is an exaggeration, I’ve now completed the task and I can tell you: it’s around 8 hours.  That’s soup-to-nuts, of course ... starting with trying to back everything up (upgrading your OS shouldn’t delete all your files, but it’s one of those things that you really don’t want to take any chances on), upgrading all the packages to the latest versions before starting, doing the actual upgrade, then trying to reconfigure whatever was deconfigured by being upgraded against your will.  But, still: 8 friggin’ hours.  It’s a major chore.

But the good news is that I completed the second of my 3 simultaneously ongoing major projects on Friday, so I had some free time, and I figured, what the hell.  So now it’s done.  It’s too early to say for sure, but I’m cautiously optimistic that the laptop situation is improved.  Maybe not entirely fixed, but at least better.  Probably.

It’s a short week this week, so this is all you get.  Tune in next week for something more substantial.









Sunday, April 29, 2018

Waiting for a new vista


This is the weekend my office is moving from Santa Monica to Playa Vista.  Happily, I was not required to do much personally, but one of the things I did have to do was shut my computer down.  This is a big pain, you see, because I never shut my computer down.  In order for me to start my computer from cold, I have to do the following things:

  • Enter my 56-character password to unlock my hard drive encryption.
  • Enter my much shorter user password to log into the machine.
  • Fire up a temporary terminal window.
  • Run a command which will start up my actual terminal windows (2) and my music player.
  • Enter my 43-character password to unlock my SSH key.
  • Close the temporary terminal window.
  • Fire up Firefox.
  • Restore my Firefox session, with its 7 windows and 219 total tabs.
  • Run another command which starts up my 1 Thunderbird window and Pidgin with its 12 IM windows.
  • Fire up my 2 Chrome windows (in 2 different profiles).
  • Start up all the other apps that I can’t remember right now.
  • Move all the windows onto their proper desktops.

So, as you can see, I don’t shut my computer down very often, because it’s such a giant pain in the ass to get it going again, and I’m not particularly looking forward to having to go through all of that.  But it seems like the new office won’t be ready tomorrow anyway, so I’ve got an extra day.  Which is good, because I blew out a tire on the way to work last week and I have to buy new tires anyway.

So ... yeah, fun times.  As usual, longer post next week.









Sunday, February 11, 2018

R.I.P. John Perry Barlow


A long time ago—in 1994, the Internet tells me—I read an article by one John Perry Barlow, who my subsequent research informed me was one of the founders of the Electronic Frontier Foundation, or EFF.  At that time, I didn’t really know who Barlow was, or what the EFF was, even.  But the article (which you can still read online) piqued my interest, as did the entire concept of the EFF, which is a non-profit organization devoted to Internet civil liberties—that is, they fight to keep the Internet free, for everyone.  I’ve never forgotten that article, or the EFF, whose name has popped up more and more often in the intervening years.  And I’ve never forgotten about John Perry Barlow, from whom I read many more articles and statements, and who is an articulate, passionate, ardent freedom fighter for a thankless cause for which he will never receive proper recognition.

Or, at least, he was.  John Perry Barlow died this week, at age 70.  I never had the pleasure of meeting him, although I have met a few folks who knew him personally, and by all accounts he was exactly what he projected in his writings.  In the EFF’s obituary, executive director Cindy Cohn wrote:

Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity’s problems without causing any more.  As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth.  Barlow knew that new technology could create and empower evil as much as it could create and empower good.  He made a conscious decision to focus on the latter: “I knew it’s also true that a good way to invent the future is to predict it.  So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls ‘turn-key totalitarianism.’”


So the man was not only articulate, passionate, and ardent, but also crazy optimistic.  I’m not even entirely sure I realized how much I admired this guy until I found out he had passed away.  So tonight I say, rest in peace, John Perry Barlow.  The world will miss you, even though it will probably never quite figure out why.









Sunday, September 10, 2017

Of All My Monkey Memories ...


I don’t really have time for a full post this week, as we’re in the midst of another Virgo birthday season—my eldest is now 19, which is always a bit of a brainfuck.  Realizing you have a kid old enough to go to college when you were just in college yourself, like, yesterday, can feel surreal in a very fundamental way.  But, as Twain once said: “It is sad to go to pieces like this, but we all have to do it.”

But I feel like I need to leave you with something to read this week.1  So let me tell you a story, then I’ll drop you a link.

I’ve mentioned before that I’m a technogeek, and you’ve probably been able to work out that I’m a bit, shall we say, older.  And while I haven’t had the most interesting technogeek career or anything, I’ve had my fair share of interesting jobs throughout the roughly three decades I’ve been at this.  And one of my favorites was working for ThinkGeek.

Now, I don’t want to get into whether ThinkGeek is still as cool these days as it used to be.2  But I don’t think there can be much argument that it was the height of cool back in the day.  And, just to be clear, I’m not trying to take any credit for that: it was already plenty cool when I got there, and that’s primarily thanks to the four founders,3 who put in the mental effort and sweat equity to make it so.  It was as wonderful a place to work as it was a wonderful place to shop, and I loved almost all of my time there.  And, while I’m not making any claim that I made any major contributions to the great and storied history of ThinkGeek, there are a couple of things I could brag about.  You know, if I were so inclined.

You probably already know that the creature most in charge of ThinkGeek is a monkey named Timmy.  And you may know (or at least suspect) that a geek-centered company like TG gets all sorts of wacky emails from customers.  And I bet you can easily guess that wacky customer emails often get forwarded around so that all the employees can share in the wackiness.  At some point, I started “responding” to some of these emails (internally only, of course!) as Timmy.  This was strictly to entertain my fellow employees, and, at that time, there were few enough of those that I knew them all personally and knew what they would find amusing.4  After a few rounds of that, somebody came up with the bright idea to turn this into something we could put on the website.5  I always referred to it as “Ask Timmy”—still do, whenever I talk about it—but I guess it was technically called “Dear Timmy” on the site.6  It didn’t last long: I did 7 installments of the column over the course of perhaps a year.  Somebody else picked the questions, and I answered them, using the “voice” of Timmy.  Timmy was wise and knew just about everything, and he was always right, even when he was wrong.  Since it was pretty much a marketing tool, I did take a few opportunities in there to do some product placement, but mainly I was just having fun.  Let me give you a taste:

Dear Timmy,

I was watching Star Wars the other night, and began to wonder something. Stormtroopers are clones of Jango Fett. Boba Fett is also a clone of him. Given that, why is it that stormtroopers can’t manage to hit anything when they shoot, but Boba can?

Sincerely,
Mat
Woodend, Victoria, Australia, Earth


Dear Mat,

This is simply a case of good-guy-physics vs. bad-guy-physics. Good guys always hit what they aim at, often with a minimum number of shots, and bad guys can’t hit the broad side of a barn (particularly if the barn contains good guys). To demonstrate the truth of this, take a look at Attack of the Clones. In this movie, the stormtroopers are good guys, and they hit large quantities of Count Dooku’s allies. Once they have been co-opted by Sidious and Vader, however, they immediately begin to suck, and by the time they get around to chasing Luke and Han down the corridors of the Deathstar, they regularly have difficulty hitting the walls.

Now, Boba Fett is a different case, which requires the application of an entirely separate branch of bad-guy-physics. This branch is roughly equivalent to fluid dynamics in that chaos theory is a factor. Bad guys who have proper names can sometimes hit what they aim at, depending on complex laws governed by butterfly wings in China, which side of a paleobotanist’s hand a drop of water will roll down, and most importantly, the desired plot outcome. Just as apparently random events can be mapped to form beautiful patterns known as fractals, the hit ratio of bad guys with proper names will, when viewed from far enough away, form a pattern (in this case, George Lucas’ scripts, which may or may not be considered a beautiful thing, depending on your age at the time Episode IV was released and how you feel about Jar Jar Binks).

As an interesting side note, the Star Wars movies demonstrate several other principles of bad-guy-physics, including the Law of Conservation of Evil (which is why one Sith Lord always has to die before you can get another one), and temporal anomalies (cf. Han Shot First).

Hope that clears it up!

    — Timmy


So, it was a lot of fun, and I probably would have kept on doing it for a while if I hadn’t left the company.  Of all the geeky things I’ve done, this may be the one I’m proudest of.

The column archive is no longer on the ThinkGeek site, but, since the Internet is forever, you can find all the old Ask Timmy installments on the Wayback Machine.  So hop on over and read the rest of the columns ... hopefully you’ll enjoy reading them as much as I did writing them.



__________

1 Honestly, I’m not sure why.  Normally I don’t care that much.  But I’m feeling generous today.  Or something.

2 Although I have a definite opinion about that.

3 That would be Willie, Jen, Scott, and Jon.

4 Which I suppose is my way of saying, don’t try this at home kids, especially if your company has more than a couple dozen employees.  Nobody likes that guy who hits reply-all on the company emails and spams a few hundred people, no matter how funny they think they are.

5 Probably Willie.  He was TG’s primary idea machine at the time.

6 Again, I blame Willie.  But then again I blame Willie like Matt Stone and Trey Parker blame Canada.









Sunday, June 11, 2017

Curse of the Computer Gods 2: Double Trouble


About 8 days shy of six years ago, I made a post that started with these words:

You know, the hardware gods hate me.


And it’s still as true today as it was back then.

The bulk of my weekend has been spent wrestling with reinstalling Linux on both my laptops.  So far, I’ve been mostly successful on one.  Still a long way to go.  And not enough time or attention to stop and make any nifty blog posts for you.  But, hey: at least your life doesn’t suck as much as mine.  You can always comfort yourself with that.










Sunday, February 5, 2017

A Lifelong Quest


This is not exactly a technology post, and it’s not exactly a gaming post, and it’s not exactly a (personal) history post, but in a way it’s all of those things rolled into one.  Let me start by telling you a little story.

When I was somewhere in the neighborhood of 15 years old, our family got a new computer: a Commodore 64, which was, at that time, state of the art.  I always thought that we bought it specifically for me, but my father corrected me a few years back, telling me that he originally bought it for himself, but he couldn’t really figure out how to work it, so he figured he’d see if I had any better luck.  I did, as it turned out, and it was the beginning of my programming career.  I think that pretty much anything you do as a career (as opposed to just a job) has to start out with you doing something for fun.  Otherwise you’re just in it for the paycheck.

The first program I ever wrote (which was in BASIC) looked like this:
10 PRINT "MY NAME"
20 GOTO 10
The second program I ever wrote was a D&D character generator.

Now, I tell you this story to let you know exactly how long I’ve been trying to program a D&D character sheet.  My obsession has carried me across 35 years of technology, and it’s driven many of my decisions as to what to learn.  I quickly learned I had to give up on BASIC (too slow), so I taught myself assembly.1  I dove very deep into the formula languages of first Lotus 1-2-3, then Excel, and now Google Sheets, so that I could do spreadsheet-based character sheets, and I taught myself VBA when that wasn’t enough, and now I’m almost sort of proficient in Javascript for the same reason.2  The first database I ever learned—dBase III, that would have been—I didn’t learn for the purpose of making character sheets, but it was the thought that it might be used for that purpose that drove me ever deeper into the language.  Same with SQL.  I’ve done very little GUI programming, but most of what little I have done (Delphi, and wxWindows, and Django, and Gantry) was mined for what it could teach me about how to make interfaces for D&D players.  I’ve written DSLs for dice-rolling, and extensions to Template Toolkit, and I even tried to write a “better” spreadsheet in Perl once, all so I could program the perfect character sheet.  If I ever get around to writing my SQL-language-extension, which will probably be done in Perl 6, one of the first things I’ll do with it is integrate classes with DB tables for aspects of D&D characters.

And, the sad part is, I’ve been doing this over and over again for 35 years, and it’s never worked properly.  There are a myriad of reasons for this.  A character sheet is a huge quantity of interrelated numbers with complex interdependencies, which make it almost perfect to render as a spreadsheet.  But the rules are just baroque and irregular enough to make it a breeze for the first 50% and practically impossible for the last 25%.  Contrariwise, the amount of dependent recalculation means that it’s a giant pain in the ass to do in a general programming language, unless you fancy trying to reinvent the spreadsheet wheel.3  The amount of data that needs to be stored, as well as the number of set operations necessary, mean that a database solution (such as SQL) is pretty attractive, for certain aspects.  But trying to do that much recalculation in a database language is even more terrifying than trying to do it in Perl or C++, and most of the parts Excel can’t handle, SQL is even worse at.

The thing that makes a database application or language really attractive, though, is the place where spreadsheets really fall down: separation of code and data.  If I write a program in a general language, I have code and then, elsewhere, I have data.4  In a database application, the line may be a bit blurrier, but the separation is there, and the proof is, I can give you updated code, and that doesn’t change your data a whit.  Not so with spreadsheets.  With those, the code and the data are one piece.  If I give you an updated spreadsheet, it comes with its own data (which is always blank).  But say you’ve already got a character sheet: it’s full of your data—you know, for your character.  Hell, the reason you wanted the upgrade in the first place was no doubt that you found a bug in my code, or maybe I just added a new feature that you really need.  But now there’s no way for you to migrate that data out of the old sheet and into the new.

Now start multiplying that problem.  If you’re a D&D player, you probably have lots of characters.  And how many people are using this spreadsheet thingy anyways?  My very first fully functional Excel spreadsheet was only used for one character each by 3 players (i.e. the 3 players in that particular campaign I was running)—and myself as the GM, of course—and it was a nightmare every time I updated the sheet.  A D&D character is not a huge amount of data, especially not when compared to big data or even the database of a middling-sized business, but it’s also pretty much nothing but data.  You don’t want to have to re-enter all of it every time I fix a bug.  To use the appropriate technobabble, this is a separation of concerns issue, and more specifically having to do with the separation of code vs data.  Of course, it’s quite fashionable these days (among technogeeks, anyway) to argue that code and data are the same thing, but I can only suppose that the people making those arguments never had to release code updates to users.5  I only had three users and I was going crazy trying to figure out how to separate my code from my data.

(To delve a bit deeper into the technical side of the problem, what I really want is for someone to invent a spreadsheet that’s actually just an interface into a database.  The spreadsheet programmer “ties” certain cells to certain columns of certain tables in the database, and the spreadsheet user is only allowed to enter data into those specific cells.  There could be multiple rows in the spreadsheet, corresponding to multiple rows in the table, and it would be easy to add a new one.  Sorting or filtering the rows wouldn’t affect the underlying data.  The database back-end might need some tweaking as well—what if the user enters a formula into a data cell instead of a constant?—but ideally it could use a standard datastore such as MySQL.  Somebody get on inventing this right away, please.  I don’t ask for any financial consideration for the idea ... just make sure I’m your first beta tester.)
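
Just to make the shape of that idea a little more concrete, here’s a hand-wavy sketch (in Apps Script flavored Javascript, since that’s what I live in these days) of what the “tying” might look like.  Everything in it is hypothetical: the binding table, the table and column names, and the fact that it only builds and logs the SQL instead of actually running it (a real version would execute it from an installable trigger, via something like Apps Script’s Jdbc service).

  // Purely hypothetical sketch of "tying" cells to database columns.
  // Each binding says: this cell holds this column of this table, for this row id.
  // A real implementation would execute the statement (e.g. via the Jdbc service
  // from an installable trigger); here we just build it and log it.
  const CELL_BINDINGS = {
    'B2': { table: 'characters', column: 'name',     idColumn: 'id', id: 42 },
    'B3': { table: 'characters', column: 'strength', idColumn: 'id', id: 42 },
  };

  function onEdit(e) {
    const binding = CELL_BINDINGS[e.range.getA1Notation()];
    if (!binding) return;                // not a tied cell: ignore the edit
    const sql = 'UPDATE ' + binding.table +
                ' SET '   + binding.column + ' = ?' +
                ' WHERE ' + binding.idColumn + ' = ' + binding.id;
    Logger.log(sql + '   -- new value: ' + e.value);
  }

The other half (only letting the user type into tied cells, and coping with the user who types a formula instead of a constant) is exactly the part somebody smarter than me needs to figure out.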

But the problems with realizing the perfect computerized character sheet aren’t all technical.  A lot of it has to do with house rules.  If you’re not familiar with D&D, this may not make sense.  You may think house rules are simple little things, like getting cash when you land on Free Parking in Monopoly.  But RPGs (of which D&D is the grandaddy of them all) have a whole different relationship to house rules.  House rules can change anything, at any time, and the rulebooks actively encourage you to use them.  “GM fiat” is a well-entrenched concept, and that includes pretty much everything involved in character creation.  2nd edition D&D said only humans could be paladins, but many GMs threw that rule out.  3rd edition said multiclassed characters had to take an experience point penalty, but a lot of groups never enforced that.  What if a GM wants to change the value of some bonus granted by some feature? what if they want to raise the maxima for something? or lift the restrictions on something else?  What if they want to change the frequency of something, like feats gained, or ability score increases?

The complexity (but, more importantly, the prevalence) of house rules is death on a character sheet program.  In a fundamental way, programming is codifying rules, and if the rules aren’t fixed ...  Even when I’m noodling around with designing a character sheet that will only be useful for me and my friends, I still hit this problem, because we don’t all agree on what the house rules should be, and we’re constantly changing our minds.  Imagine how much more difficult it is to come up with something that will be useful to all gamers: there’s a reason that D&D has been around for over 40 years and no one has yet solved this problem.  Oh, sure: there are lots of attempts out there, some done with spreadsheets, some as database front-ends, and some as general programs.  But this is not a solved problem, by any means, and all of them have some area where they fall down.  Again, the prevalence of house rules in roleplaying is a crucial thing here, because it means that you can’t just say, “well, I’ll just make a program that works as long as you’re not using any house rules at all, and that’ll be better than nothing,” because now your userbase is about 4 or 5 people.  It’s hardly worth the effort.

So it’s not an easy problem, although I often feel like that’s a pretty feeble excuse for why I’ve been working on what is essentially the same program for 35 years and never managed to finish it.  But I’m feeling pretty good about my latest approach, so, if you’ll indulge me in a bit (more) technobabble, I’ll tell you basically how it works.

First, after a long hiatus from the spreadsheet angle, I’m back to it, but this time using Google Sheets.  Although I’ve already hit the complexity wall6 with ‘Sheets, it took much longer to get to than with Excel.7  Plus it has a number of things I never had with Excel:8 you can sort and filter in array formulae, and you have both unique and join.  Much more intelligent handling of array formulae is the biggest win for me with Google Sheets; in many other areas (particularly cell formatting) it still trails Excel, to my annoyance.  But mainly the smarter array formulae mean that I never have to program extensions, as I did with Excel.  Plus, when I do decide to use some extensions (mainly to make complex/repetitive tasks easier), I get to program in Javascript, which is almost a tolerable language, as opposed to VBA, which is decidedly not.  I still have the code/data problem, but I’ve come up with a moderately clever solution there: all my “input cells” (which I color-code for ease-of-use) don’t start out blank, but rather with formulae that pull data from a special tab called “LoadData,” which is itself blank.  Then there’s another tab called “SaveData,” which contains a bunch of formulae that pull all the data from the input cells: every input cell has a corresponding row on the “SaveData” tab.  When you want to upgrade your sheet, you can rename the existing sheet, grab a new (blank) copy of the upgraded sheet, go to “SaveData” on the old sheet, select-all, copy, go to “LoadData” in the new sheet, then paste values.9  (And again: I coded up a little Javascript extension for the sheet that will do all that for you, but you still could do it manually if you needed to for any reason.)  Now, this isn’t perfect: the biggest downside is that, if you happen to know what you’re doing and you actually stick a formula into an input cell, that’s going to get lost—that is, it’ll silently revert to the actual current value—when you upgrade your sheet.  But that’s moderately rare, and it works pretty awesomely for the 95% of other cases where you need to transfer your data.  I still miss the ability to do database ops (e.g. SQL),10 and I absolutely miss the ability to make classes and do inheritance, but so far I haven’t found any problem that I can’t solve with enough applications of match and offset, hidden columns, and tabs full of temporary results.  (To be fair, I’ve postponed solving several problems, and I have a lot of “insert arbitrary bonus here” input cells, but those actually help out in the presence of house rules, so I don’t mind ’em.)
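
If that copy-and-paste dance sounds fiddly, it is, which is exactly why I scripted it.  I won’t paste my actual extension here, but a bare-bones sketch of the idea looks something like this (the function name and the business of handing it the old spreadsheet’s ID are just illustrative framing; only the “SaveData” and “LoadData” tab names are real):

  // Bare-bones sketch (not my actual extension).  Copies the values from the
  // old sheet's SaveData tab into the new sheet's LoadData tab, so the
  // input-cell formulae in the new sheet pick up all the old data.
  function importOldCharacterData(oldSpreadsheetId) {
    const oldSaveData = SpreadsheetApp.openById(oldSpreadsheetId)
                                      .getSheetByName('SaveData');
    const newLoadData = SpreadsheetApp.getActiveSpreadsheet()
                                      .getSheetByName('LoadData');
    // getValues() hands back plain values, which is the by-hand "paste values"
    // step: any formulae on SaveData collapse to their current results.
    const values = oldSaveData.getDataRange().getValues();
    newLoadData.getRange(1, 1, values.length, values[0].length).setValues(values);
  }

(In practice this sort of thing wants to hang off a custom menu item, since openById needs authorization that a plain custom function won’t get, but that’s a detail.)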

So I feel like I’m closer now than I ever have been before.  Sure, this one will only work for D&D, and only for one edition of D&D,11 but if I can make it work for pretty much any such character, that’ll still be the closest to fulfilling my dream that I’ve achieved thus far.  I’ve got a lot more testing to do before I can make that claim, and several more character types to flesh out (I haven’t done very much with spellcasters at all, and monks are always a giant pain in the ass), but it looks promising, and I’m starting to get just a little bit excited about it.  Which is why I wanted to share it with you.  And also because it’s been consuming a fair amount of my free time lately, so I thought it might be good to get some details out there for posterity.  Maybe one day, if you’re a D&D player, you’ll be using a version of my character sheet on your laptop at the gaming table.

Or maybe I’ll still be working on it in the nursing home.  Either way, it should be fun.



__________

1 For the 6510, this would have been.  Although I didn’t really have any concept of that at the time; in fact, I really only know it now because Wikipedia just told me so.
2 That is, because Javascript is how you write extensions for Google Sheets, just as VBA was how you wrote them for Excel.
3 Which, as I mentioned, I actually tried to do once.  I didn’t fancy it.
4 Let’s pretend that where “elsewhere” is is not really important for a moment.  The truth, of course, is that it’s vitally important.  But these are not the droids you’re looking for.
5 Which is not unheard of.  A lot of code out there in the world doesn’t really have data entered by a user, and quite a chunk of it doesn’t even have “users” at all.  And a lot of programmers work exclusively on such code.  For those folks, this is an interesting philosophical debate as opposed to a self-obvious truth.
6 By which I mean the point at which a spreadsheet fails to recalculate certain cells for no apparent reason.  Generally if you just delete the formula and re-enter it, then everything works.  But it’s nearly always intermittent, and thus useless to complain about or report.  Every spreadsheet I’ve ever worked with has a complexity wall, and the character sheet app always manages to hit it eventually.
7 To be fair to Excel, that was a decade or two ago.  It might be better now.  But I bet it’s not.
8 Again, it’s possible that Excel may have one or more of these features by now.
9 Well, except that Google Sheets currently has a bit of a bug with trying to paste values from one sheet to another.  But there’s a simple workaround, which is again a perfect reason to have a little code extension to do the steps for you.
10 Google Sheets has a query function that sort of lets you do pseudo-SQL on your data tables, but you can only refer to columns by letter, not name, so I consider it fairly useless.
11 Specifically, 5e, which I’ve talked about before on this blog.