Sunday, June 25, 2023

Shadowfall Equinox VIII

"Oceans of Storm Clouds"

[This is one post in a series about my music mixes.  The series list has links to all posts in the series and also definitions of many of the terms I use.  You may wish to read the introduction for more background.  You may also want to check out the first volume in this multi-volume mix for more info on its theme.

Like all my series, it is not necessarily contiguous—that is, I don’t guarantee that the next post in the series will be next week.  Just that I will eventually finish it, someday.  Unless I get hit by a bus.]


Last volume I noted that Shadowfall Equinox was catching up to Salsatic Vibrato in terms of number of volumes.  And, with this latest one, they’re officially tied.  Realistically, I think SfE may hit a volume IX before SVb does.  We shall see.

As I said last time, the primary reason is that Equinox is what I use for background music when I work, and this volume is no exception to that.  And, as usual when getting to these large numbers of volumes, the challenge is to bring something fresh to the mix without abandoning the dependable artists that have been with us on every volume.  Let’s see how we did.

In the category of repeating artists, there’s one who has been on every volume, and two who have been on every volume but one, and they’re all three here.  The inimitable Jeff Greinke is certainly back, with an album we haven’t heard from yet on this mix: Winter Light.1  “Mountain in the Clouds” is the same drifting, ethereal ambient that we’ve come to expect from Greinke, but this album has more of a brittle, crisp feeling, as the seasonal reference in its title implies.  Still, I feel this particular track works in a more autumnal setting, which is what this mix is all about.  As for pianist Kevin Keller,2 “Stillness” is a melancholy, cello-heavy piece that’s pretty perfect for the mix.  And, finally, darkwave masters Black Tape for a Blue Girl3 also provide a cello-heavy piece, “Fitful.” This is a particularly ambient track for Rosenthal, with the occasional crescendo of what might be brass (or just synth), and the gentle, almost unnoticeable, wordless vocals of an uncredited female singer.

Other returning artists include Ruben Garcia (seen on volumes IV, V, and VII) and Ludovico Einaudi (seen on volume VI), whom I paired back to back so that Einaudi’s spare piano on “In Principio” could highlight Garcia’s departure from that style with some fuller, synthy work on “Five Dreams from Yesterday” (which really sounds more like Greinke than Garcia’s normal output); Dead Can Dance and Loreena McKennitt (whom I paired on volume V), here again with a touch of worldmusic: on V, I used McKennitt followed by DCD as an opener, whereas here I’ve followed DCD’s somber “Agape” with McKennitt’s beautiful “Tango to Evora” as our closer; and, last but not least, cellist Jami Sieber (seen on volume IV).

Cello, in fact, is a pretty common instrument for this mix: we’ve heard not only from Sieber before, but also cellist David Darling and groups like Amber Asylum and Angels of Venice who feature full-time cellists.  Plus various guest cellists: Martin McGarrick on This Mortal Coil tracks, Audrey Riley on Hope Blister tracks, and Mera Roberts on several Black Tape for a Blue Girl songs.  Here, I’ve put together a solid block of cello music as our centerpiece: 5 songs in a row, and I kick it off with Eugene Friesen.  He’s a recent find for me, which explains why we haven’t seen him here yet, but he’s been around since the 80s, and I think he may become a regular here.  For his debut on this mix, I’ve chosen the title track from his 2005 album In the Shade of Angels, a very spare, not-quite-melancholy, ultimately gorgeous instrumental to kick off the block.  From there we go into the melancholy track from the Kevin Keller Ensemble (including Clarice Jensen on cello), and then to Colm McGuinness, whom we’ve mostly seen in this series as a purveyor of gaming music: his “Welcome to Wildemount” is the explosive opener of Eldritch Ætherium II, and he has one more track there as well as one on the following volume.  But he’s also an excellent cellist (as well as playing many other instruments) and “Koala” is a sweeping yet still tenebrous track that is perfect for the midpoint of this block.  Then we hit Sieber, who is surely my favorite cellist of all time, with “The Burning Dawn” from 2013’s Timeless.  It’s an anticipatory track, though it’s not clear exactly what the listener is waiting for.  But it carries us sedately to the block closer, BTfaBG’s “Fitful.” Frequent contributor Mera Roberts plays the cello here, and the light, wordless vocals may well be Roberts herself, who provides vocals as well as cello for her other two projects.4  She’s very talented, and lifts this BTfaBG track to a level of sublime I don’t think it could otherwise achieve.

And, speaking of blocks of tracks, I close out the mix with a fun triad of worldmusic, starting with Thievery Corporation’s “Indra.” The DC-based Corporation is normally too upbeat for this mix: we normally see them in places like Smokelit Flashback (volumes III and V), Paradoxically Sized World (volumes I and IV), and Apparently World.  Still, we also heard from them on Zephyrous Aquamarine and even once on Numeric Driftwood (volume IV), so we know they can do mellow when the mood calls for it.  And “Indra,” while it maintains a decently strong hip-hop beat, really brings the dreamy trip-hop with some Middle Eastern flair.  Then to “Agape,” continuing the Middle Eastern theme with what is probably an oud and a qanun, layered with more of Lisa Gerrard’s powerful vocals, singing in a language which might be Earthly or might be just Gerrard’s glossolalia.  And we close with McKennitt’s “Tango to Evora,” which starts out with a simple flamenco-style guitar and then layers on violin, harp, and finally McKennitt’s angelic wordless vocals.  A gentle, soothing track which makes for an amazing closer.

Once again, we’re quite short on lyrics to draw a volume title from, so I used the now-typical method in such situations (that is, I plucked words from various song titles and glued them together).  I actually really like this particular one.



Shadowfall Equinox VIII
[ Oceans of Storm Clouds ]


“For the West Coast Dark Ambient Bedroom Warriors” by the Mountain Goats, off Goths
“Oceans of Change” by Stray Theories, off Oceans, Volume 1 [EP]5
“Tanaris” by Tracy W. Bush, off World of Warcraft Soundtrack [Videogame Soundtrack]
“Aquarium” by Casino Versus Japan, off Whole Numbers Play the Basics
“Stay with Me” by Clint Mansell, off The Fountain [Soundtrack]
“In Principio” by Ludovico Einaudi, off Nightbook
“Five Dreams from Yesterday” by Ruben Garcia, off Lakeland
“Riders on the Storm” by Yonderboi [Single]
“In the Shade of Angels” by Eugene Friesen, off In the Shade of Angels
“Stillness” by Kevin Keller, off In Absentia
“Koala” by Colm R. McGuinness [Single]
“The Burning Dawn” by Jami Sieber, off Timeless
“Fitful” by Black Tape for a Blue Girl, off Remnants of a Deeper Purity
“Mountain in the Clouds” by Jeff Greinke, off Winter Light
“Seelenlos” by Scabeater, off Necrology
“Indra” by Thievery Corporation, off The Mirror Conspiracy
“Agape” by Dead Can Dance, off Anastasis
“Tango to Evora” by Loreena McKennitt, off The Visit
Total:  18 tracks,  80:11



Clint Mansell’s beautiful if haunting score for The Fountain makes its first appearance here; “Stay with Me” is a slow, synthy track that seems to have ghostly tones in its background.  The World of Warcraft soundtrack also makes its first appearance outside Eldritch Ætherium, where I used two of Jason Hayes’ tracks on volume III.  This is a Tracy W. Bush composition, “Tanaris,” which also has a very haunted quality, as well as sounding somewhat oceanic.  I thought it might be a bit too much to put those two back to back, so I broke them up with an interesting track I found while looking for different versions of Saint-Saëns’ “Aquarium.”6  This track of the same name by Casino Versus Japan (the musical moniker of Wisconsin electronica artist Erik Paul Kowalski) has nothing to do with the piece from Le Carnaval des Animaux, but it’s a great, underwatery ambient/downtempo piece that I’m glad to have stumbled onto by accident.

For the rest, there’s nothing too unexpected here.  Stray Theories is a cinematic and electronica project by New Zealand artist Micah Templeton-Wolfe; “Oceans of Change” is a gorgeous ambient piece that flows insanely well off of our opener and sets us up for the more cinematic tracks to come.  That opener, of course, is the Mountain Goats’ exquisitely named track “For the West Coast Dark Ambient Bedroom Warriors,” which is, as the Brits would say, exactly what it says on the tin.  John Darnielle’s long-running (since 1994) project is musically eclectic, and was originally a one-man affair, though by the time of 2017’s Goths, he was opening up to more long-term bandmates.  This amazingly spare track is, as its name suggests, the epitome of what this mix is all about, so the second I heard it I knew it had to be a volume opener.  It’s a bit of a departure for the Mountain Goats, but then you can say that about most of their songs, so it starts to become meaningless after a while.

Next up is a small bridge from Scabeater, a band not only so obscure that neither AllMusic nor Wikipedia knows they exist—which, you may recall, are my criteria for “really obscure band”—but even Discogs says “hunh??” when you ask about them.7  I found Scabeater on Jamendo, and their Skinny-Puppy-adjacent brand of industrial-flavored goth is certainly not for everyone—hell, a lot of it isn’t even for me—but they hit a winner every once in a while, and the 46 seconds of strings-backed piano simplicity that is “Seelenlos” is just sublime.  For the longest time, “Mountain in the Clouds” just butted directly up against “Indra,” and it wasn’t working for me, but I couldn’t figure out what to do about it, until I remembered this perfect little bridge.

And that just leaves us with perhaps the oddest choice, Hungarian producer László Fogarasi Jr., better known as Yonderboi, who here graces us with an instrumental, jazzy-to-the-point-of-being-loungy version of “Riders on the Storm” by the Doors.  I love the original track (it is almost certainly my favorite Doors song), and something about this offbeat cover really caught my ear.  It takes the song in a completely different direction (as all the best covers do) and is somehow faithful to its inspiration while also being a completely new song.  I’ve dragged it around through several volumes of this mix, never quite finding the perfect placement for it, until it finally managed to land here.  Its Hammond-organ-style melody flows beautifully off the fading synth of Garcia’s “Five Dreams,” and it serves as the perfect palate cleanser before we leap into the 5-cello block of Friesen / Keller / McGuinness / Sieber / BTfaBG.  I’m glad I finally found it a home.


Next time, we’ll look at some more creativity-inducing gaming music.


Shadowfall Equinox IX




__________

1 Although I used “Orographic” from that album on Mystical Memoriam.

2 Seen on every volume except the first.

3 Seen on every volume except IV.

4 Mera is half of Mercurine, a third-wave goth band that occupies the same space between goth and industrial as Faith and the Muse, and all of Oblivia, a cello-driven dark ambient project reminiscent of Amber Asylum, but with more vocals.  Both are relatively unknown, and both undeservedly so.

5 You guys know how much I hate to link to YouTube, but I can’t find anywhere else to get this song.

6 I used one version on Classical Plasma I and one on Phantasma Chorale I.

7 I may have to invent a new term ... super duper obscure band, perhaps?











Sunday, June 18, 2023

Dinner and a Show

Today was Father’s Day, and we took the whole family out for a teppan yaki lupper.  If you don’t know what “lupper” is, it’s a meal about halfway between lunch and supper, in the same way that “brunch” is halfway between breakfast and lunch.  Of course, according to the terminology I was raised with, “lupper” is still dinner, despite the odd timing, because “dinner” means “the biggest meal of the day, no matter what time you eat it.” But that’s a technicality.

If you don’t know what “teppan yaki” is, it’s the Japanese cuisine where they cook on the table (which is apparently called a “teppan,” although most of us Americans just say “hibachi”).  Where I’m from (the DC-VA-NC East Coast corridor), we typically just called it “Benihana,” because that was the only such place there was.  Well, at least that’s the way it was when I was growing up, which admittedly was a long time ago.

But, here in Southern California (and/or here in the aftertimes), we had a whole bunch of options, of which Benihana was only one (and not even the best one, apparently).  We went with a place called Musashi, which, going by their website, used to have 3 locations, but is now down to just one (the pandemic was not kind to most restaurants, but for teppanyaki restaurants in particular—where more than half the point is the showmanship of the meal preparation, so take-out isn’t as enticing—I’m guessing it was devastating).  Anyway, Musashi has been around since 1981, which is one of those years that seems ancient to my children but doesn’t seem that long ago to me.  But, it was 42 years ago, which is at least long ago enough that it seems like these folks know what they’re doing.  So, I don’t really want to tell you how much it cost us, but the food was excellent, and the kids seemed to enjoy the show (and, honestly, that was the main reason I wanted to go).  So I call it a success.

Next time, a longer post, assuming all goes well.









Sunday, June 11, 2023

Do Androids Dream of IQ Tests?

Recently, I was listening to a podcast—it happened to be Election Profit Makers, with the lovely and talented David Rees.1  In this particular episode,2 David offers this “hot take”:

I also think AI is kinda bullshit.  I’ve been thinking about it; I think there’s some stuff that AI can do, but on the other hand it really is not ... we shouldn’t call it AI.  Someone was making this point, that calling it “artificial intelligence” is kind of propaganda.  It’s not really intelligent yet.  It’s just like a word prediction algorithm, you know?  You give it a topic—it doesn’t know what it’s saying.  It’s ... it’s like an algorithm that predicts what the—given any word or paragraph, it predicts what the next most likely word is, I think.  I don’t think it really thinks ... I don’t think it’s artificial intelligence.

Of course, I put “hot take” in quotes because it’s not particularly hot: as David himself notes, other people have been making this observation for a while now, especially in relation to ChatGPT.  I gave my own opinions of ChatGPT several months ago, and it’s only become more pervasive, and more useful, since then.  Now, David’s assessment is not wrong ... but it’s also not complete, either.  David’s not a tech guy.  But I am.  So I want to share my opinion with you on this topic, but, be forewarned: I’m going to ask a lot of questions and not necessarily provide a lot of answers.  This is one of those topics where there aren’t any clear answers, and asking the questions is really the point of the exercise.

So, first let’s get the one minor detail that David is wrong about out of the way.  What David is referring to here are the LLMs, like ChatGPT.  To be pedantic about it, LLMs are just one form of AI: they just happen to be the one that’s hot right now, because it’s the one that’s shown the most promise.  If you’ve had the opportunity to interact with ChatGPT or any of its imitators, you know what I mean.  If not ... well, just take my word for it.  LLMs are extremely useful and extremely promising, and the closest we’ve come so far to being able to talk to a machine like a person.3  But they are not the totality of AI, and I’m sure there will be AI in the future that is not based on this technology, just as there was in the past.

But, forgiving that understandable conflation, what about this notion that an LLM is just a “predictive algorithm,” and it doesn’t actually think, and therefore it’s a misnomer to refer to it as “intelligence”?  David goes on to cite (badly) the “Chinese room” thought experiment; if you’re unfamiliar, I encourage you to read the full Wikipedia article (or at least the first two sections), but the synopsis is, if a computer program could take in questions in Chinese and produce answers in Chinese, and do so sufficiently well to fool a native Chinese speaker, then a person who neither speaks, reads, nor understands Chinese could be operating that program, and taking in the questions, and passing back the answers.  Obviously you would not say that the person could speak Chinese, and so therefore you can’t really say that the program speaks Chinese either.  Analogously, a program which simulates intelligent thought isn’t actually intelligent ... right?

This immediately reminds me of another podcast that I listen to, Let’s Learn Everything.  On their episode “Beaver Reintroductions, Solving Mazes, and ASMR,”4 Tom Lum asks the question “How does a slime mold solve a maze?” A slime mold is, after all, one of the lowest forms of life.  It doesn’t even have any neurons, much less a brain.  How could it possibly solve a maze?  Well, it does so by extending its body down all possible pathways until it locates the food.  Once it’s done that, it retracts all its pseudopods back into itself, leaving only the shortest path.
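
(For the programmers in the audience, here’s a toy sketch of that strategy in Python.  It’s purely illustrative, with a made-up maze and made-up names, and it’s certainly not a claim about the actual biology: grow down every open corridor at once, and, once the food turns up, retract everything except the one route that got you there.)

    from collections import deque

    # A toy maze: S = start, F = food, # = wall.  (Invented purely for illustration.)
    MAZE = ["#########",
            "#S..#...#",
            "#.#.#.#.#",
            "#.#...#F#",
            "#########"]

    def slime_mold_solve(maze):
        rows = range(len(maze))
        cols = range(len(maze[0]))
        start = next((r, c) for r in rows for c in cols if maze[r][c] == "S")
        frontier = deque([start])
        came_from = {start: None}      # each cell remembers which cell reached it
        while frontier:
            r, c = frontier.popleft()  # "grow" outward in every direction at once
            if maze[r][c] == "F":      # food found: "retract" to the single path
                path, cell = [], (r, c)
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return list(reversed(path))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (r + dr, c + dc)
                if maze[nxt[0]][nxt[1]] != "#" and nxt not in came_from:
                    came_from[nxt] = (r, c)
                    frontier.append(nxt)

    print(slime_mold_solve(MAZE))      # the shortest route from S to F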

Now, the conclusion that Tom (as well as his cohosts Ella and Caroline) arrived at was that this isn’t really “solving” the maze.  Tom also had some great points on whether using maze-solving as a measure of intelligence makes any sense at all (you should really check out the episode), but let’s set that aside for now.  Presuming that being able to solve a maze does indicate something about the level of intelligence of a creature, isn’t it sort of sour grapes to claim that the slime mold did it the “wrong” way?  We used our big brains to figure out the maze, but when a creature who doesn’t have our advantages figures out a way to complete the task anyway, we suddenly claim it doesn’t count?

Let’s go a step further.  If I give the maze to a person to solve, and they laboriously try every possible pathway until they find the shortest one, then are they really doing anything differently than the slime mold?  And does that mean that the person is not intelligent, because they didn’t solve the maze the way we thought they should?  I mean, just keeping track of all the possible pathways, and what you’ve tried already ... that requires a certain amount of intelligence, no?  Of course we lack the advantages of the slime mold—being able to stretch our bodies in such a way as to try all the pathways at once—but we figured out a way to use our brains to solve the problem anyhow.  I wonder if the slime mold would snort derisively and say “that doesn’t count!”

Now let’s circle back to the LLMs.  It is 100% true that all they’re doing is just predicting what the next word should be, and the next word after that, and so on.  No one is denying that.  But now we’re suddenly faced with deciding whether or not that counts as “intelligence.” Things that we’ve traditionally used to measure a person’s intelligence, such as SAT scores, are no problem for LLMs, which are now scoring in the top 10% on LSATs and bar exams.  But that doesn’t “count,” right?  Because it’s not really thinking.  I dunno; kinda feels like we’re moving the goalposts a bit here.
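
(And, for those same programmers, here’s roughly what “just predicting the next word” looks like at its absolute crudest: a toy Python sketch that counts which words follow which in a tiny invented corpus, then greedily continues a prompt.  Real LLMs are incomprehensibly more sophisticated than this, and everything below is made up for illustration, but the outer loop of “look at the context, pick a likely next word, tack it on, repeat” is the same basic shape.)

    from collections import Counter, defaultdict

    # A made-up micro-corpus, purely for illustration.
    corpus = ("the cat sat on the mat . the cat ate the fish . "
              "the dog sat on the rug .").split()

    # Count which words follow which.
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def continue_text(prompt, length=6):
        words = prompt.split()
        for _ in range(length):
            last = words[-1]
            if last not in following:
                break
            # Always take the single most likely next word (real models sample
            # from a probability distribution over tens of thousands of tokens).
            words.append(following[last].most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the dog"))    # -> "the dog sat on the cat sat on"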

Part of the issue, of course, is that we really don’t have the slightest idea how our brains work.  Oh, sure, we can mumble on about electrical impulses and say that this part of the brain is responsible for this aspect of cognition based on what lights up during a brain scan, but, at the end of the day, we can’t really explain what’s going on in there when you can’t remember something today that you had no trouble with yesterday, or when you have a crazy idea out of nowhere, or when you just know that your friend is lying to you even though you can’t explain how you know.  Imagine some day in the far future when scientists discover, finally, that the way most of our thinking works is that words are converted to symbols in our brains, and we primarily talk by deciding what the next logical symbol should be, given the current context of who we’re talking to and what we’re talking about.  If that were to ever happen, seems like we’d owe these LLMs a bit of an apology.  Or would we instead decide that that aspect of how we think isn’t “really” thinking, and that there must be something deeper?

Look, I’m not saying that ChatGPT (for example) actually is intelligent.  I’m just pointing out that we don’t have a very clear idea, ourselves, what “intelligent” actually means.  It’s like the infamous Supreme Court definition of obscenity: we can’t define intelligence, but we know it when we see it, and this ain’t it.  But what I find to be a more interesting question is this: why does it matter?

An LLM like ChatGPT serves a purpose.  Now, overreliance on it can be foolish—just check out the case of the lawyers who tried to use ChatGPT to write their legal briefs for them.  As the Legal Eagle points out in that video, their idiocy was not so much the use of an LLM in the first place, but rather the fact that they never bothered to double check its work.  So you can’t always rely on it 100% ... but isn’t that true of people as well?  Honestly, if you’re a lawyer and you get a person to do your work, you’re still responsible for their mistakes if you sign your name at the bottom and submit it to a judge.  An incisive quote from the video:

... the media has talked about how this is lawyers using ChatGPT and things going awry.  But what it’s really revealing is that these lawyers just did an all around terrible job and it just happened to tangentially involve ChatGPT.

So you can talk to an LLM as if it were a person, it talks back to you as if it were a person, it can give you information like a person, and oftentimes more information than you can get from most of the persons you know, and you can rely on it exactly as much (or, more to the point, exactly as little) as you can rely on another person.  But it’s not a person, and it’s not really “thinking” (whatever that means), so therefore it’s not “intelligent.” Is that all just semantics?  And, even if it is, is this one of those cases where semantics is important?

I’ve got to say, I’m not sure it is.  I think every person reading this has to decide that for themselves—I’m not here to provide pat answers—but I think it’s worth considering why we’re so invested in things like LLMs not being considered intelligent.  Does it threaten our place up here at the top of the food chain?  (Or perhaps that should be “the top of the brain chain” ...)  Should we seriously worry that, if an AI is intelligent, it poses a threat to the existence of humanity?  Many of the big tech folks seem to think so.  I personally remain unconvinced.  The Internet was proclaimed to be dangerous to humanity, as were videogames, television, rock-and-roll ... hell, even books were once considered to be evil things that tempted our children into avoiding reality and made them soft by preventing them from playing outside.  Yet, thus far, we’ve survived all these existential threats.  Maybe AI is The One which will turn out to be just as serious as people claim.  But probably not.

And, if it is the case that AI won’t take over the world and enslave or destroy us, then what difference does it really make whether or not it’s “technically” intelligent?  If it’s being useful, and if we can learn how to use it effectively without shooting ourselves in the foot, that’s good enough for me.  Perhaps it can be good enough for you as well.




[For complete transparency, I must say that, while ChatGPT did not write any of the words in this post, it did come up with the title.  Took it six tries, but it finally came up with something I felt was at least moderately clever.  So, if you like it, it’s because I’m very good at prompting LLMs, and, if you hate it, it’s because ChatGPT is not very smart.  This is one of the primary advantages of having an LLM as a contributor: I can hog all the credit and it will never be offended.]



__________

1 If you’re not familiar—and can figure out where to stream it—you should check out his Going Deep series.  It’s excellent.

2 Approximately 40 minutes in, if you want to follow along at home.

3 “LLM” stands for “large language model,” by the way, although knowing that is really unnecessary to follow along on this topic.

4 Again, if you want to follow along at home, jump to about 44:45.











Sunday, June 4, 2023

Puzzle Progress

Well, I finally kicked off my baby girl’s birthday campaign, and I think it started off pretty well.  She (and my eldest’s partner) seemed to enjoy it at any rate.  The other two kids ... well, let’s just say that they’re more of the “I don’t have patience with anything I can’t kill” school of D&D.  Still, they’re contributing, and I think they may come around.  And, if they don’t ... well, it isn’t their birthday game.

Longer post next time, most likely.