
RamsesThePigeon t1_jds0kei wrote

> I'm actually a programmer and at least know the basics of how machine learning works

Then you know that I'm not just grasping at straws when I talk about the fundamental impossibility of building comprehension atop an architecture that's merely complicated instead of complex. Regardless of how much data we feed it or how many connections it calculates as being likely, it will still be algorithmic and linear at its core.

>It can extract themes from a set of poems I've written.

This statement perfectly represents the issue: No, it absolutely cannot extract themes from your poems; it can draw on an enormous database, compare your poems with things that have employed similar words, assess a web of associated terminology, then generate a response that has a high likelihood of resembling what you had primed yourself to see. The difference is enormous, even if the end result looks the same at first glance. There is no understanding or empathy, and the magic trick falls apart as soon as someone expects either of those.
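The distinction being drawn here can be made concrete. A minimal sketch of statistical "theme extraction" — bag-of-words cosine similarity against human-labeled exemplar texts, with the `THEME_EXEMPLARS` table and `guess_theme` helper being hypothetical stand-ins for a real model's training data — shows how a plausible label falls out of pure word-count overlap:

```python
from collections import Counter
from math import sqrt

def bag_of_words(text):
    """Lowercase the text and count word occurrences."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical exemplar texts standing in for a vast corpus,
# each pre-labeled with a theme by a human.
THEME_EXEMPLARS = {
    "loss": "grief mourning absence silence empty grave farewell",
    "nature": "river forest leaves birdsong mountain bloom meadow",
    "love": "heart embrace longing tender kiss devotion warmth",
}

def guess_theme(poem):
    """'Extract' a theme by picking the exemplar whose word
    statistics most resemble the poem's. No comprehension is
    involved: only overlap between word-count vectors."""
    poem_vec = bag_of_words(poem)
    return max(
        THEME_EXEMPLARS,
        key=lambda t: cosine_similarity(poem_vec, bag_of_words(THEME_EXEMPLARS[t])),
    )

print(guess_theme("the empty grave kept its silence through my mourning"))
# → loss
```

The output looks like insight, but swap a few words and the "theme" changes with them; the program never knew what either poem was about.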

>It wasn't long ago we said a computer could never win at Go, and it would make you a laughing stock if you ever claimed it could pass the Bar exam.

Experts predicted that computers would win at games like Go (or Chess, or whatever else) half a century ago. Authors of science fiction predicted it even earlier than that. Hell, we've been talking about "solved games" since at least 1907. All that victory requires is a large enough set of data, the power to process said data in a reasonable span of time, and a little bit of luck. The same thing is true of passing the bar exam: A program looks at the questions, spits out answers that statistically and semantically match correct responses, then gets praised for its surface-level illusion.
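That "all it requires is enough data and processing power" point is easy to demonstrate on a toy scale. A minimal sketch, assuming a simple Nim-style subtraction game (take 1–3 stones, last stone wins): exhaustive memoized search "solves" every position with zero insight, which is the same brute-force principle that game-playing programs scale up.

```python
from functools import lru_cache

# A toy "solved game": players alternately take 1-3 stones from a
# pile; whoever takes the last stone wins. Given enough computation,
# every position's outcome under perfect play is simply looked up.
@lru_cache(maxsize=None)
def current_player_wins(stones):
    """True if the player to move can force a win. Pure exhaustive
    search over the game tree: no insight or intuition required."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The losing positions turn out to be exactly the multiples of 4.
print([n for n in range(1, 13) if not current_player_wins(n)])
# → [4, 8, 12]
```

A human discovers the multiples-of-4 pattern by reasoning; the program just tabulates every branch and reports what it finds.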

>The goalposts just keep shifting.

No, they don't. What keeps shifting is the popular (and uninformed) perspective about where the goalposts were. Someone saying "Nobody ever thought this would be possible!" doesn't make it true, even if folks decide to believe it.

>You're going really against the grain if you think it's not doing anything impressive.

It's impressive in the same way that a big pile of sand is impressive. There's a lot of data and a lot of power, and if magnitude is all that someone cares about, then yes, it's incredible. That isn't how these programs are being presented, though; they're being touted as being able to write, reason, and design, but all they're actually doing is churning out averages and probabilities. Dig into that aforementioned pile even a little bit, and you won't find appreciation for your poetry; you'll just find a million tiny instances of "if X, then Y."
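Those "million tiny instances of 'if X, then Y'" can be sketched directly. A minimal bigram model, with `CORPUS` as a hypothetical stand-in for the enormous training database: count which word follows which, then generate by repeatedly emitting the most frequent successor.

```python
from collections import defaultdict

# A tiny corpus standing in for the "enormous database".
CORPUS = "the cat sat on the mat and the cat slept on the mat".split()

# Build the "if X, then Y" rules: for each word, count its successors.
follows = defaultdict(lambda: defaultdict(int))
for x, y in zip(CORPUS, CORPUS[1:]):
    follows[x][y] += 1

def next_word(word):
    """Pick the statistically most frequent successor: no meaning,
    just a lookup in a table of observed frequencies."""
    candidates = follows[word]
    return max(candidates, key=candidates.get)

# Generate "text" by chaining the rules together.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
# → the cat sat on the cat
```

Real language models replace the frequency table with billions of learned weights and sample over probabilities instead of taking the maximum, but the generative step is still a conditional lookup, not an act of understanding.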

Anyone who believes that's even close to how a human thinks is saying more about themselves than they are about the glorified algorithm.
