rogert2

rogert2 t1_j7bvtxn wrote

Putting medical and voting records in an immutable public datasource is a colossally bad idea.

Nothing about supply chains can be done better by blockchain. It has no ability to track anything that isn't already being tracked.

3

rogert2 t1_j79th02 wrote

I find it hard to believe that you've "looked it up" but don't know about the science behind it. Do you not have Wikipedia on your internet?

It is one thing to have questions or even doubts about the theory. It's another thing to claim to be unaware of what the theory is, or what evidence has been offered to support it, when there are multiple excellent sources of information on the topic that are easily found and presented for a non-specialist audience.

0

rogert2 t1_j79s1da wrote

Crypto will create big problems.

It's a terrifically bad foundation for an economy, but the people behind it have enough money and clout to crowbar it into our lives.

When that happens, it will matter that the crypto economy makes basic consumer protections literally impossible, and that the "payment processors" serving as crypto's equivalent of PayPal charge anywhere from 50% to 1000% of the purchase price for each transaction. (It would be the equivalent of a hyper-regressive tax, but worse, because it would be paid to private profit-seekers rather than to a government that has positive obligations to the populace.)

11

rogert2 t1_j70zh4c wrote

Zooming back out to the larger argument: it seems like you're laboring under some variation of the picture theory of language, which holds that words have a metaphysical correspondence to physical facts. You couple this with the assertion that even though we grasp that correspondence (and thus wield meaning via symbols), no computer ever could -- an assertion you support by pointing to several facts about the physicality of human experience which, it turns out, are either not categorically unavailable to computers or are demonstrably not components of intelligence.

The picture theory of language was first proposed by super-famous philosopher Ludwig Wittgenstein in the truly sensational book Tractatus Logico-Philosophicus, which I think he wrote while he was a POW in WWI. Despite the book taking Europe by storm, he later completely rejected all of his own philosophy, replacing it instead with a new model that he described as a "language game".

I note this because, quite interestingly, your criticisms of language models seem like a very natural application of Wittgenstein's language-game approach to current AI.

I find it hard to describe the language-game model clearly, because Wittgenstein utterly failed to articulate it well himself: Philosophical Investigations, the book in which he laid it all out, is almost literally an assemblage of disconnected post-it notes that he was still organizing when he died, and they basically shoveled it out the door in that form for the sake of posterity. That said, it's filled with startling insight. (I'm just a little butt-hurt that it's such a needlessly difficult work to tackle.)

The quote from that book which comes to my mind immediately when I look at the current state of these language model AIs, and when I read your larger criticisms, is this:

> philosophical problems arise when language goes on holiday

By which he means something like: "communication breaks down when words are used outside their proper context."

And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."
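To make the word-shuffling point concrete, here's a minimal sketch of a Markov-chain babbler (a toy of my own invention, not how ChatGPT actually works): it strings words together purely from co-occurrence statistics, with no model of meaning or context at all.

```python
import random
from collections import defaultdict

# Toy corpus; any text would do.
corpus = "the horse runs in the field and the field is green".split()

# Record which words follow which -- pure co-occurrence statistics, no meaning.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def babble(start, length=8):
    """Emit a locally plausible word sequence with no grasp of what any word means."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(babble("the"))  # e.g. "the field is green" -- grammatical-ish, but every word is "on holiday"
```

ChatGPT is vastly more sophisticated than this toy, but the objection is the same in kind: statistics over symbols, with nothing standing behind them.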

But: just because language-model systems don't qualify as true AGIs, that doesn't mean no such thing could ever exist. That's a stronger claim that requires much stronger proof, proof which I think cannot be recovered from the real shortcomings of language-model systems.

Still, as I said, I think your post is a good one. I've read a lot of published articles written by humans that didn't engage with the topic as well as I think you did. Keep at it.

3

rogert2 t1_j70zgil wrote

Good post. I do have responses to a few of your points.

You argue that the systems we're building will fail to be genuine intelligences because, at bottom, they are blindly manipulating symbols without true understanding. That's a good objection, just as valid in the ChatGPT era as it was when John Searle presented it as a thought experiment that has become known as "The Chinese Room argument":

> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

There's plenty of evidence to show that modern "AIs," which are just language models, are essentially the same as Searle's room (worse, even, because their instructions are noticeably imperfect). So, I think you're on solid ground to say that ChatGPT and other language models are not real intelligences, and furthermore that nothing which is just a language model could ever qualify.
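If it helps, here's the Room reduced to a few lines of toy code (the rule table and phrases are invented purely for illustration): a rote lookup that returns fluent-looking replies while understanding nothing.

```python
# A toy "Chinese Room": replies come from a rule book, not from understanding.
# The rules and phrases here are invented for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def room(note_under_the_door: str) -> str:
    """Match the incoming symbols against the rule book and slide a reply back out."""
    return RULE_BOOK.get(note_under_the_door, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(room("你好吗？"))  # a fluent answer, produced with zero comprehension
```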

But it's one thing to say "a language model will never achieve understanding," and quite another to say "it is impossible to create an artificial construct which has real understanding." And you do make that second, stronger claim.


Your argument is that the foundation that works for humans is not available to computers. I think the story you tell here is problematic.

You talk a bit about the detailed chain of physical processes that occur as sensory input reaches the human body, travels through the perceptual apparatus, and ultimately modifies the physical structure of the brain.

But, computers also undergo complex physical processes when stimulated, so "having a complex process occur" is not a categorical differentiator between humans and computers. I suspect that the processes which occur in humans are currently much more complex than those in computers, but we can and will be making our computers more complex, and presumably we will not stop until we succeed.

And, notably, a lot of the story you tell about physical processes is irrelevant.

What happens in my mind when I see something has very little to do with the rods and cones in my eyes, which is plain when we consider any of these things:

  • When I think about something I saw earlier, that process of reflection does not involve my eyeballs.
  • Color-blind people can learn, understand, and think about all the same things as someone with color-vision.
  • A person with normal sight who becomes blind later does not lose all their visual memories, the knowledge derived from those memories, or their ability to reflect on those things.

Knowledge and understanding occur in the brain and not in the perceptual apparatus. (I don't know much about muscle memory, but I'd wager that the hand muscles of a practiced pianist don't play a real part in understanding a Rachmaninoff work. If any real pianists disagree on that point, PM me with your thoughts.)


So, turning our attention to just what happens in the brain, you say:

> The physical activity and shifting state ARE the result, no further interpretation necessary

I get what you're saying here: the adjustment that occurs within the physical brain is the learning. But you're overlooking the fact that this adjustment is itself an encoding of information, and is not the information itself.

It's important to note that there is no resemblance between the physical state of the brain and the knowledge content of the mind. This is a pretty famous topic in philosophy, where it's known as "the mind-body problem."

To put it crudely: we are quite certain that the mind depends on the brain, and so doing stuff to the brain will have effects on the mind, but we also know from experiment that the brain doesn't "hold" information the way a backpack "holds" books. The connection is not straightforward enough that we can inspect the content of a mind by inspecting the brain.

I understand the word "horse." But if you cut my brain open, you would not find a picture of a horse, or the word "horse" written on my gray matter. We can't "teach" somebody my email password by using surgery to reshape their brain like mine.

And that cuts both ways: when I think about horses, I have no access to whatever physical brain state underlies my understanding. In fact, since there aren't any nerve endings in the brain, and my brain is encased in my skull (which I have not sawed open), I have no direct access to my brain at all, despite being quite aware of at least some of the content of my mind.

So, yes, granted: AI based on real-world computing hardware would have to store information in a way that doesn't resemble the actual knowledge, but so do our brains. And not only is there no reason to suppose that intelligence resides in just one particular encoding mechanism, even if it did, there's no reason to suppose that we couldn't construct a "brain" device that uses that same special encoding: an organic brain-thing, but with five lobes, arranged differently to suit our purposes.
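A trivial illustration of the encoding-vs-content point (just an analogy, not a claim about neuroscience): the bytes a computer uses to store the word "horse" bear no resemblance to a horse either, yet the information is perfectly well encoded.

```python
# The stored encoding need not resemble the thing it encodes.
word = "horse"
print(list(word.encode("utf-8")))  # [104, 111, 114, 115, 101] -- no picture of a horse in there
```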


The underpinnings you highlight are also problematic.

I think this quote is representative:

> The base case, the MEANING comes from visceral experience.

One real objection to this is that lots of learning is not visceral at all. For example: I understand the term "genocide," but not because I experienced it first-hand.

Another objection is that the viscera of many learning experiences are essentially indistinguishable from each other. As an example: I learned different stuff in my Philosophy of Art class than I learned in my Classical Philosophy class, but the viscera of both consisted of listening to the exact same instructor lecturing, pointing at slides that were visually all but identical to each other, and reading texts printed on paper of the same color and in the same typeface, all in the exact same classroom.

If the viscera were the knowledge, then because the information in these two classes was so different, I would expect there to be at least some perceptible difference in the viscera.

And, a Spanish student who took the same class in Spain would gain the same understanding as I did, even though the specific sounds and slides and texts were different.

I think all of this undermines the argument that knowledge or understanding are inextricably bound up in the specifics of the sensory experience or the resulting chain reaction of microscopic events that occurs within an intelligent creature.

TO BE CONTINUED...

2

rogert2 t1_j6tjfkw wrote

> How big is the supermassive black hole S50014+81 compared to Earth?

To answer that, we divide the black hole's "size" by Earth's "size." And by "size," I think you mean something like diameter. So, the formula is H ÷ E.

This is a really simple formula, but we have to make sure we're using the same units for it to work.

  • The diameter of Earth is 12,756.3 km.

  • The black hole's diameter is 1,582 AU. The AU is also a unit of length (like the meter), but it's a different unit than the kilometer.

So, we either have to convert Earth's size from km to AU, or convert the black hole's size from AU to km. (Let's do the latter.)

1 AU = 149,597,870.7 km

So, the size of the black hole is 1,582 × 149,597,870.7 = 236,663,831,447.4 km.

Now that both are in the same units, we can go back to the original formula and plug in our numbers:

H ÷ E becomes 236,663,831,447.4 km ÷ 12,756.3 km ≈ 18,552,701.9. (The kilometers cancel, so the answer is a pure ratio, not a distance.)

Thus, the black hole is about 18.5 million times as wide as Earth.
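The same arithmetic as a few lines of Python, using the figures quoted above:

```python
# Figures quoted above.
EARTH_DIAMETER_KM = 12_756.3
BLACK_HOLE_DIAMETER_AU = 1_582
KM_PER_AU = 149_597_870.7

black_hole_diameter_km = BLACK_HOLE_DIAMETER_AU * KM_PER_AU   # ~236.66 billion km
ratio = black_hole_diameter_km / EARTH_DIAMETER_KM            # km / km -- the units cancel

print(f"{black_hole_diameter_km:,.1f} km")     # 236,663,831,447.4 km
print(f"{ratio:,.1f} times Earth's diameter")  # ~18,552,701.9
```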

19

rogert2 t1_j6t4g7m wrote

One possibility is that AI could be used to manufacture evidence. As others have pointed out, that may not pose as big a danger as might be feared. But, yeah, it's a thing to be alert for.

Another possibility is that AI could be used to enhance the credibility of social engineering attacks made against the humans in the system. It might be a lot easier to trick your opponent's legal team into divulging confidential info by making a FaceTime call that presents an AI deepfake of their boss, claiming to be calling from a colleague's phone, asking for some information about the case or legal strategy. "My phone died, I'm calling from a friend's phone; send me the email addresses for our witness list so I can [do something productive]."

Another possibility is that AI will be used to vet jurors. Instead of just asking potential jurors if they have any prejudicial opinions about EvilCorp, and having to take their word for it, you can have AI digest all that person's published writing (including social media) and provide you with a summary. "Based on analysis of writing style, these two anonymous social media accounts belong to Juror 3, and have been critical of large corporations in general and EvilCorp's industry in particular. Boot juror 3." Rich legal teams will have even more powerful x-ray vision that helps them keep out jurors who have justified negative opinions about their demonstrably predatory clients.

And probably a lot more. I guess paralegals are really worried that ChatGPT will eat their whole discipline, and since "people are policy," that's going to have an impact on outcomes.
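(As an aside: the juror-vetting scenario above is basically stylometry, i.e. authorship attribution from writing style. Here's a crude sketch of the idea, with invented snippets, using character n-gram fingerprints; real systems are far more sophisticated.)

```python
# Crude stylometry sketch: compare writing-style "fingerprints" via character n-grams.
# The snippets are invented; this illustrates the idea, it is not a real attribution system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_juror_writing = "I honestly think these big corporations never face real consequences."
anonymous_account   = "honestly, big corporations never face any real consequences, do they"
unrelated_account   = "great recipe, I swapped the butter for olive oil and it turned out fine"

vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
fingerprints = vectorizer.fit_transform([known_juror_writing, anonymous_account, unrelated_account])

# Higher cosine similarity = more similar style (and, the theory goes, more likely the same author).
print(cosine_similarity(fingerprints[0], fingerprints[1])[0][0])  # likely higher
print(cosine_similarity(fingerprints[0], fingerprints[2])[0][0])  # likely lower
```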

10

rogert2 t1_j6t267y wrote

I find this hard to believe.

For one thing, there are very many printers out there that only print black-and-white. I happen to own one such printer, which (when I bought it) was the best-selling model on Amazon. It takes black toner only.

There are also many printers out there whose IP addresses are meaningless, because they're on a private network. My home printer has an IP of something like 192.168.1.4, which is what my common-as-dirt home wifi router gave it. So, it's not going to be very helpful to know the IP of the printer or even the computer that sent the print job.

Yes, there are many circumstances where these problems don't apply, and yes, there are undoubtedly people out there trying to falsify evidence who wouldn't know to take any of the simple steps necessary to defeat "hidden fingerprinting" like this. But it seems so unreliable that I would be surprised if vendors even tried.
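(For what it's worth, the "private network" point is easy to check for yourself; a one-line sketch with Python's standard ipaddress module:)

```python
import ipaddress

# Addresses like 192.168.x.x are reserved for private networks (RFC 1918),
# so they identify nothing beyond the local router that handed them out.
print(ipaddress.ip_address("192.168.1.4").is_private)  # True
```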

2

rogert2 t1_j6t0fcb wrote

I doubt it.

At least in the U.S., the stated reason for having juries is the assertion that each of us deserves to be judged by our peers. AI will never be our peer, not because it isn't as smart, but because it is categorically not a human.

I do expect AI to be used extensively during voir dire and to observe jurors during high-stakes trials.

I think we'll see almost zero AI inside the courtroom for philosophical and legal reasons, but a huge amount of it just outside for adversarial, winning-is-all-that-matters reasons.

9

rogert2 t1_j6gifd1 wrote

Reply to comment by New-Tip4903 in Private UBI by SantoshiEspada

"To wean off" pretty much means "bridging the transition from the current system to a future system."

So, saying it won't work for people in the short term amounts to admitting it isn't a "weaning off" of anything.


Also, Elon Musk is an awful right-wing sociopath. We should all fear the day that he ends up with power over our livelihoods. The less money and power he has, the better.

2

rogert2 t1_j6ghc2z wrote

Reply to comment by BoringBob84 in Private UBI by SantoshiEspada

> There are many people who will piss away every penny that they get.

That's also true today, in a capitalist society that has no UBI. Since you treated this possibility as a fatal argument against UBI, I assume you also believe the current economic system is exactly as unworkable as UBI, because the current capitalist economy demonstrably fails to prevent this bad outcome you consider a deal-breaker.


If we had UBI, then we could at least guarantee that the only people who are broke and homeless are people who consistently waste all their money. (And, if they ever decide to become responsible, their next UBI payment would end their poverty and homelessness.)

Capitalism doesn't provide that guarantee. In our current system, many people are broke and homeless simply because they can't find stable work or work that pays a decent wage.

2

rogert2 t1_j6get45 wrote

Since there are a lot of incredibly naive techno-fetishists in this sub, here are a few things you might want to consider before you declare that nascent AI is a super-good thing that we all need a lot more of without restriction:

  • Wouldn't it be bad if perverts used something like Midjourney to create a whole bunch of child pornography?
  • Wouldn't it be bad if your boss used something like ChatGPT to write an employment contract that took rights and privileges away from you in a way that is subtle and hard for you to detect until it's too late?
  • Wouldn't it be bad if awful political groups like Project Veritas used something like ChatGPT to punk and embarrass organizations like Planned Parenthood or voter outreach orgs at an industrial scale, for the purpose of bankrupting them with legal trouble?

All this AI would be great if the world were populated exclusively by saints. But it is not. Lots of people are going to be harmed and exploited by bad actors wielding this technology, until and unless vendors and government take steps to prevent it.

And one more thing: whoever pays you right now, they wish they could stop paying you. So when AI gives them a chance to pay an AI vendor 1% of your pay for the same work, they will seize that chance. And the people who are making AI are doing it precisely so they can get paid to do that -- because 1% of everybody else's salary as passive income is still an ocean of money.

4

rogert2 t1_j6gcsjk wrote

I don't know if there are any formal organizations, but there are some reasonably large groups of professionals who are growing increasingly aware that AI might take their jobs in the near term:

  • artists
  • writers
  • musicians
  • paralegals & other research staff
  • programmers

If they aren't organized now, they had better get their butts in gear.

Just the other day, a U.S. congressman gave a speech in the House that was written by ChatGPT. He did not tell his colleagues it was written by AI until after he delivered it. He did this to urge the Congress to start thinking about AI.

18