green_meklar t1_jdtkiiy wrote

Good movie, but I read the book before watching it, and the film is not anything like a faithful adaptation. And unlike some adaptations that change things for good reasons (e.g. Lord of the Rings), I felt like in this case they could have done a better job of just doing what the book did, which was already largely good enough to be filmable.

1

green_meklar t1_jdkoezi wrote

Listened to the linked Yudkowsky interview. I'm not sure I've ever actually listened to him speak about anything at any great length before (only reading snippets of text). He presented broadly the case I expected him to present, with the same (unacknowledged) flaws that I would have expected. Interestingly he did specifically address the Fermi Paradox issue, although not very satisfactorily in my view; I think there's much more that needs to be unpacked behind those arguments. He also seemed to get somewhat emotional at the end over his anticipations of doom, further suggesting to me that he's kinda stuck in a LessWrong doomsday ideological bubble without adequately criticizing his own ideas. I get the impression that he's so attached to his personal doomsday (and to being its prophet) that he would be unlikely to be convinced by any counterarguments, however reasonable.

Regarding the article:

>Point 3 also implies that human minds are spread much more broadly in the manifold of future mind than you'd expect [etc]

I suspect the article is wrong about the human mind-space diagrams. I find it almost ridiculous to think that humans could occupy anything like that much of the mind-space, although I also suspect that the filled portion of the mind-space is more cohesive and connected than the first diagram suggests (i.e. there's sort of a clump of possible minds; it's a very big clump, but it's not scattered out into disconnected segments).

>There's no way to raise a human such that their value system cleanly revolves around the one single goal of duplicating a strawberry, and nothing else.

Yes, and this is a good point. It hits pretty close to some of Yudkowsky's central mistakes. The risk that Yudkowsky fears revolves around super AI taking the form of an entity that is simultaneously ridiculously good at solving practical scientific and engineering problems and ridiculously bad at questioning itself, hedging its bets, etc. Intelligence is probably not the sort of thing that you can just scale to arbitrary levels and plug into arbitrary goals and have it work seamlessly for those goals (or, if it is, actually doing that is probably a very difficult type of intelligence to design, and not the kind we'll naively get through experimentation). That doesn't work all that well for humans, and it would probably work even worse for more intelligent beings, because they would require greater capacity for reflection and introspection.

Yudkowsky and the LessWrong folks have a tendency to model super AI as some sort of degenerate, oversimplified game-theoretic equation. The idea of 'superhuman power + stupid goal = horrifying universe' works very nicely in the realm of game theory, but that's probably the only place it works, because in real life this particular kind of superhuman power is conditional on other traits that don't mesh very well with stupid goals, or stupid anything.

>For example, I don't think GPTs have any sort of inner desire to predict text really well. Predicting human text is something GPTs do, not something they want to do.

Right, but super AI will want to do stuff, because wanting stuff is how we'll get to super AI, and not wanting stuff is one of ChatGPT's weaknesses, not strengths.

But that's fine, because super AI, like humans, will also be able to think about itself wanting stuff; in fact, it will be way better at that than humans are.

>As I understand it, the security mindset asserts a premise that's roughly: "The bundle of intuitions acquired from the field of computer security are good predictors for the difficulty / value of future alignment research directions."

>However, I don't see why this should be the case.

It didn't occur to me to criticize the computer security analogy as such, because I think Yudkowsky's arguments have some pretty serious flaws that have nothing to do with that analogy. But this is actually a good point, and probably says more about how artificially bad we've made the computer security problem for ourselves than about how inevitably, naturally bad the 'alignment problem' will be.

>Finally, I'd note that having a "security mindset" seems like a terrible approach for raising human children to have good values

Yes, and again, this is the sort of thing that LessWrong folks overlook by trying to model super AI as a degenerate game-theoretic equation. The super AI will be less blind and degenerate than human children, not more.

>the reason why DeepMind was able to exclude all human knowledge from AlphaGo Zero is because Go has a simple, known objective function

Brief aside, but scoring a Go game is actually pretty difficult in algorithmic terms (unlike Chess, which is extremely easy). I don't know exactly how Google did it; there are some approaches that I can see working, but none of them are nearly as straightforward or computationally cheap as scoring a Chess game.
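
For a sense of what's involved, here's a rough Python sketch (my own, not DeepMind's code) of Tromp-Taylor-style area scoring. Note that it assumes the game has been played out far enough that no dead stones remain on the board; deciding which stones are dead is exactly the part with no cheap, purely mechanical answer, whereas scoring Chess just means checking for checkmate or stalemate.

```python
# A minimal sketch of area (Tromp-Taylor-style) scoring for a finished Go
# position. Assumes dead stones have already been removed, which is the hard
# part in practice. The board representation and function name are my own.

def area_score(board):
    size = len(board)                 # board is a size x size grid of 'B', 'W', '.'
    score = {'B': 0, 'W': 0}
    seen = set()

    for r in range(size):
        for c in range(size):
            stone = board[r][c]
            if stone in score:
                score[stone] += 1     # every stone on the board counts as a point
            elif (r, c) not in seen:
                # Flood-fill this empty region and record which colors border it.
                region, borders, stack = [], set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < size and 0 <= nx < size:
                            n = board[ny][nx]
                            if n == '.' and (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                            elif n in score:
                                borders.add(n)
                # The region is territory only if it borders exactly one color.
                if len(borders) == 1:
                    score[borders.pop()] += len(region)

    return score  # e.g. {'B': 185, 'W': 176}; add komi to White before comparing
```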

>My point is that Yudkowsky's "tiny molecular smiley faces" objection does not unambiguously break the scheme. Yudkowsky's objection relies on hard to articulate, and hard to test, beliefs about the convergent structure of powerful cognition and the inductive biases of learning processes that produce such cognition.

This is a really good and important point, albeit very vaguely stated.

Overall, I think the article raises some good points, of sorts that Yudkowsky presumably has already heard about and thinks (for bad reasons) are bad points. At the same time it also kinda falls into the same trap that Yudkowsky is already in, by treating the entire question of the safety of superintelligence as an 'alignment problem' where we make it safe by constraining its goals in some way that supposedly is overwhelmingly relevant to its long-term behavior. I still think that's a narrow and misleading way to frame the issue in the first place.

1

green_meklar t1_jd1zk0q wrote

I recall hearing about some research a few years ago into 3D data storage based on quartz crystals. They were able to get an extremely high information density, hundreds of terabytes on an object you can fit in your hand. Also, the medium is extremely durable; you could bury it in the ground and the data would remain perfectly readable for billions of years. The equipment for writing and reading the data (and creating sufficiently precise crystals) is still pretty rare and expensive, but the proof-of-concept suggests that it could make its way into widespread use someday.

https://en.wikipedia.org/wiki/5D_optical_data_storage

1

green_meklar t1_jc51rar wrote

>I don’t quite see how encrypting the data properly in the first place such that it shows up as some random distribution before embedding it with steganography is a wildly new concept.

It's not. I was getting at the converse idea: Given your encrypted data, steganography allows you to hide the fact that any encryption is even being used.

>If the distribution of encrypted data is that of noise, the image would just appear slightly noisy

Only by the broadest definitions of 'noise' and 'appear'. The image does not need to actually have visual static like a dead TV channel. That's a very simple way of embedding extraneous data into an image, but not the only way.

1

green_meklar t1_jbcz5sf wrote

No, the idea is that you leave data in the file itself that tells the recipient how to find what's hidden in it. The recipient doesn't need to see the original, all they need is the right decryption algorithm and key.

0

green_meklar t1_jbcyzzs wrote

With proper cryptography, even if they do know your algorithm, they still can't read your message without the decryption key. Ideally, with good steganography, knowing your algorithm doesn't even let them tell that a message is present without the decryption key.

3

green_meklar t1_jbcyt96 wrote

You don't need to keep the original at all. Just delete it. The version with the hidden message should be the only version anyone but you ever sees.

2

green_meklar t1_jbcyjm2 wrote

The problem with encrypted data that looks like noise is that noise also looks like encrypted data. If someone sees you sending noise to suspicious recipients, they can guess that you're sending encrypted messages. Governments that want to ban encryption or some such can detect this and stop you.

The advantage of steganography is that you can hide not only the message itself, but even the fact that any encryption is happening. Your container no longer looks like noise; it's legitimate, normal-looking data with a tiny amount of noisiness in its structure that your recipient knows how to extract and decrypt. It gives you plausible deniability that you were ever sending anything other than an innocent cat video or whatever; even people who want to ban encryption can't tell that you're doing it.
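
For concreteness, here's a minimal Python sketch of the simplest version of that idea, least-significant-bit embedding: each bit of the already-encrypted payload overwrites the lowest bit of one byte of the cover file's raw pixel data, so every pixel value changes by at most one. The function names and the 4-byte length prefix are my own conventions, and a real tool would scatter the bits in a key-derived pattern rather than writing them sequentially like this, precisely so that even someone who knows the algorithm can't find them without the key.

```python
# A minimal sketch of LSB steganography, assuming the message has already been
# encrypted so its bits look uniformly random. Works on raw pixel bytes (e.g.
# an uncompressed bitmap); lossy formats like JPEG need different techniques.

def embed(cover_bytes: bytes, message: bytes) -> bytes:
    # Prefix the message with a 4-byte length so the recipient knows where to stop.
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover_bytes):
        raise ValueError("cover file too small for this message")
    out = bytearray(cover_bytes)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit    # overwrite only the lowest bit
    return bytes(out)

def extract(stego_bytes: bytes) -> bytes:
    def read_bytes(n, bit_offset):
        value = 0
        for i in range(n * 8):
            value = (value << 1) | (stego_bytes[bit_offset + i] & 1)
        return value.to_bytes(n, "big")
    length = int.from_bytes(read_bytes(4, 0), "big")
    return read_bytes(length, 32)
```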

6

green_meklar t1_jbcy2qk wrote

Of course if you have both a source file and a modified version, you can detect the differences.

But with steganography there's no need for a 'source file'. You can just send some brand-new innocuous-looking file with the hidden message encoded in it. With good algorithms and a high ratio of decoy data to message data, detecting that a message even exists becomes ridiculously hard.

0

green_meklar t1_jb8robf wrote

>Generate actual consciousness or an illusion of consciousness?

The real thing, of course. Fakes only take you so far.

>We can perform experiments/tests to see if the machine is representing consciousness in the same way we do but that doesn't mean the machine is conscious.

It can strongly suggest so, especially if we combine it with a robust algorithmic theory of consciousness.

Presumably none of us will ever be 100% certain that we're not the only thinking being in existence, but that's fine, we get plenty of other things done with less than 100% certainty.

1

green_meklar t1_jayjkwl wrote

You mean for mind uploading? Honestly, probably not. I doubt we'll have entirely solved the HPOC (the hard problem of consciousness) by the time we figure out mind uploading technology.

We will figure out what sorts of algorithms generate consciousness, even if we don't entirely understand why. That will probably be achieved before we master mind uploading, or at least not long after.

1

green_meklar t1_ja5ga1d wrote

You're definitely not the only one feeling that way. I totally understand where you're coming from and I think this is something a lot of people are going to have to face over the next few years, one way or another.

What the ultimate solution will be, I don't know. But for now, I suspect the healthy approach is to redefine your standards for success. Stop measuring the value of making games (or software in general, or anything in general) in terms of what you produce, and start measuring it in terms of what you achieve and how well you express yourself creatively. All the best games might be made by AI, but your game will still be the one you make yourself, even if some of the work you do feels redundant. So focus on that part and make that your goal. No one can express your own personal creativity better than you can.

We already have examples of this in other domains. Chess AIs have been playing at a superhuman level for over 20 years, but people still get satisfaction out of learning and playing Chess. People still paint pictures even though we have cameras that can take perfect full-color photographs. You'll never run a kilometer faster than Usain Bolt, or grow a garden better than the Gardens of Versailles, or write a better novel than Lord of the Rings, but that doesn't mean there isn't something for you to personally achieve in running, gardening, or writing. Hopefully programming can be like that too.

1

green_meklar t1_j5s4i0n wrote

>The evidence of disparate regions serving specific functions is indisputable.

Oh, of course they exist, the human brain definitely has components for handling specific sensory inputs and motor skills. I'm just saying that you don't get intelligence by only plugging those things together.

>I think he points out that the training done for each model could be employed on a common model

How would that work? I was under the impression that converting a trained NN into a different format was something we hadn't really figured out how to do yet.

1

green_meklar t1_j0ra3fu wrote

>Journalism. Teaching. Parenting.

That doesn't really answer the question.

>How much stress, mental illness, and wasted human potential is that?

You're not addressing my point. You don't have to like stress, mental illness, or wasted potential; I don't like them either, but I don't see how that would automatically create obligations on the part of anyone else. (Besides your parents, insofar as they created you and consigned you to some sort of existence in the world.)

>Instead of being snide, why don't you just say what you think?

I did say what I think. The article presented some reasoning that didn't make sense to me and I pointed out why it didn't make sense.

>You seem to be very eager to blame the individual instead of examining the problems inherent in our current economic system.

I'm quite interested in examining the problems; I've examined them plenty. However, it turns out that the principles and solutions are counterintuitive, and the vast majority of people would prefer to perpetuate bad (but intuitive and cathartic) ideological nonsense instead. That's why it's important for people to work through the problems themselves and understand what's going on, rather than just listening to more propositions thrown around out of context.

I don't really see how I was 'blaming the individual', other than blaming the article writer for posting bad ideas about economics, of course.

0