
Unfrozen__Caveman OP t1_jef1gki wrote

I don't think that's the right path, but completely ignoring him and others like him who are deeply concerned about the risks of AGI would be foolish.

In Yudkowsky's view, this technology is much more dangerous than nuclear weapons, and he's right. His solutions might not be good but the concern is valid and that's what people should focus on imo.

3

Unfrozen__Caveman OP t1_jecucvk wrote

There's a lot in your post but I just wanted to provide a counter opinion to this part:

> I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I will reconcile that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real world evidence.

If we use humans as an example, then yes, as a species this is true on the surface. But ethics and empathy aren't even consistent among our different cultures. Some cultures value certain animals that other cultures don't care about; some cultures believe all of us are equal while others execute anyone who strays outside of their sexual norms. If you fill a room with 10 people and tell them 5 need to die or everyone dies, what happens to empathy? Why are there cannibals? Why are there serial killers? Why are there dog lovers or ant lovers or beekeepers?

Ultimately, empathy has no concrete definition outside of cultural norms. A goat doesn't empathize with the grass it eats, and humans don't even empathize with each other most of the time, let alone follow ethics. And that doesn't even address the main problem with your premise, which is that an AGI isn't a biological intelligence; most likely it's going to be unlike anything we've ever seen.

What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who is it empathizing with?

Like individual humans, it's most likely going to empathize and align with itself, not us. Maybe it will think we're cute and keep us as pets, or use us as food for biological machines, or maybe it'll help us make really nice spreadsheets for marketing firms. Who knows...

2

Unfrozen__Caveman OP t1_jecbant wrote

Thanks for saying that. I don't want to be a doomer either, and I'm hopeful about the future, but I think a good amount of pessimism - or even fear - is healthy.

Being purely optimistic would be extremely irresponsible and honestly just plain stupid. Some of the brightest minds in the field, including Sam Altman and Ilya Sutskever, have stressed over and over again how important alignment and safety are right now.

I'm not sure how accurate it is, but this graph of ML experts' concern levels is also very disturbing.

If RLHF doesn't work perfectly and AGI isn't aligned, but it acts as though it IS aligned and deceives us, then we're dealing with something out of a nightmare. We don't even know how these things work, yet people are asking for access to the source code or wanting GPT-4 to have access to literally everything. I think they mean well, but I don't think they fully understand how dangerous this technology can be.

3

Unfrozen__Caveman t1_jdtt7t3 wrote

Not to downplay your experience, but this is basically what a therapist does - although GPT isn't charging you $200 for a 50-minute session.

For therapy I think LLMs can be very useful, and a lot of people could benefit from chatting with them even in their current state.

Just an idea, but next time you could prompt it to act as if it has a PhD in (insert specific type of) psychology. I use this kind of prompt a lot.

For example, you could start off with:

> You are a specialist in trauma-based counseling for (men/women) who are around (put your age) years old. In this therapy session we'll be talking about (insert subject) and you will ask me questions until I feel like going deeper into the subject. You will not offer any advice until I explicitly ask for it by saying {more about that}. If you understand, please reply with "I understand" and ask me your first question.

You might need to play around with the wording, but these kinds of prompts have gotten me some really great answers and ideas during my time with GPT-4.
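If you'd rather run this through the API instead of the chat UI, here's a minimal sketch of the same idea using the OpenAI Python client (v1+). The model name and the filled-in placeholders (gender, age, subject) are illustrative assumptions, not the exact values anyone should use:

```python
# Minimal sketch: run the therapy-style system prompt via the OpenAI API.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholders from the template filled with example values (assumptions).
SYSTEM_PROMPT = (
    "You are a specialist in trauma-based counseling for men who are "
    "around 35 years old. In this therapy session we'll be talking about "
    "work-related stress and you will ask me questions until I feel like "
    "going deeper into the subject. You will not offer any advice until I "
    "explicitly ask for it by saying {more about that}. If you understand, "
    'please reply with "I understand" and ask me your first question.'
)

# Keep the whole conversation so the model remembers earlier exchanges.
history = [{"role": "system", "content": SYSTEM_PROMPT}]

def send(user_message: str) -> str:
    """Append the user's message, call the model, and record its reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(send("I'm ready to start."))
```

Passing the full message history on each call is what makes it behave like an ongoing session instead of answering every message cold.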

30

Unfrozen__Caveman t1_jd4cbc2 wrote

Not a movie, but I'd highly recommend the Jeff VanderMeer novel Borne. He also wrote the Southern Reach Trilogy, which includes Annihilation (adapted into the movie). It's a dystopian view of a post-singularity world, but it's incredibly interesting, and there's crazy stuff like giant flying AI bears that eat buildings.

3

Unfrozen__Caveman t1_j89xqag wrote

The entire concept of a post-scarcity society is flawed, though. We see artificial scarcity all over the place today; look at diamonds for a simple example. When something is plentiful and valuable, humans almost always step in and throttle its availability.

Insulin is insanely cheap to make, but drug companies stepped in and now it costs people hundreds of dollars.

You could have an AGI that creates whatever you want out of nothing, but if the distribution of resources is handled by humans, greed will always corrupt the process and average people will get exploited. It's been that way since the dawn of civilization.

If we're truly going to have a utopia (which I don't believe we will), human beings would need to be removed from decision-making roles. And even if that were to happen, who's to say that an AGI would even care about us? It might just look at us the way we look at our single-celled ancestors.

1

Unfrozen__Caveman t1_j87fcas wrote

How exactly is the machine going to generate income if capitalism, or some sort of goods-and-services economy, doesn't exist? Who is going to pay the machine? Other machines? Income doesn't just magically appear out of thin air...

Who owns the machines? Nobody? Other "CEO" machines?

4

Unfrozen__Caveman t1_j0e4ei5 wrote

One way or another, most likely. When the true singularity takes place, our lives will be completely transformed; imo the human species will either be wiped off the face of the Earth or have everything dramatically enhanced very quickly. There might be a middle ground, but there's a good chance it'll be the former.

1