4e_65_6f

4e_65_6f t1_jeg7rkq wrote

Yes, two reasons:

1- If mistreating self-aware robots becomes widely accepted in the culture, people could start treating each other in the same manner, or think this is normal.

2- If they're human-like enough, it causes emotional distress to other people through empathy, even if the AI's sentience itself is iffy.

10

4e_65_6f t1_jdrg24t wrote

The argument does not follow:

>1. We should believe only the truth. (premise)
>2. If S should do A, then S can do A. (premise)
>3. If determinism is true, then if S can do A, S does A. (premise)
>4. So if determinism is true, then if S should do A, S does A. (from 2, 3)
>5. So if determinism is true, then we believe only the truth. (from 1, 4)
>6. I believe I have free will. (empirical premise)
>7. So if determinism is true, then it is true that I have free will. (from 5, 6)
>8. So determinism is false. (from 7)

Just because you should believe the truth does not mean you can only believe the truth.

This seems like a phrasing trick. By this logic you could justify any belief as true; determinism does not mean everything you believe is true.

This argument ignores that there is such a thing as a mistaken belief.
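To make the slip explicit, here's one way to formalize the first premises (my own notation, not the original poster's): write $Bp$ for "we believe $p$", $S(A)$ for "we should do $A$", $\Diamond A$ for "we can do $A$", and $D$ for determinism.

```latex
\begin{align*}
&\text{P1: } S\big(\forall p\,(Bp \to p)\big) && \text{we should believe only truths}\\
&\text{P2: } S(A) \to \Diamond A              && \text{ought implies can}\\
&\text{P3: } D \to (\Diamond A \to A)         && \text{under determinism, can implies does}\\
&\text{4: }  D \to (S(A) \to A)               && \text{chaining P2, P3}\\
&\text{5: }  D \to \forall p\,(Bp \to p)      && \text{instantiating 4 with P1}
\end{align*}
```

Step 5 says determinism would leave no room for even one mistaken belief. That only follows if the "should" in P1 attaches to an achievable action, so a determinist who grants mistaken beliefs simply rejects P2 (or P1) rather than determinism.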

2

4e_65_6f t1_j37ambo wrote

>The ChatGPT model alone requires ~350GB of GPU memory to generate an output (essentially performing inference). So imagine a model capable of all that and more? It'd require a lot of compute power.

I didn't say "try training LLMs on your laptop". I know that's not feasible.

The point of trying independently is to do something different from what they're doing. You're not supposed to copy what's being done already. You're supposed to try to code what you think would work.

Because, well, LLMs aren't AGI and we don't know yet if they ever will be.

1

4e_65_6f t1_j375fm7 wrote

Every year I try at least once to code a new type of AI in Python. It's been about 3 years now.

I try it because LLMs seem like a primitive approach to the problem. It's like wanting to know the definition of a word, but instead of looking it up in the dictionary you read the whole library until you eventually stumble upon the dictionary and find the right word.

True AGI probably won't require a quadrillion parameters and exaflops.

I have like folders full of different AI.py versions. None of which is AGI though lmao.

But I've learned a lot by attempting it.

10

4e_65_6f t1_j24rgi8 wrote

Reply to comment by aeblemost in A future without jobs by cummypussycat

>Why would rich people not just continue hoarding wealth?

Because without labor there are no customers, and without customers the meaning of wealth itself changes. You're not gonna be able to sell the stuff afterwards, and even if you did there'd be no point, because your factory is the one making everything.

The only reason I can think of for a person in that situation to keep uselessly hoarding is if they're stupid. If that's the case then we're truly fucked.

3

4e_65_6f t1_j24n7n7 wrote

>forcing others to live in poverty, for their satisfaction.

Think of it like this: would you rather have everything and be liked by everyone, or have everything and be hated by everyone?

Humans are a social species. There's no profit in Elon shitposting on Twitter (in fact it costs him money), yet he still does it every day.

8

4e_65_6f t1_j21o0zm wrote

The wiki for OpenAI says GPT started when a researcher who wasn't even an OpenAI contributor, a guy named Alec Radford, posted a paper to the OpenAI forums. If the wiki info is correct, it sounds like open discussion about the project is what got them there in the first place, since it doesn't look like he was even an employee.

2

4e_65_6f t1_j1zqmmi wrote

Yeah, like I said in another post, under capitalism it's likely that some company seeks a complete monopoly on the labor market before we can all have access to the benefits of AGI. If you're a company, there's no good reason to release your model when it's much better than the current competition.

I think this hasn't happened yet because they don't have AGI yet. They'll likely keep it open to the public in case anyone figures out how to advance the research and releases it as an open source project, so they can copy it again.

6

4e_65_6f t1_j1u4io0 wrote

I'm not advocating for it; I just think it is impossible to have non-biased news. It's even more impossible, IMO, to try filtering the news and find a non-biased perspective by finding commonalities and statistical averages between all the biased ones.

Whenever you think "oh, this news source isn't biased", it's because they have the same bias as you do, so you don't see it.

The example you gave about Galileo would require the "news sorting bot" to understand the science so thoroughly that it would realize when a scientist is speaking the truth and being ridiculed for it. But at that point it would be AGI already, and humans probably wouldn't be the ones doing the research anymore.

1

4e_65_6f OP t1_j1u3oxt wrote

I was trying to include everyone's variation of post-singularity economics as an option. I knew that no matter what I put in the poll, there would be people complaining that theirs isn't there. And TBH there are not a lot of choices; some of these don't have an actual defined name, it's more what I've seen people mention around the sub from time to time.

Transhumanism in this economic context means you'll have to merge with machines in order to work for a living. I also believe it's possible that a mix of several of these options happens at the same time.

1

4e_65_6f OP t1_j1sgmei wrote

> What's stopping AI from deciding what's best for humanity if its infinitely more intelligent than us?

Well, so far the only "goal" it has been programmed to follow is human instructions. It does that even when it's uncalled for (car-hotwiring suggestions, for instance). I can totally see that being a reality in your system, where you're allowed to be very stupid in a very smart way using AI.

1

4e_65_6f OP t1_j1sdust wrote

I believe there may come a point where AI always suggests the best decisions accurately. But whether you follow those or not may still be up to you.

It's very possible that you'll still be allowed to do whatever the hell you want post-singularity and use AI to aid you in that. I haven't yet come across an argument as to why not.

5