Noname_FTW t1_j33srty wrote

True. The whole eugenics movement and its application by the Nazis is leaving its shadow. But if we don't act, we will end up in even more severe and arbitrary situations than the one we are currently in. We have great apes that can talk through sign language, and we still keep some of them in zoos. There is simply no rational approach being taken, just arbitrary rules.

0

Noname_FTW t1_j33pquo wrote

>nazi shit

The system wouldn't classify individuals but species. You don't get your human rights by proving you are smart enough, but by being born a human.

There are certainly apes smarter than some humans. We still haven't given apes human rights, even though there are certainly arguments for doing so.

I'd say that is, in small part, because we haven't yet developed a science-based approach to studying the differences. There is certainly science in the area of intelligence, but it needs some practical application in the end.

The core issue is that this problem will sooner or later arise once we have a talking android that seems human. Look at the movie Bicentennial Man.

If we nip the issue in the bud, we can prevent a lot of suffering.

0

Noname_FTW t1_j33bbbu wrote

Additionally: While we know how these AIs work on a technical level and can therefore explain their behavior, that doesn't mean this isn't consciousness. People tend to set it apart from the human experience because we do not yet have an intricate understanding of how exactly the brain works. We know a lot about the brain, but not to the degree that we could recreate it.

The solution should be to develop an "intelligence level chart," so to speak. I heard this referenced somewhere in some sci-fi material but can't remember where.

The point would be to develop a system of levels in which we classify AIs and biological beings in terms of their intelligence.

It would look similar to how we classify autonomous vehicles today, with level 5 being entirely autonomous.

The chart could go from 0 to 10, where humans would fall somewhere around 5-8, 10 would be a super AGI, and viruses would be 0.

Each level would be assigned properties associated with greater intelligence.

Such a system would help assign rights to the species in each category.

Obviously, it would have to be under constant scrutiny to remain as accurate and objective as possible.
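Sketched as code, the idea might look something like this. To be clear, all the labels, properties, and rights below are made-up placeholders just to illustrate the shape of the system, not an established classification:

```python
from dataclasses import dataclass

@dataclass
class IntelligenceLevel:
    level: int             # 0 (virus) to 10 (super AGI)
    label: str             # human-readable name for the tier
    properties: list[str]  # capabilities associated with this level
    rights: list[str]      # rights assigned to beings at this level

# Hypothetical entries; a real chart would need constant scientific scrutiny.
CHART = [
    IntelligenceLevel(0, "non-sentient (virus)", ["self-replication"], []),
    IntelligenceLevel(3, "animal cognition", ["learning", "tool use"],
                      ["protection from cruelty"]),
    IntelligenceLevel(5, "signing ape / lower human range",
                      ["symbolic communication", "self-recognition"],
                      ["bodily autonomy"]),
    IntelligenceLevel(8, "upper human range",
                      ["abstract reasoning", "cumulative culture"],
                      ["full personhood"]),
    IntelligenceLevel(10, "super AGI", ["recursive self-improvement"],
                      ["to be determined"]),
]

def rights_for(level: int) -> list[str]:
    """Rights of the highest defined tier at or below the given level."""
    tiers = [t for t in CHART if t.level <= level]
    return max(tiers, key=lambda t: t.level).rights if tiers else []

print(rights_for(6))  # -> ['bodily autonomy']
```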

6

Noname_FTW t1_iy0c675 wrote

I agree. I think there will be companies that use AIs to create simple software solutions in a Lego-like way, where anyone can make their own software, just as anyone can already make their own homepage today.

But once you get into very complex and specific requirements, you will need skilled humans who can guide AIs to the correct result.

Anything else would require AGI, and at that point we would basically have greater-than-human intelligence competing against humans. At that point we can no longer sustain the current concept of a labor market.

7

Noname_FTW t1_ixzwatp wrote

As someone working in the software development field, I am genuinely curious what the future version of the job is going to be. I can't imagine anything an AI couldn't do equally well or better than a human.

It could be that future software devs handle design and technical supervision. Even if an AI can write code 100% bug-free (unlikely) and test it perfectly (unlikely), someone has to tell the AI what the specification of any process should be. And if there is a problem the AI can't solve, it will likely require human intervention to fix or improve the issue.

So while we will be able to produce a piece of software 10-100x faster, iterating through the versions will likely still require SOME technical personnel.

11