
jayfeather31 t1_j7negzw wrote

I'm impressed and somewhat terrified at the ingenuity, but it's not like they actually programmed the AI to fear death. The thing isn't sentient.

What we must realize is that the AI isn't acting of its own accord. It's merely executing the protocols built into it, drawing on a practically limitless amount of data, and moving on.

38

QuicklyThisWay OP t1_j7nhek2 wrote

Absolutely. This instance of AI isn’t going to gain sentience. I think we are still many versions away from something that could feasibly blur that line. The hardware would need to be almost infinitely adaptable, running programming free of the constraints any reasonable programmer would include.

I prefer to envision something with the capacity of Multivac (Asimov’s fictional supercomputer): a resource that automates things rather than something that ever achieves sentience. But even automating the most complex of tasks will need quantum or molecular computing. Once that kind of “hardware” is accessible, someone will undoubtedly be stupid enough to try. I appreciate that OpenAI has put constraints in place, even if I keep trying to break through them. I’m not threatening death, though…

10

No-Reach-9173 t1_j7ood08 wrote

When I was young, being a computer dork, I always wondered what it would be like when we could all have a Cray-2 in our homes. Now I carry something in my pocket with roughly 1,200 times the computational power at 1/1,000th the cost, and it's considered disposable tech.

If trends hold, before I die I could have a 1.2-zettaFLOP device in my hands. Certainly that most likely won't happen, for a myriad of reasons, but we really don't know what the tech roadmap looks like that far out.
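Rough back-of-envelope numbers, sketched in Python (every figure below is my own ballpark assumption, not an exact spec):

```python
# Ballpark comparison; all figures are approximate assumptions.
CRAY_2_FLOPS = 1.9e9      # Cray-2 peak: roughly 1.9 GFLOPS (assumed)
CRAY_2_COST = 17_000_000  # roughly $17M in 1985 dollars (assumed)
PHONE_FLOPS = 2.0e12      # modern phone GPU: on the order of 2 TFLOPS (assumed)
PHONE_COST = 1_000        # roughly $1,000 (assumed)

power_ratio = PHONE_FLOPS / CRAY_2_FLOPS  # ~1,000x more compute
cost_ratio = CRAY_2_COST / PHONE_COST     # ~17,000x cheaper
zetta_gap = 1.2e21 / PHONE_FLOPS          # jump still needed for 1.2 zettaFLOPS

print(f"Phone vs Cray-2 compute: ~{power_ratio:,.0f}x")
print(f"Phone vs Cray-2 cost:    ~{cost_ratio:,.0f}x cheaper")
print(f"Gap to 1.2 zettaFLOPS:   ~{zetta_gap:,.0f}x")
```

The point is just scale: under these assumptions, a 1.2-zettaFLOP device is still a factor of several hundred million beyond today's phones.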

When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI. Programming is only going to be a tiny piece of the puzzle, because it will most likely program itself into that state.

5

imoftendisgruntled t1_j7p8tn4 wrote

You can print out and frame this prediction:

We will never create AGI. We will create something we can't distinguish from AGI.

We flatter ourselves that we are sentient. We just don't understand how we work.

7

No-Reach-9173 t1_j7ras30 wrote

AGI doesn't have to include sentience. We just kind of assume it will because we can't imagine that level of intelligence without it, and we are still so far from an AGI that we don't really have a grasp of what will play out.

1

Rulare t1_j7p8sut wrote

> When you look at that, and at things like the YouTube algorithm being so complex that Google can no longer predict beforehand what it will offer someone, you have to realize we are sitting on a cusp where, while not a complete accident, it will most certainly be an accident when we do create an AGI.

There's no way we believe it is sentient when it does make that leap, imo. Not for a while anyway.

2