net_junkey t1_ja6rg48 wrote

Reply to comment by greatdrams23 in So what should we do? by googoobah

AIs like ChatGPT have the complexity of a brain. Moore's law predicts personal, commercially available computers with computing power equal to a brain in 20-25 years. Within three decades we should have the convergence of software and hardware needed for sentient AIs.
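As a back-of-envelope check of that timeline (with the caveat that estimates of brain-equivalent compute vary by orders of magnitude, roughly 1e15 to 1e18 FLOP/s), a Moore's-law doubling calculation under assumed figures looks like this:

```python
import math

# All figures below are rough assumptions, not established values.
brain_flops = 1e16       # assumed brain-equivalent compute, FLOP/s
pc_flops = 1e13          # assumed high-end personal computer today, FLOP/s
doubling_years = 2.0     # classic Moore's-law doubling period

doublings = math.log2(brain_flops / pc_flops)
print(f"{doublings:.1f} doublings -> ~{doublings * doubling_years:.0f} years")
# ~10.0 doublings -> ~20 years, in line with the 20-25 year figure, but
# assuming brain_flops = 1e18 instead pushes the estimate past 30 years.
```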

−1

billtowson1982 t1_ja74aj4 wrote

1.) Whether AI is sentient or not is almost irrelevant to its impact on jobs or pretty much any other aspect of society. Something can be plenty intelligent without being sentient, and even a rather dumb being can still be sentient. AI intelligence (in other words, capability) will be the main thing that affects society, not sentience.

2.) No AI today has the complexity of a brain by any meaningful measurement. Even a brief chat with ChatGPT is enough to show how stupid it is. Furthermore, today's AIs are all absurdly specialized compared to biological actors: powerful, but in extremely narrow ways.

1

net_junkey t1_ja7ar08 wrote

#2 Have you talked to people? ChatGPT's answers are as good as or better than the average person's. Not to mention this is after it got lobotomized so it wouldn't give answers that could be considered offensive or that sound like the AI has personal opinions.

1

billtowson1982 t1_ja8xpvl wrote

They're only better in the sense that Google's answers circa 2004 were better than the average human's: both had access to an extremely large database of reasonably well-written (by humans) information. ChatGPT just adds the ability to reorganize that information on the fly. It doesn't have any ability to understand the information or to produce truly new information, two abilities that literally every conscious human (and in fact every awake animal) has to varying degrees.

1

net_junkey t1_ja9ktk7 wrote

AIs do understand. Human brains learn concepts by forming a bundle of neurons dedicated to the concept of, let's say, "cat" based on the input of our senses (sight, smell, and so on). Modern AIs are designed to replicate the same process one-to-one on a software level. If anything, they understand basic concepts better than humans.

The big jump right now is AIs understanding the relationships between concepts. Example: "cat" should be linked to the concept of "pet" and definitely not to the concept of "oven".
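In modern language models this kind of relationship is typically represented as distance between concept vectors (embeddings). Here is a toy Python sketch of the idea; the four-dimensional vectors are made up purely for illustration, whereas real models learn theirs from data:

```python
import numpy as np

# Made-up toy embeddings; real models learn these during training.
embeddings = {
    "cat":  np.array([0.9, 0.8, 0.1, 0.0]),
    "pet":  np.array([0.8, 0.9, 0.2, 0.1]),
    "oven": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: ~1.0 = closely related, ~0.0 = unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["cat"], embeddings["pet"]))   # high (~0.99)
print(cosine(embeddings["cat"], embeddings["oven"]))  # low  (~0.12)
```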

The problem is there are still kinks in the relationships-between-concepts part. AI is modeled on the human brain, and the human brain is not a perfect system. In theory, writing a simulation of the human Id, Ego, and Super-Ego and bundling it into a sentient AI package is quite doable. Making it happen while the foundations are still unstable is practically impossible.

1

billtowson1982 t1_jaa2f0n wrote

You don't know anything about AIs, do you? I mean, you read an article in USA Today, and now I'm having to hear you repeat things from it, plus some stuff you imagined to be reasonable extrapolations based on what you read.

0

net_junkey t1_jabo5vx wrote

The learning part of AI is based on, or at least similar to, how neurons learn. Once an AI has been trained, it stores the data and the filters for it on the hard drive.

How does a brain work? Data is written in neuron clusters (scientists have been able to find neuron bundles representing specific concepts). The filters are the neural connections coming out of those bundles. The brain optimizes performance by strengthening commonly used connections and removing old unused ones.

Trained AI + continuous learning algorithm = a basic brain, even if only comparable to an insect's.
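A minimal Python sketch of that "strengthen used connections, prune unused ones" loop. This is a Hebbian-style toy rule, not how gradient-trained models like ChatGPT are actually updated, and all the names and constants are invented for illustration:

```python
# Toy Hebbian-style rule: co-activated links strengthen, idle links
# decay and eventually get pruned.
connections = {("cat", "pet"): 0.5, ("cat", "oven"): 0.5}

LEARN_RATE = 0.1    # how much a co-activation strengthens a link
DECAY = 0.1         # how fast unused links fade per observation
PRUNE_BELOW = 0.05  # links weaker than this are removed entirely

def observe(pair):
    """Strengthen the link that fired together; decay all the others."""
    for key in list(connections):
        if key == pair:
            connections[key] += LEARN_RATE * (1.0 - connections[key])
        else:
            connections[key] -= DECAY * connections[key]
        if connections[key] < PRUNE_BELOW:
            del connections[key]  # prune the unused connection

for _ in range(50):           # "cat" and "pet" keep co-occurring...
    observe(("cat", "pet"))

print(connections)  # ("cat","pet") near 1.0; ("cat","oven") pruned away
```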

1