
greatdrams23 t1_ja4mxvl wrote

100%

In the '60s and '70s, AI was 'just around the corner'.

I studied AI in 1980, and AI was 'just around the corner'.

Now, after another 40 years, it is just around the corner.

25

Cryptizard t1_ja4wsim wrote

You are trolling if you say you can't see the difference this time.

6

johnnymoha t1_ja4z0xk wrote

Seems arrogant to think you can see the difference this time.

6

Cryptizard t1_ja4zefv wrote

No, it's just uh... what is it called... objective reality? Maybe you should try it some time.

−4

boersc t1_ja50jqu wrote

AI today really isn't that different from 30-40 years ago. Back then, they also did mass training of AI and also got it horribly wrong, for reasons that were difficult to explain. An AI identifying tanks based on whether the sun was shining was a prime example back then.

It hasn't progressed much beyond that, when you actually study it. Boston Dynamics is probably the most advanced nowadays, and even those robots aren't really 'smart'. They can't do what they haven't been trained to do. Same with all the chatbots nowadays: they can only combine and extrapolate from what they have been taught. There is no original thought.

1

atleastimnotabanker t1_ja576s5 wrote

Boston Dynamics specializes in robotics; there are other companies that are far more advanced when it comes to AI.

3

hervalfreire t1_ja79j62 wrote

Machine learning (“mass training”?) didn’t exist 40 years ago. Cases like the tank one you described used a completely different technique that didn’t utilize RNNs or the like. Beyond hardware capabilities, there have been a large number of breakthroughs in the past 2-3 decades, from LSTMs to diffusion models and LLMs. It’s 100% not even close to what we did back in the 90s…

2

Cryptizard t1_ja51wb0 wrote

No, lol, you are completely bullshitting here. It is extremely different, even compared to a few years ago. The advent of the transformer model literally changed everything. That's not to say that it is the only advancement, or even that it is ultimately the thing that will lead to AGI, but to claim that it is "not much different" is either uninformed or trolling.

0

johnnymoha t1_ja6m48v wrote

Sure, random redditor. You've cracked the code. You're the smartest among us. Your reaction shows you're less concerned with objectivity than you think.

1

ianitic t1_ja5chec wrote

Most of the models are based on the same core algorithms from decades ago. The biggest improvement has come from Moore's law, which will end around 2025 at current rates. Even without Moore's law ending, we are far away from an AGI.

0

Cryptizard t1_ja5d7l8 wrote

You can say that, but it doesn't make it true. The algorithms are extremely different. The attention/transformer model is what made all of this recent progress possible.
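For reference, here is a minimal sketch of the scaled dot-product attention that transformers are built on, in NumPy; the shapes and random inputs are purely illustrative, not any particular model's:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query position mixes the
    # value vectors, weighted by query-key similarity.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```

The 2017 transformer result was, roughly, that this mixing step could replace recurrence entirely, which is what made training on huge corpora practical.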

3

ianitic t1_ja5fsnj wrote

So says you too. Transformers are marginal in the grand scheme of technological progress. Even if transformers were 10x more efficient than CNNs or LSTMs, that would still be an improvement that arrived orders of magnitude slower than Moore's law, since CNNs and LSTMs are decades old (rough numbers below).

There's a reason why every article about a singularity uses Moore's law as its basis: it has been the largest contributor to our increase in technological advancement over the years. That contributor is ending.
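As a rough back-of-envelope version of that comparison (the 10x figure and the 2-year doubling cadence are assumptions for illustration, not measured numbers):

```python
import math

moore_doubling_years = 2   # assumed classic Moore's-law cadence
one_off_speedup = 10       # hypothetical transformer-vs-LSTM efficiency gain

# Years of Moore's-law progress equivalent to a one-off 10x gain.
print(f"{math.log2(one_off_speedup) * moore_doubling_years:.1f} years")  # ~6.6

# Hardware gain over the ~20 years between LSTMs (1997) and transformers (2017).
print(f"{2 ** (20 / moore_doubling_years):,.0f}x")  # ~1,024x
```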

1

Cryptizard t1_ja5ipf2 wrote

>That contributor is ending.

Now it's my turn to point out that people have been saying that since the '80s.

3

ianitic t1_ja5jy61 wrote

That's true, but it was always known not to be a forever thing, and it has slowed down. I think the last big milestone where they said that was around the 45nm node, because of quantum tunneling. The thing is, there is a physical limit to how small we can make transistors.

Once we're dealing with transistors that are as thin as atoms, where do we go from there? Yes, quantum computing, optical transistors, graphene, etc. exist, but do they provide higher performance per dollar than silicon transistors? Probably not, and it's all about price per performance.
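For a rough sense of how little runway is left (the feature size and atom diameter below are loose assumptions; node names like "2nm" are marketing labels, and real gate pitches are larger):

```python
import math

feature_nm = 2.0       # assumed: a modern "2nm-class" feature size
silicon_atom_nm = 0.2  # rough diameter of a silicon atom

# How many more halvings of feature size fit before hitting atomic scale?
print(f"~{math.log2(feature_nm / silicon_atom_nm):.1f} halvings left")  # ~3.3
```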

0

Cryptizard t1_ja5mqse wrote

Nvidia seems to disagree with you. They think it is speeding up.

0

ianitic t1_ja5pfbj wrote

A CEO trying to sell their products says those products are going to be even better in the future? They're trying to make Nvidia seem relevant and to ease investor concerns, with all the other big tech companies taking a hit recently.

0

Enzo-chan t1_ja5053v wrote

Yes, but this time we have computers many orders of magnitude denser, faster, and more efficient than those of the '60s-'80s. I'm not saying it'll happen in the next decade; it's just that the claim sounds far more credible nowadays.

6

hervalfreire t1_ja793k1 wrote

It always sounds more credible as things progress. We’re still VERY far from a singularity or AGI; the best computers can do today is language models (something we have known about and built for decades), just faster and larger ones.

Yes, we’re about to see a big impact on professions that mostly rely on “creativity” and memorization, but I wouldn’t worry about a “singularity” happening any time soon.

1

karnyboy t1_ja6cfxh wrote

Exactly. I have yet to see a Boston Dynamics robot do something that proves it can react at a speed a trained human can't beat (climbing, etc.).

Now, AI replacing certain menial jobs? Yeah, that may be right around the corner. McDonald's is pretty close to a fully automated assembly line already. Soon they may only employ about 4 people per building, maybe even just one trained to "be there".

The mailman? Maaaaybe a drone, and that's about it. But a drone is not going to know wtf distinguishes the black bin in the backyard by the garage from any other bin, open it, and put my package in, so maybe not.

1

net_junkey t1_ja6rg48 wrote

AIs like ChatGPT have the complexity of a brain. Moore's law predicts PERSONAL, commercially available computers with computing power equal to a brain in 20-25 years. So in 3 decades we should have the convergence of software and hardware needed for sentient AIs.
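For what it's worth, here is the back-of-envelope arithmetic behind that kind of projection; the brain-equivalent compute figure, today's desktop figure, and the doubling cadence are all loudly assumed, commonly cited but contested numbers:

```python
import math

brain_ops_per_sec = 1e18  # assumed: one common estimate of brain-equivalent compute
pc_ops_per_sec = 1e14     # assumed: a high-end desktop today, order of magnitude
doubling_years = 2        # assumed Moore's-law cadence

doublings_needed = math.log2(brain_ops_per_sec / pc_ops_per_sec)
print(f"~{doublings_needed * doubling_years:.0f} years")  # ~27 years
```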

−1

billtowson1982 t1_ja74aj4 wrote

1.) Whether AI is sentient or not is almost irrelevant to its impact on jobs or pretty much any other aspect of society. Something can be plenty intelligent without being sentient, and even a rather dumb being can still be sentient. AI intelligence (in other words, capability) will be the main thing that affects society, not sentience.

2.) No AI today has the complexity of a brain by any meaningful measure. Even a brief chat with ChatGPT is enough to show a person how stupid it is. Further, today's AIs are all absurdly specialized compared to biological actors: powerful, but in absurdly narrow ways.

1

net_junkey t1_ja7ar08 wrote

#2: have you talked to people? ChatGPT's answers are as good as or better than the average person's. Not to mention this is after it got lobotomized so it won't give answers that could be considered offensive or that sound like the AI has personal opinions.

1

billtowson1982 t1_ja8xpvl wrote

They're only better in the sense that Google's answers circa 2004 were better than the average human's: both had access to an extremely large database of reasonably well-written (by humans) information. ChatGPT just adds the ability to reorganize that information on the fly. It doesn't have any ability to understand the information or to produce truly new information, two abilities that literally every conscious human (and in fact every awake animal) has to varying degrees.

1

net_junkey t1_ja9ktk7 wrote

AIs understand. Human brains learn concepts by forming a bundle of neurons dedicated to the concept of (let's say) "cat" based on the input of our senses: sight, smell, and so on. Modern AIs are designed to replicate the same process 1-to-1 at the software level. If anything, they understand basic concepts better than humans.

The big jump right now is AIs understanding the relationships between concepts. For example, "cat" should be linked to the concept of "pet" and definitely not to the concept of "oven".
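A toy sketch of how that "related concepts" idea shows up in practice, as cosine similarity between embedding vectors; the vectors below are made up for illustration, where a real model would learn them from data:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: near 1.0 = same direction, near 0.0 = unrelated.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 4-d concept embeddings (a real model learns these).
cat  = np.array([0.9, 0.8, 0.1, 0.0])
pet  = np.array([0.8, 0.9, 0.2, 0.1])
oven = np.array([0.0, 0.1, 0.9, 0.8])

print(cosine(cat, pet))   # high: related concepts (~0.99)
print(cosine(cat, oven))  # low: unrelated concepts (~0.12)
```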

The problem is that there are still kinks in the relationships between concepts. AI is modeled on the human brain, and the human brain is not a perfect system. In theory, writing a simulation of the human id, ego, and superego and bundling it into a sentient AI package is quite doable. Making it happen while the foundations are still unstable is practically impossible.

1

billtowson1982 t1_jaa2f0n wrote

You don't know anything about AIs, do you? I mean, you read an article in USA Today, and now I'm having to hear you repeat things from it, plus some stuff you imagined to be reasonable extrapolations based on what you read.

0

net_junkey t1_jabo5vx wrote

The learning part of AI is based on, or at least similar to, how neurons learn. Once an AI has learned/been trained, it stores the data and the filters for it on the hard drive.

How does a brain work? Data is written in neuron clusters (scientists have been able to find neuron bundles representing specific concepts). The filters are the neural connections coming out of those bundles. The brain optimizes performance by strengthening commonly used connections and removing old, unused ones.

Trained AI + continuous learning algorithm = a basic brain, even if only comparable to an insect's.
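A toy sketch of that strengthen-and-prune idea, in the spirit of Hebbian learning; the learning rate and pruning threshold are arbitrary illustration values, not anything from a real system:

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.1, prune_below=0.05):
    # Strengthen connections whose endpoints are active together...
    w = w + lr * np.outer(post, pre)
    # ...and drop connections that stay weak (crude pruning).
    w[np.abs(w) < prune_below] = 0.0
    return w

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(3, 4))    # 4 inputs -> 3 outputs
pre, post = rng.random(4), rng.random(3)  # co-active units
print(hebbian_step(w, pre, post))
```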

1