Comments

Kaarssteun t1_j0s73ps wrote

100% a software problem. There is no guarantee, but the trends don't lie. Chances of AGI never arriving are... astronomically small.

30

thEiAoLoGy t1_j0sih7c wrote

Chances of it arriving any time soon? Unclear; none of the current paths seem like they'll lead there, to me.

7

ghostfuckbuddy t1_j0tfite wrote

If we magically stumbled across the right algorithm, then it's only a software problem. But if we need to test a bunch of different approaches before we get there, then hardware becomes the limiting factor in progress.

2

civilrunner t1_j0ubwai wrote

I'd say it's still both a hardware and a software problem. We are nowhere close to building a computational circuit that replicates the human brain, which uses complex 3D structures where connections can form between neurons that are far apart, linking computational circuits in completely different ways from what we do with lithography-built computers. While it's possible we'll achieve AGI through the raw power of miniaturized lithography-built compute, it's a completely different structure from the brain, so it's not a guarantee.

The difference between a true 3D compute architecture and 2D compute, or even stacked 2D compute, is pretty enormous (it's like comparing x² vs x³).
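To put rough numbers on that scaling intuition, here's a toy sketch (assuming uniform packing density, which is a big simplification of both real chips and real brains): the number of units reachable within a wiring distance r grows with the area of a disc in a planar layout, but with the volume of a ball in a true 3D one.

```python
# Toy comparison of 2D vs. 3D reach: how many uniformly packed units sit within
# wiring distance r? The densities are arbitrary placeholders, not real device numbers.
import math

def reachable_2d(r, density=1.0):
    return density * math.pi * r**2                 # disc area -> grows like r^2

def reachable_3d(r, density=1.0):
    return density * (4.0 / 3.0) * math.pi * r**3   # ball volume -> grows like r^3

for r in (10, 100, 1000):
    print(f"r={r}: 3D reaches ~{reachable_3d(r) / reachable_2d(r):.0f}x more units than 2D")
```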

It's clearly a software problem as well, though I'm curious whether you need plasticity and massive connectivity between far-reaching compute sections to achieve AGI-level intelligence for things like creativity, similar to a human brain.

1

Kaarssteun t1_j0ucilx wrote

It's not our goal to replicate a human brain; that's what making children is for. We are trying to replicate the brain's intelligence in a way where enslaving it would still be ethical.

4

civilrunner t1_j0udhrm wrote

I agree, though it may not be nearly as efficient as a human brain when it comes to being intelligent. In my opinion, you only need to look at the gains of GPU over CPU AI training to see how much scaling up local chip compute does for AI, and therefore how much better a 3D human brain may be than even a wafer-scale stacked 2D chip. Then add that the human brain doesn't just compute with 1s and 0s; as we learned recently, its chemical signals offer slightly more options than just off and on.

There are advantages to a silicon electronic circuit as well, of course, the main one being speed, since electricity flows far faster than chemical signals.

I am also personally unsure how "enslaving" a verified general intelligence would be ethical regardless of its computational architecture. It's far better to ensure alignment, so that it's not "enslaved" but rather wants to collaborate to achieve the same goals.

1

Kaarssteun t1_j0udpg9 wrote

Right, "enslaving" is subjective; but we want to make sure it enhances our lives rather than destroys them.

1

civilrunner t1_j0udzlb wrote

Sure, I just wouldn't call it "enslaving" them, seeing as that generally means forcing them to work against their will, which seems unlikely to be feasible if we build an AGI or an ASI. "Well aligned" is a far better term and, in my view, the only thing that could work.

2

hydraofwar t1_j0up12g wrote

That's true, but replicating the brain's intelligence may require hardware made specifically for it. If I'm not mistaken, Google's PaLM runs on dedicated latest-generation hardware (TPUs) built for it.

1

dasnihil t1_j0unptg wrote

yep, i almost feel bad for the nerds going into neuromorphic computing in hopes of mimicking a brain-like computer, when they could do all of this in software. it's always a software/theoretical problem, and once you solve it, you can implement it in hardware, which is a whole other engineering challenge. also imo we need some new/better algorithms for solving problems that look "hard" to solve. hope most people focus on this.

1

Borrowedshorts t1_j0sm79z wrote

Real-time inference is still limited, and there is still a wide gap between humans and AI. If we assume humans are at 100 trillion parameters and the limit of real-time AI inference is still around 20 billion parameters, we have a long way to go to match hardware capability with human performance. Both are constraints, though software has typically followed quickly once the hardware allowed it. Imo, hardware is actually the bigger constraint.
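Taking the comment's figures at face value (both are assumptions, not measurements), the gap works out to roughly three and a half orders of magnitude:

```python
# Back-of-the-envelope version of the gap described above.
human_params = 100e12       # "100 trillion parameters" attributed to humans (assumption)
realtime_ai_params = 20e9   # "20 billion parameters" as the real-time inference limit (assumption)

print(f"~{human_params / realtime_ai_params:,.0f}x gap")  # -> ~5,000x
```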

9

Danger-Dom t1_j0sw1m2 wrote

Hardware problem. We don't have the compute to run models big enough. Some software problems as well but mostly we've been seeing those solve themselves with scale.

9

DungeonsAndDradis t1_j0vv35m wrote

Moore's Law is quickly running out of space for physical reasons: https://interestingengineering.com/innovation/transistors-moores-law

We can't effectively get transistors any smaller. At least with our current understanding and technology.

I've read blurbs about stacked chips, but their biggest opponents say we don't have the heat-transfer technology to allow that to happen (the insides of the chips will overheat). See here for some potential solutions to stacking chips.

The simple fact is, right now we don't have computing power strong enough to replicate the human brain's operating capacity, which is what we need for general human intelligence.

3

Coolguy123456789012 t1_j0ti1t6 wrote

Both. Hardware in that current machine learning requires a big dataset, software in that it needs a big dataset.

3

No_Ninja3309_NoNoYes t1_j0u1km5 wrote

It's impossible to say, but currently neither hardware nor software is near what AGI is generally considered to require. Personally, I think Deep Learning is the wrong paradigm. Even introducing neuromorphic hardware and asynchronous algorithms would not be enough. You would need to simulate brains close to the molecular level, including neurotransmitters and chemical synapses.

2

AndromedaAnimated t1_j0so2x8 wrote

It is a hardware, software, moral and financial issue.

1

brad_l_taylor t1_j0tipxb wrote

I guess it was a hardware problem. Currently it is a software problem. Soon it will be an AI problem.

1

imlaggingsobad t1_j0tlgjk wrote

Software problem. We still don't have all the algorithms needed for AGI.

1

Accomplished_Diver86 t1_j0trf8v wrote

How is anyone supposed to answer this in a logical way? How can we know whether hardware is good enough if we don't even have AGI yet?

Reliable answers not possible - speculation only

1

rainy_moon_bear t1_j0utlxp wrote

If you consider transformer models to be progress towards AGI, then I think the answer is hardware.

There really isn't anything too shocking or new about the transformer architecture; it is derived from statistics and ML concepts that have been around for a while.

Of course, advancing the architecture and training methods is important, but the only reason these models did not exist sooner seems to be hardware cost efficiency.
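As a rough illustration of how unexotic the core math is (a generic textbook sketch, not any particular model's implementation), scaled dot-product attention, the central transformer operation, is just a few matrix multiplies and a softmax:

```python
# Minimal scaled dot-product attention in NumPy; toy shapes, no training, no batching.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how strongly each query attends to each key
    return softmax(scores) @ V        # weighted average of the values

# 4 tokens, 8-dimensional representations (arbitrary toy sizes)
Q = np.random.randn(4, 8); K = np.random.randn(4, 8); V = np.random.randn(4, 8)
print(attention(Q, K, V).shape)       # (4, 8)
```

The concepts predate the hardware that made them cheap to run at scale, which is the point above.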

1

genericrich t1_j0vp0hm wrote

Nobody knows what "it" is, to begin with. AI is constantly being redefined as "stuff computers don't do yet".

If we agree that *all* LLMs are doing is applying a sophisticated statistical algorithm to choose the next relevant word, and that "extended LLMs" will be doing the same tricks using different techniques... does it make sense to declare "we have AGI" at some point when all we'll really have is a bunch of layered models doing the same sort of applied statistics?
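For concreteness, a toy version of that "choose the next relevant word" step (the vocabulary and scores below are made up; a real LLM produces scores over tens of thousands of tokens from its network):

```python
# Turn hypothetical model scores into a probability distribution and sample the next word.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]      # made-up tiny vocabulary
logits = np.array([2.0, 0.5, 1.0, 0.2, 1.5])    # made-up model scores for the next token

probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> probabilities
print(np.random.choice(vocab, p=probs))         # the "next relevant word"
```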

Is that all human beings are? Is cognition just computation?

1

Electronic_Source_70 t1_j0wqs2d wrote

Depends on what you consider AGI. If it's human-level intelligence, then it's a hardware problem, since the only thing we have as a comparison is our brain. If you think our brains are super inefficient, then it's a software problem. That's all.

1

Superschlenz t1_j0scoc7 wrote

As AGI has to solve intellectual problems only, it's a software problem.

As a single human's mind is created by that single human's body, it's a hardware problem. Trying to cheat by training on second-hand utterances from a billion single humans on the internet will not work well enough.

As you did not post your question as a survey, you are not truly interested in how it is "generally" considered.

0

AndromedaAnimated t1_j0snx88 wrote

A single human's mind isn't created by that single human's body though.

It is created by genetics (and hence a line of ancestry), by environmental influence (food, weather, accidents, education, social standing etc.) and by a powerful dataset of knowledge, culture, technology, art and language the human is confronted with.

3

Superschlenz t1_j0sq953 wrote

Genetics creates the body. The body includes a brain. The brain learns the mind. Everything that enters the mind has to go through the body (except some hardwired pattern generators in early childhood which are used to guide wiring and speed up learning).

Every environmental influence, and every powerful dataset of knowledge, culture, technology, art and language is curated by that single individual's body.

For me, creation and curation are the same.

For you, curation is less powerful.

3

AndromedaAnimated t1_j0stm8f wrote

I think the difference in our thinking is that I cannot see an individual body as a single entity. For me, its genetic patterns are dictated by a long series of events long before the individual's birth, and the expression of genes and phenotype also requires certain (also environmental) conditions. Social interaction and experience shape the mind, with the body - curating the experience, as you would probably see it - being an interface to acquire datasets for the mind to learn on. The same body, placed in different social and environmental conditions during upbringing, can host very different minds. Twin studies show that there are many differences even between genetically identical people when it comes to their minds.

You could still argue and say that even identical twins have differences - and yes, here we come to the expression of genes and mutation, which is influenced by different factors.

I am pretty sure that a mind without a body could easily exist as long as you provide it with a virtual “anchor” to its perception of self.

So the differences we talk about are probably of philosophical/world view kind, not about actual functions of body and mind as biology understands them.

2

Superschlenz t1_j0t1f8e wrote

>I am pretty sure that a mind without a body could easily exist as long as you provide it with a virtual “anchor” to its perception of self.

What does "exist" mean?

  • Start to exist, from scratch, as in a human-made machine with randomly initialized weights, or

  • Continue to exist, as in Stephen Hawking, who got ALS at the age of 26 and had a healthy body before?

And yes, of course minds can exist without a human body. But how well can those minds simulate a human mind, which is what Wikipedia's definition requires from AGI, without having been formed by a human-like body, physical environment, and human education?

2

AndromedaAnimated t1_j0tb3wt wrote

I meant both: a hypothetical newly created artificial mind, or a human mind that used to have a body. The sensory and motor cortical areas are well known, as is the cerebellum. We are also already able to simulate spatial perception. Simulating a body that can "move" in virtual space and provide biofeedback to the brain shouldn't be so difficult. The Synchron Stentrode interface, for example, already allows people with upper-body paralysis to move a cursor and access the internet with their motor cortex - no real hands or arms necessary. And the motor cortex would not be difficult to simulate.

So yeah. I think it won’t be as difficult as we think to simulate human minds. It’s all a question of processing power.

1

Superschlenz t1_j0te7zz wrote

And how is it tested whether these simulations really do what the corresponding part of the brain does?

By some argumentation in the form of: "Brains have oscillations in the alpha, beta, and theta range. My model has oscillations in the alpha, beta, and theta range, too! So I have built a brain. Where is my Nobel prize?" (Or the equivalent with pieces of dead rat cortex' firing patterns and one billion euros.)

> The Synchron Stentrode interface

An interface to the real thing is not a replacement.

1

AndromedaAnimated t1_j0tf89h wrote

You are probably joking about the EEG waves, aren’t you? Because it is pretty strange to assume that you will be able to measure EEG correlates of sentience in an AI by placing electrodes on its imagined head. Or in its imagined brain. We won’t need to recreate a three-dimensional physical model of the brain to simulate it.

I don’t want to assume that you don’t know a lot about the brain, but your reasoning really starts to confuse me. Of course the interface to the brain is not the replacement for the brain, that’s just logical 🫤 But that was not the reason why I mentioned it.

I mentioned the Synchron interface to show that motor activity of the body can be replaced by simulated motor activity, meaning the physical body can be simulated if needed for the development of a human brain. Since that was what you were talking about: a simulated "human-like" mind not being able to exist without a physical human body.

1

Superschlenz t1_j0x7n5v wrote

>You are probably joking about the EEG waves, aren’t you?

Of course I was joking, because https://www.izhikevich.org/human_brain_simulation/Blue_Brain.htm#Simulation%20of%20Large-Scale%20Brain%20Models mentions only alpha and gamma rhythms, but not beta and theta.

>I mentioned the Synchron interface to show that motor activity of the body can be replaced by a simulated motor activity. Meaning the physical body can be simulated if needed for the development of human brain. Since that was what you were talking about.

The human body is not just the output of ~200 motors and the input of their corresponding joint angles and forces (proprioception). It is also the input of ~1M touch sensors in the skin. This input would have to be simulated as well. Since much of the touch information in childhood comes from social interaction with the mother, you would have to simulate her, too. This may be possible in theory, but at the moment, neither a simulated mother for a simulated baby nor a real robot baby with full-body touch-sensitive skin for a real mother is possible. My personal experience with the MuJoCo simulator in 2016 showed me that it is so buggy it can't even simulate some nuts and bolts correctly. If it fails at such a simple rigid-body physics task, how could it simulate deformable skin or a virtual mother?

2

AndromedaAnimated t1_j0y5h90 wrote

I am still pretty sure that we don't need to simulate a three-dimensional brain to simulate a mind, but okay, I get now that you were joking (the model you wrote about is still a cool thing, and I see lots of further research and application possibilities).

Touch sensors would not necessarily be needed. The brain doesn't get touched; it gets signals mediated by oxytocin and other chemicals. So simulating a holding, touching mother would not be that difficult - if you wanted to do that in the first place, instead of simulating a mind that automatically gets its "touch needs" fulfilled by other types of communication, or a mind that has simulated memories of being touched, built in at the time it is put into function.

But this is actually a very interesting idea you mentioned. Simulating a mother with deformable, touchable skin or a robot baby with feeling skin. This would be akin to simulating touch in the virtual world generally.

I agree that we are not yet there. But the engine is already gaining steam, so to speak. I would say we only need around 2 to 3 more years max to simulate a functioning human mind. I can imagine that your timeline would be different here.

By the way, thank you for the very civil discussion. I have had very different experiences with others. Thank you. You're cool.

2

turnip_burrito t1_j0tgymr wrote

I don't know what it is generally considered to be. I'd guess most would call it a software problem. In my opinion, it is both software and hardware. Here are examples why:

Software: algorithmic changes to the code (DALL-E vs. diffusion models) can give similar results but way faster runtime. The hardware doesn't really change.

Hardware: think about how having custom circuits to run your algorithm (ASICs), or custom chemical processes (the brain), can run faster than the same computations occurring on a single CPU or GPU. But your hardware kind of becomes a physical instantiation of your algorithm, from a certain point of view.

These kind of blend together. It is possible that one Algorithm A works best on CPUs or GPUs today while a second Algorithm B takes forever to run on them. But with the right specialized physical processes (a brain), it is faster to run B than A and get the same or better performance. Was it the hardware or the software holding it back? You could argue you just needed better CPUs (hardware) for A, or a combination of Algorithm B and B-specialized hardware.

0