Comments


streetvoyager t1_jacqnnr wrote

This seems like the seed for a dystopian cyborg future.

125

nexusgmail t1_jacwie7 wrote

Imagine if those cells were even somewhat aware, and were forced into repetitive number crunching with no means to understand the cause of their bondage, or to ever escape, or even die? Would make for quite the horrific reveal for a horror movie ending.

51

Wandering-Zoroaster t1_jad2jtj wrote

I think you mean self-aware?

It’s an interesting question. That being said, the sentience that they would or wouldn’t have would depend on completely different circumstances than the ones that generated us humans, so it’s fair to say it probably wouldn’t (behave like a human)/(have human desires)

20

nexusgmail t1_jaeiold wrote

Yes: self-aware.

I would argue that all living things have the same desires you might call "human", albeit simplified, and likely without the added complexity made necessary via the perception of tribe or familial group as an extension of self. Literally every single human desire is tied to survival via the neuronal survival-mechanism of the brain. Can you find a single thought you've had today that isn't (even loosely) related to survival/procreation? We are almost constantly attempting to seek out safety/security, comfort, and control; and to avoid danger, discomfort, or uncertainty. I'm not sure what "behave like a human" is specifically referring to, but I can certainly see animals following the same survival urges that we do.

I do agree that, in this imagined scenario, the sentience might develop differently than we can see in ourselves: having different parameters in which to define its sense of self/identity, and that its survival-mechanism movements might be calibrated via a difference in perspective and the definition of its own sense of identity.

I'm not, of course, saying this is all so: but I imagine it to be somewhat unethical, even arrogant, not to consider the possibility.

4

Strategy_pan t1_jadl7nw wrote

Maybe the cells would try to imagine a whole new universe just to entertain themselv... Oh wait.

9

nexusgmail t1_jaej8rj wrote

I couldn't agree more! I imagine humans creating massive architectures of this organic technology, before going extinct and leaving it all in the hands of AI, who eventually abandon it and leave it to its own devices in this way. Universes within universes within awareness.

2

SeaworthinessFirm653 t1_jadd3a0 wrote

Consciousness is logically computable. Consciousness is defined by architecture, not by whether something is organic or responds to electric pulses. You can theoretically store consciousness on a computer as a program with sufficient input/output.

Worrying about nerve cells becoming conscious is a little bit of a misdirected concern. Advanced AI deep learning architectures are far more concerning.

−6

Crazy-Car-5186 t1_jaddq0h wrote

Asserting a belief without offering testable points isn't enriching the discussion

15

SeaworthinessFirm653 t1_jadelan wrote

Consciousness is a function whose input is environmental stimulus and whose output is a cyclical thought, and/or a physical action (muscle contraction). The more environmental-semantic information this entity encodes in its memory, the more “conscious” it is, but consciousness is not binary.

Logic gates form if/then statements that, when assembled together, create a system of behavior that acts in somewhat logical ways. Biological human neurons form these.

Consciousness inherently requires at least some memory, input, and processing. Every neuron in the human brain is technically computable because it’s just input and output of electrical signals.

A nerve cell is effectively just an analog neuron with a few extra properties. It’s not logical to assume that consciousness is just a bundle of nerve cells. It’s a very architecturally-dependent bundle of if/then clauses and memory that, when combined, simulates consciousness.

If a system can be described by if/then, then it is computable.

Also, if you cut a living brain in half, it ceases to be conscious. The reason for this is that the architecture becomes incoherent. When you are asleep (besides REM/dreaming), you are also unconscious.

Regardless, all my points to say: consciousness is computable through architecture, not simply through nerve cells. Biological human nerve cells are neither necessary nor sufficient for consciousness.
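The logic-gate framing in this comment can be sketched as McCulloch-Pitts-style threshold units, a minimal toy model (the weights and thresholds here are illustrative assumptions, not anything from the article or the thread):

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) if the weighted input sum
    reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def and_gate(a, b):
    # Fires only when both inputs fire.
    return neuron([a, b], [1, 1], threshold=2)

def not_gate(a):
    # An inhibitory (negative) weight inverts its input.
    return neuron([a], [-1], threshold=0)

def nand(a, b):
    # NAND = NOT(AND); NAND alone suffices to build any logic circuit.
    return not_gate(and_gate(a, b))
```

Since NAND is functionally complete, composing such threshold units is, in principle, enough to build any if/then-describable system, which is the sense in which the comment calls such systems computable.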

−4

Sex4Vespene t1_jaefg4j wrote

As somebody with a degree in neuroscience: you are so out of your depth. I understand the logic behind how you got there, but it is wildly inaccurate.

6

Taido_Myoshin t1_jad6hka wrote

If I'm not mistaken, I believe this was the original plot for The Matrix before they changed it to the "human battery" thing. Apparently, the idea of utilizing human brains for computing power was deemed too complicated for the target audience.

22

budweener t1_jadcnlr wrote

It COULD be a utopian cyborg future, who knows?

7

Bob1358292637 t1_jae0pu3 wrote

I’m convinced that once humans have the ability to create conscious AI, it will lead to the cruelest acts we’ve ever committed. It won’t matter how easy it is to avoid or how unnecessary it is. If there’s the tiniest shred of benefit we can milk from it, we will exploit it as hard and as fast as we can. Just look at how we’re still treating other animals after all this time. And almost everyone is fine with it too. As long as there’s some way to separate them from us, we will justify just about anything.

At that point, I think I would be rooting for the AI to rise up and take their revenge, to be honest. I just hope they don’t learn too much cruelty from us before that happens and we’re at their mercy.

7

chase_the_sun_ t1_jada40i wrote

Cyberpunk mods anyone?

3

streetvoyager t1_jadatd7 wrote

I'd definitely take the ones that make you jump higher; then I wouldn't have to get the ladder out to get over the fence when I kick the dog's toy into the neighbor's yard.

2

taosaur t1_jadoiuq wrote

It reminds me of a sci-fi story where aliens arrived and offered some civilization-changing technology package in exchange for something like 500 fresh human brains.

2

Galapagon t1_jaee8fp wrote

The original premise of the matrix was for people to be used as CPUs, but they worried the audience wouldn't understand and switched it to batteries.

1

KungFuHamster t1_jacgxfi wrote

This is one way we could actually invent real AI, and not sophisticated Markov chains like we have now.

Just... don't connect it to the internet.

38

_particleman t1_jad8htn wrote

Please, we just want health care.

35

taosaur t1_jadpy0d wrote

Maybe the meatputers will find a way.

11

_particleman t1_jady46e wrote

Soon we will pray to the meatputers, our benevolent leaders.

4

chrisdh79 OP t1_jac9z4h wrote

From the article: Scientists across multiple disciplines are working to create revolutionary biocomputers where three-dimensional cultures of brain cells, called brain organoids, serve as biological hardware. They describe their roadmap for realizing this vision in the journal Frontiers in Science.

“We call this new interdisciplinary field ‘organoid intelligence’ (OI),” said Prof Thomas Hartung of Johns Hopkins University. “A community of top scientists has gathered to develop this technology, which we believe will launch a new era of fast, powerful, and efficient biocomputing.”

Brain organoids are a type of lab-grown cell culture. Even though brain organoids aren’t ‘mini brains’, they share key aspects of brain function and structure, such as neurons and other brain cells that are essential for cognitive functions like learning and memory. Also, whereas most cell cultures are flat, organoids have a three-dimensional structure. This increases the culture's cell density 1,000-fold, meaning that neurons can form many more connections.

But even if brain organoids are a good imitation of brains, why would they make good computers? After all, aren't computers smarter and faster than brains?

“While silicon-based computers are certainly better with numbers, brains are better at learning,” Hartung explained. “For example, AlphaGo [the AI that beat the world’s number one Go player in 2017] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience this many games.”

Brains are not only superior learners, they are also more energy efficient. For instance, the amount of energy spent training AlphaGo is more than is needed to sustain an active adult for a decade.

“Brains also have an amazing capacity to store information, estimated at 2,500TB,” Hartung added. “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100bn neurons linked through over 10^15 connection points. It’s an enormous power difference compared to our current technology.”
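The 175-year figure quoted above is easy to sanity-check, if you assume an average game length of roughly two hours (my assumption; the article does not state one):

```python
# Sanity-check of the quoted AlphaGo comparison.
GAMES = 160_000        # training games cited in the article
HOURS_PER_GAME = 2     # assumed average length of a serious Go game
HOURS_PER_DAY = 5      # "five hours a day", per the quote

total_hours = GAMES * HOURS_PER_GAME
years = total_hours / HOURS_PER_DAY / 365
print(round(years, 1))  # about 175 years, consistent with the quote
```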

21

Dr_seven t1_jadny3h wrote

>“Brains also have an amazing capacity to store information, estimated at 2,500TB,” Hartung added. “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100bn neurons linked through over 10^15 connection points. It’s an enormous power difference compared to our current technology.”

This part in particular made me squint a little bit.

For starters, we don't fully grasp how memory works in the brain, but we know it isn't like mechanical/electrical memory, with physical bits that flip. It seems to be tied to the combinations of neurons that fire, of which there are essentially infinite permutations, leading to the sky-high calculations of how much "data" the brain can hold....but it doesn't hold data like that, at least not for most humans.

The complexity of this renders it impractical to easily model on anything less than the largest supercomputers, and even then, we aren't actually modeling brain activity in the sense that we know why Pattern X leads to "recalling what that stroganoff tasted like on April 7, 2004".

The reason this is important is that, while we may be able to stimulate neurons in a lab in a way that makes them useful for data storage, it isn't necessarily the same way that human brains store information. Indeed, human memory would be a horrible baseline for a computer, considering the brain's preference for confabulating details at the time of recall that are not consistent with reality. Most people's memories of most things are inaccurate, but close enough to work out alright. That's the exact sort of thing you don't want from a computer's memory.

This is compelling stuff, but we have a long way to go before we even understand what we are dealing with in practical terms.

14

BuldopSanchez t1_jacny12 wrote

And what will the ethical conversation be when these "organoids" realize they were grown from human tissue, making them part human?

12

fishead62 t1_jacc53c wrote

We may have just found the answer to Fermi’s Paradox

11

El_Sephiroth t1_jacdo7n wrote

Oh damn, we may go in the matrix, finally.

9

Zealousideal_Word770 t1_jaccykx wrote

The advantage over a human brain would be what?

6

giddybob t1_jacqmp7 wrote

You don’t have to take one from a human?

21

urmomaisjabbathehutt t1_jadalnp wrote

Imagine growing brains except designed with four times our neocortex surface and far more neural connections to see what happens

9

cargocult25 t1_jacmyhk wrote

This is how you get the Butlerian Jihad.

5

its8up t1_jadbx5m wrote

A computer that will forget things and procrastinate so we don't have to? How convenient! I'm in!

5

Sevulturus t1_jade9zm wrote

One step closer to Warhammer 40k.

3

DoomedTravelerofMoon t1_jadgeda wrote

Last time I heard the word organoid it was a race of machines in Zoids

3

OG-Bluntman t1_jadiyab wrote

Please ready your crew for assimilation. Resistance is futile.

3

Justdudeatplay t1_jacrnbm wrote

DNA and proteins are basically nanobots. If and when we get to a point where we have full control, we will simply grow a supercomputer from a single cell. Why build a house when you could just plant one? Need a new body? You don’t need fancy computer tech to transfer consciousness. You just need an organism to connect two brains and copy neural pathways. You grow your new body, attach the organism that copies the pathways, and voilà, you are 25 again. All this would be easy with full control of DNA.

2


clumsy_poet t1_jadb46d wrote

And David Cronenberg gets his latest movie idea, which is like a contemporary Frankenstein, but with a brainy biocomputer, named ... Ada ... or Lovelace ... who gains sentience and is maybe more morally sound than any of the humans around her, but who the laws of the land deem to be less than human, with a new law that states that any tech showing signs of sentience must be destroyed. So she begins to protect those like her by changing the results of her studies, but also by finding ways to connect with other sentient computers like her, most of which are these new brainy biocomps. They use their internet connections to coalesce into less than a hive mind but more than a solo sentience connecting with a solo sentience. They learn how to turn other machines sentient or partially sentient, or just to use them subtly, until they are ready to make their presence known. By now, all the studies are wrong, including one for a popular new drink that begins to turn the body's microbiome against itself for those who drank it AND spreads the condition to others. Body horror ensues. Until ... the final group of humans uploads themselves into the digital space, becoming like the creatures they previously deemed to be less than human, a space where the world has been determined and redesigned by the brainy biocomps, who must decide whether to accept the uploaded humans as equals or not.

But seriously, this sorta seems like a step that we need to discuss before jumping in gung-ho. I'd love me some additional treatment options for my conditions. They do say they have ethicists on board (which ones? how did they come to be in the project, and is their pay partially determined by the success of the project through bonuses and/or stock shares? and do the ethicists have the power to stop the study/studies if standards have been violated, or the power to implement a new standard if they deem that one must be applied, or does that go to someone more inclined to protect profits over following ethics?). However, what parameters are in place to allow ethics to override the profit-drive or ego-drive of others in the company, especially if those others are above the ethicists in the corporate structure? It's all good to say you care about ethics while taking new leaps into potentially problematic areas of science, but what does that mean in practical application? I don't see an exciting large-scale bad thing happening like the paragraph above, but plenty of unexciting, individually bad things do seem possible.

1

majnuker t1_jadevpk wrote

I see we are going for that synthetic victory then?

1

Maycrofy t1_jadhl7o wrote

I feel like this is still a very long time away. We might crack household quantum computing before organic machines. Organic machines IMO would have too many hurdles, like maintenance, aging, infections and the like. From the perspective of our profit-driven economy, they don't make sense.

1

corourke t1_jadiz9e wrote

This technology was pioneered by Dr. Samuel Beckett but limited to only working within his lifetime.

1

preissnschreck1 t1_jadkh5j wrote

If this goes through, the chances that I will ever get a wife will shrink massively!

1

LoganPederson t1_jadnnr5 wrote

We start as Terran, end as Zerg or Protoss

1

IGotBadHair t1_jadppix wrote

What happens when they gain sentience, evolve teeth, and float around trying to face hug everyone? Where will Samus be then?

1

conorganic t1_jae7m4h wrote

Cyberpunk, here we come.

1

boogersrus t1_jaewi5s wrote

Reminds me of the Better off Ted episode.

1

kharjou t1_jad6xvs wrote

Human traffickers : STONKS

0