ThePokemon_BandaiD t1_j4q677q wrote

My point is that it's not simulating any of the computational aspects of the neuron, which lie mainly in neurotransmitter receptor expression and regulation. It's basically just simulating the biochemical version of wires, which, for computational purposes, can be simulated more simply with regular wires, which are also much faster.

1

ThePokemon_BandaiD t1_j4q5qkr wrote

I'll give you that you probably understand it better than me, haha. I agree it's interesting research that could be useful in understanding the nervous system; my point is that it doesn't seem to have the relevance to the singularity and BCI that people in the comments are assuming. I'd imagine that BCI would require more understanding of neuroreceptor regulation and how expression levels change to allow useful self-organizing properties (and therefore learning, self-regulation, etc.), and that signals could be transmitted in such a system just as well through purely electrical means rather than by simulating the slower electrochemical propagation of action potentials.

1

ThePokemon_BandaiD t1_j4q4trj wrote

It's chemically activated in the same way a pH meter is, and biocompatible in the same way as existing electrostimulation, which has been around for years. It's not sensing particular neurotransmitters, just ion charge, and it isn't able to provide any computational ability or usable connection to the brain.

1

ThePokemon_BandaiD t1_j4q44ei wrote

Typo, I meant field. I've studied neuroscience for a few years in college and a good bit on my own. I read the article before commenting, and I also did just finish reading the study, and this tech really doesn't seem all that useful compared to existing electrostimulation of neurons and basic ion sensors. It could be useful in the future for treating some neurological conditions, as a feedback device similar to a pacemaker, but it doesn't really have any bearing on intelligence, computation, or the singularity.

1

ThePokemon_BandaiD t1_j4ppjvp wrote

Yeah, these "neurons" don't actually do anything except transmit an electrical signal slower than a wire...

They simulated action potentials, which are just the way the cells use chemical gradients to send a signal from the body of the neuron to the synapses, and which have nothing to do with the neuron's computational capacity.
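To make that concrete, here's a rough Python sketch of a leaky integrate-and-fire neuron, a standard textbook simplification of action-potential dynamics (all parameter values here are illustrative, not from the study). Notice that the whole thing is just thresholded signal relay:

```python
# Illustrative sketch: a leaky integrate-and-fire neuron.
# Spiking is just "integrate input, cross threshold, fire, reset" --
# a relay mechanism, not a computation over learned parameters.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes for a stream of input currents."""
    v = 0.0  # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current    # integrate input with a passive leak
        if v >= threshold:        # crossing threshold fires a spike
            spikes.append(1)
            v = reset             # potential resets after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # -> [0, 0, 1, 0, 1]
```

A wire does the same job (carry the signal) with none of the overhead, which is the point being made above.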

1

ThePokemon_BandaiD t1_j4powbm wrote

This is not what the article is about... They didn't use amino acids in any way, and these "neurons" definitely weren't used in a mouse, because they don't actually function as neurons. They just mimic the most basic aspect of how a neuron sends a signal along itself, and nothing about how neurons communicate with each other.

2

ThePokemon_BandaiD t1_j4oo0e2 wrote

I haven't read the actual study yet, but based on the article and my understanding of the field, this doesn't really seem that significant. It's just mimicking action potentials, the way an electrical signal is transmitted chemically along a neuron, while the current understanding of the brain suggests that most of the information-processing ability of neurons comes from the receptor-neurotransmitter balances and receptor regulation at the synapses, and from how that affects network organization. Those mechanisms are what allow learning, like backpropagation or similar algorithms in NNs.

Edit: fixed "feels" to "field", and I did finish reading the study, and my point stands.
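For anyone unfamiliar with the NN analogy: the "learning" in an artificial network lives entirely in the synaptic weights, not in how the signal travels. A toy one-weight gradient-descent sketch in Python (all values illustrative) makes the contrast clear:

```python
# Illustrative sketch: propagation vs. learning in an artificial "neuron".
# The forward pass (w * x) is a dumb, fixed relay, like an action potential.
# The computation that matters is the weight update at the "synapse".

def train_synapse(x, target, w=0.0, lr=0.1, steps=50):
    """Adjust a single synaptic weight so that w * x approaches target."""
    for _ in range(steps):
        y = w * x                 # propagation: fixed signal relay
        error = y - target
        w -= lr * error * x       # learning: gradient step on the weight
    return w

w = train_synapse(x=1.0, target=0.8)
print(w)  # converges toward 0.8
```

Replicating only the relay part, as this study does, skips the piece that carries the computational interest.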

1

ThePokemon_BandaiD t1_j3nrqgt wrote

That's actually not entirely true; someone can fly a drone in your yard, and the law is pretty ambiguous there. But you're also allowed to use as much of the airspace as you want, and a hanging net structure probably wouldn't count as interfering with aircraft in the same way that reactively shooting them down would.

2

ThePokemon_BandaiD t1_j1ipluc wrote

Reply to comment by GuyWithLag in Hype bubble by fortunum

First of all, current big datasets aren't the full internet, just large subsections: specific datasets of pictures or plain text. We also generate about 100 zettabytes of new data per year as of this year, and generative models can (with the help of humans to sort it for value, for now) generate their own datasets. And while currently available LLMs and image recognition and generation models are still quite narrow, models like Gato, Flamingo, etc. have shown that multimodal models are at the very least possible with current tech, and IMO it's pretty clear that more narrow AI models could be combined to create a program that acts as an AGI agent.

1