userbrn1 t1_jd06q14 wrote

Redditors will be enlightened rational atheists about everything in politics, and then when you ask them why the laws are different for dogs and cows, it's pure feelings. There's no scientific basis for the belief that dogs are more conscious, more sentient, more intelligent, or more capable of emotion than cows. And yet the hypocrisy in the laws bothers almost nobody


userbrn1 t1_j397vpj wrote

I'm skeptical of your claim that this is better to get now than later. You say that present-day expression data can be a "save state," but from my understanding your genome is pretty static throughout your life, aside from telomere shortening. Expression does change over time, but it seems much more likely to me that there are general proteomic changes associated with aging that are fairly consistent across the population, and that can be targeted with more accuracy with each passing decade.

I think it would probably be more worthwhile to do one of these in 5 years, when the same product will cost $100 instead of $1000.


userbrn1 t1_ix42oqg wrote

> If we by chance crack getting functionally unlimited clean energy, then we'll still have a butt load of work to do.

If we mastered nuclear fusion today it would still be over 20 years before it replaced even 5% of global energy generation. It takes time to scale things up, especially when what you're displacing is already enormous in scale, like global energy production.
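To put rough numbers on that (every figure below is a made-up illustration for the sake of the arithmetic, not a real projection):

```python
# Back-of-the-envelope: even with aggressive exponential buildout,
# displacing a few percent of global energy takes decades.
# All numbers are illustrative assumptions.

global_energy_twh = 180_000   # assumed annual global energy use, TWh
fusion_twh = 10               # assumed initial fusion output, TWh/yr
growth_rate = 0.30            # assumed 30% capacity growth per year

years = 0
while fusion_twh / global_energy_twh < 0.05:
    fusion_twh *= 1 + growth_rate
    years += 1

print(years)  # 26 years to reach 5%, even at 30% annual growth
```

Even granting a sustained 30% annual buildout, which no energy source has ever managed at scale, the 5% mark is still decades out.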


userbrn1 t1_iwni7hk wrote

I would caution against thinking this brings us close to FDVR; there is a fundamental difference between encoding and decoding neural patterns.

Progress like this is in decoding, meaning we take neural patterns and try to determine what the person was perceiving or intending. This is analogous to Neuralink monkeys playing Pong with their minds or people controlling robotic limbs with their thoughts. We are making lots of progress in this area, and we can afford to be imprecise: with a robotic limb, it's fine if your elbow bends 50 degrees instead of 49 for the vast majority of tasks, and even fine motor tasks like writing have a point past which further precision is no longer useful. We also have the benefit of being able to measure the end result easily; we can measure the movement of limbs or the accuracy of a mind-controlled cursor.
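The shape of the decoding problem is easy to sketch. This is a toy with synthetic "firing rates" and a plain least-squares fit, nothing like a real BCI pipeline, but it shows why decoding is tractable: the target (actual movement) is directly measurable, so you can train against it, and small residual error is acceptable:

```python
import numpy as np

# Toy neural *decoding*: fit a linear map from simulated firing
# rates to cursor velocity. All data here is synthetic.
rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 32

true_weights = rng.normal(size=(n_neurons, 2))      # neurons -> (vx, vy)
rates = rng.normal(size=(n_samples, n_neurons))     # simulated firing rates
velocity = rates @ true_weights + 0.1 * rng.normal(size=(n_samples, 2))

# Least-squares decoder: trainable because the target (measured
# limb/cursor movement) is directly observable.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

pred = rates @ decoder
err = np.abs(pred - velocity).mean()
print(f"mean velocity error: {err:.3f}")  # small residual, fine for control
```

A little residual noise in the decoded velocity just means a slightly wobbly cursor, which is exactly the "49 vs 50 degrees" tolerance described above.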

Encoding would involve figuring out what signals to put in to recreate a specific conscious experience; it is the opposite of the process above. Full dive VR would require us to master sending a signal that gets interpreted by the brain in a specific way. For example, if you're on a beach on a windy day, you'd need a signal so precise that your brain truly interprets it as its own vision, which is incredibly complex. You'd need a way to simulate the very complex sensation of wind blowing across your arms, moving your clothes in particular ways, deflecting specific hair cells. You have likely never felt the same gust of wind twice, because of how rich our conscious experiences are.

In contrast to decoding, encoding has orders of magnitude less room for error; if the sensation on your skin is even slightly off, you'll realize it's fake and weird. We also cannot easily measure the end result of encoding at all, since the end result is a conscious experience; imagine trying to describe in words to a researcher that your proprioceptive sense of where your limbs are in space relative to each other feels kind of off.

The only way to actually empirically iterate would be to first master decoding, build up a massive database of decoded human experiences, and then simulate trillions of fake walks on the beach into a human brain, hoping to get neural signals that, when decoded, are almost perfectly in line with the decoded data from real human experiences. This is of course impossible to test on real humans, and would likely require server farms lined with millions of human brains in jars which, by definition, would have to be sentient and conscious for the results to be relevant to our own conscious experience.
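That iterate-via-decoding loop can be caricatured in a few lines. Here the "brain" and "decode" stages are stand-in linear maps I made up for illustration; the real systems are unknown, nonlinear, and not safely searchable by trial, which is the whole point, but the sketch shows the indirect guess-and-check structure: you can only judge an injected stimulus by decoding the response and comparing it to a target experience:

```python
import numpy as np

# Caricature of neural *encoding* by iteration. "brain" and "decode"
# are stand-in linear maps; all quantities are synthetic.
rng = np.random.default_rng(1)
dim = 16

brain = rng.normal(size=(dim, dim))     # unknown: stimulus -> neural response
decode = rng.normal(size=(8, dim))      # learned: response -> decoded experience
target_experience = rng.normal(size=8)  # what the beach "should" feel like

# Guess-and-check: adjust the stimulus until the decoded response
# matches the target. On a real brain each step is one "fake walk
# on the beach" -- and there are millions of steps.
stimulus = np.zeros(dim)
lr = 5e-4
for _ in range(5000):
    experience = decode @ (brain @ stimulus)
    grad = (brain.T @ decode.T) @ (experience - target_experience)
    stimulus -= lr * grad

final_err = np.linalg.norm(decode @ (brain @ stimulus) - target_experience)
print(f"decoded-vs-target error: {final_err:.4f}")
```

In this linear toy the loop converges easily; with a real brain every evaluation of `experience` would be a conscious being having a slightly wrong beach, which is why the search is ethically and practically off the table.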

tl;dr it's good that we're getting better at decoding neural data, but it is an entirely different problem from the encoding that FDVR requires. In my opinion we do not have a viable pathway to FDVR, due to our inability to empirically test neural encoding at the scale and precision needed to make it work.


userbrn1 t1_iwhf0by wrote

This is the known, publicly disclosed list of the fastest computers. China stopped sharing information on theirs for national security reasons; it is suspected to have at least one machine faster than the top entry on this list. It would not be surprising if other countries like the US also had military-funded supercomputers larger than anything on the list.


userbrn1 t1_iwhdrz2 wrote

> As soon as we get proto-AGI we will probably get advanced BCI's and full dive vr.

I think that's quite a leap. "Full dive" VR would require us to have essentially mastered neural encoding, both from a theoretical perspective (we're not even close) and from a practical perspective (we're not even close to being close).

It's important to realize that decoding neural signals (brain-computer interfaces) is profoundly different from encoding neural signals (full dive VR). We're currently getting better at neural decoding, such as turning brain signals into limb movements and controlling virtual keyboards and Pong paddles.

As far as I know we have had virtually no success in artificially simulating sensory input. We aren't able to plug something into your brain and make you feel touch sensations, or make you clearly see images; we're not even remotely close to that. If we got even a small fraction of the way there, the first thing we'd do is create prosthetic eyes, ears, skin, etc. Even the best tech today, cochlear implants, requires an intact nerve to carry the signal into the brain to be interpreted; we are not at the stage where we can encode auditory stimuli directly into the brain.


userbrn1 t1_iw9f1zl wrote

> If inflation is unwinding, as we believe, then we could be heading back to the future, the Roaring Twenties, the last time several general purpose technologies evolved at the same time: telephone, electricity, and the internal combustion engine. The setup is remarkably similar!

>5:44 PM · Nov 12, 2022


userbrn1 OP t1_iw3wnqy wrote

After you turn off the tumor suppressor genes, do you then have to wait until you get lucky with the specific mutation you're hoping to study? Or is it relatively quick if you're precise enough with the gene modification?


userbrn1 t1_ivtrjuo wrote

Maybe someone can help explain this to me.

It doesn't make sense to me that an AI model can generate synthetic data that results in another AI model performing better in the real world than if it had been trained on real-world data. It seems like sorcery to get better real-world performance out of synthetic data.


userbrn1 t1_iueiyy0 wrote

You're right that the microchip stuff took a hit, but I think the overall upward trend will be hard to stop.

Truthfully I think the whole AI thing complicates matters in ways it couldn't have for the USSR. I agree with many on this sub that AI will enable unprecedented economic growth; 2-5% annual growth will give way to much larger jumps as mass deployment of industrial AI and robotics takes effect. IF (and this is a big if) China is able to roll out this tech faster than Western countries, any current advantages the West has will quickly be overshadowed.

Perhaps self-driving cars are a good litmus test for this. Western countries were the first to make serious strides in the tech, but now a lot of Chinese cities are working with tech companies to enable robotaxi services. If China can take the lead in deploying self-driving cars for both personal and industrial use, it would be a good indicator that its unique command-market hybrid can bring emerging tech to fruition.

Next 5-10 years will be interesting haha


userbrn1 t1_iue5yz8 wrote

People have been saying that for decades, and the US government strongly disagrees with your assessment; you don't increase sanctions and export restrictions on a country you think will collapse on its own. Western leaders are correctly assessing the situation and realizing that China will surpass the US in economic, technological, and military dominance if they don't take increasingly aggressive action to stop it. Listen to Trump and Biden speak on China; it's no longer "we need to improve the US to stay dominant," it's now "we need to slow China's rise as much as possible." They all know it's a losing battle


userbrn1 t1_iue15jt wrote

It went from being a nation of illiterate peasants to being the 2nd largest economy in the entire world in less than a century. US political leaders have been publicly and privately shifting an increasing amount of effort into countering China, even taking extraordinary steps such as banning US companies (like NVIDIA) from selling profitable tech goods to China, harming those US companies, because they see China's meteoric rise as an existential threat to US dominance. How is China's economy "in the gutter" when it is only the second country in post-industrial history (the first being the failed USSR) to build an economy fast enough to legitimately challenge the US in size, productivity, and technological achievement?


userbrn1 t1_ir2is90 wrote

The drug in the OP also targets amyloid-beta and does appear to slow cognitive decline, but it allegedly does so by interrupting the formation of new amyloid-beta, binding its precursors rather than trying to clear already-existing amyloid plaques. If these results hold, they further strengthen the amyloid hypothesis.

I think the first article you posted exaggerates the degree to which the research was faked; independent biochemical testing was performed by more than just the scientists named in the article, and the amyloid hypothesis continues to prove fruitful.