Aevbobob t1_je8c5kj wrote
Reply to The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
The things you list in your title aren’t forbidden by the laws of physics; they’re just engineering problems. If you’re going to speculate about the capabilities of a mind 100x smarter than you, it’d be worth considering what a dog thinks about chip manufacturing. Now, while you’re considering this perspective, consider how much smarter you are than your dog. It’s not 100x. It’s not even 10x.
A better discussion topic might be to point out that our DNA grew up in a linear world. Our primal intuitions about change are that things probably won’t change that much. In an exponential world, follow the evidence, not just your intuition. If you think an exponential will stop, make sure your reasoning, at its core, is not “it feels like it has to stop” or “it seems too good to be true”.
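A toy illustration of the gap between linear and exponential intuition (the 30 steps are an arbitrary choice, not a forecast of anything):

```python
# Toy comparison of 30 "linear" steps vs 30 doublings.
# The numbers are purely illustrative, not a prediction about AI progress.
steps = 30

linear_progress = steps            # add one unit of progress per step
exponential_progress = 2 ** steps  # double the progress every step

print(linear_progress)       # 30
print(exponential_progress)  # 1073741824 -- over a billion
```

Same number of steps, wildly different destinations, which is roughly why “it feels like it has to stop” is a weak argument.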
Aevbobob t1_je7txch wrote
Consider the difference between humans and chimps (a 2% DNA difference and a larger brain). Look how much we’ve done with that difference. Now imagine a being that much smarter than us. OK, now speed it up 1000x to match the speed of current AI. That’s AGI. Is it really a question whether such a mind could solve these issues?
Sam Altman suggested that we might end up with a rate of progress sorta like taking every advancement since the Enlightenment and compressing it into a year. And I tend to agree.
Aevbobob t1_je7rzlq wrote
I think it’s important to differentiate between “thing I do so I can eat” and “thing I do because it gives me meaning and purpose”. People seem to conflate the two into “job”, and it muddies the discussion.
Aevbobob t1_j61z2aw wrote
Reply to Self driving cars are a scary thought by chicagotopsail
Ah, the trolley problem. It’s a useful thought experiment while AI is maturing, but it misses the endgame of such an AI, which is to simply see so many steps ahead that no one ever gets hit. Because once it can drive as well as a human, it won’t be long before it is VASTLY superhuman.
Aevbobob t1_j221tuh wrote
When the world’s best teacher in every field lives on your phone and works for free, what, exactly, is the point of a college education?
Aevbobob t1_j1szgm9 wrote
Reply to Will the singularity require political revolution to be of maximum benefit? If so, what ideas need to change? by OldWorldRevival
One consequence of superintelligence is that the cost of basically everything will trend to zero. When the necessities and even the luxuries of life are virtually free for all, I don’t see a lot of political issues left.
Land ownership might be one, though there is a LOT of open land in the world that is just inconvenient to live on with today’s technology. Like, most of the land.
Aevbobob t1_j0o3tcj wrote
Reply to comment by TrainquilOasis1423 in When AI automates all the jobs what are you going to do with your life? by TrainquilOasis1423
Absolutely.
Aevbobob t1_j0o16ea wrote
Reply to When AI automates all the jobs what are you going to do with your life? by TrainquilOasis1423
Full Dive VR Star Wars. With a good enough BCI + AI, I imagine it would actually be possible to have a visceral experience of the force. Plus, that'd be a fun universe to just adventure around in.
Aevbobob t1_iypdsdw wrote
Starting to think GPT-4 might qualify as an AGI. The following generation definitely will. After that, it’ll also be interesting to follow the long march of algorithmic efficiency and compute density towards AGI that can be run locally at high speed.
Aevbobob t1_ix5n7wa wrote
Reply to comment by GodOfThunder101 in How to ride the financial wave of the AI revolution? by kmtrp
Separate their investment decisions from their research. Their research is excellent, at least on Tesla.
Aevbobob t1_ix4891t wrote
That looks like Ark Invest research. Check out their model for Tesla
Aevbobob t1_ix3v1ch wrote
Reply to is it ignorant for me to constantly have the singularity in my mind when discussing the future/issues of the future? by blxoom
I feel you on this one. I think that if AI continues at its current pace, climate change will sorta become like the Black Death sometime in the 2030s. That is, a big problem that is easily solvable with modern tech.
Some people just wanna believe Armageddon is coming. They may cite data, but at the end of the day, they actually just don’t want to let it go. I think it is a waste of time to try to reach these people. I just let them have it.
But then there are others who just have bad data, or no data, on how tech is progressing. For them, citing cost declines and exponential trends, and explaining why they are happening, can be very useful.
For example, I’m not just blindly optimistic on AI progress through the next decade. Instead, I’m noticing that while the traditional Moore’s Law around transistor density seems to be slowing, ASICs and algorithmic improvement are more than making up for it. In terms of how much intelligence you can build for some set amount of money, the exponential actually seems to be SPEEDING UP. And now that large, somewhat general models have commercial viability, there are MASSIVE financial incentives to continue progress.
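As a rough sketch of how those two trends compound (the growth rates here are placeholder assumptions for illustration, not measured figures):

```python
# Rough sketch: hardware gains and algorithmic gains multiply each year.
# Both rates below are assumed placeholders, not measured values.
hardware_gain_per_year = 1.4      # assumed yearly gain from ASICs / density
algorithmic_gain_per_year = 1.7   # assumed yearly gain from better algorithms
years = 10

effective_gain = (hardware_gain_per_year * algorithmic_gain_per_year) ** years
print(f"Effective compute per dollar after {years} years: ~{effective_gain:,.0f}x")
```

Even if the transistor-density term slows down, the product keeps climbing as long as the algorithmic term keeps improving.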
Aevbobob t1_iwrvmbm wrote
Reply to When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
I have come to believe that we are a pattern of matter, not matter itself.
Aevbobob t1_ivfxxyy wrote
Reply to Essential reading material? by YB55qDC8b
Life Force by Tony Robbins, Peter Diamandis, and Robert Hariri
Aevbobob t1_iryv1ei wrote
Joined around a year ago, discovered our exponential trajectory around 2018. Getting more optimistic as time goes on. It took a while for the realization of what we are living through to settle in, but the evidence continuing to pile up makes it easier to let myself believe that something miraculous is happening. Seems like the rate of epic advances is most certainly accelerating.
Aevbobob t1_irtq90w wrote
I think the “do stuff for survival money” paradigm is ending. A “job” of the future will be whatever is most meaningful to you.
Aevbobob t1_iqrhi2o wrote
Reply to A thought about future technology by Secure-Name-4116
I’ve thought for a while that proteins may be a major route into nanoscale robots. The space of possible proteins you can build from just the 20 amino acids nature uses is orders of magnitude larger than anything life has actually explored. Then there are all the unconsidered amino acids we could engineer ourselves.
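A quick back-of-the-envelope on the size of that space (the 100-residue chain length is an arbitrary illustrative choice):

```python
# Back-of-the-envelope: how many distinct protein sequences are possible
# from the 20 natural amino acids, for an illustrative chain length.
natural_amino_acids = 20
chain_length = 100  # arbitrary illustrative length

possible_sequences = natural_amino_acids ** chain_length
magnitude = len(str(possible_sequences)) - 1  # order of magnitude
print(f"20^{chain_length} ≈ 10^{magnitude} possible sequences")
# Around 10^130 -- vastly more than the ~10^80 atoms in the observable universe.
```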
Aevbobob t1_je8d6cm wrote
Reply to Would it be a good idea for AI to govern society? by JamPixD
How about a direct democracy where your personal AI represents you? Laws would directly and immediately reflect the will of the people, engineered by superintelligent minds with continuous access to everyone’s actual views on every topic.