Aevbobob t1_je8d6cm wrote

How about a direct democracy where your personal AI represents you? Laws would directly and immediately reflect the will of the people, engineered by superintelligent minds with continuous access to the true will of the people on every topic.

1

Aevbobob t1_je8c5kj wrote

The things you list in your title aren’t forbidden by the laws of physics, they’re just engineering problems. If you’re going to speculate about the capabilities of a mind 100x smarter than you, it’d be worth considering what a dog thinks about chip manufacturing. Now, while you’re considering this perspective, consider how much smarter you are than your dog. It’s not 100x. It’s not even 10x.

A better discussion topic might be to point out that our DNA grew up in a linear world. Our primal intuition about change is that things will probably not change that much. In an exponential world, follow the evidence, not just your intuition. If you think an exponential will stop, make sure your reasoning, at its core, is not "it feels like it has to stop" or "it seems too good to be true".
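The gap between linear intuition and exponential reality is easy to make concrete. A minimal sketch (the step count of 30 is an arbitrary illustration, not a claim about any specific technology):

```python
# Toy comparison of linear vs exponential growth over the same
# number of steps. 30 steps is an arbitrary illustrative choice.
steps = 30
linear = steps            # adding one unit per step
exponential = 2 ** steps  # doubling per step

print(linear)             # 30
print(exponential)        # 1073741824
```

Thirty doublings yields over a billion, where linear intuition expects thirty; that mismatch is the point.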

3

Aevbobob t1_je7txch wrote

Consider the difference between humans and chimps (2% DNA difference and larger brain). Look how much we’ve done with that difference. Now imagine a being that much smarter than us. Ok now speed it up 1000x, to match the speed of current AI. That’s AGI. Is it a question for you whether such a mind would be able to solve these issues?

Sam Altman suggested that we might end up with a rate of progress sorta like if you took every advancement since the enlightenment and compressed it into a year. And I tend to agree.

5

Aevbobob t1_j61z2aw wrote

Ah, the trolley problem. A useful thought experiment while AI is maturing, but it misses the endgame of such an AI, which is to see so many steps ahead that no one ever gets hit. Because once it can drive as well as a human, it won't be long before it is VASTLY superhuman.

1

Aevbobob t1_j1szgm9 wrote

One consequence of superintelligence is that the cost of basically everything will trend to zero. When the necessities and even the luxuries of life are virtually free for all, I don’t see a lot of political issues left.

Land ownership might be one, though there is a LOT of open land in the world that is just inconvenient to live on in today’s world with today’s technology. Like, most of the land.

4

Aevbobob t1_iypdsdw wrote

Starting to think GPT-4 might qualify as an AGI. The following generation definitely will. After that, it’ll also be interesting to follow the long march of algorithmic efficiency and compute density towards AGI that can be run locally at high speed.

1

Aevbobob t1_ix3v1ch wrote

I feel you on this one. I think that if AI continues at its current pace, climate change will sorta become like the Black Death sometime in the 2030s. That is, a big problem that is easily solvable with modern tech.

Some people just wanna believe Armageddon is coming. They may cite data, but at the end of the day, they actually just don’t want to let it go. I think it is a waste of time to try to reach these people. I just let them have it.

But then there are others who just have bad data or no data on how tech is progressing. For them, citing cost declines and exponential trends, and why they are happening, can be very useful.

For example, I’m not just blindly optimistic on AI progress through the next decade. Instead, I’m noticing that while the traditional Moore’s Law around transistor density seems to be slowing, ASICs and algorithmic improvement are more than making up for it. In terms of how much intelligence you can build for some set amount of money, the exponential actually seems to be SPEEDING UP. And now that large, somewhat general models have commercial viability, there are MASSIVE financial incentives to continue progress.
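That "speeding up" claim is really about two exponentials compounding. A toy sketch of the idea, with purely hypothetical growth rates chosen for illustration (not measured data):

```python
# Toy model: effective "intelligence per dollar" when hardware
# cost-performance and algorithmic efficiency improve independently.
# Both growth rates below are illustrative assumptions, not data.
hw_per_year = 1.4    # hypothetical 40%/yr hardware gain
algo_per_year = 1.5  # hypothetical 50%/yr algorithmic gain

years = 10
combined = (hw_per_year * algo_per_year) ** years
print(f"~{combined:.0f}x effective gain over {years} years")
```

The design point: even if each individual exponential slows, their product can still grow faster than either trend alone, which is why transistor density is no longer the whole story.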

1

Aevbobob t1_iryv1ei wrote

Joined around a year ago, discovered our exponential trajectory around 2018. Getting more optimistic as time goes on. It took a while for the realization of what we are living through to settle in, but the evidence continuing to pile up makes it easier to let myself believe that something miraculous is happening. Seems like the rate of epic advances is most certainly accelerating.

9

Aevbobob t1_iqrhi2o wrote

I've thought for a while that proteins may be a major route into nanoscale robots. There are orders of magnitude more potential proteins possible from just the 20 amino acids nature uses than nature has ever actually built. Then there are all the unconsidered amino acids we could engineer ourselves.
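The size of that sequence space is straightforward combinatorics: a chain of n residues drawn from 20 amino acids has 20^n possible sequences. A quick sketch (the length of 100 is an arbitrary example; many natural proteins are far longer):

```python
# Sequence space for a modest protein built from the 20 canonical
# amino acids. Length 100 is an illustrative choice.
import math

length = 100
sequence_space = 20 ** length  # distinct possible sequences

# Roughly 130 orders of magnitude of possible sequences.
print(f"~10^{math.log10(sequence_space):.0f} possible proteins")
```

For comparison, estimates of the number of distinct proteins life has ever produced are tiny next to 10^130, so nature has sampled only a vanishing sliver of the space.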

3