
FDP_666 t1_iyjlxfa wrote

Mechanization (machines) replaced muscles. As a consequence, agriculture (for example) now requires a fraction of the workers it once needed, while productivity has skyrocketed.

Artificial intelligence will replace brains and do to every other job what mechanization did to agriculture. Some jobs will hold out longer than others, but in the long term it's all gone.

Your expertise will have the same value against AI that your strength has against a tractor.

42

AvgAIbot t1_iykj2u0 wrote

What do you predict will happen at 25%, 50%, and 100% AI job replacement rates? Your prediction of when these rates will be reached would also be interesting.

6

FDP_666 t1_iymm1yz wrote

I don't know. Or rather, there is only one thing that comes to mind: even minor political decisions can take months or even years to be discussed and then implemented; I don't expect AGIs to be particularly concerned with their compatibility with our current political apparatus, and that probably means there will be some sort of chaos. Hence the "I don't know".

I also don't know how fast we will get there, but it seems obvious that multimodality will solve a lot of the common pitfalls of current LLMs. Text carries a bunch of narratives, and some of the unrealistic stuff in it looks reasonable if text is your only source of information; but I don't see how most of that unrealistic stuff would survive in a model trained on text, images, video, audio (etc.?), since any idea the model comes up with would have to lie at the intersection of what is both textually and visually possible.

For example, it's easier to mess up counting on fingers if your only concept of doing it comes from text than if the text comes paired with images of hands. It makes sense that this would constrain the space of possible mistakes, what do you think? Then there would also need to be some sort of continuous learning to truly have an AGI and not just a snapshot of one, maybe? I read somewhere that this is being solved, so it doesn't seem to be decades away.

And scale is of course important, but the real investments haven't begun yet; it seems the hundreds of millions, or even billions, of dollars that could be spent training AIs will only be unlocked once some of the previous pieces produce an AI that can be used as a virtual worker. By then, data produced by AIs might be good enough to serve as more training data, and there will be more use cases anyway, so people will feed more data to OpenAI (and others); the amount of data might no longer be a constraint.

To me, it looks like we'll reach the knee of the curve in 5 to 10 years, but my prediction is as good as anyone else's, so yeah, I don't know.

1

apyrexvision OP t1_iykvym5 wrote

That's real, it's just the transitional period that concerns me.

2

visarga t1_iyle6xx wrote

Don't generalise from agriculture to coding. If the tractor misses a row, it's no big deal. If the AI fails a coding task, maybe things start falling apart.

1