ertgbnm t1_jef18w7 wrote

Definitely not.

It will make translation as a profession obsolete, and it will make working with people who speak different languages easier. But I don't see how knowing another language wouldn't still be a valuable skill. There is a big difference between connecting with someone face to face and connecting through a translator. I'd argue it's going to make language learning an even more accessible and rewarding hobby/skill than it already is.

7

ertgbnm t1_jeb1cii wrote

Sure, but that doesn't change my bet. Far more investment and human attention will be devoted to optimizing conventional architectures and software, since those offer the largest return on investment at the moment, so the speed-up goes to all sectors. Granted, quantum computing scales differently than conventional computing, but I still don't see a reality where it outperforms conventional computing at training model weights before we hit AGI. Also granted, there is probably more low-hanging fruit in quantum computing compared to the nearly century of maturity that conventional computing has. But there are trillions of dollars in conventional AI research and GPU manufacturing that would have to be retooled to achieve AGI via quantum computing, whereas I believe conventional approaches will get there faster, cheaper, and more easily. If I'm wrong, then the issue with my beliefs is the time horizon for AGI, not the future of technological development.

4

ertgbnm t1_jdegpsc wrote

I think that world is so unknowable it's pretty much impossible to say.

First, I'd let AGI plan my day, because it will probably be way better at that than I am.

I think the utopian future will be made up of time with friends and family, mental and physical stimulation, good food, good rest, novel experiences, and novel destinations.

5

ertgbnm t1_j6e5hgp wrote

This post is no different from whingeing about CGI replacing practical effects. You are way off base, in my opinion. If first-generation generative models have taught us anything, it's that "human irrationality" is definitely automatable, and perhaps easier to automate than many tasks that seem simpler.

3

ertgbnm t1_j4rcli6 wrote

Currently you can co-author with ChatGPT and get a book of arbitrary length with enough revising, regenerating, and trial and error. The book will be okay, albeit cliché-ridden and surface-level in a lot of areas, but it would be readable, and worse books certainly exist. Will we ever get to the point where we can do it with the click of a button? Probably. But that alone is superhuman. If you were told to write a book about a topic, it would take you a lot of revising, rewriting, and trial and error too.

1