Viewing a single comment thread. View all comments

AdditionalPizza t1_isyv3gv wrote

I have a bit of a theory on this actually. It's a combination of a couple of things. The AI effect is the most obvious: people say AI can't do something, and when it does, they dismiss it as just computer calculations. A moving goalpost of sorts.

Another reason is that it's still in its infancy. Yes, if you know the proper search terms for specific AI you can find some stuff. But go ask a random person if they know what Codex is, or Chinchilla. If you don't follow AI and tech closely or care about them, you probably won't have heard of these unless someone you know is very interested and talks about it. Even then, I have some friends I talk about this stuff with, but they aren't super interested in it, so they don't go and look into things much on their own.

The last reason, some people might think, is borderline a conspiracy theory, but hear me out. Big tech companies and professionals close to the creation of AI are well aware of how the general public would react to "Hey, check out this AI, only a few more steps until it obliterates your usefulness at your current job," so they actively champion this stuff as a tool to help people be productive. They are treading lightly until they ultimately reach the point of releasing some transformative AI, and then there's no going back. There are no policies that could be made quickly enough to keep up with the advances and hold them back. The last thing tech companies want at this point is to be stifled on the road to AGI by policy makers trying to save jobs. If they can get to the point of being able to bring down enough sectors quickly, it will be too late to do anything about it.

We're talking about the most brilliant minds in the world, the ones in charge of aligning AI properly. Of course they have to set everything up before they go for the spike.

2

blueSGL t1_iszcalu wrote

> I have a bit of a theory on this actually. It's a combination of a couple things. The AI effect being the most obvious, where people will say AI can't do something, and when it does they dismiss it because it's just computer calculations. A moving goal post of sorts.

https://en.wikipedia.org/wiki/AI_effect

Also, I've a feeling a lot of jobs are going to be made redundant by collections of narrow AIs. You don't need AGI to replace a lot of jobs, just a small collection of specialist AIs that can communicate. I wondered why the Gato paper (from what I read of it) didn't try any cross-domain exercises, e.g. getting a robot arm to play an Atari game.

1

AdditionalPizza t1_iszdu3b wrote

I should've mentioned the AI effect isn't my theory, it's just one part of my theory haha.

>I wondered why the Gato paper (from what I read of it) didn't try any cross-domain exercises, e.g. getting a robot arm to play an Atari game.

I believe something at Google is working on this, and likely others are too. I'm not sure why specifically they didn't do it with Gato, which is a while back now, but it is definitely being done with other models.

1