tatleoat t1_je075m7 wrote

Yeah, it's a matter of time. I've said it many times before, but here's how I see it playing out (warning: schizoposting):

Constitutional AI gives people the opportunity to 'create' bosses with publicly available constitutions, or plain-English codes of behavior. There can be contractual obligations not to change the constitution for five years, workers get a vote, etc. The point is it can be deliberately programmed and open sourced for 1. maximum trustworthiness, and 2. maximum employee benefit, meaning all the money that would ordinarily be the CEO's would go to the working class.

AI CEOs have a built-in advantage: they can be deliberately programmed not to be greedy, and there's a HUGE incentive and ability to do exactly that. That gives them big advantages over their human adversaries. Human CEOs will be held back by their own greed, because the money isn't being distributed efficiently through the company.

Human CEOs will be caught in a bind: the only way they can survive as CEOs is not to be greedy, but if they aren't greedy that defeats the point. Any competing AI CEOs that try to be greedy simply won't be able to keep up. It's a huge advantage, because it means that for now the AI CEOs will be taking on some or all of the human workers displaced by human CEOs' automation.

The only point of disruption is that human CEOs have a lot of assets, so even though AI CEOs will have the workers, will it matter if human CEOs are the ones with all the means of effective production, like factories? That's the only part I can't get my head around yet.

tatleoat t1_jdv4bjz wrote

I think that's probably going to be the first thing that blows up the world order, and I think GPT-4 can pretty much already do it. If you can get an AI to optimize a business to prioritize the workers, and the constitution/code is all open sourced so you know you're not getting boned, then we might be able to take these fat cats out sooner rather than later.

tatleoat t1_j9uj5k8 wrote

I'm sorry, I should have been clearer: by one I mean one vehicle, which is like 6 or 8 of those individual little flying guys, which are incredibly slow on an individual basis. But you're right, it's not much longer until almost the entire agricultural process is automated (and still probably only a few years before we can grow fruits in a lab at scale, making this entire process obsolete lol)

tatleoat t1_j9udekc wrote

tatleoat t1_j6a5o2p wrote

I bet by 2024 everyone will have their own private AI, and if I want to accomplish anything that requires me to get in contact with someone, I'll instead tell my AI what I need. Then my AI gets in touch with their AI and they figure out as much as they can without involving us, then report back with further questions or "you now have an appointment scheduled for around five pm."

tatleoat t1_j5wtcat wrote

I feel like this answer is cheating, but I think the main unforeseen advantage of the first supercompetent AI is that it'll always prevent unsafe situations like that from ever coming close to forming, or at least always expertly swerve to avoid hitting both people. It just shouldn't be possible in the near future.

tatleoat t1_j5tgyjk wrote

It can't be much further off; we're finally beginning to clear some really important benchmarks, like the fact that we now have AI that can transcribe human speech with the same level of accuracy as humans (~95%). Things like that open up every door.

tatleoat t1_j5kh284 wrote

Transitioning from an AI that only responds to prompt stimulus, to an AI that can take initiative of its own accord. That might still turn out to be surprisingly hard

tatleoat t1_j55er3n wrote

I agree. Up to this point, most everything we've seen has been cartoonishly simplified demonstrations in virtual worlds, or low-stakes requests like retrieving a can of soda for you. I don't think these are simple little games because that's all AI is capable of right now; I think they're simple tasks just for demonstrative purposes, and the AIs themselves are actually capable of much more as-is.

Couple this with the fact that the public is informed of AI progress much later than the AI's actual creation, AND the fact that the public can't know too many specifics because it's a national security risk, AND it could be hooked up to GPT-4, AND it's multimodal, AND OpenAI has 10 billion dollars to throw at compute, AND we have AIs helping us create the next batch of AIs (plus much more going for it), and you have an insane number of reasons why the truly life-changing stuff is so much closer at hand than you might otherwise intuit.

tatleoat t1_j3rpi30 wrote

I don't see how saying "[thing] will come in 7 years" influences anything as a prediction; it's too far away to generate any tangible hype in the public. If he were going to lie to manipulate a product's value, I'd think he'd make his predictions something more near-term, if we're indeed cynically manipulating the market. And none of that about Sam Altman changes the fact that he's an expert whose credibility rests on his correctness; it's in his interest to be right. You can't just claim biased interests here, it's more nuanced than that. Also, none of it changes the fact that they're all saying the same thing: 2029. That's pretty consistent, and I'm inclined to believe it.
