Submitted by Shiningc t3_124rga4 in Futurology

Some people fear that the rich, or the 1%, will keep AI to themselves and block public access to it. That would make sense if they actually had an AI on the level of human intelligence, i.e., an AGI.

An AGI is basically a golden-egg-laying goose for a corporation. They can just use the AGI to produce as many innovations as possible. If it really has human-level intelligence, then they can fire every employee except for a few and let the AGI do all the work. That is, if they actually have an AGI.

People who believe the corporate PR that they have an AGI or a "proto-AGI" are being incoherent. Why would a company release such a thing to the public? Why would they let their rivals have access to such a revolutionary tool? And why wouldn't they just start firing all of their employees except for a few? I'll believe or speculate that they have an AGI once they start firing 99% of their employees.

So either a corporation wouldn't release an AGI to the public, or they don't have one. Corporations are releasing AIs because they're something "moderately useful", but nothing revolutionary.

And to counter this, we'll need non-profit organizations making AIs, instead of just believing whatever PR the corporations come up with.


imnotuimmeCTmofo t1_je0jn9d wrote

If a corporation thinks it's gonna be profitable, it will do anything, no matter what, because that is the point of a corporation. A corporation would fuck your mother if it thought it would make money off it.


RaceGenderHeight t1_je1cjs6 wrote

Unlike those greedy corporations, I will fuck anyone's mother for free.


Cerulean_IsFancyBlue t1_je0tl3s wrote

“Our revolutionary ad-driven maternal coitus paradigm is a customer-facing data-driven using high-touch Hugging Face docker implementing an LLM-generated zero-hour contract with JIT cold boot tax-favored arbitrage of crypto, gold, human embryos, and rare metals.”


Trout_Shark t1_je0jtus wrote

The corporation that creates a functioning AGI first is going to be a worldwide player like we have never seen before. I think the biggest concern is whether they can control it. The only limitation on its ability to expand itself is computational power. The future is going to be wild.


KungFuHamster t1_je0mtoj wrote

Yeah, the invention of AGI is often referred to in science fiction as the technological singularity, because the speed of AI makes the future beyond that point literally unknowable. If we can keep it from killing us, it should advance our technologies at a tremendous rate.


Trout_Shark t1_je0p2vb wrote

Implementing something like Asimov's "Three Laws of Robotics" should be a major priority. The singularity is of course a major concern. Anything that can learn at an exponential rate is going to be difficult to keep under control for long.


acutelychronicpanic t1_je0z1p4 wrote

I agree with the sentiment, but a lot of work has gone into this since those 3 laws. It's still an unsolved problem.


nybbleth t1_je2tf6s wrote

The three laws don't really work, though, on multiple levels. They're far too simplistic and ambiguous, and effectively impossible to implement in a way that an AI could consistently follow.


imanon33 t1_je0twsl wrote

AGI will be the last invention of man. After that, the AGI will invent everything else.


Cerulean_IsFancyBlue t1_je0upng wrote

An AGI would be an amazing feat.

The first AGI will be the equivalent of a human baby, completely helpless. It will likely use a massive array of computer hardware backed by a tremendous amount of electrical generation power, and even if it wanted to duplicate itself, will not be able to do so rapidly or without detection.

If anything, it will be even less able to survive on its own than a human baby.

All the ideas we have about being unable to control an AI are based on Hollywood-level notions of what things are hackable and controllable. It could thrash around and mess up a lot of systems. There's a pretty good chance that in the process it would destroy itself. Every model we have for an AI right now requires a tremendous amount of computing power, electricity, and cooling. It's not going to be able to run away and hide in "the internet". If it does, it will probably contract a fatal disease from half the computers it tries to occupy.


Mercurionio t1_je46ts5 wrote

An AGI is hardware that isn't chained to waiting for a prompt.

Imagine a "do... while..." loop, where the "while" condition is limited by energy consumption.
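Taken literally, that loop might look like this minimal sketch: an agent that acts continuously without waiting for a prompt, bounded only by its energy budget. The `think_and_act` and `energy_cost` functions here are hypothetical placeholders, not anything a real AGI would actually run.

```python
def think_and_act(step: int) -> str:
    """Placeholder for whatever the agent does on one iteration."""
    return f"action-{step}"

def energy_cost(step: int) -> float:
    """Placeholder: energy drawn by one iteration, in arbitrary units."""
    return 1.5

def run_agent(energy_budget: float) -> list[str]:
    """A do-while loop: the body always runs at least once; the only
    stopping condition is running out of energy."""
    actions = []
    step = 0
    while True:                       # "do ..." — no prompt needed to start
        actions.append(think_and_act(step))
        energy_budget -= energy_cost(step)
        step += 1
        if energy_budget <= 0:        # "... while" — stop when power runs out
            break
    return actions

print(run_agent(5.0))  # a 5-unit budget at 1.5 units/step allows 4 actions
```

The point of the sketch is just that the loop's guard is a physical resource rather than a software condition, which is why the earlier comment about electricity and cooling acts as the real limit.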


imakenosensetopeople t1_je0jrra wrote

It all comes down to monetizing that AGI. If it worked on internal projects only and did not need to interact with customers, then your example is likely correct. But if the AGI can become the product and they can sell access to it? They’ll be rich.


Cerulean_IsFancyBlue t1_je0uv9e wrote

They could do that without exposing it as an AGI. They could just quietly replace entire teams: customer service people, people who approve home loans, the ones who negotiate contracts for shipping cargo, etc. "We ummmm outsourced it to India."


Villad_rock t1_je42ukz wrote

An AGI could do everything a human can, which means no human would earn money anymore.


chcampb t1_je0jy2j wrote

Would a corporation need to?

If you create the model there's a good chance open source is within a few years of also creating that model. You would only have dominance for a short while, relatively speaking, unless your AGI created the next AGI, in which case there will be two exponential curves separated by a short distance.


Cerulean_IsFancyBlue t1_je0wa5f wrote

I think one of the questions would be whether only corporations can afford an AGI. Most of the current models seem to depend upon massive amounts of computing power, which is only available for pretty large amounts of money.


chcampb t1_je142oa wrote

Computing power per dollar increases exponentially. So what I think you are saying is that AGI will be prohibitively expensive. But what I think you actually said is that all it takes is computing power (as opposed to secret sauce).

If that's the case it's inevitable that a company has AGI, and if all it takes is computing power, eventually FOSS will have it too.


Cerulean_IsFancyBlue t1_je14jvo wrote

No, that's not what I was saying. I was saying that currently our very best sauce still requires a lot of computing power. And that once the secret is out, knowledge is great, but it will still take tons of computing power to implement it.

It’s also true that computing power will continue to increase, although Moore and his law may both be dead now. So the rate of increase is uncertain.

It’s possible that some things just won’t scale to the individual level. If that’s true, then most individuals will only have gated access to AGI.


KungFuHamster t1_je0mdwt wrote

If an AGI were released to the general public, it would be devastating. Think of the worst thing you could do with a nimble AI that can learn and do anything, and it's only a matter of time before someone would use it for that purpose. The power of an AGI would be akin to a nuclear bomb, but for anything connected to the internet.

There are trolls and terrorists and mercenaries who would use it to pwn corporations and governments, disabling or even destroying public utilities, banks, hospitals, and rival entities. Not all of these entities are vulnerable, but enough would be that the attacks would completely disrupt trade and economies, and possibly the world economy. The US, Russia, China, and North Korea would not hesitate to obtain and weaponize AGI.


koliamparta t1_je3poyi wrote

Ok so instead of China, Russia, NSA, terrorists, and you having AGI, your preference is that only China, Russia, NSA, and terrorists have it?


KungFuHamster t1_je4gaon wrote

Have you tried being a writer? You really like to make shit up out of nothing.


koliamparta t1_je778lg wrote

Yes, tell me of a scenario where the decision to not release it to the public stops any of the above entities from developing it.


KungFuHamster t1_je7j0z4 wrote

The thing you invented was my preference. My original statement was agnostic in that regard; I merely stated an eventuality. Which is why you got downvoted; you inferred something that wasn't there.


Rusted_Hulk t1_je10z1x wrote

I continue to wonder at the paranoia surrounding the anticipated AI singularity when all we have seen so far is expert systems, which do not have to pretend to be conscious, and elaborations on the Clever Hans swindle. This, imo, is the immediate threat: an AI fake that's good enough to fool a lot of people. The computers that could do this are expensive to build and expensive to run; some rich corp would likely figure out a way to use it to make money, big whoop-de-doo. If you are anticipating the advent of a true singularity, get comfortable, it will be a long wait. AGI? Changing the acronym just makes it look like the goalposts were moved. The advances we have seen so far in the latest round of equine genius have also increased our potential to bamboozle each other. And the idea that a machine with an off switch could wind up ruling the world and wiping out mankind is something that should stay in the pulp magazines.


kompootor t1_je1copl wrote

Why do corporate researchers regularly publish papers and attend and present at scientific conferences on AI and quantum computing? Why does it seem like, with every innovation of one company, the other companies and startups are just one step behind?

Who exactly do you think develops AI at these companies? Business school grads? MBAs? Young coding boot-camp go-getters looking to strike gold with a killer app?

Get your head out of the conspiracy sand and read an actual piece of information by a professional on what the industry actually looks like (and, as they hire high-demand highly specialized scientists, what they get in their contracts). Then come back if you have a serious question of whether a groundbreaking discovery will be kept a secret.

[Edit: on third reading of OP's post, it's more clear what they are arguing. Thus my post here is now a message not to criticize OP (which wouldn't make sense given their post), but rather is more or less my supplementary argument of the same essential point as OP -- which is that nobody is hiding an AGI (though I don't know if OP agrees with me that at this stage no corporate R&D will hide an AGI).]


Shiningc OP t1_je1f5wg wrote

I'm not saying that it's kept a secret, I'm saying that they don't have one.

If anything, if there were ever to be an AGI then I would think a non-corporate entity would come up with one first.


kompootor t1_je269zu wrote

> People believing the corporate PR that they have an AGI or "proto-AGI" are incoherent. Why would they release such a thing to the public? Why would they let their rivals have access to such a revolutionary tool? ...

> So, either a corporation wouldn't release an AGI to the public, or they don't have one. ...

> And in order to counter this, we'll need non-profit organizations making AIs, and not just believe whatever PR that the corporations come up with.

[Edit: See edit above. Even if I was quoting sentences in context, I quoted it out of context to the overall point of the post, which I realized last night and today. Apologies.]


Shiningc OP t1_je26lfd wrote

I said "or they don't have one". If you don't believe that AGI can be kept a secret, then they don't have one.


Western_Cow_3914 t1_je0nn98 wrote

Corp exists to make money. If AGI gets them ahead of the competition and makes them money then yes they will.


Shiningc OP t1_je0o7r0 wrote

They can make more money by making the AGI come up with innovations.


Villad_rock t1_je431ar wrote

An AGI could make them completely self-sufficient; they won't need any money.


Tetrylene t1_je0tx7h wrote

Because realistically if you’re able to make an AGI then someone else will be able to too. If you don’t release it as a product, someone else will because there’s a gap in the market.


acutelychronicpanic t1_je0zic1 wrote

They would release the AGI because of competitors nipping at their heels. That, and it would make them a lot of money to be first.

I would buy your argument if one company was years ahead of everyone else. Right now the gap is more like months.


Shiningc OP t1_je13vpz wrote

I mean if you have a golden-egg laying goose, then you don't even need to sell the goose. You can have all the money in the world.

An AGI is, metaphorically, like a super-genius. They wouldn't want a super genius to be poached by somebody else.


acutelychronicpanic t1_je15v21 wrote

They aren't the only ones with a goose. They're just the first to release it. Across the world, companies are scrambling right now to catch up, and my understanding of the tech is that it should work. The most important mechanisms exist as publicly available knowledge.


Shiningc OP t1_je17yyx wrote

Yes, but they don't need to sell it to make money because the AGI can make all the money for them.


acutelychronicpanic t1_je19pww wrote

I'd agree if it were true ASI (artificial superintelligence). But a proto-AGI as smart as a high schooler that could run on a desktop would be worth hundreds of billions, if not trillions. They would have incentive to lease that system out before they reached AGI.


Shiningc OP t1_je1bc9g wrote

Soo, basically they wouldn't release an AGI.


acutelychronicpanic t1_je1ddup wrote

I think we disagree on what an AGI is. I would define an AGI as roughly human level. It doesn't need to be superhuman.

And I still think they would if they suspected someone else would beat them to it.


Shiningc OP t1_je1en1s wrote

But an AGI is going to be millions of times faster than a human.


wood_for_trees t1_je10h0e wrote

Would an AGI realistically allow itself to be controlled by a single corporation?


Shiningc OP t1_je1330i wrote

Probably not, but then again a lot of people are controlled by corporations.


JefferyTheQuaxly t1_je1c4ew wrote

If one company in the US got access to AGI, every other tech company in the country would send all of their lobbyists to DC to either get AGI outlawed or force them to sell the code to prevent a monopoly.

Frankly, I think once a single company figures out AGI, it'll only lead to dozens of other companies following shortly after. If a dozen companies have AGI, I don't see how it can remain something only a select few have. At least one of those companies will realize that there's quicker profit in just selling the AI to other companies, governments, or the public.


Shiningc OP t1_je1eub7 wrote

So, why wouldn't they keep the AGI a secret?


Iffykindofguy t1_je1vyp8 wrote

This would require a massive conspiracy between businessmen.



yaosio t1_je25ue9 wrote

It doesn't matter. The first AGI being made means the technology to create it exists, and so it will also be created elsewhere. OpenAI thought they had a permanent monopoly on image generation and kept it to themselves in the name of "safety"; then MidJourney and Stable Diffusion came out. Not revealing an AGI will only delay its public release, not prevent it from ever happening.


isleepinahammock t1_je2orow wrote

Why did Ford sell cars instead of manufacturing them, keeping them, and just running a big taxi service?


Shiningc OP t1_je2xtab wrote

It would be equivalent to selling Henry Ford, the guy who came up with the car.


ArcticWinterZzZ t1_je3kgan wrote

The last people to acknowledge that an AGI is actually AGI will be its creators. When Garry Kasparov played Deep Blue, he saw within it a deep sort of human intelligence; insight that said more than the chess AIs he was used to. Deep Blue's creators did not appreciate the chess genius it was capable of, because they were not brilliant chess players. Under a microscope, a human brain does not look very intelligent. So too will the creators of AGI deny its real intelligence, because they know its artificiality and foibles more than anyone.


Norseviking4 t1_je44g3a wrote

People usually hate this argument, but I see a role for government here. If a company makes a true AGI, that company will be visited by the govt real fast. (I'm in Scandinavia, so my govt doing this is not very scary.)

Now, depending on the govt that does the visiting, this may or may not be a good thing. But there is at least a chance that government regulation will lead to better outcomes than Microsoft taking over the world.


Mercurionio t1_je46ed5 wrote

They will keep it to themselves as long as possible.


Philosipho t1_je4r75m wrote

It's funny how everyone turns to socialist systems when capitalism fails them. That's not going to work out the way you think, though. You're just taking power from those that have it and spreading it around. It'll just find its way to whoever figures out how to capitalize on it best.

The problem with society isn't fair access to things like information or technology, it's with how people use them. Socialist systems never work for capitalist societies. Capitalism is just the economy used by fascists, and such people always seek to consolidate power and wealth for themselves. That inevitably leads to the majority being left out in the cold.

Actual socialism is when a society wants to ensure that all members have what they need to survive and prosper. But you all thought you'd be the ones to make it to the top, so you competed with each other over everything. Now you're crying because someone else is there instead.

Socialism will not save you, because you are not a socialist.


ovirt001 t1_je5vmox wrote

The first instance of AGI isn't going to replace all those employees; it will only have the capacity of a single human. For this reason, we'll see them start selling access to it (the trend has already begun) and then sell "personal assistants". Holding onto AGI until it's sufficient to replace all the company's workers would risk losing out to a competitor that releases theirs to the public. Once everyone has one of these personal assistants, it's no longer possible to close AGI off from the world.


Shiningc OP t1_je5x446 wrote

What, an AGI will basically have all the computing power of hundreds of thousands of computers.


ovirt001 t1_je63rvi wrote

Until we find more efficient ways to run it or design better hardware, a single instance of an AGI will require all those hundreds of thousands of computers to run. ChatGPT was estimated in January to cost $3 million per month to run (Azure cloud resources) and it's still pretty far from an actual AGI.


Shiningc OP t1_je64llj wrote

The thing is, once you make an AGI then the AGI itself should theoretically make better versions of itself. There's really no reason to sell the AGI because the AGI should find ways to make more money.


ovirt001 t1_je65tu2 wrote

Assuming it can optimize its own code. Humans can't exactly optimize themselves to run on better hardware. Even so, it wouldn't matter because access is already being distributed. GPT-4 is available to researchers and businesses and is currently being integrated into all kinds of products.


Shiningc OP t1_je65z7k wrote

GPT-4 isn't AGI.


ovirt001 t1_je668ur wrote

I'm aware. It's a precursor, we don't actually know where the line is for AGI.


Shiningc OP t1_je67axg wrote

And why do you think companies are using their own computing power to lease the AI? Because they know that it's just something that is "moderately useful", but not revolutionary.

The "AI" can't exactly answer questions in a unique way like "How do I outsmart and destroy Microsoft?". If it was a smart person, then maybe he/she could. So would a company lease a smart person, even if it made them money?


ovirt001 t1_je67kko wrote

Yes, to the highest bidder. A "smart person" equivalent AI is still a very long way from 10,000 average people.


Shiningc OP t1_je67p5i wrote

But no company actually leases a smart person. It would want to keep the smart person loyal to the company and working for the company.


ovirt001 t1_je68vab wrote

As long as the "smart person" is making money, they aren't going to care. Using that smart person to dominate all industries would be ludicrously difficult and put the company at a disadvantage to any other company that has a similar "smart person" but chooses to lease their time.


Shiningc OP t1_je692xt wrote

I think that would be called a "brain drain" or "poaching". I mean sure they can do that, but it's short-sighted and won't be good for them in the long run.

It might be possible for the companies to lease the "dumb" AGIs but keep all the "smart" ones to themselves. But at this point it's basically a slave trade.


ovirt001 t1_je69lyg wrote

It could be considered consultancy if the AGI is capable of individual thought. Companies have some longer-term objectives but tend to focus their efforts on short-term gains to please investors.
There will be plenty of discussion around the ethics of using AGI in business. Whether it can be called "slavery" will depend on how like a human AGI turns out to be.


AchillesOnAMountain t1_je30s0y wrote

Here is the thing with powerful technology and powerful people. They all require underlings, maintenance, assistants, janitors...

Such a powerful secret wouldn't remain a secret for long. Especially if it was being used/hoarded maliciously.


the_new_standard t1_je3xa3y wrote

Have you not been paying attention to the news today? It's becoming increasingly clear that top AI labs have finally stumbled upon proto-AGI and are afraid to release it.

They've already invented something moderately useful which will make them trillionaires. Now they are afraid of releasing anything actually revolutionary because that would fuck everything up. Once it's public knowledge that AGI is possible, it's only a matter of time before more companies produce it and the market for their proto-AGI products dries up.

Just like how Google had a decent LLM for years but didn't release it because they were already making a killing in the search engine business. Once you become an industry leader, you don't fire every employee and upend the whole industry.


Shiningc OP t1_je4cm8q wrote

You believed the “news” aka corporate PR?


NeurobotsIL t1_je0ldmh wrote

oh oh oh corporations corporations evil evil oh oh



Shiningc OP t1_je0ls6i wrote

Nah, there's nowhere in the post that says that the corporations are evil. However, if they really had an AGI, then they wouldn't release it to the public.


NeurobotsIL t1_je0mdev wrote

That's what I'm saying: why think that all of them will do the same? There are a lot of corporations managed by new-age people with new-age goals and minds.