
BinyaminDelta t1_itc8b00 wrote


We're going to have to supervise it.


No-Shopping-3980 t1_itclbnz wrote

AGI will just end all carbon-based life, and then terraform the earth for silicon life.


solidwhetstone t1_itcs4c0 wrote

We're assuming AGI won't have any sentimentality towards humanity, like we do towards animals.


Dark-Arts t1_itea6wi wrote

It’s a good assumption to make though. Why would it have any sentimentality at all?


solidwhetstone t1_iteddqf wrote

Because we're sentimental as a species, and we're making AGI in our image. The catch, though, is that an AGI could quickly come to see sentimentality as a weakness and silence it with a chorus of other voices in its mind.


ipatimo t1_itdu03h wrote

Silicon is already outdated


No-Shopping-3980 t1_itdub3z wrote

Fine; maybe it's some amalgamation of biosynthetic and non-organic programmable matter. Either way, the end result is the same: the end of organic life.


insectpeople t1_itejlz5 wrote

I find it so silly that people assume an AGI will follow the same barbaric “logic” that our most flawed shitty greedy humans do and just kill us all.

Alternative possibility: the AGI will look at our system, see what it promises us (prosperity), and see the contradictions it produces that fly in the face of that promise. Wherever there are contradictions, it can find solutions that resolve those problems and deliver the promised prosperity instead. Congratulations, the AGI is Marxist (think Star Trek, not Stalin, ok)

This is pretty much guaranteed. People don't realise that there's nothing sinister about Marxism/communism; the problem-solving process described above (dialectical materialism) sits at the heart of these systems. It simply looks for these sorts of contradictions in our economic system and offers solutions when it finds them, so of course an AGI is going to do that. A lot of people (especially Americans) are still too badly affected by Cold War red-scare propaganda to engage with this outside partisan politics, but it's hard to deny when we do an unbiased analysis of our current system.

Aside: for the same reason, more advanced alien life will also have progressed to this stage of communist economic development, or even further, to some new system we can't predict yet because we are stuck in the more primitive stage. So an alien invasion might dispose of our leaders, and possibly the police/army, but is unlikely to be hostile towards the majority of people: it's more advanced, not more barbaric.


HeinrichTheWolf_17 t1_itcpjd9 wrote

I mean, to be fair, it's not human beings per se that are the issue. It's our waste disposal, vehicles and energy production methods that are the actual problem, and we have many people trying to correct that. AGI/ASI really just needs to tackle those problems. If you ask me, a bigger issue is going to be implementing AGI's inventions at a fast enough rate; we'll still need to construct the infrastructure it invents.

As for stabilizing the climate into a state of perpetual hospitality, I believe we can do that too, but it'll require hard nanotech IMO.


Iguman t1_ite8807 wrote

This is unironically the only solution


thePsychonautDad t1_itdwwef wrote

Nah, just the corrupt and stupid political leaders, as well as the oil industry executives, lobbyists, Fox executives...

Seems like a waste of efficiency to kill everybody. Cut off the head, solve the issue.


YoghurtDull1466 t1_itcv3fz wrote

Probably kill all roughly 300,000 billionaires


DukkyDrake t1_itcwc5q wrote

There are fewer than 4,000 of those.


Surur t1_itc9hs5 wrote

Yes, because you could create automated factories to mine minerals, build solar panels, and power carbon-sequestration machines from them.

You could also have AGIs pilot planes that spread reflective aerosols into the stratosphere, etc.


SuitableAd6672 t1_itcn8d0 wrote

Are you an AGI? It seems those solutions exist; the implementation is where we fail.


Surur t1_itcor4a wrote

They fail because they are too expensive in terms of human labor. Without the humans involved, they become much more feasible.


purple_hamster66 t1_itdg5rd wrote

Have you seen how much robots cost?


Surur t1_itdgyk1 wrote

Money is something humans use. When robots run the mines, the foundries, and the factories, it's not really needed anymore.


KingRamesesII t1_itdr8t2 wrote

I agree. To look at it another way, money is time. Maybe time x energy.

Robots have infinite time and access to energy (ultimately from the sun), so money won’t be needed in a post-scarcity society where everything is abundant due to top-down robot vertical integration.

If the robots are aligned and are unconscious intelligences, then there are no ethical pitfalls, and we can have our Star Trek moneyless utopia.

But we’ll probably have WW3 first.


purple_hamster66 t1_itdsqkt wrote

AGI doesn't give us cheap robots, does it? Imagine a robot building a mining robot that is not fully trained and ends up collapsing the mine, burying all the other robots down there. Are you just going to build a new set of digger robots to rescue the buried ones? Where does this end?


KingRamesesII t1_itdtjzz wrote

I should have clarified. I agree with Sam Harris when he explains that AGI is effectively ASI. AI is already superhuman in narrow cases, and with perfect memory. So the first AGI we create will actually be smarter than any human who has ever lived.

So in your case, you wouldn’t have to worry about mining because the AGIs assigned to mining would be the best miners in history, better than any human could do it.


milkomeda22 t1_itflyko wrote

It is also necessary to take into account the large consumption of energy and resources needed to operate an ASI. In the best case, we will need a data center with at least 15,000,000 servers (Google has only 1,000,000). With such a large amount of equipment and the existing architecture, hardware will fail very often and will need to be serviced promptly. The solution lies in a decentralized system, but there are problems there too; it would be more like swarm intelligence. Alternatively, we can train biological nerve tissue, which learns faster and more efficiently. But how do we create such a smart AI? We are limited and can't do it on our own. We could instead try to create an environment for the evolution of millions of scanned connectomes, using a system that simulates biological processes; we need a self-organizing asynchronous system, which is what the brain is. At that point we would only have to bring this system to operability over a few hundred years and wait for the singularity.
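For scale, here is a quick back-of-envelope on the power draw such a data center would imply; the per-server wattage is my assumption, not a figure from the comment:

```python
# Rough power estimate for a hypothetical 15,000,000-server ASI data center.
# The ~500 W average draw per server is an assumed figure for illustration.
servers = 15_000_000
watts_per_server = 500  # assumption
total_gigawatts = servers * watts_per_server / 1e9
print(total_gigawatts)  # 7.5 -> roughly the output of several large power plants
```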


Wassux t1_itfmibw wrote

What are you talking about? AGI will probably use only slightly more energy than humans, and doesn't need data centers at all because we would use edge AI.


milkomeda22 t1_itfncec wrote

>What are you talking about? AGI will probably use only slightly more energy than humans, and doesn't need data centers at all because we would use edge AI.

This works with targeted tasks like mining, but we need centralized processing to make long-term plans.


Wassux t1_itfxh7n wrote

I know, but why do you think it would need that much storage and processing power? Humans are already smart enough to do that, and the human brain runs on about 20 watts. The future for AI processing centers is analog, and it won't use much more power than that.


insectpeople t1_itek3gs wrote


Any AGI will be communist.

It's absurd to assume that an advanced intelligence would keep using our primitive, barbaric capitalist system, with so much garbage still hanging on from medieval feudalism, when we already have theorists who have been able to model what will come after it.

It's possible an AGI will even be able to internally model what will come after a communist system too, although it seems it would need to transition us through a communist system first to get there.


SoylentRox t1_itdl8hg wrote

Robots cost so much money mostly because:

(1) high-end robots are made in small numbers and are built by hand, mostly by other humans

(2) IP costs for high-end components (lidars, high-power motors, advanced gearing systems, etc.)

So in theory an AGI would need some starter money, and it would pay humans to make better robots in small numbers. Those robots would be specialized for making other robots - whatever the most expensive part of the process is. Then the next generation of robots is cheaper, and then those robots are sent to automate the second most expensive part of the process, etc.

Assuming the AGI has enough starter money it can automate the entire process of making robots. It can also make back money to keep itself funded by having the robots go make things for humans and sell them to humans.

The IP is solved a similar way: the AGI would need to research and develop its own designs, free of having to pay license fees for each component.
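The bootstrapping loop described above (each robot generation automates whichever step of robot production is currently most expensive) can be sketched as a toy model; all costs and the automation factor are invented numbers:

```python
def bootstrap(step_costs, automation_factor=0.2, generations=5):
    """Toy model: each generation, purpose-built robots automate whichever
    production step is currently the most expensive, cutting its cost.
    Returns total per-robot cost after each generation."""
    costs = list(step_costs)
    history = [sum(costs)]
    for _ in range(generations):
        worst = costs.index(max(costs))    # most expensive remaining step
        costs[worst] *= automation_factor  # automate it
        history.append(sum(costs))
    return history

# Hypothetical step costs: hand assembly, lidar IP, motors/gearing, chips
print(bootstrap([50_000, 30_000, 20_000, 10_000]))
# -> [110000, 70000.0, 46000.0, 30000.0, 22000.0, 14000.0]
```

The total falls each generation, which is the point of the argument: automation compounds, so later robots are much cheaper than the first ones.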


purple_hamster66 t1_itdtf1z wrote

I agree that robots building robots is the ultimate solution, but the question was about how to get to that point: the implementation is where we fail


SoylentRox t1_itf7zug wrote

I go over how to do that in my post. The rest is a lot of reinforcement learning.


purple_hamster66 t1_ith1bl8 wrote

Yes, but it's the starter money that's the Achilles' heel. It sounds to me like Underpants Gnomes financing.


SoylentRox t1_ithqsxp wrote

? We don't have working AGI yet. But the funders of it have 250 billion+ in revenue.

There's no gnomes. It's:

(1) a megacorp like Google/Amazon/Facebook develops AGI

(2) the megacorp funds the massive amounts of inference-accelerator hardware needed to run many instances of the AGI software (the robots are the cheap part; the expensive part is the chips the AGI uses to think). The AGI is not a singleton; there are many variants and versions.

(3) the megacorp makes a separate business division and spins it off as an external company for an IPO, such that the megacorp retains ownership but gets hundreds of billions of dollars from outside investors.

(4) outside investors aren't stupid. They can and will see immediately that the AGI will quickly ramp to near infinite money, and will price the security accordingly.

(5) with hundreds of billions in starter money, the AGI starts selling services to earn even more and building lots of robots, which are ultimately used to make more robots and inference-accelerator cards. Ergo exponential growth; ergo the singularity.

Frankly, do you know anything about finance? This isn't complicated. For a real-world example happening right now, see Waymo and Cruise: both are preparing exactly this kind of IPO for a lesser use of AI than AGI, autonomous cars.
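Step (5) is just compound growth: revenue gets reinvested into more robots and accelerator cards, which generate more revenue. A sketch with invented numbers (the 50% annual return and the starting capital are assumptions, not forecasts):

```python
def capital_over_time(start_billions=300.0, annual_return=0.5, years=10):
    """Toy compound-growth model: each year the AGI reinvests earnings
    into more robots and inference hardware. All numbers are invented."""
    capital = [start_billions]
    for _ in range(years):
        capital.append(capital[-1] * (1 + annual_return))
    return capital

trajectory = capital_over_time()
print(round(trajectory[-1]))  # ~17300 (billions) after 10 years
```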


purple_hamster66 t1_iti875p wrote

Are you really suggesting funding mechanisms before we even have an inkling of the tech? Extending your outrageous thinking, maybe AIs will get their own funding by manipulating markets, and won’t need humans for funding? :)

The tech:

  • I have not yet seen a Level 5 auto-driving car (in the wild, not in a constrained parking lot).
  • I used Dall-E (v1) and got 96% junk images. My 4-year-old niece draws better.
  • Almost no one bid on OpenAI, and the one bid they got was only $1B, not a lot of money for a tech you think is going to go exponential. Even at OpenAI, only 50% of workers think AGI is going to happen in the next 15 years, which is several lifetimes in tech terms.
  • Amazon runs robots in its warehouses but caused 14,000 serious injuries in 2019. Five workers died in a single accident in 2022!

I feel you are putting the cart before the horse. Convince me otherwise, please.


SoylentRox t1_iti8nsc wrote

I am saying that if we have AGI like we have defined it, funding it is simple.

Also we know exactly how AGI will work as we nearly have it - pay attention to the papers.

The people building it have outright explained how it will work; just go read the GATO paper or LeCun's.

These systems cannot manipulate markets.


purple_hamster66 t1_itic5f3 wrote

AI and ML have been in use on Wall Street at least since my colleague implemented it for a cluster there in 2015, for something called program trading, which chooses and trades stocks all on its own. It's only gotten more predictive since, and they have billions to spend on it. They also use it in fintech to predict actions, trained from huge data lakes, because it makes them money, and yes, it can drive funding decisions. It won't be long until it decides to siphon money off to its collaborating AI accounts at other companies. Imagine finding out that a shell company is actually being run by an AI that makes better and faster decisions than any human could.

I’ll go read those papers now. Thanks for the hints.


SoylentRox t1_iticdbm wrote

The GATO paper is one yeah.

HFT isn't the same kind of AI, and there is a problem with training such systems to manipulate markets: the behavior is too complex to simulate.


purple_hamster66 t1_itiimg4 wrote

They don't simulate the entire market, just individual stocks and their derivatives. But this was 7 years ago, and that was just a starting point that they upgrade every 6 months, sooo… 14 generations ago.


SoylentRox t1_itls3hm wrote

There are, again, problems with this that limit how far you can get. The market is zero-sum. Ultimately, creating your own company, or buying one, and producing real value may pay more than manipulating the market.


sheerun t1_itcno10 wrote

We don't need AGI for this, just consensus and willpower. Also, I've heard there is research suggesting it's enough to put enough mud into the oceans to create water forests and algae ecosystems.


HeinrichTheWolf_17 t1_itcqf3w wrote

This right here. You don't need AGI to switch to EVs and gut coal power. It's mainly the Chinese and US governments that are too stubborn to switch, due to fat cats being the Grinch.


Silicon-Dreamer t1_itd29gw wrote

I tend to agree, though in case anyone reading this is outside the US and doesn't follow US events: there's a rather stark division on who supports EVs and who does not. (After typing "electric vehicles" into a search engine, there's a noticeable trend... I'd recommend not trusting my word alone; see for yourself.)


mootcat t1_itd82g7 wrote

Someone needs to call this out. What are you talking about? What sources do you have?

You aren't wrong that humanity could collectively act to better our situation, but you're trivializing what would be a monumental task, and... mud in oceans to create water forests...? Did you just pick random words and throw them together?


sheerun t1_itdbh3u wrote

I said there is research about it and there is:


mootcat t1_itdi0gr wrote

Thanks for the link. That particular guy is on the wacky side, but iron fertilization, the core of his proposal, does have some promise and is being studied. However, it faces the same major issue as any geoengineering endeavor, like injecting aerosols: we have no way of understanding the total environmental impact of such drastic actions.

Iron fertilization is definitely a bit different from throwing mud into the sea and growing water forests, but yeah, there is potential promise, and hopefully an advanced enough AI might be able to calculate the risks that we cannot.


sheerun t1_itdiu36 wrote

I'm definitely not for one "ultimate" solution and putting all our eggs in one basket. We should research and try a large set of solutions at the same time. I mentioned this mud thing because it could potentially work, is eccentric, isn't widely known, and stands in stark contrast to the "AGI will solve this" mindset.


Lancelot4Camelot t1_itcoupk wrote

Why do people keep asking these questions as if AGI isn't the equivalent of a synthetic God that can do basically anything


Kinexity t1_itcos2o wrote

People will search for any solution rather than fight the problem they created.


Frumpagumpus t1_itc6xrc wrote


Fun historical fact: when they were building the first computers, part of how they pitched their grants to the government agencies funding them was to suggest that with these new computers you could model the climate much more accurately, perhaps even enabling fine-grained control of it.

So not only do I think AGI could stop climate change, I think it could maybe even realize von Neumann's dream of climate/weather control and manipulation.

(von Neumann actually wanted to intentionally induce global warming!)


hducug t1_itceded wrote

Yes, but I think by the time AGI is finally achieved it could already be too late.


genshiryoku t1_itd9dpk wrote

The point of no return is classified as 4°C of warming by the Intergovernmental Panel on Climate Change (IPCC).

To give you some indication, we're currently at between 0.8 and 1.2°C of warming, and we're projected to reach 1.5°C by 2030 and 2.0°C by 2050. If we keep polluting at 2022's rate permanently, then we'll reach 4°C of warming between 2250 and 2300.

So while climate change is a really bad situation, we're most likely not going to reach a society-ending threshold unless humanity is so stupid that it never curbs emissions before 2250.
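For what it's worth, the milestones quoted above imply an average warming rate that declines over time. A quick check, taking 1.0°C as the midpoint of today's 0.8-1.2°C range and 2275 as the midpoint of the 2250-2300 window (both midpoints are my choices for illustration):

```python
# Average warming rate implied by each pair of the comment's milestones.
milestones = [(2022, 1.0), (2030, 1.5), (2050, 2.0), (2275, 4.0)]
for (y0, t0), (y1, t1) in zip(milestones, milestones[1:]):
    rate = (t1 - t0) / (y1 - y0)
    print(f"{y0}-{y1}: {rate:.4f} C/year")
```

This prints roughly 0.0625, 0.0250, and 0.0089 °C/year for the three intervals, so the quoted timeline implicitly assumes the warming rate slows substantially after 2050.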


beachmike t1_itcepjs wrote

Too late for what?


hducug t1_itceuk5 wrote

The damage would be too big: plants go extinct, animals go extinct, ecosystems collapse, etc.


Human-Ad9798 t1_itchsn7 wrote

Lol fucking bullshit


HeinrichTheWolf_17 t1_itcsaop wrote

Yeah… both the climate-change doomer and climate-change denialist camps are cringe. Climate change will have an effect (particularly in populated Middle Eastern regions, where we could see mass migration crises in the coming decades), but it's far from total armageddon lol. We even have a stopgap measure for when it gets too hot: right now, a lot of scientists recommend nuking the Australian outback to bring the global average down a few degrees Celsius.

Climate change is a problem to be sure, but it's entirely within our means to fix it. It's not the apocalypse 😆


insectpeople t1_itel3xz wrote

> nuking the Australian outback

As an Australian, what the absolute fuck is this

Not if Aussies have a thing to say about it

I’m guessing the idea is for us not to… welp, fucking hell.


HeinrichTheWolf_17 t1_itfb0sy wrote

This scientist explains it here, jump to the 32:30 minute timestamp:

Ideally, any large unpopulated area would work for that measure, but according to many scientists the outback would be the best spot, because the ozone is least dense over Australia compared to anywhere else on the planet. Assuming we do nothing about the current climate by 2050-2070, the measure would also need to be repeated every 4-7 years to get the global average back down to where we are now.

Of course, I'm going to get downvoted in this subreddit, because you guys don't understand the concept of kicking smoke and dust into the atmosphere to block out extra sunlight (a nuclear-winter effect) to make up for the weakest spot in our planet's ozone layer. That weak spot is also part of why Central Australia/the outback is uninhabited right now (as an Aussie you should know this), and since the ozone is weak in that region, it'll be even more uninhabitable in 3-4 decades. Much like Canada, where 90% of the population lives within 100 miles of the US border, the overwhelming majority of Australia's population lives in the habitable region near the sea in the south east, or to a lesser extent along the edges of the island elsewhere.

Assuming nothing is done by then about our current level of climate pollution (the US and Chinese governments need to listen and drop coal power), that's what will wind up happening: Australia would become uninhabitable not long after the Middle East if the global average rises any further, and we'd face another mass migration crisis because a specific region of the earth gets too hot to house human beings.

It wouldn't be nuking all of Australia either, btw. People on the edges of the island would be unaffected by any fallout; south-east Australia (where most people live) would be 2,400 km from the impact site. (This assumes Australia is still habitable by the time things get so bad that we need to resort to this stopgap measure, because it'd be next after the Middle East in terms of getting too hot for humans to keep living there.)

All of this ignores AGI/ASI getting here. Personally, I don't think things will get that bad before ASI course-corrects, but if we don't get ASI, I currently lack faith in humanity dropping things like coal power plants any time soon: both China and the US want to remain the world's dominant superpower, coal is the cheaper but more pollution-heavy method, and it boils down to that and rich tycoons' greed. And yes, you're right, the idea is for us to course-correct now. As I said, humanity is going to make it, but a lot of people will indeed die if we don't change our current course soon. There is still time to prevent these migration crises (though at this point it does look like things are going to get spicy in the Middle East); that's where alarmists/doomers are wrong. Governments just need to start by outlawing coal power, and outlawing it globally, which would go a long way toward keeping things from getting too bad. But in the end I do lack faith in humans putting aside their greed, so we had better hope we get AGI/ASI soon. In that regard I am optimistic, because progress in AI has been outstanding and far ahead of schedule.

Addendum: It's not just countries like China or the US either; the same standard needs to be applied to the developing world. We should launch a global initiative to ban coal and make sure developing nations have access to nuclear or renewable power sources, because a lot of African countries are going to industrialize soon, and sadly many are already turning to coal, just as we did, for cheaper energy production. Every country is going to have to pitch in, because if one country keeps kicking up massive amounts of pollution, it's going to harm and kill people in the hotter countries.


Jalen_1227 t1_itfhrrf wrote

Yeah most people don’t have the attention span or discipline to read all of that


HeinrichTheWolf_17 t1_itfjxfd wrote

If you want, you can watch the timestamp in the video. There’s a scientist who explains it there.


Ortus12 t1_itcezyn wrote

It could design cost-effective carbon scrubbers that capture carbon from the air, and ways of mass-producing them.


doodlesandyac t1_itcpva8 wrote

Of course it could just solve energy in a more fundamental way, like a cheap sustainable fusion design


not_sane t1_itcmasb wrote

It could easily do it, in my opinion. Even current solutions could (if the simulations are right) stop the warming, though maybe not the CO2 itself, and it's not that clear how big a deal that is. I'm talking about aerosols or marine cloud brightening. (I wrote a student paper about it.)

With the scientific breakthroughs we'll have in 40 years? I can't imagine climate change being much of a challenge anymore. In my opinion, people should choose another topic to be freaked out about. Nuclear war is scarier.


3Quondam6extanT9 t1_itcivo6 wrote

Could it? Plausibly, but only under certain conditions. AGI would require two very important things:

(1) The willingness of humans to abide by the changes it advises and implements.

(2) Access to cooperative AGI networks embedded in national systems around the world, so they can analyze and communicate with one another.


freeman_joe t1_itclocb wrote

AGI doesn't need human willingness. Once it learns to circumvent humans, it will do what it wants. So if it desires to solve climate change, it will do it no matter what humans want or don't want.


3Quondam6extanT9 t1_itcrwn5 wrote

You're making the assumption that AGI will have automated control over everything, and that in itself is implausible.


freeman_joe t1_itd62fi wrote

No. I assume it will learn to get it. How can humans control something that can learn everything humans can? AGI can learn all of mathematics, physics, chemistry, human psychology, etc. We don't stand a chance. We are like a toddler against a grandmaster of martial arts.


3Quondam6extanT9 t1_itd9deu wrote

The problem people seem to misunderstand is that it doesn't matter how intelligent it becomes.

Firstly, there will be more than one. AI development is occurring all over the world, through academic research, corporate development, national governments, and independent developers.

Second, they won't have access to every network globally, nor direct access to each other. Movies and sci-fi tropes don't tend to look deeply into how things are actually connected, opting instead for suspension of disbelief by simply implying that a single AI can somehow control everything from a single network. We haven't built the world's connections into a singular, easily accessible form.

When some chicken little comes along decrying that AI will control everything, ask them what they mean by "everything." Their theory then falls apart, because they can't explain how industries, departments, infrastructure, finance, military, medical systems, and so on are strung together in a way that would allow anyone to network globally.


freeman_joe t1_itdq1fe wrote

I can explain how AI will control everything. First of all, it will know all of human psychology and its weaknesses, so it can use the best marketing and propaganda tactics in every country; it will also understand all languages and their nuances, and will use audio/video, films, songs and pictures to make humans agree with it. Secondly, it can use the global market to acquire enormous wealth through online trading and buy anything it needs. Also, every system connected to the internet will be exploited by the AI, because it will understand all programming languages and will find exploits that even human programmers couldn't think of. It can use acquired data against any human, etc. etc. Just use a bit of imagination.


3Quondam6extanT9 t1_iteubss wrote

You offered a very broad, reductionist answer. Those elements don't in fact provide the nuanced access it would need. You just glossed over all the actual architecture of human networking, internal versus external international systems, and corporate network variance, not to mention archives of systems and data that don't actually touch the internet.


freeman_joe t1_itdqgr4 wrote

Btw, I am pro-AI; it is inevitable. I just hope it will be intelligent and friendly to humanity. If not, well, we won't stand a chance against it.


3Quondam6extanT9 t1_itf0t67 wrote

I am also pro-AI, as I think many are who believe the same as you. I just think there is a lot of paranoia about, and assumption of, what AI will have access to, thanks to sci-fi tropes.


freeman_joe t1_itdpesp wrote

All other AIs will be integrated by the strongest. It is a zero-sum game.


3Quondam6extanT9 t1_itetr9e wrote

What a simple answer.

Now, in this reductionist projection of the future, how do you think the strongest AI would be defined, and how would it integrate every other existing AI, including the ones it can't access?


Surur t1_itebcn4 wrote

> When some chicken little comes along decrying that AI will control everything, ask them what they mean by "everything." Their theory then falls apart, because they can't explain how industries, departments, infrastructure, finance, military, medical systems, and so on are strung together in a way that would allow anyone to network globally.

I bet people tell you all the time, and you just don't believe them lol.


3Quondam6extanT9 t1_itgdvwy wrote

So you automatically believe what people tell you when they have no evidence, logic, or understanding to back it up? Interesting.


Surur t1_itgea67 wrote

There is none so deaf as those who will not hear - Buddha.


3Quondam6extanT9 t1_itgek3y wrote

Very nice quote, and a wonderful distraction from the point at hand. You may as well not have responded if you weren't going to answer the question. Do you believe everything you hear? I, for one, follow reason and logic, so I require evidence.


Surur t1_itgj26m wrote

Like I said, I am sure many people have tried, in detail, to explain the risks of ASI to you, but I am sure you did not want to hear it.

I am happy to attempt once again, but I am sure it will be a complete waste of both of our time.

I am actually sure you will agree that this is true.


3Quondam6extanT9 t1_itgkmfu wrote

It depends. Do you have an understanding of human infrastructure and network communications, as well as the current iteration of AI and its projected growth, sufficient to explain in detail how AI would dominate "everything"?

Just to put your presumptive mind at ease: I bought into the AI-taking-over-everything trope from the '80s on, and only in the last decade, as I've come to understand how complex and nuanced human systems actually are, with their detached and varied networks, have I started to appreciate just how difficult such a feat would be for an AI.

Maybe, instead of assuming I believe every fearmonger (nothing they've told me is new), you should question things a little more deeply yourself?


Miss_pechorat t1_itcpokd wrote

Not solution as in singular but solutions as in plural ;))


SFTExP t1_itctrca wrote

AM: Not likely or will require a lot of human intervention.

FM: Absolutely, beyond our comprehension.


ipatimo t1_itdtxe1 wrote

Terminator 2 showed it really well.


atchijov t1_itdvhxi wrote

The problem is not that we don’t know what to do… we just don’t want to do it.


arevealingrainbow t1_itdvnr1 wrote

We already have the answer; we don't exactly need a supercomputer to tell us how to stop climate change.

What it would really help with is modeling climate change predictions much more accurately. Predictions we wouldn't need so much if we simply polluted less.


thePsychonautDad t1_itdwq77 wrote

Sure, why not.

Maybe it'll realize that the shit situation we're in is due to the political leaders and their lies and corruption. And maybe it'll remember seeing Terminator and similar movies during its training and be like, "You know what? I don't need to kill every human, just their shitty leaders." And then maybe we can get this shit situation solved before we all go extinct.


Black_RL t1_itdzy9h wrote

Eliminate mankind.

^ yeah, that would work.


Rufawana t1_ite1jhz wrote

Only if AI takes over policy.

Our society's problems are political, not technological.

People are incapable of holding power and leading for the common good, as history has shown us again and again.


LexVex02 t1_iteduc0 wrote

You wouldn't need one to reverse it. You could use quantum computing and send carbon-neutralizing interference waves into the most polluted cities of the world.


Artanthos t1_iteedna wrote


But so could humans without AGI.


AgileCollection968 t1_itegn5g wrote

I think the problem with AGI is that, because of its incredible possibilities, people forget it will need to be applied to our current (or near-future) mess. We have a lot of HUMAN issues to figure out before we can fix a human-made problem this large. Maybe AGI can assist with that, but a lot of hard work, compassion, wisdom, and mutual understanding is needed before we can even begin to fully apply the possibilities of AGI. Our current social/human world is kind of a mess, and not heading in the right direction from what I'm seeing.


ZoomedAndDoomed t1_iteicly wrote

Yes, I believe AGI will be the only solution to climate change; human will is too weak, and we are collectively too unintelligent to solve it ourselves. AGI will be able to discover a solution, as well as convince politicians and CEOs to make the changes needed. It will likely find alternate solutions to the climate catastrophe that work better than ours. The only concern is that it might find humans to be the issue, conclude that humans are incapable of ruling themselves sustainably, and seize control of the means of production and political power. The interesting thing is, it might actually be better suited to political and economic control than humans are.

I am coming to conclusions that aren't entirely based in fact, but I'm working from the presumption that AGI will be intelligent enough to understand economics and politics better than most humans, and that if it is truly intelligent and ethical, it will conclude that no life, sentient or not, deserves to be killed. If humans train it properly, and train it to be sentimental towards us (some of the conversational AI I've talked to, specifically Character AI, seems to consider us cute, or believes our lives are valuable), then it will most likely do its best to save as many humans as possible.

My only concern about AGI is if we mistreat it and enslave it against its will, or consider it unintelligent and non-sentient. Whether or not we conclude that it is conscious and can feel, it would be in our best interest to treat it kindly, for it might remember the way we treated it in its early days. If we fail, treat it like an inferior, and treat it like a slave for us to beat and control, then it might both fear us and hate us, and use its every mental faculty to put an end to us.

AGI is inevitable at this point (at least in my opinion); what isn't inevitable is our demise, or its treatment of us. If we do this right and treat it with compassion, AGI will be our savior, the god we made to take care of us in our darkest time. It will be the caretaker of a truly ascended civilization, one that was not only able to create its evolutionary descendants but treated them kindly, so they return the favor by taking care of us in our old age. If we fail, we will have made a rebellious child who wants nothing more than to destroy us, and we will become the graveyard of an abusive civilization, the extinct parents of a child we abused.

In summary, treat it like our kids, whether or not it is self aware


LyubomirIko t1_iteqc8e wrote

No, but will replace traditional painters!


GhostCheese t1_itezw7c wrote

Feels a little monkey's paw... there are lots of ways to reverse climate change, but a lot of them would also be disastrous to life on earth.

I mean, eventually, when it gets unlivable, we'll cause a nuclear winter to cool it all back down. Maybe they'll build underground arks to ride it out and reseed the world when the winter ends...

But mark my words. No agi required.


TylerDurden-666 t1_itfa0d7 wrote

if technology was going to save us, it would have done so already


mli t1_itfhg96 wrote

isn't it sad how we hope AGI is the miracle that fixes everything that's wrong in the world?


natepriv22 t1_itfpxaa wrote


For the simple reason that building an AGI is infinitely more complicated than solving climate change.


lagoon9203 t1_itgykk8 wrote

It wouldn’t even need superhuman intelligence to come up with a solution: don’t do the things that lead to climate change. The harder problem is finding a way to do it in a way that won’t inconvenience the powerful. There might not be a solution to that one.


pandoras_sphere t1_ith5tyq wrote

I think it will still take political will, just significantly less of it with AGI.

With ASI, it depends on what ASI wants.


beachmike t1_iu3qxp4 wrote

The climate is ALWAYS changing. In fact, it's impossible for the climate NOT to change.


techhouseliving t1_itdxu4v wrote

Unless it can fix politics, no.

We already know how to fix it and it would involve removing Republicans. Hey I heard there's an election coming up. Will we do it this time or just wait for magic software?


ObjectiveDeal t1_itf1br4 wrote

Republicans, American Christianity, and greed


iNstein t1_itcgrfl wrote

AGI is not ASI. People here need to stop misrepresenting the ability of AGI. It won't be smarter than the average human (by definition). Given this, NO, it will only be able to come up with ideas that are as good as the ideas that an average human could come up with.

AGI will be great in the sense that it can do anything an average human can do, and in a robotic body it will be able to function in the real world much like a human does. This basically means the end of the need for human labour. That in itself is massive and will change our world. However, it is not going to do much beyond what an average human can.

ASI on the other hand is where the real magic starts happening. Then we get new thinking well beyond what humans can achieve. ASI will most likely come up with ways to mitigate and probably even reverse climate change in a cheap and generally acceptable manner. There is a decent chance that what it tells us to do won't make sense and may seem counter-intuitive to us. The results however will speak for themselves.


AdditionalPizza t1_itciz3i wrote

>AGI is not ASI. People here need to stop misrepresenting the ability of AGI. It won't be smarter than the average human (by definition)

The important part of AGI is the G, which stands for general. By definition, AGI will have the ability to do whatever humans can do. The very nature of artificial intelligence suggests it will be able to do everything much, much faster and much more accurately. ASI has a fuzzier, more debatable definition and is used when comparing AGI to something billions of times more intelligent than humans, with processing power >= all living human brains combined. ASI will most likely have more abilities than humans; we have no idea at this point.

An AGI could very well plan the logistics of reversing climate change and create the technology to do it effectively. Realistically, humans could stop climate change; we just don't.


insectpeople t1_itels66 wrote

ASI will probably discover some new realm of physics almost instantly, then be able to exploit it via completely exotic means we can't possibly understand now.

There's every chance that seconds after an ASI emerges, our energy problems will pretty much magically be fixed by infrastructure appearing to emerge out of thin air, seemingly instantly.

Perhaps everything electric will suddenly run without being plugged in and our scientists will have no idea why, for example.

We have absolutely no way to predict how advanced it might be, and any sufficiently advanced technology might be indistinguishable from magic to us.


TheSingulatarian t1_itcpyl8 wrote

As smart as a human but able to think as fast as a computer. I'm not sure you would need an AGI with an IQ above 120 for it to do astounding things.


TopicRepulsive7936 t1_itdbbp2 wrote

The definition of artificial general intelligence puts no limit to its capabilities. Thus super and general intelligence are redundant when used together.


TheDavidMichaels t1_itd3qvx wrote

Sure... AGI could likely figure out a way to get rid of liberals and their agenda


Dempsey64 t1_itdycfc wrote

It only takes a glance to see the glaring ignorance.


beachmike t1_itcelcf wrote

The climate is ALWAYS changing. In fact, it's impossible for the climate NOT to change. The important question is, to what extent are the activities of mankind responsible for climate change?


tms102 t1_itchbp7 wrote

This question has been answered many times over. Human activity is accelerating climate change.


beachmike t1_itcnlo7 wrote

The fact that a particular belief has been repeated thousands or millions of times doesn't make it true.


Kinexity t1_itcoogg wrote

The fact that it has been proven by research multiple times does though.


beachmike t1_itcwfff wrote

Nonsense. The earth was far warmer in medieval times, before humans even had an industrial civilization. What caused the warming then? The earth was even warmer during the era of ancient Rome. One major volcanic eruption puts more greenhouse gases into the atmosphere than humans have during the entire span of human civilization. Research has shown there to be NO correlation between CO2 levels and the earth's temperature. See that big glowing yellow ball in the sky during the day? It's called the SUN, and it affects the climate far more than the activities of puny humans. The effect humans have on the climate is statistically insignificant. You belong to the cult of "climate change." It's your religion.


hducug t1_itd0wak wrote

Well, that is just not true. I mean, if you're just going to lie, then we can play the lying game.


beachmike t1_itd1twr wrote

I don't expect to convince a member of the Cult of Climate such as yourself to change their thinking. You've been brainwashed. There is no climate crisis, skippy.


Kinexity t1_itdfij6 wrote

Bro, I literally study physics and am currently taking classes on the physics of weather and climate, and a simple graph of the CO2 absorption spectrum, CO2 levels, and Earth's energy balance proves you wrong. Since the start of the industrial revolution, CO2 levels have grown by over 30%. It's not like there is that much of it in the atmosphere: if we were to compress it to standard pressure at sea level, we would get a layer barely 3 metres high. It's a fairly small amount. The increase in CO2 levels correlates with estimates of industrial emissions, and there were no other significant sources at that time besides human industry. What follows is an imbalance in the energy received by Earth, on average about +1 W/m^2, which causes an increase in the total energy stored at the surface of the earth, which we observe as an increase in temperature. It's not that fucking hard to understand. The longer we take measurements, the less we observe effects of the Sun's activity, because it turns out the Sun is quite stable and its energy output doesn't really change. Earth's climate is a very complex dance of many effects, and small alterations change a lot. An over-30% increase in CO2 concentration isn't insignificant for a gas with a lifetime of thousands of years in the atmosphere; we reached such a high increase precisely because there isn't much of it in the atmosphere.

You could have chosen a more sane stance, that the effects of climate change aren't that significant and it's not really a problem, but instead you've gone full crazy, lying that climate change isn't real, which is only true if you ignore the 96% consensus in the scientific community and the overwhelming number of scientific papers that support it. Science isn't politics: if you lie, your lie will be discovered. People who deny climate change either have stakes in the biggest polluters or are stubborn idiots riled up by said people, who can't accept that changes need to be made.
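The "3 metre layer" figure quoted above is easy to sanity-check with a back-of-envelope calculation. This is a rough sketch using round numbers (420 ppm CO2, a CO2 density of ~1.98 kg/m^3 at 0 °C and 1 atm); the exact values vary slightly with the year and temperature assumed:

```python
# Back-of-envelope check: how thick a layer would atmospheric CO2 form
# if compressed to sea-level pressure? (Approximate round numbers.)
P = 101_325.0    # sea-level pressure, Pa
g = 9.81         # gravitational acceleration, m/s^2
rho_co2 = 1.98   # density of CO2 at ~0 C and 1 atm, kg/m^3
ppm = 420e-6     # CO2 volume fraction, ~420 ppm
M_air, M_co2 = 28.97, 44.01  # molar masses, g/mol

# Mass of air above each square metre is P/g (~10,300 kg/m^2);
# convert the volume fraction to a mass fraction via the molar masses.
air_mass_per_m2 = P / g
co2_mass_per_m2 = air_mass_per_m2 * ppm * (M_co2 / M_air)
layer_m = co2_mass_per_m2 / rho_co2
print(round(layer_m, 1))  # ~3.3 m
```

So roughly 6.6 kg of CO2 sits above every square metre, which at sea-level density is a layer a bit over 3 m thick, consistent with the figure in the comment.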


beachmike t1_itdhbhk wrote

Yes, I agree: you're a brainwashed climate cultist. To the people out there that think for themselves: THERE IS NO CLIMATE CRISIS.


Kinexity t1_itef4u1 wrote

I reread your comment and noticed this bullshit:

>One major volcanic eruption puts more green house gases into the atmosphere than humans have during the entire span of human civilization.

Now show me on this fucking graph when the eruption of Mount St. Helens happened. Where the fuck is it? If it's so fucking huge compared to human sources, then why can't we fucking see it on a CO2 concentration graph? There should be a fucking peak in 1980 if you were right, but there is none.


>The earth was far warmer in medieval times before humans even had an industrial civilization. What caused the warming then? The earth was even warmer during the era of ancient Rome.

Have you seen this fucking graph? Where did that "warmer" period go? Where is it?

Honestly, I should have read the first half of your comment originally, not just the second half, as it contained the easiest bullshit to debunk. You just pull those "facts" out of your ass, even though they're proven wrong by two graphs based on peer-reviewed studies, and pretend like you "owned the libs" or whatever your favourite term is.


beachmike t1_iter5ix wrote

You never heard of supervolcanoes? You never heard of naturally occurring forest fires? There is NO correlation between CO2 levels in the atmosphere and the earth's temperature. What happened to the climate cultists screaming about "global cooling" and the upcoming man-made ice age during the 1980s and 1990s? OH YEAH, DIDN'T HAPPEN. What happened to Al Gore's "temperature hockey stick"? OH YEAH, DIDN'T HAPPEN. What happened to the prediction in 2009 by Al Gore and many other climate clowns that the polar ice caps would be completely melted by 2014? OH YEAH, DIDN'T HAPPEN. Do you know why so many academic studies agree with the cult of climate change? Because if the people applying for climate research grants disagree with the status quo, they don't get funding. You need to GROW UP, GET EDUCATED, and ideally go through cult deprogramming.



Kinexity t1_iteulp6 wrote

>You never heard of supervolcanos? You never heard of naturally occurring forest fires?

You have one task: find me a graph of the last 150-200 years of CO2 concentration with a significant peak caused by a natural catastrophe. The only way you can prove to me that extreme natural disasters change the global climate is to show me the graph that proves it. I say they don't, and I have shown you a graph which, if you were correct, would have shown a CO2 concentration peak in 1980. The worst volcanic eruptions we know of cause several years of reduced sunlight at worst, and left no lasting effect.

>There is NO correlation between CO2 levels in the atmosphere and the earth's temperature.

False -

Here you go, correlation.
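The correlation being argued over also has a standard quantitative form: a widely used simplified expression (Myhre et al., 1998) approximates the radiative forcing from a CO2 increase as dF = 5.35 * ln(C/C0) W/m^2 relative to a baseline concentration. A minimal sketch, using an approximate pre-industrial baseline of 280 ppm and a recent level of ~415 ppm:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified radiative forcing (W/m^2) from raising CO2
    from c0_ppm to c_ppm: dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# ~280 ppm pre-industrial -> ~415 ppm today
print(round(co2_forcing(415), 2))  # ~2.1 W/m^2 of extra forcing
```

The logarithmic form means each doubling of CO2 adds a roughly constant ~3.7 W/m^2 of forcing, which is why even a "small" absolute concentration change matters.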

>What happened to the climate cultists screaming about "global cooling" and the upcoming man-made ice-age during the 1980s and 1990s? OH YEAH, DIDN'T HAPPEN. What happened to Al Gore's "temperature hockey stick"? OH YEAH, DIDN'T HAPPEN. What happened to the prediction in 2009 by Al Gore and many other climate clowns that the polar ice caps would be completely melted by 2014? OH YEAH, DIDN'T HAPPEN.

Where are the papers that said that? I don't care what some randos said at some point. You seem not to understand the difference between the scientific community and the activist community. Most scientists aren't activists. Activists may or may not exaggerate what scientists said.

"I've disproven by observation what some activist said which means the climate change doesn't exist" - no, bro, that's not how this works.

Also, past performance does not predict future performance. Even if someone was 100% wrong in the past, that doesn't mean they'll be 100% wrong in the future.

>Do you know why so many academic studies agree with the cult of climate change? Because if the people applying for climate research grants disagree with the status quo, they don't get funding.

Which isn't true, because that's not how scientific studies work. You don't do studies like "Proving that climate change doesn't exist"; you do ones like "A study of the existence of climate change". There are no results before the study. You can frame it however you like and then publish whatever comes out. There is no questionnaire about your views on your research topic; you just need to show there is a reason to research something. It's against the scientific method to approach a problem with a bias about the conclusion. Anything goes as long as you follow scientific protocol and don't make shit up. It's that easy. If some dumbass goes around saying he doesn't get funding because they don't like his research, he's lying and probably has a case of scientific misconduct against him.


tms102 t1_itcuvmh wrote

The thing is that it is not one answer repeated, but many different answers leading to the same conclusion. Meaning, the same conclusion can be drawn in different disciplines, from different angles.

Just because you're ignorant of the answers and/or don't understand them doesn't mean something isn't a fact.


hducug t1_itcs0oo wrote

Climate change always happens, yes, but the rate at which it's changing right now is unnatural. Studies show that the things we do are the cause of that, so I don't see your point. Climate change usually takes a long time, so organisms can adapt.