Gimbloy
Gimbloy t1_j96vdi1 wrote
Maybe. We run on prehistoric instincts and drives, which can lead to all kinds of bad things. Maybe augmentation will mean the rational side of ourselves wins out and we can improve things like willpower, self-reflection, and empathy.
Gimbloy t1_j676gzx wrote
Reply to Don't despair; there is decent likelihood that an extremely large amount of resources will flow from AGI to the common man (even without UBI) by TheKing01
A lot of wishful thinking with little evidence to back any of this up.
You assume that absolute power will not corrupt these philanthropists. You also assume that AGI won’t operate under winner-takes-all game-theoretic dynamics.
Gimbloy t1_j5s70bp wrote
Reply to Future-Proof Jobs by [deleted]
Monk.
Gimbloy t1_j47usci wrote
Reply to comment by gibecrake in Don't add "moral bloatware" to GPT-4. by SpinRed
Religion & Philosophy are going to be so important in the 21st century.
Gimbloy t1_j1o1utc wrote
Joke written by Hank Hill.
Gimbloy t1_j1nyg9a wrote
Reply to comment by diener1 in Will ChatGPT Replace Google? by SupPandaHugger
The problem with large organisations is that they become slow. Generally, startups are at the forefront of new innovation. It would take a shake-up on the scale of Steve Jobs returning to Apple to get Google to where it needs to be, imo.
Gimbloy t1_j1nldxt wrote
Reply to Will ChatGPT Replace Google? by SupPandaHugger
Yes, nothing seems more obvious to me. Google's whole value proposition was built on the PageRank algorithm (I recommend reading the original paper), and 90% of its revenue still comes from search. ChatGPT is like PageRank on steroids: it compresses information into knowledge, which is what people want when they google something.
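For intuition, here is a minimal power-iteration sketch of PageRank on a made-up toy link graph (the pages and links are purely illustrative; 0.85 is the damping factor suggested in the original paper):

```python
import numpy as np

# Hypothetical toy web: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic transition matrix: M[j, i] = probability of hopping i -> j.
M = np.zeros((n, n))
for p, outs in links.items():
    for q in outs:
        M[idx[q], idx[p]] = 1.0 / len(outs)

d = 0.85                     # damping factor from the original paper
rank = np.full(n, 1.0 / n)   # start from a uniform distribution
for _ in range(100):         # power iteration until the scores settle
    rank = (1 - d) / n + d * M @ rank

print(dict(zip(pages, rank.round(3))))
```

Each iteration just redistributes rank along outgoing links, so it ranks documents by link structure; a model like ChatGPT compresses the text itself, which is the "on steroids" part.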
Gimbloy t1_j1gqgqb wrote
Reply to comment by SurroundSwimming3494 in Hype bubble by fortunum
It doesn’t need to be full AGI to be dangerous. As long as it is better than humans in some narrow setting, it could be dangerous. For example, software companies like Palantir have shown that AI can determine who wins and loses a war; it has allowed Ukraine to outperform a larger country with more military might.
Then there are all the ways it can be used to sway public opinion, generate propaganda, and win in financial markets/financial warfare. The one I’m particularly afraid of is when it learns to compromise computer systems in a cyber-warfare scenario. Just as in Go or chess, where it discovered moves that boggled the minds of experts, I can easily see an AI suddenly gaining root access to any computer network it likes.
Gimbloy t1_j1g4bk2 wrote
Reply to comment by Sashinii in Hype bubble by fortunum
People have been downplaying AI for too long. Every year it gets more powerful, and people are still like “meh, still way off AGI super-intelligence!”, and they probably won’t change their minds until an autonomous robot knocks on their door and drags them into the street.
We need to start thinking seriously about how this will play out and start preparing society and institutions for what’s to come. It’s time to sound the alarm.
Gimbloy t1_j1fx7rl wrote
Reply to comment by Vitruvius8 in Meta AI announces high-level programming language for complex protein structure by maxtility
It all ultimately works by sets of rules and laws, which means that throwing more compute at it will eventually yield more secrets.
Gimbloy t1_j1fx2hg wrote
Reply to Meta AI announces high-level programming language for complex protein structure by maxtility
I wonder how much cool shit is going on behind the scenes that they don’t report to the media.
Gimbloy t1_izupp9n wrote
Reply to comment by __ingeniare__ in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
At some point in that gradual progression, AI must reach a level equivalent to a human though, right? Or do you think it just skips a few steps and goes straight to ASI?
Gimbloy t1_izuayb3 wrote
Reply to AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
So you’re inclined to think the hard takeoff scenario is more likely?
Gimbloy t1_iuckybk wrote
Reply to Which book would you choose if you could only read one for the rest of your life? by NubbyNob
Finnegans Wake. Might take a lifetime to decipher it.
Gimbloy t1_it1itob wrote
Reply to Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
There is some game theory at work here; the thinking is, “If we don’t develop it, our competitors will, and they’ll outcompete us.”
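A toy payoff table makes the trap concrete; the labels and numbers below are purely illustrative, a sketch of the reasoning rather than anything from the original comment:

```python
# Hypothetical payoffs for two labs, each choosing to develop or abstain.
payoffs = {
    # (lab_a_choice, lab_b_choice): (lab_a_payoff, lab_b_payoff)
    ("develop", "develop"): (1, 1),   # both race; risk for everyone
    ("develop", "abstain"): (3, 0),   # developer outcompetes the abstainer
    ("abstain", "develop"): (0, 3),
    ("abstain", "abstain"): (2, 2),   # safest collective outcome
}

# Whatever lab B does, lab A scores higher by developing (3 > 2 and 1 > 0),
# so "develop" is the dominant strategy for both.
for b_choice in ("develop", "abstain"):
    best = max(("develop", "abstain"),
               key=lambda a_choice: payoffs[(a_choice, b_choice)][0])
    print(f"If B plays {b_choice}, A's best response is {best}")
```

Both labs end up developing even though mutual restraint would leave both better off, which is the classic prisoner's-dilemma structure behind "if we don't, they will".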
Gimbloy t1_iruky6w wrote
Reply to comment by 4e_65_6f in Am I crazy? Or am I right? by AdditionalPizza
I’m astonished at how insecure, leaky and anarchic the internet is. I think a decade from now we will look back on the current internet as the Wild West: manipulation, hacks, spam, viruses, bots. Hopefully by then the internet will be a much nicer place where people come to vote, work and socialise.
Gimbloy t1_j9hi9qu wrote
Reply to What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
I think he’s probably too far on the pessimist side, but we need people presenting both extremes. The truth is usually somewhere in the middle.