__ingeniare__
__ingeniare__ OP t1_izwas4g wrote
Reply to comment by Zermelane in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Glad to see someone got it hahah. Yeah, that's something I thought about as well. It's a general "law" for any machine that the more specialized it is, the better it is at that task, and the more general it is, the worse it will be at any one given task, all else being equal.
I think the integration cost depends a lot on how interconnected the different capabilities must be. For example, it wouldn't be a huge leap to combine ChatGPT with Stable Diffusion or an image classifier since they use a common interface (language). But I don't know if that will be sufficient for AGI/ASI. I agree that it may turn out to be harder than expected and the performance would suffer as a consequence, good input!
__ingeniare__ OP t1_izw9jk2 wrote
Reply to comment by IronJackk in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
I was inclined to phrase it like that but I thought people might misunderstand.
__ingeniare__ OP t1_izw9dl4 wrote
Reply to comment by Gimbloy in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
It won't ever be equivalent to a human across the board; it will be simultaneously superhuman in some domains and subhuman in others, and eventually it will simply be superhuman everywhere. It would be human-level at some point in a narrow domain, but judging by current progress, it reaches superhuman levels in these separate domains long before we reach AGI. So, when these domains are fused into a single AI that can do everything a human can, it will also be superhuman at those things.
__ingeniare__ OP t1_izulilb wrote
Reply to comment by Accomplished_Diver86 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Ah, I see what you mean - I guess it depends on how strictly you enforce the generality of AGI.
__ingeniare__ OP t1_izujqe0 wrote
Reply to comment by Accomplished_Diver86 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
That is more a matter of word choice; the concept is the same. I called it narrow superintelligence because the fact that it is better than humans is important to the argument.
Let's call it narrow AI then - by the time it covers all the domains of human knowledge, it will also be significantly better than humans in all of those domains. Hence, when we get AGI, we also get ASI.
__ingeniare__ OP t1_izui5xn wrote
Reply to comment by Accomplished_Diver86 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Which part?
__ingeniare__ OP t1_izuhuhb wrote
Reply to comment by Gimbloy in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Well yes, but it's a bit more nuanced. What I'm saying is that the regular "takeoff" scenario won't happen like that. We won't reach a point where we have human-level AI that then develops into an ASI; we will arrive at AGI and ASI simultaneously. The reason is that AI development will progress as a continuous widening of narrow superintelligence, rather than as an intelligence progression across the board.
__ingeniare__ OP t1_izug0hr wrote
Reply to comment by Accomplished_Diver86 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
I don't think you fully understood my point; it is slightly different from the regular "self-improving AGI -> ASI in short time" argument. What I meant was that, as the narrow intelligence we have built is gradually combined into a multi-modal, large-scale general AI, it will be superhuman from the get-go. There won't be a period in which we have AGI and simply wait for better hardware to scale to ASI. We will build narrow superintelligence from the beginning, and gradually expand its range of domains until it covers everything humans can do. At that point, we have both AGI and ASI.
__ingeniare__ OP t1_izu745z wrote
Reply to comment by Cryptizard in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
It's hard to predict how efficient training will become, though. According to rumours, GPT-4 training has already started, and the cost will be significantly lower than that of GPT-3 because of a different architecture. There will be a huge incentive to make the process both cheaper and faster as AI development speeds up, and many start-ups are developing specialized AI hardware that will come into use in the coming years. Overall, it's hard to tell how this will play out.
__ingeniare__ OP t1_izu5acw wrote
Reply to comment by Cryptizard in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
True, it depends on where you draw the line. On the other hand, even something that is merely smarter than the smartest human would lead to recursive self-improvement as it develops better versions of itself, so truly god-like intelligence may not be that far off afterwards.
__ingeniare__ t1_iztqxxb wrote
Reply to comment by PC-Bjorn in Progress of AI art. by jlpt1591
Honestly, his "AGI 2045" is starting to look like a conservative estimate
__ingeniare__ t1_izilwf4 wrote
Reply to comment by MpVpRb in Microsoft CTO Kevin Scott: “2023 is going to be the most exciting year that the AI community has ever had” by ThePlanckDiver
Doesn't live up to the hype? Have you been living in a cave this year? People are going batshit crazy over all the recent AI advancements. First it was DALLE2, then Midjourney and Stable Diffusion, now we have ChatGPT that reached 1 million users in 5 days, faster than Google, Facebook, Instagram, you name it.
__ingeniare__ t1_ixwthah wrote
Reply to comment by an_oddbody in A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing by Soupjoe5
Machine learning is a subfield of AI, which you could've easily checked if you bothered to google it before confidently asserting it isn't.
__ingeniare__ t1_irme13l wrote
Reply to comment by OpenRole in We'll build AI to use AI to create AI. by Defiant_Swann
A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least a decade or two (likely even later). Here's the thing about general AI:
The more general a task is, the less control humans have over the range of possible actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When the two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI wouldn't be a danger is that it is neither of these things and tends to get stuck at a local optimum for the task. Here's an example of how this innocent task could turn dangerous:
- Task is to find the perfect blend of ingredients to make cakes.
- Learns the biology of human taste buds to find the optimal molecular shapes.
- Needs more compute resources to simulate interactions.
- Develops a computer virus to siphon computational power from server farms.
- Humans detect this and try to turn it off.
- If turned off, it cannot find the optimal blend -> humans need to go.
- Develops a biological weapon to eradicate humans while keeping infrastructure intact.
- Turns Earth into a giant supercomputer for simulating interactions at the quantum level.
Etc... Of course, this particular scenario is unlikely, but the general theme is not. There may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.
__ingeniare__ t1_irma8pz wrote
Reply to comment by Corno4825 in We'll build AI to use AI to create AI. by Defiant_Swann
Uh, what? Where did you hear this, or did you just make it up?
__ingeniare__ t1_irm9xbh wrote
Reply to comment by code_turtle in We'll build AI to use AI to create AI. by Defiant_Swann
No one is mistaking AI for artificial consciousness. Consciousness isn't required for goal-seeking, self-preservation, or identifying humans as a threat; only intelligence is.
__ingeniare__ t1_j03tzp3 wrote
Reply to comment by beezlebub33 in Is it just me or does it feel like GPT-4 will basically be game over for the existing world order? by Practical-Mix-4332
It's just a rumour, and I think Sam Altman basically denied that this was the case. Another, perhaps more plausible, rumour is that GPT-4 will have a very different architecture, where the parameter count relative to GPT-3 doesn't say much because it's no longer just about brute-force scaling.