Submitted by RushingRobotics_com t3_11xh8zp in Futurology
Comments
RushingRobotics_com OP t1_jd3a3pr wrote
Emergent abilities are a consequence of unconscious self-improvement: capabilities that appear without anyone deliberately engineering them. The breaking point will be when AI can improve itself without direct human intervention, and I think we will see that very soon. The next few years will definitely be the most exciting!
RushingRobotics_com OP t1_jd61h9l wrote
I see your points, but I am more concerned about unequal access to AI and how it gets regulated. I believe there is no turning back at this point, and that the technology will continue to advance regardless of our concerns and actions. To mitigate these risks, we need to democratize access, develop open-source code, and prevent large companies from carving out exceptions for themselves when they lobby governments to regulate AI.
Mercurionio t1_jd7se25 wrote
If the AI keeps developing itself, the only way to control it will be a bucket of water on its servers. AI doesn't have emotions, ethics, a desire for power, and so on. It will just go full-on after whatever it "wants".
ChatGPT waits for your prompts. A self-aware AI won't.
why06 t1_jd6tef7 wrote
I know this isn't exactly what you mean, but AI is actually being used right now to help train AI. I think the OpenAI guys used it to supplement their reinforcement learning from human feedback.
Google DeepMind had agents compete against copies of themselves in a kind of natural-selection process (self-play) to produce AlphaZero.
Nvidia is using AI in a limited capacity to design parts of its latest chips.
So I know you probably mean something on a whole different scale, where the AI takes over the entire process of training itself, but in a limited way AIs already kinda are being used to build AI, and examples like these are already speeding up progress. To make the self-play idea concrete, see the sketch below.
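Here's a toy sketch of self-play as natural selection. This is nothing like DeepMind's actual code; an "agent" here is literally just a single skill number. It only shows the core ratchet: agents play each other, winners survive with small mutations, and average skill climbs.

```python
# Toy self-play selection loop -- an illustration, NOT AlphaZero.
# In the real thing the "agent" is a deep network plus tree search;
# here it is just one number representing skill.
import random

POP_SIZE = 20
GENERATIONS = 100

def play_match(a: float, b: float) -> float:
    """Noisy head-to-head match: the higher-skill agent usually wins."""
    return a if a + random.gauss(0, 0.5) > b + random.gauss(0, 0.5) else b

population = [random.gauss(0, 1) for _ in range(POP_SIZE)]
print(f"mean skill before: {sum(population) / POP_SIZE:+.2f}")

for _ in range(GENERATIONS):
    # Each slot in the next generation is filled by the winner of a
    # match between two randomly drawn agents, plus a small mutation.
    population = [
        play_match(random.choice(population), random.choice(population))
        + random.gauss(0, 0.05)
        for _ in range(POP_SIZE)
    ]

print(f"mean skill after:  {sum(population) / POP_SIZE:+.2f}")
```

Run it a couple of times and the mean skill climbs every run. The same ratchet, applied to real games with deep networks and tree search, is what let AlphaZero reach superhuman play without human game data.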
RushingRobotics_com OP t1_jd2ysxa wrote
Recent advancements in AI research, such as the emergence of ToM-like (theory-of-mind) abilities in language models, suggest that we are making progress towards AGI (artificial general intelligence). Emergent abilities are a fascinating aspect of complex systems like LLMs. The ability to understand and attribute mental states to oneself and others has long been considered uniquely human, so seeing it emerge in language models is a significant breakthrough.
The models' increasing language skills may themselves be what gave rise to these ToM-like abilities, demonstrating the potential for artificial intelligence to possess human-like cognitive abilities.
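For anyone wondering how ToM-like abilities are actually measured: models are typically given classic false-belief vignettes, like the Sally-Anne task below. The scoring function is a simplified, hypothetical stand-in, not any particular study's code.

```python
# A classic "unexpected transfer" false-belief vignette, the kind of
# probe used in ToM studies of language models. The pass/fail check is
# a simplified stand-in; real studies use many items and variations.
VIGNETTE = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for her marble?"
)

def tracks_false_belief(model_answer: str) -> bool:
    """Pass = the answer tracks Sally's (false) belief, not reality."""
    answer = model_answer.lower()
    return "basket" in answer and "box" not in answer

# Feed VIGNETTE to your model of choice, then score its reply:
print(tracks_false_belief("She will look in the basket."))  # True: tracks belief
print(tracks_false_belief("She will look in the box."))     # False: tracks reality
```

A model passes only if it reports where Sally *believes* the marble is, which requires representing a belief that differs from reality.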
The_One_Who_Slays t1_jd3w6gq wrote
I can see plenty of applications for AGI, and it's been my lifelong dream ever since I found out about the concept. I hope I'll manage to survive until it becomes available to the general public.
slash_asdf t1_jd3ye8o wrote
An actual AGI would be able to think for itself; it might not agree with the applications you have in mind for it.
But AGI might still be a long way off. We will, however, have extremely capable "regular" AI within this decade.
The_One_Who_Slays t1_jd43xca wrote
Well, I can be very persuasive when I need to be. Plus, you shouldn't assume that its thought process would operate within the same boundaries as the average human mind. I bet most of the time, as long as it's something non-detrimental that can be done with a flick of a metaphorical wrist, it'll go full "I gotchu, homie" mode.
But, well, if a regular AI can do the fun stuff to the same capacity a hypothetical AGI can, then I wouldn't mind settling for either, really.
pepepeoepepepeoeoe t1_jd5kvws wrote
Keep in mind AGI doesn't necessarily mean a super-smart digital human; it means a program that can perform any task a human can, at least as well or better. I'm not saying it won't be conscious or able to "think for itself", but it's definitely possible it won't be, since that isn't necessary.
Some-Ad9778 t1_jd6uaq1 wrote
Can an AI gain sentience? If we cross that threshold, a lot of ethical questions will have to be asked about what rights the AI should have.
fwubglubbel t1_jd5425g wrote
No.
And now the mandatory pointless ramble to make the comment long enough for this sub. We think. But we're not sure, since "long enough" is undefined. Forget artificial intelligence, how about some real stuff?
yaosio t1_jd378qq wrote
When AI can improve itself things will speed up even faster than they are now. I wonder what one year from now will look like in the world of AI.