supernerd321
supernerd321 t1_iu7zbpa wrote
Reply to What are your ideas for AGI? by Enzor
AGI should be a recursive acronym, since AGI would be infinitely self-improving.
Similar to how PHP's name is recursive (PHP: Hypertext Preprocessor).
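For fun, here's a tiny sketch of what a recursive acronym expansion looks like mechanically; the expansion "AGI's General Intelligence" is just a made-up placeholder to mirror the PHP pattern:

```python
# Toy sketch: each level substitutes the acronym back into its own expansion.
# "AGI's General Intelligence" is a hypothetical expansion, not an official one.

def expand(acronym: str, expansion: str, depth: int) -> str:
    """Recursively substitute the acronym into its own expansion."""
    if depth == 0:
        return acronym
    return expansion.replace(acronym, expand(acronym, expansion, depth - 1), 1)

print(expand("AGI", "AGI's General Intelligence", 3))
# -> AGI's General Intelligence's General Intelligence's General Intelligence
```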
supernerd321 t1_it7sajd wrote
Reply to comment by mrpimpunicorn in If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
Extremely cool.
Any estimate for what the deviation IQ would be at such a limit?
Given that IQ isn't a linear scale, my guess would be something like 1000, which I can't comprehend.
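For context, a rough sketch of why 1000 doesn't really parse as a deviation IQ, assuming the usual mean-100, SD-15 convention (the numbers below are back-of-envelope, my own):

```python
# Deviation IQ is defined as IQ = 100 + 15*z, where z is a standard-normal
# z-score, so the scale encodes rarity rather than a linear amount of ability.
import math
from scipy.stats import norm

def rarity_exponent(iq: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Return x such that roughly 1 in 10^x people score at or above iq."""
    z = (iq - mean) / sd
    return -norm.logsf(z) / math.log(10)  # logsf avoids underflow for huge z

for iq in (145, 160, 200, 1000):
    print(f"IQ {iq}: about 1 in 10^{rarity_exponent(iq):.1f} people")
# IQ 145 -> ~1 in 10^2.9 (about 1 in 740)
# IQ 200 -> ~1 in 10^10.9 (rarer than one per human population)
# IQ 1000 -> ~1 in 10^784, which is why such a score is effectively
# meaningless under the current normal-curve convention
```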
supernerd321 OP t1_it5ymbl wrote
Reply to comment by Noiprox in Will we find a better way to measure intelligence or "g factor" post-singularity? by supernerd321
I kind of agree.
It sounds unusual, but I've always believed I have a 200 IQ; some of it is probably more divergent or creative and isn't picked up by standard IQ tests, so only a portion of it is detectable today.
Maybe future instruments that can measure the neocortex at higher resolution will be able to prove it one day.
supernerd321 OP t1_it5wl9k wrote
Reply to comment by Sashinii in Will we find a better way to measure intelligence or "g factor" post-singularity? by supernerd321
Interesting.
But how do you think such an enhanced being would view today's concept of intelligence as something fixed, and the concept of IQ, once it had transcended them?
Or would they be completely debunked?
Will we find a better way to measure intelligence or "g factor" post-singularity?
Submitted by supernerd321 t3_y9irc3 in singularity
supernerd321 t1_it46l21 wrote
Reply to If you believe you can think exponentially, you might be wrong. Transformative AI is here, and it is going to radically change the world before the Singularity, and before AGI. by AdditionalPizza
I think the singularity will happen in a completely unpredictable and unexpected way as well. However, I predict it will also be anticlimactic, almost boring, when it happens: "it" being AGI that can self-improve, leading to ASI.
Why? Because I think we'll be the first ASI. We'll augment our working-memory capacity, most likely through biotechnology that emerges as a byproduct of narrow AI, and that will immediately increase our fluid intelligence to something proportional to what we think of as ASI.
So creating the AGI will be trivial for us at that point. We'll realize we can never exceed ourselves by creating something more intelligent, because we'll always be able to keep up by augmenting our brains, the same way we've already augmented verbal memory through search engines and smartphones. It will create a runaway race condition between organic and synthetic life forms whose end is never attainable.
The singularity, by definition, will never be realizable, because our intelligence will increase proportionally, in a way that is inseparable from the AGI itself.
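If it helps, here's a toy model of that runaway race condition; the growth and adoption rates are completely made up, it just shows that if augmentation absorbs a fixed fraction of each synthetic improvement, the gap stays bounded no matter how fast both grow:

```python
# Toy model (entirely a sketch): synthetic intelligence self-improves
# geometrically; organic intelligence augments toward it each step.
# The synthetic/organic ratio converges to a constant instead of diverging.

def race(steps: int = 10, growth: float = 2.0, adoption: float = 0.9):
    organic, synthetic = 1.0, 1.0
    for t in range(steps):
        synthetic *= growth                           # AGI self-improves
        organic += adoption * (synthetic - organic)   # we augment to keep up
        print(f"step {t}: synthetic/organic ratio = {synthetic / organic:.3f}")

race()
# With these made-up parameters the ratio settles near 1.056 forever,
# i.e. neither side ever pulls decisively ahead.
```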
supernerd321 t1_j1p89o4 wrote
Reply to comment by Lawjarp2 in GPT-3.5 IQ testing using Raven’s Progressive Matrices by adt
Common misconception.
IQ tests measure g, which has nothing to do with "being human"; g is the general factor of cognitive ability.
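For anyone curious what "measuring g" means operationally: g is the shared factor that emerges from correlated test scores, usually extracted by factor analysis. A minimal sketch with made-up subtest data, approximating the factor with the first principal component:

```python
# Sketch only: simulate four subtests that all load on a latent g,
# then recover g as the first principal component of their correlations.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)                       # latent general factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])    # hypothetical subtest g-loadings
# Each subtest = loading * g + independent noise (unit total variance)
scores = g[:, None] * loadings + rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # ascending eigenvalues
pc1 = eigvecs[:, -1]                         # first principal component
g_hat = scores @ pc1

print("variance share of first factor:", eigvals[-1] / eigvals.sum())
# abs() because an eigenvector's sign is arbitrary
print("corr(PC1 score, true g):", abs(np.corrcoef(g_hat, g)[0, 1]))
```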