
Silver_Ad_6874 t1_jdt73yo wrote

The upside could be insane. Imagine being able to program a CAD program, or create a web app, or do basically all sorts of work that is now done by humans. Instead, these people will be telling machines what to do in natural language, so the acceleration in productivity could be enormous. If this goes south, though, the consequences will be bad, because yes, people will be combining AI with Boston Dynamics' advanced new models, so ultimately a "Terminator" scenario is absolutely possible. What a timeline to live in.

For the record, if true, it confirms some of my suspicions about the nature of human intelligence, but the timeline is much earlier than I expected. 😬


Malachiian OP t1_jdteo63 wrote

Yeah, the fact that we basically tried to replicate the human brain and it all of a sudden became able to solve tasks it wasn't taught to do...

That certainly makes intelligence seem a lot less magical. Like, we are just neural nets, nothing more.


Silver_Ad_6874 t1_jdueq2n wrote

Exactly that. If the complexity of the human mind automatically emerges from a relatively simple model with sufficiently advanced training/inputs, that would be very telling.


pharmamess t1_jdum003 wrote

What about the soul?


shr00mydan t1_jdvjen6 wrote

You are getting downvoted, but this is a fine question. Alan Turing himself answered it all the way back in 1950.

>Theological Objection: Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

>I am unable to accept any part of this, but will attempt to reply in theological terms... It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty. It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? We might expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this soul. An argument of exactly similar form may be made for the case of machines. It may seem different because it is more difficult to “swallow”. But this really only means that we think it would be less likely that He would consider the circumstances suitable for conferring a soul. The circumstances in question are discussed in the rest of this paper. In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.


RedditFuelsMyDepress t1_jdvsmlw wrote

I feel like some people take the word "soul" a bit wrong, because it sounds like something from fantasy fiction. But consciousness is something that undeniably exists, and it's very difficult to prove that any machine has one.


terdroblade t1_jdvw2cy wrote

Can’t prove something if you don’t know what it is. It’s a deep rabbit hole spanning many different sciences, from philosophy to neuroscience.


idiocratic_method t1_jdx5fwp wrote

you use the word undeniably but I've never seen actual proof of consciousness


RedditFuelsMyDepress t1_jdxgkvb wrote

By consciousness I just mean the subjective experience of self. Like the old saying "I think, therefore I am". I can feel and experience the world through my own body and I can assume other people do as well since they're humans just like me (unless I'm living inside a simulation or something and none of you are real). But how do we know a non-biological machine is able to experience the same?


canad1anbacon t1_jdxmsyn wrote

There is the mirror test: being able to look into a mirror and recognize that the body you see is your own. Dolphins and chimps can pass this test.


RedditFuelsMyDepress t1_jdxs0d6 wrote

A smart robot probably would recognize itself in the mirror, but I don't think that's really enough to prove that it's conscious the same way we are. The problem is that everyone experiences the world through their own body so we can't truly put ourselves in someone else's shoes and see and feel what they do. There's no way for me to even know for certain that other humans are conscious, I can only assume that based on us being the same species. A robot may have the appearance of being conscious, but it could be fake. Like a marionette being pulled on strings by its programming. Or like a character written into a story except that this character is being written in real-time by computer algorithms based on things happening around it. Someone might argue that humans are similar to that too, but the point is that puppets and fictional characters aren't conscious even though they may appear as such and a robot could be the same way.

I think we'd have to do more research and understand how the brain and electrical signals in our bodies work to determine if a machine is conscious.


Seidans t1_je1etr3 wrote

the "soul" is just the answer to things scientists and theologians couldn't understand a couple hundred years ago. humanity, and especially theists, are just slow to accept that we are just biological machines

everything too complex to understand has been given a simple theological answer, easy to understand and reassuring to believe, while the actual observation is far more cruel and nihilistic


pharmamess t1_jdvtcb6 wrote

Really appreciate this answer, thanks. Food for thought!


Express-Set-8843 t1_jdw6wde wrote

First we would have to define what a "soul" is and then demonstrate if that thing actually exists before we could proceed further with your question.

Attempts to do so have proven unfruitful.


pharmamess t1_jdwdnnf wrote

>Attempts to do so have proven unfruitful.

What you mean is that you're not convinced by any arguments/explanations/evidence that you've ever come across. Many people are.

I'm not put off by the lack of scientific proof. I think that there's more to life than what can be measured using scientific instruments. Life has unequivocally taught me this truth. It doesn't follow that there is necessarily a soul, but I get the sense of it being a valid concept - and I am far from the only one to think that. But I understand the intransigence of the hard materialist / scientific reductionist position, so there might perhaps be a little difficulty agreeing to disagree (apologies if I'm being unduly cynical).

I don't think it follows at all that "we are just neural nets, nothing more". That's an extremely narrow take on human consciousness which is obvious to anyone who has scratched the surface.


Express-Set-8843 t1_jdxkyzs wrote

>What you mean is...

And we've exited the realm of constructive conversation.

When you are talking to someone, let them tell you what they mean and you tell them what you mean. I will now exit this pointless debate.


qepdibpbfessttrud t1_je03pra wrote

Soul is our inescapable wonderment about thoughts rising from the unconscious when we're caught in the thought loop, as most people are for most of their lives. In wake and in dreams, even. I guess most of it is caused by strong emotions experienced one day and perpetuated for years. Regret is a big one

Meditation allows one to see clearly. At first u're on the bank of river flowing past u, but then u're nowhere to be found. There's just river. Always has been


KnightOfNothing t1_jdtvjbm wrote

that's exactly all humans are, and i don't understand how you could see anything "magical" about reality or anything inside it.


phyto123 t1_jdu4wp4 wrote

Most things in nature follow the Fibonacci sequence and golden ratio in their design, which I find fascinating, and the fact that I can ponder and appreciate the beauty in that is, to me, magical.


BilingualThrowaway01 t1_jdudp5a wrote

Life always finds the path of least resistance through natural selection. It will always gradually tend towards being more efficient over time through evolutionary pressure. The Fibonacci sequence and golden ratio happen to be geometrically efficient ratios for many physical distributions, for example when deciding how to place leaves in a spiral so that they collect as much sunlight as possible.
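To make the leaf-spiral point concrete, here's a toy sketch (my own illustration, not anything from a biology library): placing successive points at the golden angle (~137.5°), with radius growing as the square root of the index, produces the familiar sunflower/phyllotaxis pattern where points stay evenly spread instead of lining up.

```python
import math

# Golden angle: the "irrational" slice of the circle, 2*pi*(1 - 1/phi),
# about 137.5 degrees. Rational angles would stack leaves into rays
# that shade each other; the golden angle avoids that.
phi = (1 + math.sqrt(5)) / 2
golden_angle = 2 * math.pi * (1 - 1 / phi)

def sunflower_points(n):
    """Place n points in a phyllotaxis spiral: point k sits at angle
    k * golden_angle and radius sqrt(k), giving roughly even density."""
    points = []
    for k in range(n):
        r = math.sqrt(k)
        theta = k * golden_angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = sunflower_points(500)
print(f"golden angle = {math.degrees(golden_angle):.1f} degrees")
```

Plot those 500 points and you get the spiral arms you see in a sunflower head.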


phyto123 t1_jdvm93a wrote

Excellent explanation. I also find it fascinating that there is evidence our ancient ancestors built according to this natural order. The way Luxor Temple was built utilizes this order from its first room to the last


4354574 t1_jdu8nat wrote

We're conscious. Subjective experience is magical. The experience of emotions is magical. Being aware of experience is magical. If that isn't magical to you, then what is even the point of existing? You might as well just go through the motions until you die.

There is no evidence at all that AI is conscious.


Surur t1_jdubb31 wrote

How do you know you are not the only one who is conscious?


4354574 t1_jdunko1 wrote

I don't. It's the classic "problem of other minds". This is not an issue for Buddhism and the Yogic tradition, however, and ultimately at the highest level all of the mystical traditions, whether Sufism, Christian mysticism (St. John of the Cross and others), shamanism, the Kabbalah etc. What's important to these traditions is what your own individual experience of being conscious is like. More precisely, from a subjective POV, there are no "other minds" - it's all the same mind experiencing itself as what it thinks are separate minds.

If your experience of being conscious is innately freeing, and infinite, and unified, and fearless, and joyous, as they all, cross-culturally and across time, claim the state of being called 'enlightenment' is, then whether there are other minds or not is academic. You help other people to walk the path to enlightenment because they perceive *themselves* to be isolated, fearful, angry, grieving individual minds, that still perceive the idea that there are "other minds" to be a problem.

In Buddhism, the classic answer to people troubled by unanswerable questions is that the question does not go away, but the 'questioner' does. You don't care about the answer anymore, because you've seen through the illusion that there was anyone who wanted an answer in the first place.


Surur t1_jdur5b3 wrote

Sure, but my point is that while you may be conscious, you can't really objectively measure it in others; you can only believe them when they say it, or not.

So when the AI says it's conscious....


audioen t1_jdw2frs wrote

The trivial counterargument is that I can write a python program that says it is conscious while being nothing of the sort, as it is literally just a program that always prints those words.

It is too much of a stretch to regard a language model as conscious. It is deterministic -- it always predicts the same probabilities for the next token (word) if it sees the same input. It has no memory except the words already in its context buffer. It has no ability to spend more or less processing as a task demands different amounts of effort; rather, data flows from input to output token probabilities with exactly the same amount of work each time. (With the exception that as the input grows, processing does take longer, because the context matrix which holds the input becomes bigger. Still, it is computation flowing through the same steps, accumulating into the same matrices, just applied to progressively more words/tokens sitting in the input buffer.)
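The determinism point can be shown with a toy "language model" (entirely made up for illustration, not any real model's API): fixed weights plus a softmax give the exact same next-token distribution for the same input, every single time. Any apparent randomness in a chatbot comes from sampling on top of these numbers.

```python
import math

VOCAB = ["the", "cat", "sat", "mat"]

# Toy "weights": a fixed table mapping the last token to logits over VOCAB.
# A real transformer computes logits from the whole context, but the point
# is the same: fixed weights + same input = same output distribution.
LOGITS = {
    "the": [0.1, 2.0, 0.3, 1.5],
    "cat": [0.2, 0.1, 2.5, 0.4],
}

def softmax(xs):
    """Turn raw logits into probabilities that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_probs(last_token):
    """Deterministic: identical input always yields identical probabilities."""
    return softmax(LOGITS[last_token])

print(next_token_probs("the") == next_token_probs("the"))  # True
```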

However, we can probably design machine consciousness from the building blocks we have. We can give language models a scratch buffer they can use to store data and to plan their replies in stages. We can give them access to external memory so they don't have to memorize contents of wikipedia, they can just learn language and use something like Google Search just like the rest of us.

Language models can be simpler, but systems built from them can display planning, learning from experience via self-reflection of prior performance, long-term memory and other properties like that which at least sound like there might be something approximating a consciousness involved.

I'm just going to go out and say this: something like GPT-4 is probably like a 200-IQ human when it comes to understanding language. The way we test it shows that it struggles to perform tasks, but this is mostly because of the architecture of going directly from prompt to answer in a single step. The research right now is adding the ability to plan, edit and refine the replies from the AI, sort of like how a human makes multiple passes over their emails, or realizes after writing for a bit that they said something stupid or wrong and goes back and erases the mistake. These are abilities we do not currently grant our language models. Once we do, their performance will go through the roof, most likely.


4354574 t1_jdwkos3 wrote

Well, I don’t believe consciousness is computational. I think Roger Penrose’s quantum brain theory is more likely to be accurate, so if an AI told me it was conscious, I wouldn’t believe it. If consciousness arose from complexity alone, we should see signs of it in all sorts of complex systems, but we don’t, and there's not even the slightest hint of it in AI. The AI people hate his theory because it means literal machine consciousness is very far off.


Surur t1_jdwqof7 wrote

> If consciousness arose from complexity alone, we should have signs of it in all sorts of complex systems

So do you believe animals are conscious, and if so, which is the most primitive animal you think is conscious, and do you think they are equally conscious as you?


4354574 t1_jdx1c88 wrote

If you want to know more about what I think is going on, research Orchestrated Objective Reduction, developed by Penrose and anaesthesiologist Stuart Hameroff.

It is the most testable and therefore the most scientific theory of consciousness. It has made 14 predictions, which is 14 more than any other theory. Six of these predictions have been verified, and none falsified.

Anything else would just be me rehashing the argument of the people who actually came up with the theory, and I’m not interested in doing that.


Outrageous_Nothing26 t1_jdvbget wrote

Just calculate the probability of that arising from randomness. That’s just incredible. You see the answers and think it's easy because the problem was already solved for you.


KnightOfNothing t1_jdvcn5y wrote

no, i see the answer and think "wow, i really didn't care about the problem in the first place." sorry, but things in reality stopped impressing/interesting me many years ago.


Outrageous_Nothing26 t1_jdvcq2h wrote

Sounds like a skill issue or depression, one of the two


KnightOfNothing t1_jdvhuy8 wrote

you're not the first one to bring up "skill issue" when I've expressed my utter disappointment in all things real. is the human game of socialize, work and sleep really so much fun for you guys? is this limited world, lacking anything fantastical, really so impressive to all of you?

i've tried exceptionally hard to understand but all my efforts have been for naught. The only rational conclusion is that there's something necessary to the human experience i'm lacking but it's so fundamental no one would even think of mentioning it.


Outrageous_Nothing26 t1_jdvi8qx wrote

Well, the truth is it doesn’t really matter; we could be living in the magical world of harry potter and your anhedonia would do the same. I was just kidding with the skill issue, but it sounds like depression. i had something similar happen, but it’s just my unsolicited opinion and it doesn’t carry that much weight


Outrageous_Nothing26 t1_jdvb8cl wrote

What do you mean, less magical?? It takes a massive amount of computing power and data to train those things. Now try doing that without any templates to follow. How is that not complex enough?


808_Scalawag t1_jdu6d1b wrote

As a machinist, my job would quickly become amazing and then nonexistent lol


Silver_Ad_6874 t1_jduf5ud wrote

Actually, as Tesla demonstrates with its continued lack of true FSD, interpreting the surroundings accurately may be more difficult than reasoning about those surroundings, for now.


jetro30087 t1_jdu9slz wrote

How's that different from any Star Trek episode where a crew member goes to the holodeck and instructs the Enterprise's computer to build a program?

It's not inventing a program; it's completing a command using the information stored in its programming, according to the rules set by its programming. It codes because it's trained on terabytes of code that performs tasks. When you ask for code that does a task, it's just retrieving that information and altering it somewhat based on the rules that dictate its response. Unlike humans, however, it's not compelled to design a program that does anything without being prompted.


Silver_Ad_6874 t1_jduf07p wrote

The difference is emergent behaviour. If a sufficiently complex, self-adapting structure can modify itself to perform more than it was trained for, the outcome is unknown. Unknown outcomes scare people.


BangEnergyFTW t1_jdu1x8h wrote

Silver_Ad_6874, while the potential benefits of AGI are certainly significant, we must also consider the potential risks and consequences that come with such a powerful technology. The acceleration of productivity you speak of could indeed be enormous, but it could also lead to massive job displacement and societal upheaval.

Furthermore, as you mentioned, combining AGI with advanced robotics technology could lead to catastrophic outcomes if not handled responsibly. It is therefore essential that we approach the development of AGI with caution and careful consideration of the potential risks and consequences.

As for your suspicions around the nature of human intelligence, it is important to note that while AGI may be capable of performing tasks that were previously done by humans, it is still fundamentally different from human intelligence. AGI may be able to learn and acquire skills, but it lacks the subjective experience and consciousness that are intrinsic to human intelligence.

In short, while the emergence of AGI is a significant development, we must approach it with a balanced perspective that takes into account both its potential benefits and risks.


deadlands_goon t1_jdvbfyh wrote

> Ultimately a “Terminator” scenario is Absolutely possible

ive been saying this for years, and everyone's been telling me we won't need to worry about that for like 50 years, until chatgpt started making headlines


TheJesterOfHyrule t1_jdxnpht wrote

Upside? Taking my job? It won't aid, it will replace


Silver_Ad_6874 t1_je573vk wrote

Then figure out how to use AI to do something else that is easier and pays better. The times won't wait for you, as they didn't for sellers of buggy whips.

My own job seems to be on the line, too. ChatGPT can answer complex questions about my field with decent enough answers that if clients asked ChatGPT instead of me, the differences would be small enough not to matter. Luckily for me, most don't know the right questions to ask.

On the flip side? Imagine that you can now start to create things in a CAD program that you tell what to make in your own voice, without an arcane set of codes or even having to be able to draw. Then get the AI-aided/verified design 3D printed, and you have a prototype. The same goes for a modular circuit board/microcomputer design and the code for the software that runs on it. Suddenly, "everyone" can create new toys, tools, utilities, car parts, or whatever you can think of.

If you want to be fearful of AI, don't be afraid to lose your job. Be afraid to lose your life, Terminator style. 🙃