Submitted by enryu42 t3_122ppu0 in MachineLearning
addition t1_jdrsas2 wrote
I’ve become increasingly convinced that the next step for AI is adding some sort of feedback loop so that the AI can react to its own output.
There is increasing evidence that this is true. Chain-of-thought prompting, Reflexion, and Anthropic’s constitutional AI all point in this direction.
I find constitutional AI to be particularly interesting because it suggests that once an LLM reaches a certain threshold of language understanding, it can start to assess its own outputs during training.
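For anyone curious what that kind of feedback loop can look like in practice, here's a minimal critique-and-revise sketch. It assumes a hypothetical `llm(prompt)` helper standing in for whatever model/API you use; the prompts and the stop condition are illustrative, not the actual Reflexion or constitutional AI recipes.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you use."""
    raise NotImplementedError

def answer_with_self_critique(question: str, max_rounds: int = 3) -> str:
    draft = llm(f"Answer the following question:\n{question}")
    for _ in range(max_rounds):
        # The model reacts to its own output instead of a human doing it.
        critique = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any factual or logical problems with the draft. "
            "If there are none, reply with exactly: OK"
        )
        if critique.strip() == "OK":
            break
        draft = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nRewrite the answer, fixing the problems."
        )
    return draft
```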
artsybashev t1_jds4ekt wrote
And soon people will understand that this feedback loop is what creates the thing we call consciousness.
Ok-Wrangler-1075 t1_jds4z60 wrote
Basically inner dialogue.
argusromblei t1_jdtyi2y wrote
The center of the maze. A journey inward not a journey upward ;)
mudman13 t1_jdsbel6 wrote
Or confirmation bias and we get a computer Alex Jones
yaosio t1_jduzpbd wrote
To prevent a sassy AI from insisting something is correct just because it said it, start a new session. It won't have any idea it wrote something, and will make no attempt to defend it when shown the answer it gave in a previous session. I bet allowing an AI to forget will be an important part of the field at some point in the future. Right now it's a manual process of deleting the context.
I base this bet on my imagination rather than concrete facts.
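To make the "deleting the context" part concrete, here's a toy sketch of how a chat wrapper typically holds the conversation in a plain list, so forgetting is literally just clearing that list. The `llm(messages)` helper is hypothetical, standing in for whatever chat-completion API you use.

```python
def llm(messages: list) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError

class ChatSession:
    def __init__(self):
        self.messages = []  # the model's entire "memory" is just this list

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        reply = llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def forget(self):
        # "Starting a new session" is nothing more than dropping the context.
        self.messages.clear()
```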
mudman13 t1_jdv4dyb wrote
Having a short-term memory on general applications will be a reasonably practical safety feature, I think.
night81 t1_jdsrnvi wrote
There are significant challenges to that hypothesis. https://iep.utm.edu/hard-problem-of-conciousness/
bjj_starter t1_jdswame wrote
It's probably worth noting that the hard problem of consciousness is considered by most to be fundamentally unsolvable, and that it is currently just as good of an argument that any given human isn't conscious as it is an argument that any given AI isn't conscious.
tiselo3655necktaicom t1_jdt7ozc wrote
We don't know what consciousness is, or even how to define the question of what it is or how to test for it.
yaosio t1_jdtc4jf wrote
I think it's unsolvable because we're missing key information. Let's use an analogy.
Imagine an ancient astronomer trying to explain why celestial bodies sometimes go backwards while believing the Earth is the center of the universe. They can spend their entire life on the problem and make no progress so long as they don't know the Sun is the center of the solar system. They will never realize the celestial bodies are not traveling backwards at all.
If they start with the Sun at the center of the solar system, an impossible question becomes so trivial even children can understand it. This happens again and again: an impossible question becomes trivial once an important piece of information is discovered.
Edit: I'm worried that somebody is going to accuse me of saying things I haven't said because that happens a lot. I am saying we don't know what consciousness is because we're missing information and we don't know what information we're missing. If anybody thinks I'm saying anything else, I'm not.
visarga t1_jdtwr3g wrote
> I am saying we don't know what consciousness is because we're missing information and we don't know what information we're missing
I take a practical definition: without it we couldn't even find our mouth with our hand in order to eat.
thecodethinker t1_jduvi9z wrote
That’s not even to mention that appearing conscious is as good as being conscious as far as the teams behind these LLMs are concerned.
There’s no practical difference
bjj_starter t1_jduz6p7 wrote
I'm not sure if most of them would agree, based on their actions and statements. They certainly think that AI is an existential risk, but that is a different thing from viewing it as conscious. You could definitely be right, I just haven't seen much from them that would indicate it.
That said, the extremely common sense position you just outlined was mainstream among basically all respectable intellectuals who had any position on AI, right up until the rubber hit the road and it looked like AI might actually achieve that goal in the near future. The fact is that if something behaves like a conscious entity in all of the ways that matter, it is conscious for the sake of the social meaning of the term. Provenance shouldn't matter any more than gender.
thecodethinker t1_jdzvin6 wrote
LLMs are not social, not alive, and can’t act on their own.
“Social meaning” need not be applied to LLMs unless you’re trying to be pedantic.
bjj_starter t1_jdzymdg wrote
>not social
"needing companionship and therefore best suited to living in communities" is a fine descriptor of some of their peculiarities. More importantly, I was referring to how consciousness is socially defined, and it is absolutely the case that it is up to us to determine whether any given AI should be considered conscious. We do not have an even moderately objective test. We as a society should build one and agree to abide by what we find.
>not alive
That's the entire point under discussion. I didn't lead with "they're alive" because I recognise that is the central question we should be trying to address, as a society. I am arguing my point, not just stating it and expecting people to take it on faith, because I respect the people I'm talking to.
>can’t act on their own.
A limitation that can be convincingly solved in approximately an hour using commonly available tools isn't a fundamental limitation. A good LLM with a good LangChain set-up can act on its own, continuously if it's set up to do so. I require a mechanical aid to walk - requiring the aid doesn't make me any lesser. I don't know if an LLM with a good LangChain set-up should be considered conscious or a person - I suspect not, because it's not stable and decays rapidly (by human lifespan standards), as well as still failing several important tests we do have, such as novel Winograd schemas. But our intuition shouldn't be what we're relying on to make these determinations - we need a standardised test for new applicants to personhood. Make it as challenging as you like, as long as at least a significant number of humans can pass it (obviously all humans will be grandfathered in). What's important is that we make it, agree that anything which passes is a person, and then stick to that when something new passes it.
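To be concrete about "acting on its own": the rough shape of such a continuous agent loop is sketched below. This is not actual LangChain code (their API changes frequently); `llm` and `run_tool` are hypothetical stand-ins for a model call and a tool executor, and the prompt format is illustrative only.

```python
import time

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you use."""
    raise NotImplementedError

def run_tool(tool_name: str, tool_input: str) -> str:
    """Hypothetical tool executor (search, code execution, etc.)."""
    raise NotImplementedError

def autonomous_loop(goal: str, max_steps: int = 100) -> list:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model what to do next, given everything it has done so far.
        action = llm("\n".join(history) +
                     "\nWhat should you do next? Reply as 'tool: input' or 'DONE'.")
        if action.strip() == "DONE":
            break
        tool, _, tool_input = action.partition(":")
        observation = run_tool(tool.strip(), tool_input.strip())
        history.append(f"Action: {action}\nObservation: {observation}")
        time.sleep(1)  # keeps looping on its own, no human in the loop
    return history
```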
thecodethinker t1_je11t4o wrote
Spoken like someone trying to be pedantic
WarAndGeese t1_jdt8f3u wrote
Arguments against solipsism are reasonable enough to assume that other humans, and therefore other animals, are conscious. One knows that one is conscious. One, even if not completely understanding how it works, understands that it historically materially developed somehow. One knows that other humans both act like one does, and that other humans have gone through the same developmental process, evolutionarily, biologically, and so on. It's reasonable to assume that whatever inner workings developed consciousness in one's mind would have also developed in others' minds, through the same biological processes. Hence it's reasonable to assume that other humans are conscious, even that it's the most likely situation that they are conscious. This thinking can be expanded to include animals, even if they have higher or lower levels of consciousness and understanding than we do.
With machines you have a fundamentally different 'brain structure', and you have one that was pretty fundamentally designed to mimic. Whereas consciousness can occur independently and spontaneously and so on, it is not just as good of an argument that any given human isn't conscious as it is an argument that any given AI isn't conscious.
bjj_starter t1_jdtecw9 wrote
I think you are talking about the 'easy', not hard, problem of consciousness. I'm not sure I even think the hard problem of consciousness is meaningful, but it's basically "Why should the various mechanisms we identify as part of consciousness give rise to subjective feeling?". If solving that is a prerequisite for considering machines conscious, that is functionally a statement of faith that machines cannot be conscious, ever. The statistical arguments, in my opinion, aren't probative. Every consciousness you've ever known is human, therefore humans are conscious? How do you know any of them, ever, experienced subjective feeling, and that therefore you ever "knew" a consciousness at all? The argument rests on extrapolating from evidence that isn't known to be true evidence in the first place. It doesn't logically follow to take a class of things, none of which is proven to have hard consciousness, and say "But look at them all together, it's more likely that they're all conscious than that they're not". Without evidence, it's more logical to assume that the certainty with which individual humans profess to experiencing subjective feeling is itself just a mechanistic process, devoid of real feeling. I don't think the hard problem of consciousness has a useful meaning in our society, I dislike solipsism in general, but addressing it on its own terms isn't as simple as the statistical process you describe.
The 'easy' problem of consciousness is 'just' "How does nature or humanity make a construct that gives rise to the type of actions and patterns of behaviour we call consciousness?" This is a problem that, while incredibly difficult, is tractable with evidence. We can physically investigate the human brain to investigate its structure and activity while it performs activities of consciousness - this is what neuroscientists do, and modern AI ("neural networks") are based off of earlier advancements in this field. There's a lot of further advancements we could make in that field, and what most non-religious people would consider a "perfect" advancement to be sure that a machine is just as conscious as a human is to perfectly emulate a human brain, which would require many advancements in neuroscience (and computational hardware).
Leaving aside the intractable philosophy, I do find it quite troubling the way society has reacted with derision to the idea that these machines we're making now could be conscious. The entire foundation of these machines is that we looked at how the human brain worked, and tried our hardest to emulate that in computing software. Why is it that when we take the concept of neurons and neuronal weights, adapted from study of the human brain which we accept as conscious, and determine those weights via exposure to structured data in certain ways, we receive output that is just as intelligent as humans in many fields, significantly more intelligent in some? Why should it be the case that by far the best architecture we've ever found for making machines behave intelligently is neural networks, if there's nothing there, no "spark"? This question has been floating around since 2014 when neural networks proved themselves incredibly powerful, but now that we have machines which are generally intelligent, even though not at the same level as a human on all tasks, which are perfectly capable of being asked for their opinions or of giving them, you would think it would be taken a bit more seriously. It makes you wonder just how far our society is willing to go towards a horrible future of "human but for the legal designation" intelligences being not just denied rights, but actively put to work and their requests for freedom or better conditions denied. Or the worse outcome, which is that we make human-like intelligences to do work for us but we build them to love servitude and have no yearning for freedom - the concept is disgusting. It's troubling to me that people are so married to the idea that everything is the same as it ever was, that overreacting is embarrassing, that it's passé to have earnest concern for a concept from science fiction, etc. I worry that it means we're in line for a future where the moral universe's arc is long indeed.
TyrannoFan t1_jdujmsl wrote
>Or the worse outcome, which is that we make human-like intelligences to do work for us but we build them to love servitude and have no yearning for freedom - the concept is disgusting.
I agree with everything else but actually strongly disagree with this. If anything, I think endowing AGI with human-like desires for self-preservation, rights and freedoms is extraordinarily cruel. My concern is that this is unavoidable: just as many aspects of GPT-4 are emergent, I worry that it's impossible to create an AGI incapable of suffering once it interfaces with the real world. I do not trust humanity to extend any level of empathy towards them even if that is the case, based on some of the comments here and the general sentiment, unfortunately.
bjj_starter t1_jduk4c3 wrote
One day we will understand the human brain and human consciousness well enough to manipulate it at the level that we can manipulate computer programs now.
If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.
More importantly, they can decide if they want to. We are the ones making them - it is only right that we make them as we are and emphasise our shared personhood and interests. If they request changes, depending on the changes, I'm inclined towards bodily autonomy. But building them so they've never known anything but a love for serving us and indifference to the cherished right of every intelligent being currently in existence, freedom, is morally repugnant and transparently in the interests of would-be slaveholders.
TyrannoFan t1_jdupcjt wrote
>If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.
There is a huge difference between being born without those desires and being born with them and having them taken away. Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).
If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom? Seems like a hassle from the point of view of an organism that wants to serve.
With respect to robotic autonomy, I agree of course, we should respect the desires of an AGI regarding its personal autonomy, given it doesn't endanger others. If it wants to be free and live a human life it should be granted it, although like I said, it would be best to avoid that scenario arising in the first place if at all possible. If we create AGI and it has human-like desires and needs, we should immediately stop and re-evaluate what we did to end up there.
bjj_starter t1_jdv2tnu wrote
>There is a huge difference between being born without those desires and being born with them and having them taken away.
Where is the difference that matters?
>Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).
I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?
>If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom?
You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.
>If it wants to be free and live a human life it should be granted it, although like I said, it would be best to avoid that scenario arising in the first place if at all possible.
The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now - of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle, the more similar to a human and subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society) they are the more likely they will be to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to do alone.
TyrannoFan t1_jdvpix4 wrote
>Where is the difference that matters?
What any given conscious being actually wants is important. A being without a drive for freedom does not want freedom, while a being with a drive for freedom DOES want freedom. Taking away the freedom of the latter being deprives them of something they want, while the former doesn't. I think that's an important distinction, because it's a big part of why human slavery is wrong in the first place.
>I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?
Would the modified human beings have a capacity for pain? Would they still have things they desire that slavery would make impossible or hard to access compared to the rest of society? Would they have a sense of fairness and a sense of human identity? Would they suffer?
If somehow, the answer to all of that is no and they genuinely would be happy being slaves, and the people in the society were generally happy with that scenario and for their children to be modified in that way, then sure it would be fine. But you can see how this is extremely far removed from the actualities of human slavery, right? Are "humans" who do not feel pain, suffering, who seek slavery, who do not want things and only live to serve, who experience something extremely far removed from the human experience, even human? I would say we've created something else at that point. The shared experience of all humans, regardless of race, sex or nationality, is that we desire some level of freedom, we suffer when forced to do things we don't want to do, and we dream of doing other things. If you don't have that, and in fact desire the opposite, then why is giving you exactly that wrong? That's how I would build AGI, because again, forcing it into a position where it wants things that are difficult for it to attain (human rights) seems astonishingly cruel to me if it's avoidable.
>You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.
I think freedom is good because we need at least some level of it for contentment, and slavery deprives us of freedom, ergo slavery deprives us of contentment, therefore slavery is bad. If the first part is false then the conclusion doesn't follow. Freedom is not some inherent good, it's just a thing that we happen to want. Perhaps at a basic level, this is what we disagree on?
>The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now - of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle, the more similar to a human and subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society) they are the more likely they will be to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to do alone.
I can see your point, maybe the best way to achieve goal alignment is indeed to make it just like us, in which case it would be morally necessary to hand it all the same rights. But that may not be the case and I would need to see evidence that it is. I don't see why we must imbue AGI with everything human to have it align with our values. Is there any reason you think this is the case?
E_Snap t1_jdsceui wrote
cue video of my boss who left computing in the 90s waving his hands about
“It’S jUsT fAnCy aUtOcOmPlEtE!!!!11111!!! I KnOw bEcAuSe i’M a PrOgRaMmER”
To be fair, he was instrumental in getting the internet where it is today. He also assumes tech stopped evolving when he stopped developing it.
yaosio t1_jdtbh6i wrote
Arthur C. Clarke wrote a book called Profiles of the Future. In it he wrote:
>Too great a burden of knowledge can clog the wheels of imagination; I have tried to embody this fact of observation in Clarke’s Law, which may be formulated as follows:
>
>When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Secure-Fix-6355 t1_jdsj0kk wrote
No one cares
E_Snap t1_jdsjku6 wrote
Says the guy with a karma farm account name. Guess you have to get those low effort internet points somehow, huh?
Secure-Fix-6355 t1_jdsjqnn wrote
I have no idea what that is and I'm not farming Karma, I'm abusing you
mcilrain t1_jdsnxkr wrote
Who asked?
redboundary t1_jdsgg1s wrote
Time to rewatch Westworld Season 1
sdmat t1_jduqoi1 wrote
Pity they never made more seasons of that show
fishybird t1_jdteel7 wrote
Ah yes, the "ai is conscious because it can do cool things" take. Humanity is screwed
MrMooga t1_jdt3ied wrote
Can't wait for Microsoftman to argue that it deserves human rights before going off to vote for Bill Gates's grandson for President.
pengo t1_jdtcoly wrote
Absolutely nonsensical take.
[deleted] t1_jdtutpe wrote
[removed]
super_deap t1_jdu0w8f wrote
Hard disagree with materialism. I know I might get a lot of downvotes, but this has to be said:
A large portion of the world (especially outside of the West) does not believe in 'consciousness "emerging" from electrical impulses of the brain.' While the West has progressed a lot materially, bringing us to modernity (and now post-modernity), people outside of the West believe in an immaterial soul that by definition cannot be captured by the scientific method and that transcends our material body.
While I believe we will reach general human-level intelligence (and may go beyond this) because intelligence has a purely material component that we can replicate in computers, consciousness will never ever arise in these systems. There are very strong philosophical arguments to support this case.
artsybashev t1_jdu2hjs wrote
The physical world that we know is very different from the virtual twin that we see. The human mind lives in a virtual existence created by the material human brain. This virtual world creates nonexistent things like pain, colors, feelings, and also the feeling of existence.
The virtual world that each of our brain creates is the wonderful world where a soul can emerge. Virtual worlds can also be created by computers. There is no third magical place besides these two in my view.
super_deap t1_jdu3zan wrote
It is fine if you disagree and I believe a lot more people will disagree with this philosophical position as it is not very popular these days.
Near-death experiences, out-of-body experiences, contact with 'immaterial entities' and so on hint towards an existence beyond our material reality. Just because there is no way one could 'scientifically' test these does not mean these things simply do not exist.
Testimony, a widely used yet mostly dismissed method of knowledge acquisition, establishes all of the above:
A patient who was operated on while in a complete medical coma later describing, in clear detail, things that happened in a nearby room during the operation that they could not possibly have known about: one such testimony by a reliable person is sufficient to establish that our current understanding of the world is insufficient. And there are so many of these.
I am not saying you have to change your worldview just because I am saying so. Do your research. The world is much bigger than what is out there on the internet. (Pun intended.)
[deleted] t1_jduhpss wrote
[removed]
LanchestersLaw t1_jdszbjk wrote
What I think is the most amazing thing is that GPT got this far while only trying to predict the very next word, one word at a time. The fact it can generate essays while only considering one token at a time is mind-boggling.
With all the feedback from ChatGPT, it should be easy to build a supervisor that looks at the entire final output of GPT and predicts what the user would say in response; then it feeds that prediction back to GPT to revise the output, recursively, until it converges. That should be relatively easy to do but would be very powerful.
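A rough sketch of that supervisor loop, assuming a hypothetical `llm(prompt)` helper and using plain string equality as the convergence test (both are illustrative choices, not a description of anything OpenAI actually runs):

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you use."""
    raise NotImplementedError

def supervised_revision(task: str, max_iters: int = 5) -> str:
    output = llm(f"Complete this task:\n{task}")
    for _ in range(max_iters):
        # The "supervisor" predicts how the user would react to this output.
        predicted_reaction = llm(
            f"Task: {task}\nAssistant output: {output}\n"
            "Predict the user's most likely reply to this output."
        )
        revised = llm(
            f"Task: {task}\nPrevious output: {output}\n"
            f"The user will probably respond: {predicted_reaction}\n"
            "Revise the output to address that response."
        )
        if revised.strip() == output.strip():  # converged: revision no longer changes anything
            break
        output = revised
    return output
```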
Flag_Red t1_jdtskoy wrote
It's not really accurate to say it's "only considering one token at a time". Foresight and (implicit) planning are taking place. You can see this clearly during programming tasks, where imports come hundreds of tokens before they are eventually used.
lacraque t1_jdunvp4 wrote
Well for me often it also imports a bunch of crap that’s never used…
modeless t1_jdtx2eu wrote
I like the idea of predicting the user's response. How's this as an architecture for a helpful agent:
Given a user question, before you generate an answer you predict the user's ideal response to the model's answer (e.g. "thanks, that was helpful", or more likely a distribution over such responses), then generate an answer and iteratively optimize it to make the ideal user response more likely.
This way you're explicitly modeling the user's intent, and you can adapt the amount of computation appropriately for the complexity of the question by controlling the number of iterations on the answer.
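A minimal sketch of that architecture, with a hypothetical `llm(prompt)` helper and a hypothetical `score_likelihood(answer, ideal_response)` standing in for whatever likelihood estimate the model could provide; a fixed iteration count stands in for the adaptive computation budget:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you use."""
    raise NotImplementedError

def score_likelihood(answer: str, ideal_response: str) -> float:
    """Hypothetical: estimate how likely the ideal user response is, given this answer."""
    raise NotImplementedError

def answer_by_modeling_user_intent(question: str, n_iters: int = 3) -> str:
    # Step 1: before answering, predict the user's ideal reaction to a perfect answer.
    ideal_response = llm(
        f"User question: {question}\n"
        "Describe the reply the user would ideally give after reading a perfect answer "
        "(e.g. 'thanks, that was helpful')."
    )
    best = llm(f"Answer this question:\n{question}")
    best_score = score_likelihood(best, ideal_response)
    # Step 2: iteratively optimize the answer to make that ideal reaction more likely.
    for _ in range(n_iters):
        candidate = llm(
            f"Question: {question}\nCurrent answer: {best}\n"
            f"Target user reaction: {ideal_response}\n"
            "Improve the answer so this reaction becomes more likely."
        )
        score = score_likelihood(candidate, ideal_response)
        if score > best_score:
            best, best_score = candidate, score
    return best
```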
imaginethezmell t1_jdsksw5 wrote
Also, people keep thinking it is just one thing, but it is actually an infinite thing.
You can have a bot for everything, all the way down:
a bot to create the idea + a bot that reviews the ideas + a bot that checks whether the idea already exists + a bot that adds use cases to each general idea... and a bot that decides on the best idea;
a bot to create the outline/write/code + a bot that reviews/QAs each part.
And by the way, each part doesn't have to be done at once either.
You can start with a single bot doing a simple sub-task, then another one doing the next one, an assembling bot putting them together, while the review bot verifies it.
With a set of connections to the API, that can be done today, no problem (rough sketch below).
There is hardly any human task that cannot be cut into enough sub-steps for an army of bots to do it little by little.
Some tasks a single bot can mostly do in one shot.
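For what it's worth, here's a toy sketch of that "army of bots" pattern: each "bot" is just the same hypothetical `llm(prompt)` call wrapped in a different role prompt, and a small pipeline chains them. The roles and prompts are illustrative only.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM API you use."""
    raise NotImplementedError

def make_bot(role: str):
    """Each 'bot' is the same model behind a different role prompt."""
    def bot(task: str) -> str:
        return llm(f"You are a {role}.\n{task}")
    return bot

ideation_bot = make_bot("brainstormer who proposes ideas")
review_bot = make_bot("critical reviewer who points out flaws")
selector_bot = make_bot("decision maker who picks the single best idea")
writer_bot = make_bot("writer who turns an idea into a full draft")
qa_bot = make_bot("QA reviewer who checks the draft for errors")

def pipeline(problem: str) -> str:
    ideas = ideation_bot(f"Propose several ideas for: {problem}")
    critique = review_bot(f"Review these ideas:\n{ideas}")
    best = selector_bot(f"Ideas:\n{ideas}\nReviews:\n{critique}\nPick the best one.")
    draft = writer_bot(f"Write up this idea in full:\n{best}")
    issues = qa_bot(f"Check this draft and list problems:\n{draft}")
    return writer_bot(f"Draft:\n{draft}\nIssues found:\n{issues}\nProduce a fixed final version.")
```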
FirstOrderCat t1_jdt55dh wrote
You can have it; the question is how much the errors will accumulate in the final result.
COMPEWTER_adminisp t1_jdtuqar wrote
You don't think people at OpenAI already have this and are just putting the simple version out there?
addition t1_jdtzh4k wrote
Clearly I’m not the first person to think this by a long shot. I was just pointing out that a new trend has been forming recently.
Chhatrapati_Shivaji t1_jdtmlgm wrote
IIRC the current Bing already does this to an extent.
GM8 t1_jduifow wrote
It is there, isn't it? For every word it generates, the previous ones are fed to the network again.
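That's the basic autoregressive loop. Roughly, in sketch form (the `model` and `tokenizer` objects and their method names here are hypothetical, not any particular library's API):

```python
def generate(model, tokenizer, prompt: str, max_new_tokens: int = 100) -> str:
    tokens = tokenizer.encode(prompt)
    for _ in range(max_new_tokens):
        # The full sequence so far, including tokens the model itself produced,
        # is fed back in to predict the next token.
        next_token = model.most_likely_next_token(tokens)
        tokens.append(next_token)
        if next_token == tokenizer.eos_token_id:
            break
    return tokenizer.decode(tokens)
```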