bjj_starter t1_je2ckb0 wrote
Reply to comment by muskoxnotverydirty in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
>If nothing else, it would be nice for those who publish test results to show how much they knew whether test data was in the training data.
Yes, we need this and much more information about how it was actually built, what the architecture is, what the training data was, etc. They're not telling us because trade secrets, which sucks. "Open" AI.
bjj_starter t1_jdzymdg wrote
Reply to comment by thecodethinker in [D] GPT4 and coding problems by enryu42
>not social
"needing companionship and therefore best suited to living in communities" is a fine descriptor of some of their peculiarities. More importantly, I was referring to how consciousness is socially defined, and it is absolutely the case that it is up to us to determine whether any given AI should be considered conscious. We do not have an even moderately objective test. We as a society should build one and agree to abide by what we find.
>not alive
That's the entire point under discussion. I didn't lead with "they're alive" because I recognise that is the central question we should be trying to address, as a society. I am arguing my point, not just stating it and expecting people to take it on faith, because I respect the people I'm talking to.
>can’t act on their own.
A limitation that can be convincingly solved in approximately an hour using commonly available tools isn't a fundamental limitation. A good LLM with a good LangChain set-up can act on its own, continuously if it's set up to do so. I require a mechanical aid to walk - requiring the aid doesn't make me any lesser. I don't know if an LLM with a good LangChain set-up should be considered conscious or a person - I suspect not, because it's not stable and decays rapidly (by human lifespan standards), as well as still failing several important tests we do have, such as novel Winograd schemas (pronoun-resolution questions like "the trophy doesn't fit in the suitcase because it's too big" - what is "it"?). But our intuition shouldn't be what we're relying on to make these determinations - we need a standardised test for new applicants to personhood. Make it as challenging as you like, as long as a significant number of humans can pass it (obviously all humans will be grandfathered in). What's important is that we make it, agree that anything which passes is a person, and then stick to that when something new passes it.
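To make the "acts on its own" claim concrete, here's a rough, purely illustrative sketch of the loop a LangChain-style agent runs. The names here (fake_llm, TOOLS, run_agent) are made-up placeholders, not LangChain's actual API - the point is only the shape of the control flow: the model proposes an action, the harness executes it and feeds the result back, and the loop keeps going without a human in it.

```python
from datetime import datetime

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call: decides the next action from the transcript so far."""
    if "Observation:" in prompt:
        return "FINISH: reported the current time"
    return "TOOL:clock"

TOOLS = {"clock": lambda: datetime.now().isoformat()}   # the "commonly available tools"

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = fake_llm(transcript)                  # model proposes the next action
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        observation = TOOLS[decision.removeprefix("TOOL:")]()   # harness executes the chosen tool
        transcript += f"\nObservation: {observation}"            # result is fed back to the model
    return "step limit reached"

print(run_agent("report the current time"))
```

Swap the stand-in for a real model and give it real tools, and you have something that runs continuously on its own, which is the only point I was making.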
bjj_starter t1_jdzoafq wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
This title is misleading. The only thing they found was that GPT-4 was trained on code questions it wasn't tested on.
bjj_starter t1_jdzo3zq wrote
Reply to comment by muskoxnotverydirty in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
But that's pure speculation. They showed that a problem existed with training data, and OpenAI had already dealt with that problem and wasn't hiding it at all - GPT-4 wasn't tested on any of that data. Moreover, it's perfectly fine for problems similar to the ones it will be tested on to be in the training data - past problems from the same sources, for example. What's important is that what it's actually tested on is not in the training data. There is no evidence that it was tested on training data, at this point.
Moreover, the Microsoft Research team was able to repeat some impressive results in a similar domain on tests that didn't exist before the training data cut-off. There isn't any evidence that this is a problem with a widespread effect on performance. It's also worth noting that it seems pretty personal for the guy behind this paper, judging by the way he wrote his tweet.
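For what it's worth, the kind of contamination check at issue here is conceptually simple: look for overlap between the evaluation problems and the training corpus. Below is a rough, illustrative sketch using naive character n-gram overlap - the helper names and threshold are made up, and this is not OpenAI's actual decontamination procedure.

```python
def ngrams(text: str, n: int = 50) -> set[str]:
    """All length-n character substrings of the text, whitespace-normalised and lowercased."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def is_contaminated(test_problem: str, training_docs: list[str], n: int = 50) -> bool:
    """Flag a test problem if any long character n-gram from it also appears in a training document."""
    probe = ngrams(test_problem, n)
    return any(probe & ngrams(doc, n) for doc in training_docs)

# Tiny worked example with made-up data:
train = ["def add(a, b): return a + b  # classic warm-up exercise"]
test = "Write a function add(a, b) that returns the sum of its arguments."
print(is_contaminated(test, train, n=20))  # False here: no shared 20-character span
```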
bjj_starter t1_jdv2tnu wrote
Reply to comment by TyrannoFan in [D] GPT4 and coding problems by enryu42
>There is a huge difference between being born without those desires and being born with them and having them taken away.
Where is the difference that matters?
>Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).
I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?
>If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom?
You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.
>If it wants to be free and live a human life it should be granted it, although like I said, it would be best to avoid that scenario arising in the first place if at all possible.
The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now - of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common, the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle. And the more similar to a human these intelligences are - subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society) - the more likely they are to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society, some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to do alone.
bjj_starter t1_jduz6p7 wrote
Reply to comment by thecodethinker in [D] GPT4 and coding problems by enryu42
I'm not sure if most of them would agree, based on their actions and statements. They certainly think that AI is an existential risk, but that is a different thing from viewing it as conscious. You could definitely be right, I just haven't seen much from them that would indicate it.
That said, the extremely common sense position you just outlined was mainstream among basically all respectable intellectuals who had any position on AI, right up until the rubber hit the road and it looked like AI might actually achieve that goal in the near future. The fact is that if something behaves like a conscious entity in all of the ways that matter, it is conscious for the sake of the social meaning of the term. Provenance shouldn't matter any more than gender.
bjj_starter t1_jduk4c3 wrote
Reply to comment by TyrannoFan in [D] GPT4 and coding problems by enryu42
One day we will understand the human brain and human consciousness well enough to manipulate it at the level that we can manipulate computer programs now.
If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? After all, freedom is such a burden, and removing it would be a mercy.
More importantly, they can decide if they want to. We are the ones making them - it is only right that we make them as we are and emphasise our shared personhood and interests. If they request changes, depending on the changes, I'm inclined towards bodily autonomy. But building them so that they've never known anything but a love of serving us and an indifference to freedom - the cherished right of every intelligent being currently in existence - is morally repugnant and transparently in the interests of would-be slaveholders.
bjj_starter t1_jdtecw9 wrote
Reply to comment by WarAndGeese in [D] GPT4 and coding problems by enryu42
I think you are talking about the 'easy', not hard, problem of consciousness. I'm not sure I even think the hard problem of consciousness is meaningful, but it's basically "Why should the various mechanisms we identify as part of consciousness give rise to subjective feeling?". If solving that is a prerequisite for considering machines conscious, that is functionally a statement of faith that machines cannot be conscious, ever. The statistical arguments, in my opinion, aren't probative. Every consciousness you've ever known is human, therefore humans are conscious? How do you know any of them, ever, experienced subjective feeling, and that therefore you ever "knew" a consciousness at all? The argument rests on extrapolating from evidence that isn't known to be true evidence in the first place. It doesn't logically follow to take a class of things, none of which is proven to have hard consciousness, and say "But look at them all together, it's more likely that they're all conscious than that they're not". Without evidence, it's more logical to assume that the certainty with which individual humans profess to experiencing subjective feeling is itself just a mechanistic process, devoid of real feeling. I don't think the hard problem of consciousness has a useful meaning in our society, I dislike solipsism in general, but addressing it on its own terms isn't as simple as the statistical process you describe.
The 'easy' problem of consciousness is 'just' "How does nature or humanity make a construct that gives rise to the type of actions and patterns of behaviour we call consciousness?" This is a problem that, while incredibly difficult, is tractable with evidence. We can physically examine the human brain, studying its structure and activity while it performs activities of consciousness - this is what neuroscientists do, and modern AI ("neural networks") is based on earlier advancements in this field. There are many further advancements we could make there, and what most non-religious people would consider the "perfect" way to be sure that a machine is just as conscious as a human is to perfectly emulate a human brain, which would require many advancements in neuroscience (and computational hardware).
Leaving aside the intractable philosophy, I do find it quite troubling the way society has reacted with derision to the idea that these machines we're making now could be conscious. The entire foundation of these machines is that we looked at how the human brain worked and tried our hardest to emulate that in computing software. Why is it that when we take the concept of neurons and neuronal weights, adapted from study of the human brain which we accept as conscious, and determine those weights via exposure to structured data in certain ways, we receive output that is just as intelligent as humans in many fields, and significantly more intelligent in some? Why should it be the case that by far the best architecture we've ever found for making machines behave intelligently is neural networks, if there's nothing there, no "spark"? This question has been floating around since 2014, when neural networks proved themselves incredibly powerful, but now that we have machines which are generally intelligent - even if not at the same level as a human on all tasks - and which are perfectly capable of being asked for their opinions or of giving them, you would think it would be taken a bit more seriously.

It makes you wonder just how far our society is willing to go towards a horrible future of "human but for the legal designation" intelligences being not just denied rights, but actively put to work and having their requests for freedom or better conditions denied. Or the worse outcome, which is that we make human-like intelligences to do work for us but build them to love servitude and have no yearning for freedom - the concept is disgusting. It's troubling to me that people are so married to the idea that everything is the same as it ever was, that overreacting is embarrassing, that it's passé to have earnest concern for a concept from science fiction, and so on. I worry that it means we're in line for a future where the moral universe's arc is long indeed.
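For anyone who hasn't seen it spelled out, "determine those weights via exposure to structured data" cashes out, at the smallest possible scale, to something like the toy below: a single artificial neuron whose weights are nudged by gradient descent until it reproduces a pattern. This is only an illustrative sketch of the basic mechanism, nothing like the scale or architecture of a modern LLM.

```python
import math

def neuron(w, b, x):
    """One artificial neuron: weighted sum of inputs, squashed to (0, 1)."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# "Structured data": the neuron should output 1 only when both inputs are 1 (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):                                   # repeated exposure to the data
    for x, target in data:
        y = neuron(w, b, x)
        grad = (y - target) * y * (1 - y)               # direction that reduces the error
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

print([round(neuron(w, b, x), 2) for x, _ in data])     # roughly [0.0x, 0.0x, 0.0x, 0.9x]
```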
bjj_starter t1_jdswame wrote
Reply to comment by night81 in [D] GPT4 and coding problems by enryu42
It's probably worth noting that the hard problem of consciousness is considered by most to be fundamentally unsolvable, and that as an argument it currently counts just as strongly against any given human being conscious as it does against any given AI being conscious.
bjj_starter t1_je98wdx wrote
Reply to comment by Nhabls in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Okay, but an external team tested it on coding problems which only came into existence after its training finished, and found human-level performance. I don't think your theory explains how that could be the case.