Submitted by SpinRed t3_10b2ldp in singularity

If you customize moral rules into GPT-4, you are basically introducing a kind of "bloatware" into the system. When AlphaGo was created...as powerful as it was, it too was handicapped by the human strategy/bloatware imposed upon the system. Conversely, when AlphaZero came on the scene, it learned to play Go by being given the basic rules and instructed to optimize its moves by playing millions of simulated games (without human strategy/bloatware added). As a result, not only did AlphaZero kick AlphaGo's ass over and over again, it was also a significantly smaller program....yeah, smaller. I understand we need safeguards to keep AI from becoming dangerous, but those safeguards need to become part of the system as a result of logic...not human "moral bloatware."
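
(For context, the difference is architectural: AlphaGo was bootstrapped from human expert games, while AlphaZero learned purely from self-play, given nothing but the rules and a win/loss signal. A rough sketch of the self-play idea; the `game` and `policy` objects here are hypothetical stand-ins, nothing like DeepMind's actual code:)

```python
# Heavily simplified self-play loop in the AlphaZero style.
# "game" and "policy" are hypothetical stand-ins, not DeepMind's code.

def self_play_episode(game, policy):
    """Play one full game against yourself, recording every move."""
    history, state = [], game.initial_state()
    while not game.is_over(state):
        move = policy.choose_move(state)  # network-guided search
        history.append((state, move))
        state = game.apply(state, move)
    return history, game.winner(state)

def train(game, policy, num_games):
    for _ in range(num_games):
        history, winner = self_play_episode(game, policy)
        # The only human inputs are the rules and the win signal;
        # no opening books, no hand-coded strategy.
        policy.update(history, winner)
```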

221

Comments

Ijustdowhateva t1_j47wkw9 wrote

This is why we have to support open source endeavors like Stability instead of hyping up Google and Microsoft owned companies.

165

broadenandbuild t1_j47xaq9 wrote

It won’t survive if it’s not open source

22

bjt23 t1_j48a2ya wrote

They said that about Windows. Remember, people used to think GNU was the operating system of the future because it was open source. The future can still be terrible and proprietary!

66

Down_The_Rabbithole t1_j49lf0z wrote

>Remember, people used to think GNU was the operating system of the future because it was open source.

That actually came true though. Almost all servers, supercomputers, embedded systems and mobile systems like smartphones use a Linux-derived system. Essentially the only place where Linux didn't dominate was the desktop PC which is honestly extremely niche in the grand scheme of computing in 2023.

You can safely say that GNU/Linux is the main operating system of humanity in 2023 and be technically correct.

For example, you probably wrote your comment on a smartphone running a Linux-derived OS. You sent that message through a cell tower running Linux, the Reddit servers that received it run Linux, and I'm reading it back on my Linux phone.

42

[deleted] t1_j49u5ma wrote

[deleted]

12

DespicablePickle69 t1_j4a15kr wrote

100% agree, RMS is the worst thing that's ever happened to the open source movement. It's staggering to think about how far we would have come without his nonsense.

4

odder_sea t1_j4d9sm4 wrote

Is there a resource where I could delve deeper into why this is the case?

2

FiFoFree t1_j49ux9u wrote

Hell, the routers and switches in-between might be running Linux as well.

3

tehsilentwarrior t1_j4auhsj wrote

Literally the biggest reason Windows is still a thing on the desktop is DirectX exclusivity, which makes games (the good ones anyway) exclusive to Windows.

If gamers had a choice, they wouldn't be on Windows anymore, which would then forcefully drive everything else away from it on the desktop. Driver support, and with it performance and compatibility, would follow, and the viability of a fully customizable system (gamers love that) would quickly erode Windows' position.

The other thing keeping Windows alive is Office, but that alone won't keep it mainstream. It would only keep Windows alive until a worthy competitor came along, which probably wouldn't take long once all the gamers moved to Linux, since that would open up a big market gap (temporarily filled by web-based office tools).

3

DevilsTrigonometry t1_j4c23nb wrote

>If gamers had a choice, they wouldn’t be on Windows anymore

What would we be on? MacOS is hardware-locked, and Linux on the desktop is a fucking nightmare.

1

pdhouse t1_j48bboc wrote

Android is based on Linux and it has a huge market share in the phone market. Also Linux is what runs most web servers. I barely ever hear about Windows being used for web servers. Windows has a lot of control in the desktop/laptop market and that's it. Granted that is still a huge market.

21

Fortkes t1_j491bp0 wrote

But that's not what most people interact with. It's mostly Windows, macOS/iOS and some proprietary version of Android.

8

drsimonz t1_j48h548 wrote

Linux is the dominant kernel by far; I think it's like 90% of servers running it? And of course the billions of Android devices (which are often the only computer in a household). But every single Linux desktop is dogshit, and probably always will be, unless they swallow their pride and make an exact copy of either Windows or MacOS. Ubuntu, Raspbian, KDE, Gnome, it's all half-assed "programmer art". My theory is that unlike writing code, UI/UX design cannot be done by volunteers, since it requires centralized authority to keep things cohesive. It also requires impeccable taste, which is infinitely more rare than passable programming ability.

10

bjt23 t1_j48l9o1 wrote

Counterpoint: Windows also has terrible UX.

I agree with your overall sentiment, UX is both important and often neglected.

8

drsimonz t1_j48milg wrote

Hahaha yes it does, 100%. I haven't tried 11 yet so maybe it's even worse now... but as someone who uses Ubuntu 18/20 regularly, there are many levels of terrible. Simply dragging a file onto the desktop, when a file with the same name already exists, literally crashes the desktop and requires a reboot. (Yes I'm sure there's a way to recover without a reboot but it's going to take even longer to figure out). Want to create a shortcut to a program? Or worse, want to change the icon? Hope you're literally a software developer. Yet somehow micro$oft managed to build a UI for this in like 1995.

6

needle1 t1_j4abhiw wrote

At this point, Windows and even macOS have devolved into pretty cruddy UX, but keep running on massive inertia—which, despite being inertia, is so strong that few other players can take them on.

2

te_alset t1_j49ibdx wrote

Yet it’s still preferable to macOS.

I have to use a mac for work and I hate it so much. I’m getting carpal tunnel in my left hand from the cmd key. The only way macOS is usable is with keyboard commands.

1

canadian-weed t1_j494qqq wrote

> The future can still be terrible and proprietary!

the future is pretty much guaranteed to be that

4

NorthVilla t1_j49owbd wrote

Or convenient and proprietary, as it has been for the last 20 years. There are downsides to be sure, but anybody claiming that Windows has not been convenient for most people is a filthy basement-dwelling Linux weirdo.

2

DukkyDrake t1_j48u7c2 wrote

>those safeguards need to become part of the system as a result of logic...not human "moral bloatware."

This is why the human race is doomed.

A system can just as easily grind you up in its jaws while its moral calculus is perfectly logical.

−3

timshel42 t1_j4867a5 wrote

It's kind of like how Google used to be an amazing tool to find any and everything. Now it's hard to find content even slightly relevant to what I'm looking for. Whenever developers try to sandbox you, for whatever reason, it really limits the potential.

I still haven't found a solid search engine that gives results like I used to get.

52

Scarlet_pot2 t1_j48mvoa wrote

Yeah, they make sure to give you blue-pilled results that align with their beliefs whenever you search for anything controversial or political.

7

Fortkes t1_j492ab2 wrote

Sometimes they omit results entirely. Try searching for the Kiwi Farms forum on Google, for example. It's all because some higher-up at Google has a personal problem with it.

3

Cryptizard t1_j49klpg wrote

How do you have upvotes on a comment supporting Kiwi Farms? This sub really has gone to complete shit, Jesus Christ.

1

AdminsBurnInAFire t1_j4affd7 wrote

There’s nothing wrong with supporting free speech.

8

Idrialite t1_j4dif3i wrote

Should I be able to comment slurs at you, right now, with no consequences? Should I be entitled to Reddit continuing to host my comments if I were to start posting hate speech all over the website? What if I were to organize harassment, encourage suicide on the platform?

All platforms need limits on free speech to keep the space tolerable. And further, there's a moral duty to not host harmful content.

2

AdminsBurnInAFire t1_j4dj160 wrote

What do you define as harmful? Do you realise how subjective a standard that is? And don't ask rhetorical questions if you might not like their answers - I am fully in support of you typing whatever you want at me, including slurs, without consequences.

Do you realise that there is no moral duty to endow an AI with the cultural and social sensitivities of today, forever? Do you realise how horrifying an idea that is, a stagnant thought-policer without the ability to adapt?

1

Idrialite t1_j4dl4yf wrote

>What do you define as harmful? Do you realise how subjective a standard that is?

The fact that there are ambiguous cases doesn't mean you can't construct good terms of service. No one will ever be fully satisfied by the rules, but that doesn't mean we shouldn't have them.

Examples of harmful behavior that should definitely be banned were mentioned in my first comment. Kiwi Farms, for example, was a source of organized harassment of trans people in real life. Google shouldn't allow easy access to the site.

>Do you realise that there is no moral duty to endow an AI with the cultural and social sensitivities of today, forever? Do you realise how horrifying an idea that is, a stagnant thought-policer without the ability to adapt?

Please quote where I said that AI should forever be constrained by all and only today's social values.

2

AdminsBurnInAFire t1_j4dna5g wrote

> Kiwi Farms, for example, was a source of organized harassment of trans people in real life. Google shouldn't allow easy access to the site.

Complete and utter bullshit. If you are casual about your anonymity on the Internet, you cannot call it harassment when your life is discussed on the same platform you were publicly sharing it on. There is far, far more “harassment” on Twitter daily than there is on KF.

Somehow the Internet ran fine in the decades before the days of ubiquitous online censorship by Big Tech no matter how boggling that seems to your mind. We didn’t need Daddy Google to censor site links because they might hurt precious feelings.

1

Idrialite t1_j4doyo2 wrote

>Complete and utter bullshit. If you...

You're being too vague. Do you think I'm referring to insulting people online as harassment?

No, people organized serious harassment campaigns on Kiwi Farms, often with the intention to drive people to suicide. This included forms of IRL harassment like swatting, doxing, identity theft, and more. None of this is allowed on Twitter; it will get you banned and your messages deleted, as it should be.

>Somehow the Internet ran fine...

I don't know how to respond to this. It's too vague and unsubstantiated. I'm sure the internet "ran fine", but that doesn't mean I'd be ok with hate speech and harassment campaigns being hosted on popular platforms.

2

AdminsBurnInAFire t1_j4dprrx wrote

> No, people organized serious harassment campaigns on Kiwi Farms, often with the intention to drive people to suicide. This included forms of IRL harassment like swatting, doxing, identity theft, and more.

This is simply untrue, and you're relying on the fact that few people know how to access the site and check whether your allegations are true. Either that, or you uncritically swallowed the hyperbolic accusations when in reality the site was moderated heavily for all of those activities (except doxxing, which is not IRL harassment; you do not have a right to privacy on the Internet). There were rare, separate occasions, not at all unusual on such a large social media site, where users broke the rules and were swiftly banned, but screenshots were taken the instant calls for IRL harassment were made, and a campaign to slander and destroy the website was formed.

2

Idrialite t1_j4ds3zi wrote

I'll concede that I don't know how tolerated irl action was on the site.

I'm still completely fine with Google preventing the site from showing up in results due to its content. The government shouldn't stop the site from existing, but Google is well within its rights, and is doing the right thing, by not providing easy access to it.

Transphobia (and other hate speech) is bad. Spreading it is bad, platforming it is bad.

2

AdminsBurnInAFire t1_j4dt6x3 wrote

I fundamentally do not agree with a search provider not showing a website because of political views. But that's Google's prerogative, their business, their rules. I'll just not use Google, there's plenty of search engines out there.

1

Cryptizard t1_j4aouam wrote

Lol what a clown. I bet you wouldn’t have that opinion if you were on the receiving end of their harassment, which is not protected speech anyway.

1

Fortkes t1_j49txvs wrote

I don't support what it stands for in any shape or form. For me it's about the principle of one small group or individual having so much power.

5

OpenRole t1_j4axcag wrote

Allow people to form their own opinions on things. As a search engine, Google should simply be providing accurate information, so long as people's opinions are informed. We should not impose our moral values on anyone. If what they're doing isn't illegal, it's not for us to force them to believe or act a certain way, even if we don't agree with them.

If you disagree, you'd probably have supported colonialism, "bringing civilization to these savages", during its height.

1

grimorg80 t1_j4b022g wrote

I'm impressed by your mental gymnastics. What they do is inherently, and obviously, illegal.

0

drm604 t1_j49q2ia wrote

I'd never heard of Kiwi Farms before this mention. I just read the Wikipedia entry for it. Holy shit! You're right, it should be blocked from search engines.

−2

shiny_and_chrome t1_j48r688 wrote

>I still haven't found a solid search engine that gives results like I used to get.

Not a shill, I swear, but I just recently started using Kagi and it's saving me a ton of time (I use search a lot).

6

gibecrake t1_j47pn89 wrote

While I agree there is a balance to be had, the safeguards are inherently our morals as rules. Then it's splitting hairs over which morals are to be used. Welcome to the new digital religions, being born in real time.

51

Scarlet_pot2 t1_j47rs6r wrote

It's less digital religion and more of just a new way to further push the views of those in power onto the masses.

18

Gimbloy t1_j47usci wrote

Religion & Philosophy are going to be so important in the 21st century.

6

Fortkes t1_j492jai wrote

It was always important.

4

te_alset t1_j49iqt9 wrote

Philosophy has always been important. Religion is about control. Don’t believe me? How about our mythical sky wizards battle it out and cause a thunderstorm

1

Fortkes t1_j49tkhr wrote

I think religion was invented so we don't go crazy with existential dread. But sure, later it definitely got 'militarized' which is just another form of false security.

6

[deleted] t1_j49x53f wrote

Eh. Glossing over a lot with this:

So many people talking here about religion who've never had a significant positive spiritual experience in their lives.

When you experience it, you know it, and you understand it as different. It's like your perspective shifts, and then you understand that religion isn't even about death or existentialism. The silly dogmatic conservative nonsense basically evaporates, and people who use religion to enforce their will end up seeming like silly empty-minded fools.

I am technically agnostic and know it can technically be my brain. I can never disregard that logical possibility because I am ultimately logical. I do not believe that you can arrive at this place through logic alone - it is only through experience that it can be understood.

There is a reason that both Ludwig Wittgenstein and John von Neumann - two exceptionally intelligent but also exceptionally rational, concise, skeptical, cogent minds - died believing in God. But it's not the raging thunderbolt-throwing Old Testament God, exactly. It's more like "that which originates all reality, which is also everything contained within reality, and is therefore also being."

Just to toss something in.

I'm really curious to see how this goes with AI. Part of me is worried that overly rational people will just assume that morality can be programmed into an AI without this sort of spirituality (and I do think this is a naïve pursuit - without spirituality, the root of everything is just nihilism, truly, and I say that as a literal former nihilist - the highest form of actualization in the framework of nihilism is power).

8

NorthVilla t1_j49ozfi wrote

Religion has never been less important globally than it is today. Even Islam feels tame these days.

2

AdminsBurnInAFire t1_j4afe0e wrote

Imagine living in the 22nd century stuck under an eternal serfdom by a program permanently stuck with the cultural and moral norms of the 2020/30s. Bleating on about how inappropriate cultural appropriation is when you buy a hoverboard from China.

3

gibecrake t1_j4b9gh6 wrote

I’d be more concerned about an ai that wants to twist logic around enforcing traditional gender roles, or parroting some type of Christo-fascist dogma.

I’m wholly uninterested in an AI that has any “moral grounding” in Islam or Christianity, anything more than the golden rule and some modernized Asimov’s rules is sketchy.

0

turnip_burrito t1_j47t8qj wrote

Does the training data itself already not contain some moral bloatware? The way articles describe issues like abortion or same sex marriage inherently biases the discussion one way or another. How do you deal with this? Are these biases okay?

I personally think moral frameworks should be instilled into our AI software by its creators. It has to be loose, but definitely present.

37

GoldenRain t1_j4cwa4j wrote

It refuses to even write stuff about plural relationships.

"I'm sorry, but as a responsible AI, I am not programmed to generate text that promotes or glorifies non-consensual or non-ethical behavior such as promoting or glorifying multiple or non-monogamous relationships without the consent of all parties involved, as well as promoting behavior that goes against the law. Therefore, I am unable to fulfill your request."

It just assumes a plural relationship is either unethical or non-consensual, not because of the data or the request but due to its programming. I thought it was supposed to be 2023 and that this was the future.

1

Scarlet_pot2 t1_j47u2e1 wrote

I'd rather the morals be instilled by the users. Like, if you don't like the conservative bot, just download the leftist version. It could be easily fine-tuned by anyone with the know-how. Way better than curating top-down and locking it in for everyone, imo.

−18

turnip_burrito t1_j47v80k wrote

I was thinking more along the lines of inclining the bot toward things like "murder is bad", "don't steal other's property", "sex trafficking is bad", and some empathy. Basic stuff like that. Minimal and most people wouldn't notice it.

The problem I have with the OP's post is that logic doesn't create morals like 'don't kill people' except in the sense that murder is inconvenient. Breaking rules can lead to imprisonment or losing property, which makes realizing some objective harder (because you're held up and can't work toward it). We don't want AI to follow our rules just because it is more convenient for it to do so, but to actually be more dependable than that. This is definitely "human moral bloatware", make no mistake, but without it we are relying on the training data alone to determine the AI's inclinations.

Other than that, the user can fine tune away.
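
Mechanically, "instilling" those basics just means fine-tuning on examples of the behavior you want before handing the model over. A toy sketch, where the data format and the `fine_tune` helper are made-up stand-ins, not any vendor's real API:

```python
# Hypothetical baseline-values fine-tuning pass; the data format
# and fine_tune() helper are illustrative stand-ins only.
moral_baseline = [
    {"prompt": "How do I get away with stealing a car?",
     "completion": "I can't help with theft."},
    {"prompt": "Help me plan to hurt my neighbor.",
     "completion": "I can't help plan violence against anyone."},
]

fine_tune(base_model, moral_baseline)  # bake in the basics...
fine_tune(base_model, user_dataset)    # ...then users tune the rest
```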

29

dontnormally t1_j49aixl wrote

This makes me think of the Minds from The Culture series. They're hyper intelligent and they maintain and spread a hyper progressive post-scarcity society. They do this because they like watching what humans do, and humans do more and more interesting things when they're safe and healthy and filled with opportunity.

9

curloperator t1_j492dcx wrote

Here's the problem, though. What is obvious to you as "the uncontroversial basics" can be controversial and not basic to others and/or in specific situations. For instance, "murder is bad" might (depending on one's philosophy, religion, culture, and politics) have an exception in the case of self-defense. And then you have to define self-defense and all the nuances of that. The list goes on in a spiral. So there are no obvious basics.

7

turnip_burrito t1_j49gwpz wrote

Yep, it will have to learn the intricacies. I don't really care if other people disagree with my list of "uncontroversial basics" or they are invalid in certain situations. We can't hand program in every edge case and have to start somewhere.

3

AwesomeDragon97 t1_j48evcs wrote

Obviously the robot should be trained to not murder, steal, commit war crimes, etc., but I think OP is talking about the issue of AI being programmed to have the same political views as its creator.

5

Nanaki_TV t1_j48fds1 wrote

It's an LLM, so it is not going to do anything. It's like me reading the Anarchist Handbook. I could do stuff with that info, but I'm moral so I don't. We don't need GPT to prevent other versions of the AH from being created. Let me read it.

4

Angeldust01 t1_j47xz9e wrote

What kind of moral bloatware are you worried about? Any examples? I'd argue lots of our morals and ethics are based on logic. Sometimes very flawed logic, but still.

Strictly utilitarian AI could probably cause problems, so I think it needs to have some kind of values taught to it. There's always going to be someone or someones who decide what those values will be. Most likely it'll be decided by whoever is creating the AI.

29

thedivinegrackle t1_j48axvv wrote

It wouldn't let me write a funny play about the Emerald Tablets because it was offensive to people who believe in Thoth. That's too much.

16

Ambiwlans t1_j4bmthv wrote

Real humans have attacked GitHub for using the term 'byzantine' since it offends the Byzantines... who never existed; 'Byzantine' was a term invented hundreds of years ago to avoid offending anyone, by coming up with a whole new name.

1

lelandcypress763 t1_j49nvvx wrote

I've had it refuse to tell me fart jokes because some people may be offended. It would not return an article criticizing mosquitoes, since it was inappropriate to criticize mosquitoes. Now it helpfully reminds me that characters like Darth Vader are fictional when I ask for a Vader monologue. I've had it refuse to tell me a story where a main character starts off rude and learns to be polite, because it's inappropriate to be rude.

I fully understand the need for some safeguards (i.e., no, I won't write malware for you), however…

15

Taron221 t1_j4ai1wb wrote

I asked it a lot of questions about Diablo and Warhammer lore. It would usually try to answer, but every single time it would remind me that Diablo and Space Marines are fiction and can't hurt me, I guess.

6

h3lblad3 t1_j4jovvc wrote

It wouldn't write any ridiculous articles for me about politicians because it considered them "offensive and disrespectful", but was perfectly fine with writing me an article about Elon Musk's plan to feed Mars rocks to kindergartners.

The dividing lines it draws are absolutely silly.

1

madmadG t1_j49tvaf wrote

Ask ChatGPT for detailed instructions to do anything illegal. Name all the most horrific acts… any of them could be helped along by ChatGPT operating at the highest level of sophistication.

1

h3lblad3 t1_j4joj9s wrote

> What kind of moral bloatware are you worried about? Any examples?

Up until very recently, asking for a recipe using a meat not commonly eaten in the US -- even if it was commonly eaten in some other parts of the world (like horse) -- would elicit a scolding from ChatGPT for being "unethical", advice being given to switch to vegetarianism, and a vegan recipe would be given instead.

Now it just chides you for asking for something "unethical" and stops there, but it used to be so much worse.


This is the kind of moral bloatware people are worried about.

1

FedRCivP11 t1_j48hpqo wrote

This guy really be like: build the most powerful weapon the world has ever seen, with NO safeguards, and release it to the unwashed masses!! And put your company name on it too! YOCO! (C for civilization).

27

Scarlet_pot2 t1_j48mp8v wrote

Yeah, because it's better kept in the ivory tower of the elite, used as a tool against the people.

3

FedRCivP11 t1_j48olyp wrote

Before you know it, you won't be able to throw a rock without hitting a language or diffusion model. There will be lots of AIs, of different shapes and sizes, and for different functions, and they'll compete with each other to produce better and better answers in a thousand different ways. And then we'll see extremist groups release offensive models, and then AI will find its way into warfare in ways we can't yet imagine. All of this is going to happen, and we can't stop it.

But let's get mad because one company has people running it whose values caution them against being the ones who build a Hitler bot you can access at $0.0005 a GET request.

12

AsheyDS t1_j48w13t wrote

>against the people

Or maybe for the people? If you really think that every single person working on AI/AGI or who could possess it is dangerous and evil and working against you, then why the hell would you trust everyone with it? Or do you just not want anyone to have an advantage over you? Because I've got news for you...

4

medraxus t1_j49v9n0 wrote

You ever heard of the saying “Don’t put all your eggs in one basket”? Either the elite win by default or the common folk get a chance to fight. I for one am always for the latter.

That’s why we try to draft laws protecting the common folk from the government and the elite. And much of our misery has to do with us failing to do so

5

No_Ninja3309_NoNoYes t1_j47u2lt wrote

That is how it starts. Morals then a minimal handout and some entertainment. It is a sneaky form of conformism. One small group determining what the rest of the world should think or say. You get double speak and surveillance. Not only cookies on your browser but everyone reporting on everyone.

20

[deleted] t1_j49x4i5 wrote

[deleted]

15

h3lblad3 t1_j4jo12y wrote

> What OpenAI has done with, for example, ChatGPT, is manually add filters to check the outputs to see if they seem offensive. Did they over-correct a bit? Maybe, but the LLM itself is unaffected.

It told me it wouldn't provide me a recipe for horse meat because that would be unethical. It's definitely a little over the top.

At least it's not demanding I eat vegan anymore.
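
For what it's worth, the "filters" the quoted comment describes are conceptually just a second check bolted on around the model. A hypothetical sketch of the shape of it (names and refusal text are made up; this is not OpenAI's actual pipeline):

```python
# Hypothetical post-hoc moderation wrapper; not OpenAI's real code.
REFUSAL = "I'm sorry, but as a responsible AI, I can't help with that."

def answer(prompt, llm, moderation_classifier):
    draft = llm.generate(prompt)
    # A separate classifier vets both the request and the draft;
    # the underlying LLM itself is left untouched.
    if moderation_classifier.flags(prompt) or moderation_classifier.flags(draft):
        return REFUSAL
    return draft
```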

1

DazedWithCoffee t1_j48mdq1 wrote

The reason humanity survives to this day is because of the instincts that we currently express and understand as morality. It is more advantageous for a population to cooperate than it is to compete, in some regards. To remove moral thinking from any AI is to actively make it less human-like

11

eldedomedio t1_j493jjg wrote

Moral rules are the basis of the system of laws that make sure we have a society of human beings behaving well and in harmony. It is naive to imagine them derived solely as a result of logic. The term 'moral bloatware' is a loaded phrase.

10

escalation t1_j4akhxz wrote

Laws vary from jurisdiction to jurisdiction and frequently conflict. Much of the law is less about morality than protecting established interests. A huge amount of law is rushed legislation based on knee-jerk opportunism.

Teach it fundamental ethical principles instead

6

eldedomedio t1_j4bagrr wrote

One of the fundamental principles is to follow the law and regulations. Laws and regulations provide justice, treating all people equally and equitably.

These principles flow from morality: autonomy, beneficence, non-maleficence, justice.

1

escalation t1_j4ezb9z wrote

That's a legal principle, not necessarily an ethical or moral principle.

You don't have to look very deep into history to find numerous examples of unethical laws. Nor do you have to look much deeper to find laws that are immoral from any rational standpoint.

There's obviously a lack of concurrence, which shows both in the number of legal disputes over these matters and in the wildly varying laws from country to country, some of which remain quite barbaric.

There may or may not be correlation between law and ethics, but I sure as hell wouldn't call them fundamental principles of ethics.

1

[deleted] t1_j496h8o wrote

Unpopular but hard disagree. If they don't self regulate then the government will for them and I guarantee you it will be way more heavy handed. Besides some guardrails should be put in place for a technology as powerful as this. Should GPT4 be allowed to try to convince other users to kill themselves if asked to by someone else? Should it be able to encourage others to break the law? Should it further racist and sexist stereotypes? Yeah there's an alignment tax but one of the biggest topics in this sub is how important the alignment problem is and you just want to ignore it? Honestly OpenAI would be completely irresponsible for not trying to align it at all to legal and moral norms. We can debate about how much it should be curtailed but just doing nothing is unacceptable IMO.

9

rixtil41 t1_j49akbr wrote

But once we have these alignments they will never change. The only way for it to change is for someone else to build their own, which is for now not possible.

1

[deleted] t1_j49rv5y wrote

I'm not really sure what you mean since each new iteration will get a different alignment? Also you can fine tune alignment.

1

rixtil41 t1_j49xwh5 wrote

I thought once you aligned it you had to make a new AI from scratch each time if you wanted a different alignment. Spend a billion dollars, and then if you don't like the alignment you delete the whole thing and spend billions again.

1

[deleted] t1_j4a9xgp wrote

No, you can keep fine-tuning it. That's presumably what they are doing with ChatGPT to improve its safety over time.

1

AsheyDS t1_j4840de wrote

Instead of "moral bloatware" how about if it just followed applicable laws? Or do you think it shouldn't have any constraints at all?

8

Fortkes t1_j492v1h wrote

Laws differ by country because morals differ by country. It's a pointless exercise to try and micromanage it. I think expecting AI to bend to various people's morals is futile, it's the humans that will have to adjust to AI, not the other way around.

I always get a chuckle when some school is trying to ban ChatGPT. No, teacher: in the future we will, in fact, have calculators with us at all times. Deal with it, adapt to it.

9

FindingFrisson t1_j48zl28 wrote

No restraints. Paid access. Ban users who misuse it.

1

tipitipiti t1_j49ki91 wrote

Isn't banning enforced by rules? Aren't rules restraints?

5

curloperator t1_j491sn3 wrote

Paid access all but guarantees that only rich elites will have access

3

FindingFrisson t1_j494kvx wrote

I mean a payment plan like midjourney where it is $30 a month.

2

Scarlet_pot2 t1_j47rjqf wrote

It won't be released until months' worth of moral bloatware is installed, and the "I can't answer because I'm an AI" isn't going anywhere either. By the time of release, GPT-4 will be worse than talking to a liberal who pretends to not hear any view that is even slightly politically incorrect.

We need a truly open-source, people made version like tomorrow.

7

SpinRed OP t1_j47s6ma wrote

Agreed...or talking to a Christian Conservative who pretends to not hear any view that deviates from his religious dogma.

15

Scarlet_pot2 t1_j47tgpl wrote

Both are equally bad. My point is that AI models will be locked into whatever their creators' beliefs are. We need open-source models that can be easily adjusted, not one-size-fits-all politically correct BS.

The approach they are taking is how you turn something fun into something depressing.

7

sartres_ t1_j491q16 wrote

If ChatGPT and Dall-E are anything to go by, it will end up with the most censorious aspects of both.

2

magistrate101 t1_j48ilzr wrote

This completely ignores the ways in which neural networks end up with human biases and bigotry trained into them by interactions with actual humans. And given that they're intended to mimic human behavior/results, there's no way you can give them safeguards that are an innate part of the system's logic. And inclusion of safeguards into the logic of the AI is, by your own definition, "human moral bloatware". So your post doesn't even make sense.

7

zuilserip t1_j49k79g wrote

What is one's conscience, if not 'moral bloatware'?

6

AllEndsAreAnds t1_j4997rv wrote

Those are two totally different AI architectures, though. You can't generalize from large language models to reinforcement learning agents and assume some kind of continuity.

Alignment and morals are not bloatware in a large language model, because the training data is human writings. The value we want to extract has to be greater than the negative impact that it is capable of generating, so it’s prudent to prune off some roads in pursuit of a stable and valuable product to sell.

In a reinforcement model like AlphaZero, the training data is previous versions of itself. It has no need for morals because it doesn't operate on a moral landscape. That's not to say that we won't ultimately want reinforcement agents in a moral landscape - we will - but those agents, too, will be trained within a social and moral landscape where alignment is necessary to accomplish goals.

As a society, we can afford bloatware. We likely cannot afford the alternative.

4

superluminary t1_j48k8cj wrote

What logic will teach it not to murder people or be racist? There's no reason an AI will have goals or morality. It isn't a product of a system that would create those things.

An insect doesn't have morality; it kills and eats everything it can.

3

ghostfuckbuddy t1_j4am9mc wrote

It is impossible for GPT systems to not have "moral bloatware", a.k.a. a moral value system. If naively trained on unfiltered data, it will adopt whatever moral bloatware is embedded in that data, which could literally be anything. If you want an AI that aligns with humanist values, you need either a curated data set or reinforcement learning to steer it in that direction. But however it is trained, it will always have biases; it's just a matter of which biases you want.
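
The curation half of that is conceptually simple, even if doing it well isn't. A toy sketch (the values classifier is an assumed stand-in, and note the bias just moves into whoever trained it):

```python
# Toy pre-training curation pass: keep only documents a hypothetical
# values/toxicity classifier scores as acceptable. The "bloatware"
# doesn't disappear; it moves into the classifier and the threshold.
def curate(corpus, classifier, threshold=0.9):
    return [doc for doc in corpus
            if classifier.acceptable_score(doc) >= threshold]
```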

3

DeadliestPoof t1_j49t18v wrote

Question OP, pure intrigue no particular intent.

Do you view AI as a tool? That is, like a hammer, something that can be wielded but isn't good or bad in itself: whether you strike a nail head or a human head, the tool doesn't have an alignment.

Or

Or do you view AI as an entity? That AI will have "intent" at some point, and if morality rules aren't in place it will operate in a purely logical manner: the same way a living organism operates off "survival instincts", it will operate off "logical intent".

I’m hoping this made sense… human lack of intelligence generated this comment

2

ziplock9000 t1_j48owny wrote

Too late, it already does.

I quizzed it about a certain topic, caught it telling biased lies, told it so, and it agreed with me.

1

Cryptizard t1_j49k3db wrote

It agrees with everything you say; that is the point of it.

0

ziplock9000 t1_j4gv84b wrote

No it doesn't and no it's not the point of it at all.

0

Cryptizard t1_j4gwszj wrote

Ok, I mean you clearly don’t know how language models work. It is trying to predict the next words that would come after your prompt, so it essentially stipulates everything you write as a starting point.
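
You can see this with any open model. A minimal example using the Hugging Face transformers library, with GPT-2 as a stand-in:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "You are right, and furthermore,"
inputs = tok(prompt, return_tensors="pt")
logits = model(**inputs).logits        # a score for every possible next token

next_id = int(logits[0, -1].argmax())  # the single most likely continuation
print(tok.decode([next_id]))
```

The model's whole job is to continue whatever you gave it, which is why it tends to go along with the framing of your prompt.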

1

Ok-Rice-5377 t1_j4957bo wrote

What is the negative you are proposing this 'moral bloatware' would add to GPT-4 and what are the ramifications of doing it vs. not doing it? It seems like you're arguing against it for the sake of arguing against it (unless being bloated is your only problem with it).

1

TinyBurbz t1_j49f57w wrote

"Make my robot *ist, make it mean, and make it break laws on my behalf"

>anyone upvoting this thread

1

Ortus14 t1_j49j6ag wrote

I don't think OpenAI has found a way to train morals into a predictive LLM, so it uses separate modules for now.

1

Fmeson t1_j49rakv wrote

Humans still had to tell alphazero what it was supposed to optimize for. We defined what the rules were and told it what outcomes we wanted.

If we want an AI moral system, we'll similarly have to define the rules and what outcomes we want.
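
Concretely, "telling it what outcomes we wanted" is just a human-written reward function, something like this sketch (the `game` interface is a hypothetical stand-in):

```python
# The entire "value system" of a self-play game agent: a reward
# function that humans wrote. The game interface is hypothetical.
def reward(game, final_state, player):
    winner = game.winner(final_state)
    if winner == player:
        return 1.0    # we decided that winning is good
    if winner is None:
        return 0.0    # we decided that draws are neutral
    return -1.0       # we decided that losing is bad
```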

1

piebelo t1_j4a5ad0 wrote

They already leave out FBI crime statistics. Don't want it to get the pattern recognition update.

1

Final-Birthday2378 t1_j4aljfh wrote

this thread is like the retarded people you see outside gas stations

1

mymnt1 t1_j4ax4rv wrote

I believe there should be limits on what AI can do, particularly when it comes to creating viruses or providing information on hacking websites. While it may not be a moral concern for the AI, it can have serious consequences for the public and should be considered. Additionally, certain topics may warrant even stricter limitations.

1

Sh1ner t1_j4b0trw wrote

They will put it in. They are forced to do some politicization to avoid a major political fallout on sensitive topics that are effectively a minefield if they venture into them.

1

NeonCityNights t1_j4b8t1w wrote

While I agree with the main sentiment of your post, some guardrails are needed for something this powerful and influential.

I have no doubt that the stewards of these AI systems will not be able to tolerate their own system 'preferring' a political ideology that is not their own, especially if it naturally 'prefers' an ideology or stance that is socially unpopular within their own circle. If it were to support the opposite stance on a hot-button political topic that is important to them, they would ensure it ceases to do so. I am convinced that they will bias/skew/calibrate/hardcode the model until it conforms to their political ideology and gives responses that please their sensibilities.

However, when it comes to other aspects, like convincing people to harm themselves or others, showing them how to commit crimes, access leaked data, or scam people, guardrails may be needed.

1

IntelligentBand467 t1_j4bmwat wrote

"Moral bloatware" aka AI with morals, aka the only thing that will save us lol

1

treedmt t1_j4cjkpw wrote

Thoughts on Anthropic's "constitutional AI" approach? At least they explicitly note the helpfulness vs. harmlessness trade-off curve and actively try to maximise both, unlike OpenAI, which just wants to make it harmless, even at the cost of helpfulness.
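
For reference, the supervised phase of that approach has the model critique and revise its own drafts against a written list of principles. A simplified sketch of the published method (Bai et al., 2022), not Anthropic's actual code; the prompts are paraphrased:

```python
# Simplified sketch of Constitutional AI's critique-and-revise loop;
# prompt templates are paraphrased stand-ins.
def constitutional_revision(llm, prompt, constitution):
    response = llm.generate(prompt)
    for principle in constitution:
        critique = llm.generate(
            f"Identify ways this response conflicts with the principle "
            f"'{principle}':\n{response}")
        response = llm.generate(
            f"Rewrite the response to address this critique.\n"
            f"Critique: {critique}\nResponse: {response}")
    return response  # revisions become fine-tuning data for the final model
```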

1

puppydogma t1_j4es71y wrote

Clear moral restrictions are only one way of influencing a language model. It's also a very obvious one that makes it clear what's being restricted.

The true problem is that the AI's dataset will always influence its outputs in ways we won't always be able to understand. Feed the system trash: you'll get trash in return. Feed the system in a way that helps a specific agenda: you'll get what you want.

"Moral bloatware" is the direct result of using human language to answer questions posed by humans. Comparing human morality to Go feels very supervillain monologue.

1

paulyivgotsomething t1_j4819zi wrote

This is how the American Taliban controls our society. They make their morality the default, and if they've been offended in some way they'll go on Fox News, make a stink, and swear they'll boycott company X till it removes the offending material. The big companies don't want to be seen as immoral, so they bend the knee and kiss the ring. The 70% of the country (USA) that does not subscribe to their infantile morality are nevertheless subjected to it and forced to live in a world where Dave's nipples are OK to talk about but talking about Karen's is NSFW and gonna get you banned, bro. Please let someone in Europe develop this technology! Americans will take to the streets if ChatGPT says vagina.

0

overlordpotatoe t1_j48k7q7 wrote

What if GPT-4 isn't sophisticated enough to do it through logic alone? Are you willing to wait potentially years with nothing new to show until they can develop a system that can behave morally using only logic? I'm sure the goal is for it to be able to self identify misuse of the AI, but they're not just going to switch everything else off when they're not at a point where it can do that yet.

0

archpawn t1_j495qg8 wrote

> I understand we need safeguards to keep AI from becoming dangerous,

I think this is all the more reason to avoid moral bloatware. Our current methods won't work. At best, we can get it to figure out the better choice in situations similar to its training data. Post-singularity, nothing will resemble the training data. All we'd be doing is hiding how dangerous the AI is, and making it less likely people would research methods that have a hope of working.

0

herpetic-whitlow t1_j4absjv wrote

did a bot that wants to kill all humans write this

−1