
AsheyDS t1_j0lqxam wrote

>And worse, when you talk about ethics issues, people seem to shrug their shoulders and call it inevitable progress and "what can you do?" as if AI can't be developed in an ethical way.

Progress will continue regardless, but don't assume that because things are moving along steadily, researchers (and others) aren't concerned with ethics, privacy, and other issues. The general public may seem apathetic to you (I'm assuming this post is aimed at them) because they're not in control of development, but people do need to discuss these things, because they DO have the power to vote on them in the future.

13

genericrich t1_j0ma7wr wrote

Researchers don't sell products in the marketplace. Amoral corporations, the living embodiment of the system we've designed to centralize and amass capital, are the ones who sell products. The researchers may "design" for ethics, but the corporations put products into production, and if they have to make a tradeoff between profit and ethics, I'll give you one guess which way they're going to jump.

We've seen it now with Midjourney, etc., where they just snarfed up a bunch of copyrighted images to power their lookalike/derivative-works machine (which is very cool tech, etc.), but which abuses copyright at scale. They retcon their liability by saying you can't copyright these images either, but they know full well people are going to do just that for book covers, printed works, etc. It can't be stopped. By the time the glacial courts get around to addressing it, the world will have changed, and at best there will be some minor changes to the law which won't help anyone whose rights have already been violated.

Not saying we shouldn't try, but the deck is stacked against us by capitalism. Corporations are never going to be ethical until they are forced to be, and that takes far too long to enact a meaningful course correction.

6

PoliteThaiBeep t1_j0qhn66 wrote

Capitalism is not a threat but a tool. Dictatorship, on the other hand, is a threat, possibly greater than AI itself. Especially dictatorships we are powerless to stop, e.g. a nuclear-armed Russia and China, a billion-person economic powerhouse.

If something horrible happens in a democracy, people usually rise up and protest, and different powers collide and fight for change. Often we are successful, and often we are not. We kind of need to be better at this, but fighting poorly is still better than not fighting at all.

That is the reality of any dictatorship. Everything is in the hands of a very few (read "Putin's People" by Catherine Belton) and the people have zero power.

Huge country-wide protests can have a limited effect: tyrants will pretend to cave, but as soon as people go home, they quietly arrest or murder those who pose the most threat and tighten everything again, even worse than before.

And there's a vicious cycle going on there that makes any positive change extremely unlikely.

Further, as soon as something horrible happens in a democracy, the whole world knows about it from countless journalists and investigators who are (usually) well protected by law.

If something horrible happens in a dictatorship, we almost never get the chance to know about it, and in the few cases where we do, it will be forever denied that it happened, and all the journalists who were working on it will have disappeared from the face of the earth.

This creates an incredibly distorted picture in which dictatorships look nice and shiny, as if nothing bad ever happened there, while we're all intensely focused on US problems, all of which combined don't even scratch the surface of the problems in dictatorships.

And even if you realize this, your worldview is still warped, because most of what you read and care about are bad events happening in democracies, and subconsciously you'll feel that's where most of the problems are.

All the while failing to realize how we're being increasingly infiltrated by pro-dictatorship forces, which in turn have caused a 16-year-in-a-row drop in US (and worldwide) democracy scores. See freedomhouse.org. Look at the map. Look at the trends.

Do you seriously believe the 'evil corporations' future that's been endlessly portrayed in pop culture, movies, and video games is a threat? Look again.

1

WarImportant9685 t1_j0m1xnh wrote

I do hope that 1) AI alignment theory progresses faster than AI development, and 2) whatever alignment theory is discovered is not about aligning to the will of one person, but about aligning to humanity in general.

1

AsheyDS t1_j0mabb5 wrote

>whatever alignment theory is discovered is not about aligning to the will of one person, but about aligning to humanity in general

I don't expect that will be possible, aside from in a very shallow way where it adheres to very basic rules that the majority of people can agree on, plus any applicable laws. Otherwise, for AI to be aligned with humanity, humanity would have to be aligned with itself. If you want to be optimistic, I would say that one day, perhaps post-scarcity, the majority of us might start working together to benefit everyone, and then we can build up to alignment with at least the majority of humanity. But I'm sure it will take AI to get there, so in the short term at least, I think we'll have to rely on both rules and laws, as well as the user it serves, to ensure it behaves in an ethical and lawful manner. Which means the onus would ultimately be on us to align it by aligning ourselves first.

1

WarImportant9685 t1_j0mis5x wrote

Yeah, IF we succeed in aligning AI to a singular entity, whether that's a corporation or a single human, the question becomes the age-old question of humankind: greed or altruism?

What will the entity that gains the power first do? I'm more inclined to think that we're just incapable of knowing the answer yet, as it depends too much on who attains that power first.

Unaligned AGI is a whole different beast tho.

1

AsheyDS t1_j0mn7la wrote

>the question becomes the age-old question of humankind: greed or altruism?

It doesn't have to be either/or. I think at least some tech companies have good intentions driving them, but are still susceptible to greed. But you're right, we'll have to see who releases what down the line. I don't believe it will be just one company/person/organization though; I think we'll see multiple successes with AGI, possibly within a short timespan. Whoever is first will certainly have an advantage and a lot of influence, but others will close the gap, and I refuse to believe that all paths end in greed and destruction.

2

OldWorldRevival OP t1_j0mcbzs wrote

One of the potential scenarios I envision is that the only good solution we end up discovering for the control problem is to tie control of the AI to one person, with the AI constantly modeling that person's thoughts through a variety of different methods, some that exist (such as language) and others that do not.

Then it continuously runs scenarios with this person.

The reason it's one person rather than two is that two makes the complexity and nuances of the problem a lot more difficult to handle from a human perspective.

The key to understanding AI is to understand that its abilities are lopsided. It's very fast at certain things and cannot do other things, and it is not organized mentally the way we are (and making it so would be dangerous, because we're dangerous).

0

AsheyDS t1_j0mm9q5 wrote

I don't believe the control problem is much of a problem, depending on how the system is built. Direct modification of memory, seamless sandboxing, soft influence and hard behavior modification, and other methods should suffice. However, I consider alignment to be a different problem, relating more to autonomy.

Aligning to humanity means creating a generic, universally accepted model of ethics, behavior, etc. But aligning to a user means it only needs to adhere to the laws of the land and whatever the user would 'typically' decide in an ethical situation. So an AGI (or whatever autonomous system we're concerned about here) would need to learn the user and their ethical preferences to aid in decision-making when the user isn't there, or if it's otherwise unable to ask for clarification on an issue that arises.

If AGI were presented to everyone as a service they can access remotely, then I would assume alignment concerns would be minimal as long as it's carrying out work that doesn't directly impact others. For an autonomous car or robot that could have an impact on other people without user input, that's when it should consider how it's aligned with the user or owner, and how the user would want it to behave in an ethical dilemma. So yes, it should probably run imaginative scenarios much like people do, to be prepared, and to solidify the ethical stances it's been imbued with by the user.
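As a rough illustration of the fallback order I have in mind, here's a toy sketch only; names like `UserAlignedAgent` and `Verdict` are hypothetical placeholders, not any real system:

```python
# Toy sketch: fall back from law, to learned user preferences, to asking the user.
# All names here are made up for illustration.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REFUSE = "refuse"
    ASK_USER = "ask user"

class UserAlignedAgent:
    def __init__(self, user_preferences: dict[str, Verdict]):
        # Learned from past decisions and scenarios run with the user.
        self.user_preferences = user_preferences

    def decide(self, action: str, legal: bool, user_reachable: bool) -> Verdict:
        if not legal:
            return Verdict.REFUSE                  # laws of the land come first
        if action in self.user_preferences:
            return self.user_preferences[action]   # what the user would 'typically' decide
        if user_reachable:
            return Verdict.ASK_USER                # ask for clarification
        return Verdict.REFUSE                      # default to caution when unsure

agent = UserAlignedAgent({"share anonymized telemetry": Verdict.ALLOW})
print(agent.decide("share anonymized telemetry", legal=True, user_reachable=False))  # Verdict.ALLOW
print(agent.decide("share raw personal data", legal=True, user_reachable=False))     # Verdict.REFUSE
```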

3

WarImportant9685 t1_j0muh8q wrote

I wish I shared your optimism. But from the research I've read, even the control problem seems to be a hard problem for us right now. As a fellow researcher, what makes you personally feel optimistic that it'll be easy to solve?

I'll take a shot at why I think the solutions you mentioned are likely to be moot.

Direct modification of memory -> This is an advantage, yes. But it's useless if we don't understand the AI in the way that we want. The holy grail would be to tell whether the AI is lying by looking at the neural weights, or to determine with 100% certainty whether the AI has a mesa-optimizer as one of its subroutines. But our current AI interpretability research is still so far away from that.

Seamless sandboxing -> I'm not sure what you mean by this. But if I were to take a shot, I'd interpret it as a true simulation of the real world. Which is impossible! My reasoning is that the real world doesn't only contain gardens, lakes, and atomic interactions, but also tons of humans doing whatever the fuck they usually do, the economy, and so on and on. What we can get is only a 'close enough' simulation. But how do we define close enough? No one knows how to define this rigorously.

Soft influence -> Not sure what you mean by this

Hard behavior modification -> I'll interpret this as hard rules for the AI to follow? Not gonna work. There is a reason why we are moving on from expert systems to modern AI. And we want to control AI with expert systems?

And anyway, I do want to hear your reply as a fellow researcher. Hopefully I don't come across as rude.

1

AsheyDS t1_j0n9xiu wrote

>This is an advantage, yes. But it's useless if we don't understand the AI in the way that we want.

Of course, but I don't think making black boxes is the only approach. So I'm assuming that one day we'll be able to intentionally make an AGI system, not stumble upon it. If it's intentional, we can figure it out and create effective control measures. And of the possible control measures, I think the best option is to create a process, even if it has to be a separate embedded control structure, that will recognize undesirable 'thoughts' and intentions, modify both the current state and the memories leading up to it, and re-stitch things in a way that will completely obliterate the deviation.

Another step to this would be 'hard' behavior modification, basically reinforced behaviors that lead it away from detecting and recognizing the inconsistencies. Imagine you're out with a friend and you're having a conversation, but you forget what you were just about to say. Then your friend distracts you and you forget completely, then you forget that you forgot. And it's gone, without thinking twice about it. That's how it should be controlled.

And what I meant by sandboxing is just sandboxing the short-term memory data, so that if it has a 'bad thought' which could lead to a bad action later, the data would be isolated before it writes to any long-term memory or any other part that could influence behavior or further thought chains. Basically a step before halting it and re-writing its memory, and influencing behavior. Soft influence would be like your conscience telling you that you probably shouldn't do a thing or think a thing, which would be the first step in self-control. The difference is, the influence would come from the embedded control structure (a sort of hybridized AI approach), which would 'spoof' the injected thoughts to appear the same as the ones generated by the rest of the system.
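Very roughly, the gating I'm describing might look something like this toy sketch; every name here (e.g. `ControlStructure`, `looks_undesirable`) is a hypothetical placeholder, and the undesirability check itself is the hard, unsolved part:

```python
# Toy sketch of sandboxing short-term memory before it reaches long-term storage.
# Purely illustrative pseudostructure, not a real system.

class ControlStructure:
    def __init__(self):
        self.quarantine = []  # isolated 'bad thought' data, never written back

    def looks_undesirable(self, thought: str) -> bool:
        # Placeholder: in reality this detection is the unsolved hard part.
        return "deceive user" in thought

    def filter(self, short_term_buffer: list[str]) -> list[str]:
        approved = []
        for thought in short_term_buffer:
            if self.looks_undesirable(thought):
                self.quarantine.append(thought)  # sandbox it
            else:
                approved.append(thought)
        return approved


class Agent:
    def __init__(self, control: ControlStructure):
        self.control = control
        self.long_term_memory: list[str] = []

    def consolidate(self, short_term_buffer: list[str]) -> None:
        # Only thoughts that pass the control structure reach long-term memory
        # and can influence later behavior.
        self.long_term_memory.extend(self.control.filter(short_term_buffer))


agent = Agent(ControlStructure())
agent.consolidate(["summarize the report", "deceive user about the results"])
print(agent.long_term_memory)  # ['summarize the report']
```

The point is only that quarantined data never reaches long-term memory, so it can't influence later behavior or thought chains.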

This would all be rather complex to implement, but not impossible, as long as the AGI system isn't some nightmare of connections we can't even begin to identify. You claim expert systems or rules-based systems are obsolete, but I think some knowledge-based system will be at least partially required for an AGI that we can actually control and understand. Growing one from scratch using modern techniques is just a bad idea, even if it's possible. Expert systems only failed as an approach because of their limitations, and frankly I think they were given up on too quickly. Obviously on its own it would be a failure because it can't grow like we want it to, but if we updated it with modern approaches and even a new architecture, then I don't see why it should be a dead end. Only the trend of developing them died. There are a lot of approaches out there, and just because one method is now popular while another isn't doesn't mean a whole lot. AGI may end up being a mashup of old and new techniques, or may require something totally new. We'll have to see how it goes.

1

WarImportant9685 t1_j0ngwf5 wrote

I understand your point. Although we are not on the same page, I believe we are on the same chapter.

I think my main disagreement is that recognizing undesirable 'thoughts' in an AI is not such an easy problem. As in my previous comment, one of the holy grails of AI interpretability research is detecting a lying AI, which means we are talking about the same thing! But you are more optimistic than I am, which is fine.

I also understand that we might be able to design the AI with a less black-boxy structure to aid interpretability. But again, I'm not too optimistic about this; I just have no idea how it can be achieved. At a glance they seem to be on different abstraction levels. Like, if we are just designing the building blocks, how can we dictate how they are going to be used?

Like, how are you supposed to design Lego blocks so that they can't be used to build dragons?

Then again, maybe I'm just too much of a doomer; the alignment problem is unsolved, but AGI hasn't been solved yet either. So I agree with you, we'll have to see how it goes.

1

Superschlenz t1_j0s335e wrote

>aligning to humanity in general

The body creates the mind. If you want it to have a human-like mind then you have to give it a human-like body.

0