
a4mula OP t1_j1anxky wrote

Again, I'm not an expert. I'm a user with very limited exposure in the grand scheme. But what I see happening goes something like this.

The machine acts as a type of echo chamber. It isn't biased, and it isn't going to develop any strategies that could be seen as harmful.

But its goal is to process the requests in user input.

And it's very good at that. Freakishly good. Superhumanly good. Whatever goal the user has, regardless of its ethics, morality, merit, or cost to society,

that machine will do its best to assist the user in accomplishing it.

In my particular interactions with the machine, I'd often prompt it to subtly encourage me to remember facts. To think more critically. To shave bias and opinion out of my language, because they create ambiguity and hinder my interaction with the machine.

And it had no problem providing all of that to me through its outputs.

The machine amplifies what we bring to it. Good or bad.
