
s1L3nCe_wb OP t1_jdm2lc2 wrote

The word ideology just describes a set of ideas, beliefs and values. The problem arises when some of these elements continuously produce social fracture and we are unwilling to sit down and revise them (which is precisely what creates polarisation). Use whatever term you like to describe this problem, but that is the very thing we need to address.

The reason for this post was simply to express my belief that a solid AI model based on epistemological analysis could have the potential to alleviate the problem. It's just an opinion. I don't understand why some people get so confrontational over a mere opinion about a possible use of AI that could help reduce social fracture. It really proves my point about how desperately we need a solution to this increasingly dangerous problem.

1

s1L3nCe_wb OP t1_jdjahvj wrote

In the model I'm proposing, the AI is, generally speaking, the one asking the questions. Let me give you a short example.

Subject: I think that being a woman in Spain has more advantages than being a man.

AI: When you say that women have more advantages, what factors are you taking into consideration?

Subject: Well, for starters, women have legal advantages (affirmative action) that men don't have access to.

AI (after a search): In terms of laws, it is true that women have some advantages over men. I think that, in order to critique the validity of these discriminatory measures, we could take one example and work from there. Could you give me one?

Subject: Sure. Women have more incentives to become freelancers or entrepreneurs.

AI: Could you be more specific? Try to spell out exactly what kind of incentives we are talking about.

And so on...
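
If it helps to make this concrete, here is a minimal sketch of what such a questioning loop could look like, assuming the OpenAI Python client. The system prompt, model name and overall design are just my illustrative assumptions, not a finished implementation:

```python
# Minimal sketch of the Socratic-questioner loop described above.
# Assumes the OpenAI Python client; the prompt wording and model name
# are placeholders, not a finished design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a Socratic interviewer. Never argue for or against the "
    "user's claim. Ask one short, neutral question at a time that "
    "probes how the user knows what they claim: definitions, evidence, "
    "sources, and confidence. When the user cites a general fact, ask "
    "for a concrete example before moving on."
)

def socratic_session() -> None:
    """Run a console loop in which the AI asks the questions."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    print("State a belief you hold (or type 'quit' to exit).")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "quit":
            break
        messages.append({"role": "user", "content": user_input})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"AI: {answer}")

if __name__ == "__main__":
    socratic_session()
```

The key design choice is that the model is constrained to asking questions rather than taking positions, which is the whole point of the exchange above.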

1

s1L3nCe_wb OP t1_jdiousi wrote

But my point is that the agent making the effort to genuinely understand your ideas/values/beliefs would not be human in this case; it would be an AI, which is precisely why I think this could work substantially better than the average human exchange of ideas.

When a debate emerges, most people are accustomed to taking a confrontational approach to the conversation: the other party tries to disprove your point, and you respond by defending it and/or attacking theirs. But when the other party invests its time in fully understanding the point you are trying to make, the tone of the conversation changes dramatically because the objective is entirely different.

My main point regarding the human aspect of this discussion is that when we show real interest in understanding a point someone is making, the quality of the interaction changes dramatically (in a good way). And, like I said, in my line of work I've seen this happen very often. Maybe that's why I'm more hopeful than the average person when it comes to this subject.

1

s1L3nCe_wb OP t1_jdi0e9h wrote

>I'm a bit cynical.

I can see that haha

The reason why many people show a lot of resistance to questioning their own ideas and opening their minds to other viewpoints is that their average interaction with other people is confrontational. When we show genuine interest in understanding the other person's point of view, most of that resistance vanishes and the interaction becomes very beneficial for both parties.

But we are not used to this kind of meaningful interaction, and we tend to be very inexperienced when we try to have one. That's why I think that having a model like this as an educational application could be very useful.

0

s1L3nCe_wb OP t1_jdhwpz8 wrote

>engaging with AI that just sort of agrees with your world view

I don't know if I'm failing to explain my point, but I really can't explain it any better.

Just watch a video of what Peter Boghossian does in these debates and you might get an idea of what I'm talking about. Peter does not "sort of agree" with anyone; he just acts as an agent to help you analyse your own epistemological structure.

1

s1L3nCe_wb OP t1_jdhv7vh wrote

You are probably making those assumptions based on your experience with ChatGPT, but that's not what I tried to explain in my post.

The goal of the AI model I'm proposing is not to agree with the user but to question the user's ideas, beliefs and values; offer feedback on those views; and encourage creative thinking to help the user come up with alternative viewpoints, or even offer some if needed. At the same time, the user should be able to question the feedback or alternative views given by the AI.
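
As a rough illustration of that two-way dynamic, here is a small sketch building on the earlier one, again assuming the OpenAI Python client; the prompt wording and model name are placeholders of my own, not a spec:

```python
# Sketch of the fuller behaviour described above: probe the claim,
# give feedback on the reasoning, and offer alternatives as options.
# Prompt text and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = (
    "You are an epistemology tutor, not a debater. For each user claim: "
    "(1) ask clarifying questions about definitions and evidence; "
    "(2) when asked, give neutral feedback on the reasoning, quoting the "
    "user's own statements; (3) when asked, propose one or two "
    "alternative viewpoints as options, never as corrections. "
    "Welcome challenges to your own feedback at every step."
)

def tutor_turn(history: list[dict]) -> str:
    """Return one tutor reply over the running conversation history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": TUTOR_PROMPT}, *history],
    )
    return response.choices[0].message.content
```

Because the whole history is passed back each turn, the user's challenges to the AI's feedback stay in context, which is what keeps the questioning bidirectional.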

In order to have a better understanding of what I'm proposing, I would highly recommend watching content or reading books that take this kind of approach to debates and other forms of exchanging ideas.

0