JenMacAllister t1_j73zk7c wrote

It's easy to program out the bias? We have seen just how hard that is to do with humans. (over and over and over ....)

−3

__OneLove__ t1_j74d80j wrote

So who exactly is 'program[ming] out the bias'? 🤔

7

[deleted] t1_j740ayi wrote

Yes, you are technically correct. But around half of society lives in a place where feelings are more important than facts. Remember the AI that was profiling potential criminals? Well, that feely segment of society didn't like the factual outcome, and the AI was pulled. You will never get an objective outcome while feelings beat hard facts.

2

Fake_William_Shatner t1_j74ao1i wrote

>Remember the AI that was profiling potential criminals?

Oh, it doesn't sound like you are the "rational half" of society either.

I can definitely predict the risks of who will become a criminal by zip code. Predicting crime isn't as important as mitigating the problems that lead to crime.

Feelings are important. If people feel bad, you need to convince them, or maybe have some empathy.

It's not everyone being entitled. Some people don't feel any control, or feel they aren't listened to. And the point of removing "bias" is that cold hard logic can create bias. If, for instance, you ONLY hire people who might 'fit the culture in tech support' -- then the bias would inherently look at who already has tech support jobs and who already goes to college for it. So you get more of those demographics and reinforce the problem.

It's not necessarily LOGIC -- it's about what you are measuring and your goals. What is the "outcome" you want? If you ONLY go on merit, sometimes you don't allow people to get skills they didn't yet have the merit for. Kids with parents who went to college do better in college -- so, are you going to just keep sending the same families to college to maximize who logically will do better? No. The people enjoying the status quo already have the experience -- but what does it take to get other people up to speed? Ideally, we can sacrifice some efficiency now for some harmony. And over time, hopefully it doesn't matter who gets what job.

Society and the common good are not something we are factoring in -- and THAT is what looks like putting your finger on the scale.

1

[deleted] t1_j74hh9v wrote

Cancel the AI project, some dude on reddit can predict by zip code. Well, I guess that one is done! (joking!)

Feelings are important? Yes they are and that is why we should have real humans, with real families and real life experience acting as judges and juries, my reasoning follows.

But the Tech sector DOES employ people who fit the culture, just not in the way you suggest. Take a wild guess at how many people employed in Silicon Valley vote the same way, feel the same about Trans issues, about gun control, about Christianity, about abortion.

THIS is the key problem: the AI is being developed and maintained exclusively by this group. Let's say they make up half of the population -- where does that lead?

I feel AI is incredible, but I really think it needs to be given bounds: building better mousetraps (or cars, planes, energy generation, crop rotation, etc.), NOT making decisions directly for human beings.

−1

Fake_William_Shatner t1_j77j8u5 wrote

>Take a wild guess at how many people employed in Silicon Valley vote the same way, feel the same about Trans issues, about gun control, about Christianity, about abortion.

They vote the way educated people tend to vote. Yes -- it's a huge monoculture of educated people eschewing people who attribute light switches to fairy magic.

>THIS is the key problem,

No, it's thinking like yours that is the key problem when using a TOOL for answers. Let's say the answer to the Universe and everything is 42. NOW, what do you do with that?

>NOT making decisions directly for human beings.

That I agree with. But not taking advantage of AI to plan better is a huge waste. There is no putting this Genie back in the bottle. So the question isn't "AI or not AI"; the question is: what rules are we going to live by, and how do we integrate with it? Who gets the inventions of AI?

It's the same problem as allowing a patent on DNA. The concept of the COMMON GOOD, and where this goes in the future, has to take priority over "rewarding" someone who owns the AI device some geek made for them.

1

JenMacAllister t1_j741tz5 wrote

Yes, it did. Anything created by humans will contain the biases of those humans. However, others will recognize this and point it out so it can be removed in future versions.

I don't expect this to be 100% unbiased on the first or even 100th version. I do not think all the humans on this planet could even agree on what that would mean.

But over time I'm sure we could program an AI to be far less biased than any human, and most humans would agree that it was.

−1