thedude0425 t1_j3c19x7 wrote

Good answer.

The only thing I’ll point out is that AIs do develop biases for reasons we don’t quite understand yet. It’s apparently incredibly difficult to look under the hood of a neural network and figure out which data points (or the lack thereof) are triggering those biases.

One example is facial recognition AIs struggling to distinguish individual Black faces.

Another would be a college admissions AI filtering low-income applicants out of the admissions process.
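As a rough illustration of the training-data point above, here’s a minimal sketch (entirely hypothetical data, not any real admissions or face dataset) of how underrepresentation in the training set can show up as a per-group accuracy gap, even though nothing in the model explicitly references group membership:

```python
# Hypothetical sketch: one group dominates the training data, the other is
# underrepresented and drawn from a slightly different distribution, so the
# learned decision boundary serves the majority group far better.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy two-feature data whose true decision boundary depends on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates training; group B is badly underrepresented.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(100, shift=2.0)
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: the accuracy gap is the "bias",
# and nothing in the fitted coefficients says why it happens.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

The fitted model is just a handful of weights; you can’t point at one of them and say “this is the bias,” which is the “hard to look under the hood” problem in miniature.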


thedude0425 t1_j2xl74a wrote

Isn’t ChatGPT more of a language simulator that doesn’t have any real knowledge of what it’s talking about?

I.e., it’s not trained in biology or history?

It seeks to understand what you’re asking and provides the best answer it can (and it can craft creative answers with the proper tone, etc.), but it doesn’t really know what it’s talking about? Yet?

It sounds like it knows what it’s talking about, though.
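That’s the gist: it’s trained to predict plausible continuations of text, not to verify facts. A toy sketch of that idea (a crude bigram model over a made-up corpus, nothing like ChatGPT’s actual architecture) shows how text can sound fluent without the model “knowing” anything:

```python
# Hypothetical sketch: a model that only learns which word tends to follow
# which other word can generate plausible-sounding text with no underlying
# knowledge of biology, history, or anything else.
import random
from collections import defaultdict

corpus = (
    "the cell is the basic unit of life the cell divides and the cell grows "
    "the empire fell and the empire rose again the cell is the unit of the empire"
).split()

# Count which words follow which (a bigram model).
next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(start, length=12, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# Reads vaguely like prose, but there's no fact-checking anywhere in here.
print(generate("the"))
```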
