MagicManTX84 t1_j9l9zo7 wrote

Messengers will stay around for a while until their AI can figure out that your AI is filtering out the messages you don’t want to see. Good AI - takes you off the list and politely goes away. Bad AI - tries to do something radical to force you to see their messages. Which wolf wins?

2

MagicManTX84 t1_j9f93vw wrote

Freud speaks of the ego and the id. I think to be sentient, AI would need these and would, at a minimum, be interested in self-preservation, and probably a lot more. In humans, behavior is regulated through morals, values, and social pressure. So what does that look like for AI? If 1,000,000 social posters tell an AI to "kill itself," will it do it?

1

MagicManTX84 t1_j3byziz wrote

AI is very good, and faster than the human brain, in very narrow and focused contexts. It can find specific patterns in thousands or millions of images in seconds if it has been trained to look for those patterns. It wins at games like chess and go because the rules of play are relatively simple, and the job is simply going through the combinations to find the best optimized choice to play. Things computers do well.

In image processing, given large objects with no contextual clues, AI cannot tell a refrigerator from a stand-up freezer or some other large object of similar size. With training and contextual clues, AI can be taught to tell the difference. AI does a terrible job at interpreting context, much like small children. A chatbot can really only interpret words it understands. When someone types words or phrases it cannot handle, it either ignores them or asks the customer to rephrase the question.

So, in the end, AI is really no more than "smart programming" because it cannot interpret things it was not programmed to handle. Yes, there are heuristic algorithms where the computer effectively "guesses" and then uses the historical info to change future patterns, but again it can only do that within the context, the rules, and the values it was programmed to follow. I'm talking commercial AI here. The time to get a real machine learning system to fully understand all contextual clues is years, like a child growing up in the world. And where you can take a human being, drop him or her in a new country, and they will learn the language and contextual clues over time, AI does not do so well at that.

The AI we have today, perhaps excepting some secret government labs, is nowhere close to human thinking. It is great at analyzing patterns and following a "script" to interact with people in those patterns; it is terrible at going off script. It also costs more than well-written "normal software," like choice prompts backed by decision trees. That can be done without AI and should not be called AI.

Sorry for the rant, but so many companies use "AI" to hide what they are really doing in software, and I think it's a crime against customers and users. They want to treat AI like some "magic genie" and have the customer ooh and aah over it and just believe what the computer tells them. That is the real risk of AI: poor decisions, because the script is written by a human author with values, and sometimes those values are twisted to optimize company profit over customer or user benefit.
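To show what I mean by "simply going through the combinations," here is a toy sketch of my own (using the simple game of Nim instead of chess or go, and a made-up `best_move` helper): the program just tries every line of play and picks a winning one. There is no understanding here, only enumeration.

```python
# Toy sketch: exhaustive game-tree search for single-pile Nim.
# Rules: take 1-3 sticks per turn; whoever takes the last stick wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks: int) -> tuple[int, bool]:
    """Return (move, can_win) for the player to act with `sticks` left."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True            # taking the last stick wins outright
        if take < sticks:
            _, opponent_wins = best_move(sticks - take)
            if not opponent_wins:
                return take, True        # leave the opponent a losing position
    return 1, False                      # every line loses; take 1 and hope

if __name__ == "__main__":
    move, winning = best_move(21)
    print(f"From 21 sticks: take {move}, winning position: {winning}")
```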
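And here is a minimal sketch (all prompts and keywords are made up) of the "choice prompts backed by decision trees" I mean: the user walks down hard-coded branches, and anything off script just triggers a rephrase fallback. There is no learning and no AI anywhere in it.

```python
# Minimal sketch: a scripted "chatbot" that is just a hard-coded decision
# tree of choice prompts. All prompts and keywords here are hypothetical.
TREE = {
    "prompt": "Do you need help with 'billing' or 'shipping'?",
    "billing": {
        "prompt": "Is this about a 'refund' or an 'invoice'?",
        "refund": {"prompt": "Refunds post within a few business days."},
        "invoice": {"prompt": "Invoices are on your account page."},
    },
    "shipping": {"prompt": "Enter your tracking number on the orders page."},
}

def run():
    node = TREE
    while True:
        print(node["prompt"])
        choices = [k for k in node if k != "prompt"]
        if not choices:                  # leaf node: conversation is over
            return
        answer = input("> ").strip().lower()
        if answer in node:
            node = node[answer]          # follow the scripted branch
        else:                            # off script: can only ask to rephrase
            print("Sorry, I didn't understand. Please answer with one of:",
                  ", ".join(choices))

if __name__ == "__main__":
    run()
```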

2