
No_Ninja3309_NoNoYes t1_j2mwjgl wrote

AI can currently learn in three ways: unsupervised; supervised, with labeled data; or reinforced, where it knows it has done well if it wins a game or achieves some other objective, such as capturing a pawn. But AI is basically software and hardware configured by humans. Someone programmed the machines to interpret data in a certain way. You can tell them to interpret a list of numbers as the representation of a text or an image. Actually, you are not telling them anything: if you write code, it gets compiled or interpreted into lower-level assembly code or instructions for a virtual machine, which in turn is converted to machine language. All computers understand are very basic instructions, which depend on the specifics of the hardware.
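To make that concrete, here's a minimal sketch in Python (assuming CPython): the interpreter compiles your function into bytecode, the instruction set of its virtual machine, and the standard library's `dis` module lets you look at those low-level instructions directly.

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled this function into bytecode; dis prints
# the basic instructions the virtual machine actually executes.
dis.dis(add)

# The opcode names show how far below "a + b" the machine operates:
print([i.opname for i in dis.get_instructions(add)])
```

A real CPU goes one level lower still, but the idea is the same: whatever meaning you see in the code, the machine only executes simple instructions.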

You can say that the human brain is just a soup of neurons, fluids, and neurotransmitters, but we clearly don't have machine or assembly language equivalents. The brain is far too complex, with who knows how many layers of abstraction, and it was clearly not designed by teams of engineers. Maybe that architecture is why brains are more flexible than current AI.
