
jsseven777 t1_jdxsfkc wrote

Exactly. People keep saying things like "AI isn't dangerous to humans because it has no goals or fears, so it wouldn't act on its own to kill us." OK, but can't it be prompted to act as if it has those things? And if it can simulate them, who cares whether, deep down, it actually has goals or fears - it is capable of simulating them.

The same goes, as you said, for the AI vs. LLM distinction. Who cares whether it knows what it's doing if it's doing these things anyway? A customer service worker doesn't avoid being laid off just because the replacement is "merely" an LLM rather than what we think of as AI. All that matters is whether the angry customer gets an answer that makes them shut up and go away. People need to focus on what end results are possible, not on semantic arguments about how the system gets there.


pavlov_the_dog t1_jdyxl60 wrote

Having goals could arise as an emergent behaviour.

Even the best computer scientists do not fully understand how AI can do what it does.