odigon

odigon t1_jae3t3b wrote

I see lots of would-be singers who can't hear themselves for some reason and are off-key, or mumbly, or just terrible. It gives me nightmares that I might be as oblivious as them and no one will tell me. I started singing because my band's actual singer left, but I hadn't gotten comfortable yet, so I asked the lead guitar guy if I sounded alright (I knew I didn't), and he said, "Don't worry, the guitar drowns you out anyway."

These days I'm a lot better at staying on key, and I mostly worry about expression and dynamics. But it took a while.

1

odigon t1_j5s65g9 wrote

I really have no idea what you are saying in your reply. Your original statement was that "A.I. is only as intelligent, beneficial, disciplined, or dangerous as its creators". That's like saying a racing car isn't any faster than its creators.

We have in the past found ways to make machines that can go fast, fly, go underwater, and see incredible distances, far beyond anything their creators can do, which is the entire point of building them. Now we are attempting to create a machine that can reason, and if we succeed it will reason with far more ability than we can, in the same way that the best chess grandmasters are no match for a chess computer set at a high enough level. Will it have the same goals as us? Why should it? If it doesn't, and it becomes a danger, can we stop it? How? It will be able to outwit us at every turn.
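
To make the chess comparison concrete: engine strength is literally a dial. Here's a minimal sketch using the python-chess library, assuming you have a local Stockfish binary installed (the path below is machine-specific):

```python
import chess
import chess.engine

# Path is machine-specific; assumes a local Stockfish binary is installed.
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

# Stockfish exposes a "Skill Level" UCI option: 0 is beatable, 20 is full strength.
engine.configure({"Skill Level": 20})

# Let the engine play both sides of a game against itself.
board = chess.Board()
while not board.is_game_over():
    # 100 ms per move is already far beyond any human player.
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)

print(board.result())
engine.quit()
```

The "Skill Level" option only exists so the rest of us can have a playable game; at full strength, no human keeps up.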

1

odigon t1_j5rbs4d wrote

By far the greatest danger of artificial intelligence is that it will be achieved before we know how to do it safely. General AI isn't here yet. Very good narrow AI is here: AlphaGo, Stockfish, ChatGPT. General AI that can do at least everything a human can may be some decades away, maybe much more, maybe less. We know general AI is possible because we humans can do it, and humans are not magic; they are physical systems. It seems logical that if we can build a human-level AI, then we can increase the resources and build something that outperforms humans. What will that look like? What will it do? Whatever it does, will we be able to stop it if we don't like the result? I honestly don't think we will; it will be able to fool us, or force us, into not standing in its way.
Here is a genuinely frightening series on AI safety by a guy called Robert Miles.
https://www.youtube.com/watch?v=pYXy-A4siMw&ab_channel=RobertMiles

1

odigon t1_j5r7it0 wrote

I can't imagine how that statement could be any more incorrect. We already don't understand how neural networks solve specific problems; we just let them loose on training data with reinforcement and let them figure it out. Narrow AI already vastly outperforms humans in very narrow domains such as chess and Go, and the best human masters struggle to explain what those systems are doing. AIs trained to play computer games often exploit glitches humans didn't know existed, doing something that satisfies the program but wasn't at all what was intended. They find solutions humans would never have thought of, and there is no reason to think a general AI with human-level flexibility won't do the same in the real world. That may be a good thing or a very, very, very bad thing.
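
For a toy picture of what "satisfies the program but wasn't what was intended" looks like, here's a made-up sketch: the intended task is to walk to a goal square, but the reward we actually optimize is distance moved, so the search happily learns to jitter in place forever. Every name and number here is invented for illustration:

```python
import random

# Toy example of specification gaming; not any real training setup.
# Intended task: move from position 0 to position 10 and stop there.
# Proxy reward we actually optimize: total distance moved per episode.

def proxy_reward(plan):
    """The flawed proxy: total distance moved, regardless of destination."""
    return sum(abs(step) for step in plan)

def intended_success(plan):
    """What we really wanted: finish the episode at position 10."""
    return sum(plan) == 10

def random_plan(length=20):
    """A plan is a fixed-length sequence of moves: -1, 0, or +1."""
    return [random.choice([-1, 0, 1]) for _ in range(length)]

# Crude random search standing in for "let it loose and figure it out".
best = max((random_plan() for _ in range(5000)), key=proxy_reward)

print("proxy reward:", proxy_reward(best))          # near 20: it moves every step
print("intended success:", intended_success(best))  # very likely False

# The plan "walk ten steps right, then stand still" solves the real task
# but only scores 10 on the proxy, so the optimizer never prefers it.
honest = [1] * 10 + [0] * 10
print("honest plan proxy reward:", proxy_reward(honest))  # 10
```

The honest plan solves the real task but scores worse on the proxy, so more optimization doesn't fix it; a stronger search just games the proxy harder.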

1

odigon t1_iybkkf4 wrote

Reply to comment by hooplah in [image]Do your best by thirtyVerb

I believe I have also seen somebody make the point that if you are always doing your best, then it becomes your norm, not your best. Your 'best' will always be rare occasions, remembered and celebrated, where you reach some personal pinnacle that is rarely repeated. I don't know if that makes things any better, but maybe it tempers expectations a little?

1