diabeetis t1_j9gob12 wrote

Very smart guy, huge douche, makes good doomer points regarding x-risk

32

sticky_symbols t1_j9gvp91 wrote

I think "slightly douchy" is fairer. I've read a ton of his stuff, and only a subset is offensive to anyone. But yeah, he's not as considerate as he probably should be.

11

FirstOrderCat t1_j9hxjzo wrote

I argued with him on Hacker News, and he is very reactive when reading something he doesn't like.

9

sticky_symbols t1_j9i0yw8 wrote

Well, he's the father of a whole field that might determine the future of humanity. It would be tough to keep your cool the 1009th time you've seen the same poorly thought-out dismissal of the whole thing. If I were in his shoes, I might be even crankier.

9

FirstOrderCat t1_j9ifrw4 wrote

I don't know much about his practical achievements in this area.

12

sticky_symbols t1_j9itrli wrote

Founding a field is a bit of a rare thing.

5

FirstOrderCat t1_j9j9qlg wrote

Which field? AI danger awareness? It was in the Terminator movie.

19

sticky_symbols t1_j9m3uus wrote

Good point, but those didn't convince anyone to take it seriously because they didn't have compelling arguments. Yudkowsky did.

3

FirstOrderCat t1_j9m6fj2 wrote

> but those didn't convince anyone to take it seriously

Lol, I totally got the idea that a rogue robot could start killing humans long before I learned of Yudkowsky's existence.

> Yudkowsky did.

Could you support your hand-waving with any verifiable evidence?

4

sticky_symbols t1_j9m6t5d wrote

Well, I'm now a professional in the field of AGI safety. I'm not sure how you could document influence, but I'd say most of my colleagues would agree with that. Not that it wouldn't have happened without him, but it might've taken many more years to ramp up to the same extent.

2

FirstOrderCat t1_j9m8bhd wrote

> Not that it wouldn't have happened without him, but it might've taken many more years to ramp up to the same extent.

Happened what, exactly? What are the material results of his research?

I think Asimov, with his rules, produced an earlier and much stronger impact.

> I'm now a professional in the field of AGI safety

Lol, your adding "AGI" makes my BS detector beep extremely loudly.

Which AGI exactly are you testing for safety?

2

sticky_symbols t1_j9m8yn3 wrote

Asimov's rules don't work, and many of the stories were actually about that. But they also don't involve civilization-ending mistakes. The movie I, Robot actually did a great job updating that premise, I think.

One counterintuitive thing is that people in the field of AI are way harder to convince than civilians: they have a vested interest in research moving ahead at full speed.

As for your BS detector, I don't know what to say. I'm not linking this account to my real identity, so you can believe me or not.

If you're skeptical that such a field exists, you can look at the Alignment Forum as the principal place where we publish.

1

FirstOrderCat t1_j9ma8lr wrote

> Asimov's rules don't work

You're jumping to another topic. The initial discussion was that Asimov's rules brought much more awareness, and you can't point to similar material results from Yudkowsky.

1

sticky_symbols t1_j9mbzia wrote

Sorry; my implication was that Asimov introduced the topic but wasn't particularly compelling. Yudkowsky created the first institute and garnered the first funding. But of course credit should be broadly shared.

1