RobbinDeBank t1_ixhu47c wrote

Probably only when he founded SpaceX. All his other companies were bought. At this point, all of them could be better off without him grinding his engineers to exhaustion. Imagine what Tesla or SpaceX could achieve if their best engineers didn't try to jump ship the moment they get an offer from any other big tech company, because of the terrible work culture. The common consensus on all the CS subs rates Tesla as having even worse culture and WLB than Amazon.


RobbinDeBank OP t1_ixhmsqs wrote

I don’t mean this very post but his attitude on this topic overall. There are definitely breakthroughs out there where the authors don’t know about the existence of Schmidhuber’s related work from long ago under different terminology. He’s probably one of the most brilliant minds in this field given the number of original ideas he has, but most of those were never popularized and might be independently rediscovered decades later.


RobbinDeBank OP t1_ixhk2f7 wrote

Better play it safe by citing him in your introduction:
“In recent years, machine learning [1] has achieved….
[1] Schmidhuber et al. (Dawn of time)”

On a side note: he’s a brilliant mind with so many ideas that deserve more recognition, but on the other hand, he can’t just claim that nobody else has original ideas. I’m sure many of his ideas are being independently rediscovered in recent breakthroughs by other researchers who have no knowledge of some vaguely related papers from decades ago.


RobbinDeBank t1_it8eogo wrote

Let’s say we have this problem with 100 sides: the public and 99 interest groups. In an ideal world, we would maximize public good (low unemployment, high economic growth, high income, low financial and social inequality, etc.) at all costs, and interest groups would have no more power than the average individual. However, we all know this is not the case in the real world, and those interest groups hold a disproportionate amount of political power.

The problem then becomes a constrained optimization problem. We still want to maximize public good, but now we have to account for the constraints imposed by these interest groups. The main constraint is tied to the number of votes (maybe we need 51%, maybe more like 60% or 66%): we must partially satisfy the interest groups just enough to assemble a majority to pass the policy. This is essentially a trade-off, sacrificing part of the ideally optimized public good to gain enough votes to get the policy passed. That constraint then has to be broken down further to account for each of the 99 interest groups. Together this makes a huge and complex constrained optimization problem. The solution we get could be something like: “let’s give in to most of the demands of 90 groups, fuck the other 9, now we have enough votes and the public will still benefit a whole lot.”
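To make the trade-off concrete, here’s a toy sketch of the idea in Python. Everything in it is made up for illustration: the group names, vote shares, and “public good cost” numbers are hypothetical, and real policy optimization would be vastly larger and messier. It just shows the core structure: pick which groups to appease so that the vote constraint is met while sacrificing the least public good.

```python
from itertools import combinations

# Hypothetical data: each interest group delivers some vote share if its
# demands are granted, at some cost to the ideal public good.
groups = [
    # (name, vote_share_delivered, public_good_cost)
    ("A", 0.20, 3.0),
    ("B", 0.15, 1.0),
    ("C", 0.10, 0.5),
    ("D", 0.25, 4.0),
    ("E", 0.12, 2.0),
]
BASE_GOOD = 10.0       # public good of the unconstrained ideal policy
VOTE_THRESHOLD = 0.51  # simple majority needed to pass


def best_coalition(groups, threshold):
    """Brute-force the cheapest subset of groups to appease that still
    meets the vote threshold (fine for a toy; real problems need proper
    integer-programming solvers)."""
    best = None
    for r in range(len(groups) + 1):
        for subset in combinations(groups, r):
            votes = sum(g[1] for g in subset)
            cost = sum(g[2] for g in subset)
            if votes >= threshold and (best is None or cost < best[1]):
                best = (subset, cost)
    return best


coalition, cost = best_coalition(groups, VOTE_THRESHOLD)
print("appease:", [g[0] for g in coalition])
print("public good kept:", BASE_GOOD - cost)
```

With these made-up numbers, the cheapest passing coalition appeases groups A, B, C, and E while leaving D out, keeping 3.5 of the 10.0 units of ideal public good. With 99 groups the subset space explodes, which is exactly where large-scale optimization (and possibly learned heuristics) would come in.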

That is a rough idea from me, without expert domain knowledge. With the funding of major AI labs like DeepMind and the expertise they can draw on, the problem could plausibly be tackled in a real-world setting. Human economists can typically only write a solution to a smaller problem, within one industry for example, not to a problem this complex.


RobbinDeBank t1_it865zk wrote

I would say the complicated nature of real-world policy is exactly why AI will eventually be capable of making better policy than humans. While economists can still produce optimized social and economic policies, they just can’t account for all 100 different interest groups with different political motives in a real-world scenario. An AI system can, thanks to the computing power it possesses. I think AI could be a key to incremental societal progress. Instead of the current situation where oligarchs get the whole pie, an AI-derived solution could leave them with a good chunk of the pie while the public gets a decent chunk too. That’s incremental progress: not ideal, but achievable.