dansmonrer t1_jeg67bc wrote

Not at all made up in my opinion! There just doesn't seem to be any consensus framework at the moment, and various people are scrambling to put relevant concepts together, often disagreeing on what makes sense. It's particularly hard for AI alignment because it requires you to define the dangers you want to talk about, and therefore to have a model of an open environment in which the agent is supposed to operate, something we currently have no notion of, let alone an example. This makes the examples people in AI alignment bring up very speculative and poorly grounded, which opens them to easy criticism. I'm curious, though, whether you have interesting research examples in mind!

1

dansmonrer t1_jedya4g wrote

I don't think intelligence in general is something machine learning people even want to define. Psychologists do, with different schools of thought: behaviorism (which heavily influenced reinforcement learning, and of which B.F. Skinner was one of the main figures), then cognitivism, theory of mind, and so on. The few attempts I have seen at the intersection of psychological science and ML have met heavy backlash from both sides, for reasons both justified and unjustified. The truth is that some people will probably have to go against the tide at some point, but they will also need to ground their approach very well in existing frameworks. Conclusion: try to be excellent in both psychology and ML; the field you are describing has yet to become scientific.

1