Submitted by Feeling_Card_4162 t3_10sw0q1 in MachineLearning

I’ve been developing this idea since I first thought of it in mid-December last year. Here’s the elevator pitch (skip to "How?" for technical details):

Why?

Existing models and learning algorithms are extremely static: they neither generalize across tasks as well as humans do nor adapt well to new or changing business requirements. This applies even to the final solutions produced by recent AutoML systems (see "An Empirical Review of Automated Machine Learning" and "AutoML: A Survey of the State-of-the-Art"). Beyond being static, most require high-performance systems with large amounts of compute and/or memory. This static, bloated nature not only limits the reuse of code, pipelines, and all the computation that went into previous versions of a model architecture once a better one is found; it also forces our preconceptions about which type of learning is best for the task, and which degrees of freedom are needed, onto the solution. Instead of perpetuating all these assumptions, I want to create a kind of AutoML capable, under the right conditions, of developing a learning-algorithm/model combination that can dynamically add or remove inputs and outputs and incorporate them into the network through adaptive, online, self-directed learning.


How?

Basically, the idea in a nutshell is to use some form of NEAT (NeuroEvolution of Augmenting Topologies) with special nodes in the network that activate based on different criteria (depending on the node’s allele for that gene). When activated, these special nodes would not send any signal forward; instead, they would apply one or more property changes to their connected nodes and/or edges (yes, they can connect to an edge; they could target a subset of their connections, all of them, or everything within a maximum number of connection hops, etc.). They could also create and destroy nodes, depending on the effects defined by the allele.

There would also be different firing policies for all nodes (the usual always-fire, thresholding with or without decay, etc.) to better leverage temporal dynamics. Every property of these policies, including the policy template itself, is a potential target for modification by the special neuromodulatory nodes, along with the normal properties of a “neuron” such as bias, input weights, activation function, and aggregation function.

The fitness function would either be abstracted away by using rtNEAT in a simulated environment or be a combined score over a set of simulated tasks. If the tasks are similar enough, this should add a regularizing force that helps the evolved algorithms generalize. No limitation should be placed on cycles in the graph; in fact, I would expect cycles to be part of the evolved solutions, which would make them dynamical systems.

To reduce the computational cost of finding a viable solution, the initial population should consist of existing algorithms implemented as these self-modifying neural networks. It might even be possible to generate computational graphs from open-source implementations as a starting point for the initial population.
All of this together should also allow different parts of the network to use different learning strategies. Theoretically, it could even allow the evolution and incorporation of self-organized criticality and percolation. The result could be something that dynamically adds or removes inputs and outputs and then incorporates them into the network with adaptive online learning. The network could literally change the learning paradigm for different portions of itself on the fly, in different ways, depending on the situation.
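To make the mechanism concrete, here is a minimal sketch of one possible "allele" for a neuromodulatory node: a threshold firing policy whose effect, when triggered, is to rescale the weights of its target edges rather than forward a signal. All class and parameter names here (`Node`, `Edge`, `ModulatoryNode`, `threshold`, `scale`) are hypothetical illustrations of the idea, not part of NEAT or any existing library.

```python
import math

class Edge:
    """A weighted connection between two nodes."""
    def __init__(self, src, dst, weight):
        self.src, self.dst, self.weight = src, dst, weight

class Node:
    """Ordinary node: aggregates weighted inputs, applies an activation."""
    def __init__(self, bias=0.0):
        self.bias = bias
        self.value = 0.0

    def activate(self, incoming):
        total = self.bias + sum(e.weight * e.src.value for e in incoming)
        self.value = math.tanh(total)
        return self.value

class ModulatoryNode(Node):
    """Special node: when its activation crosses a threshold, it forwards
    nothing; instead it applies a property change (here, multiplicative
    weight scaling) to the edges it targets."""
    def __init__(self, bias=0.0, threshold=0.5, scale=1.1, targets=()):
        super().__init__(bias)
        self.threshold, self.scale = threshold, scale
        self.targets = list(targets)  # edges this node modulates

    def activate(self, incoming):
        super().activate(incoming)
        if self.value > self.threshold:      # firing policy: threshold
            for edge in self.targets:        # property change: weight *= scale
                edge.weight *= self.scale
        return 0.0  # contributes nothing to the forward signal
```

Under this scheme, the threshold, the scale factor, the choice of targets, and even which property is changed would all be genes for NEAT to mutate and cross over.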


For further clarity, I'm also attaching a mock-up of a design I've started working on for an analysis tool.

Thoughts? Please feel free to chime in. Science should be a public discussion.


Comments


Feeling_Card_4162 OP t1_j744rzv wrote

As I stated, the fitness is either a combined score over a set of tasks or abstracted away by using rtNEAT. In the rtNEAT case, it would be up to the agent to decide when to reproduce, depending on the dangers etc. present in the simulated environment.


ID4gotten t1_j74esuq wrote

I think you might be a little too in love with words like "neuromodulatory", while overlooking whether a simple deep FF network might be able to achieve what you're proposing. Just add a layer, nodes, and weights and you get this "modulatory" effect through linear combinations of the subsequent layers. Maybe I'm not grasping your intent, but I think if you can reduce it to math, you can then try to prove this is something that isn't already achieved through FF and backprop.


yldedly t1_j75rw5b wrote

Speaking as someone also working on an ambitious project that deviates a lot from mainstream ML, I encourage you to do the same thing I'm struggling with:

Try to implement the simplest possible version of your idea and test it on some toy problem to quickly get some insight.

Maybe start with one type of modulatory node and see how NEAT ends up using it?
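In the spirit of that suggestion, one sanity check before evolving anything is to hand-wire the simplest possible modulatory update and confirm the substrate can express a known learning rule (the OP proposes seeding the initial population with existing algorithms, so this is a prerequisite). The sketch below is hypothetical: a single modifiable connection weight on a toy one-weight regression task, where an error signal drives the weight change directly — effectively the classic delta/LMS rule written as an online "property change", not a gradient step through a framework.

```python
import random

def run_toy(target_weight=0.8, steps=200, seed=0):
    """Toy task: learn y = target_weight * x online, one weight, no backprop.

    The update `w += lr * error * x` plays the role of a hand-wired
    modulatory effect: the error signal rewrites the connection weight
    in place, the kind of rule a NEAT search over modulatory alleles
    could in principle rediscover.
    """
    rng = random.Random(seed)
    w = 0.1  # the single modifiable connection weight
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        error = target_weight * x - w * x  # supervised error signal
        w += 0.1 * error * x               # in-place weight modulation
    return w
```

If even this hand-wired version converges, the next step would be to make the update rule's parameters evolvable and see whether NEAT recovers or improves on it.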


dancingnightly t1_j76uuee wrote

In this goal, you may find Mixture of Experts architectures interesting.

I like your idea. I have always thought, too, that in ML we are trying to replicate one human on one task with the world's data for that task, or, more recently, one human on many tasks.

But older ideas, such as replicating societies and communication for one or many tasks, could be equally or more effective, and your proposal heads in that direction. There is a library called GeNN that is pretty useful for these experiments, although it's a little slow due to its deliberately true-to-biology design.


Feeling_Card_4162 OP t1_j77oir0 wrote

Is that the mixture-of-experts sparsity method? I’ve looked into that a little before. It’s an interesting and useful design for improving representational capacity, but it still imposes very specific constraints on the types of sparsity mechanisms available and thus limits the potential improvements to the design. I hadn’t heard of the GeNN library; it sounds useful, especially for theoretical understanding. I’ll check it out. Thanks for the suggestion 😊
