TobusFire OP t1_jao0dm8 wrote
Reply to comment by drplan in [D] Are Genetic Algorithms Dead? by TobusFire
Interesting, thanks for sharing your thoughts! I'm a bit curious why genetic algorithms would be better for these strange objective functions than something like simulated annealing. I can see that a pure gradient method could easily be insufficient, but do the underlying components of genetic algorithms (cross-over, etc.) really provide a distinct advantage here, especially when the fitness is probably closely related to the gradient anyway?
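For concreteness, here's roughly what I mean by those components: a minimal sketch of one-point cross-over and bit-flip mutation on bit-string genomes (the representation and the rates are just illustrative).

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Splice two genomes at a random cut point."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

def mutate(genome, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

parent_a = [random.randint(0, 1) for _ in range(16)]
parent_b = [random.randint(0, 1) for _ in range(16)]
child_1, child_2 = one_point_crossover(parent_a, parent_b)
child_1 = mutate(child_1)
```

The claimed advantage is that one cross-over can recombine good partial solutions from two parents in a single jump, whereas simulated annealing only ever perturbs one current solution. Whether that actually helps seems very problem-dependent.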
TobusFire OP t1_janzsj9 wrote
Reply to comment by lmericle in [D] Are Genetic Algorithms Dead? by TobusFire
> In my experience, the crossover implementation is the most important one to focus on
I've heard this as well.
TobusFire OP t1_janzoec wrote
Reply to comment by serge_cell in [D] Are Genetic Algorithms Dead? by TobusFire
Cool! I'd never heard of the building block hypothesis before, thanks for sharing.
TobusFire OP t1_janzia9 wrote
Reply to comment by extracensorypower in [D] Are Genetic Algorithms Dead? by TobusFire
Agreed. That said, the caveat is that you still need enough understanding of the state space to design good mutation, cross-over, and fitness operators, which can easily add a lot of overhead. In contrast, I think other cool methods like swarm optimization and ant colony optimization are also promising and in some ways simpler.
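To illustrate the "simpler" part: a bare-bones particle swarm needs only a fitness function and a couple of constants, with no genome encoding or cross-over operator to design. A minimal sketch (the velocity update is the textbook one; the bounds and constants here are arbitrary):

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over R^dim with a basic particle swarm."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]       # each particle's best-seen position
    gbest = min(pbest, key=f)        # the swarm's best-seen position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - x[d])
                            + c2 * random.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: sum(xi ** 2 for xi in x), dim=3)
```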
TobusFire OP t1_janz44w wrote
Reply to comment by EducationalCicada in [D] Are Genetic Algorithms Dead? by TobusFire
Absolutely, lots of great work is being done in this domain right as we speak. I personally think neuromorphic computing and analog computing are some of the most exciting things to watch over the next ten or so years.
TobusFire OP t1_janyxnp wrote
Reply to comment by Kitchen_Tower2800 in [D] Are Genetic Algorithms Dead? by TobusFire
> Aren't RL agent-competition approaches (i.e. simulating games between agents with different parameter values and iterating on these agents) a form of genetic algorithm?
Hmm, I hadn't thought about RL like that. I guess the signal from a competition-based reward function could be considered "fitness", and perhaps the way we iterate on and update the agents plays the role of cross-over. Interesting thought.
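If you squint, the loop might look something like the toy sketch below, where head-to-head win rate plays the role of fitness. Everything here is hypothetical (`play_match` and `mutate` are stand-ins you'd have to supply for a real setup):

```python
import copy
import random

def evolve_by_competition(population, play_match, mutate, generations=100):
    """Toy loop: round-robin win counts act as 'fitness'; the fitter
    half survives and is mutated to refill the population."""
    for _ in range(generations):
        wins = [0] * len(population)
        for i in range(len(population)):
            for j in range(i + 1, len(population)):
                winner = play_match(population[i], population[j])  # returns 0 or 1
                wins[i if winner == 0 else j] += 1
        ranked = sorted(range(len(population)), key=lambda k: -wins[k])
        survivors = [population[k] for k in ranked[:len(population) // 2]]
        offspring = [mutate(copy.deepcopy(random.choice(survivors)))
                     for _ in range(len(population) - len(survivors))]
        population = survivors + offspring
    return population
```

That said, there's no cross-over in this sketch, so strictly speaking it's closer to an evolution strategy than a GA proper.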
TobusFire OP t1_jany7dr wrote
Reply to comment by csinva in [D] Are Genetic Algorithms Dead? by TobusFire
Cool idea, thanks for sharing!
TobusFire OP t1_janxqya wrote
Reply to comment by sobe86 in [D] Are Genetic Algorithms Dead? by TobusFire
My thoughts too. Simulated annealing and similar strategies intuitively seem better in most cases where traditional gradient methods aren't applicable. I can imagine a handful of cases where genetic algorithms MIGHT be better, but even then I'm not fully convinced, and it just feels gimmicky.
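Part of why I find that intuitive is how little machinery simulated annealing needs: neighbor generation and the cooling schedule are the only real design decisions. A minimal sketch (you'd supply `neighbor` for your state space; the geometric schedule here is an arbitrary choice):

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, iters=10_000):
    """Minimize f from x0, accepting uphill moves with probability
    exp(-delta / t) so the search can escape local minima."""
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x)
        fy = f(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling
    return best
```

Compared to that, a GA has to justify a population plus selection, cross-over, and mutation operators with noticeably better search behavior.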
TobusFire OP t1_jamcrd2 wrote
Reply to comment by [deleted] in [D] Are Genetic Algorithms Dead? by TobusFire
This is a reasonable question, but I believe you're misunderstanding. Randomizing the parameters of a neural network (I assume you're talking about initialization?) is certainly not the same as mutation in a GA. Mutation occurs randomly, sure, but mutants are then selected for and crossed over; hill-climbing and gradient descent simply follow a local improvement direction and use neither selection over random mutations nor cross-over, so they are not genetic.
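A toy way to see the difference: the same random bit-flips are a blind walk on their own, but they become a search once selection is added. A sketch on a OneMax-style objective (the sizes and rates are arbitrary):

```python
import random

def fitness(genome):
    """Toy objective: the number of 1-bits (OneMax)."""
    return sum(genome)

def mutate(genome, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in genome]

pop = [[random.randint(0, 1) for _ in range(50)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                 # selection: keep the fitter half
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

print(max(map(fitness, pop)))  # climbs toward 50; drop the sort and it drifts
```

Cross-over between two parents would slot in right before the mutation step; the point is that selection is what makes the random variation directional.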
TobusFire OP t1_jao5dsz wrote
Reply to comment by rflight79 in [D] Are Genetic Algorithms Dead? by TobusFire
I love it! Great to see these kinds of applications.