Randommaggy t1_j8oi1yp wrote

How is a genetic algorithm that optimizes for a set of constraints fundamentally different from a GAN or reinforcement learning model except in implementation details and resource-efficiency?
The discriminative network in a GAN is the provider of constraints, i.e. part of the training dataset or the measurer of fitness.
The generative network proposes solutions and refines its weights based on the fitness of the output.

There are differences, but the premise is more similar than dissimilar.

Your funding prospects would also likely be better if you could convince people that it is a form of AI, perhaps branded as a subcategory of supervised reinforcement learning.

1

TrumpetSC2 t1_j8onhk8 wrote

There is a big terminology issue going on here.

A GA is fundamentally different from AI because a GA does a very specific thing: it evaluates a set of candidate solutions (called a population), uses some method to choose some of them to reproduce (selection), recombines them (crossover), and applies random changes (mutation) to generate the next population, iterating and hopefully increasing fitness over time. It is an algorithm for optimizing solutions and is not specific to things like learning systems or neural nets.
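To make that concrete, here is a minimal sketch of such a loop in Python on a toy problem (maximizing the number of 1s in a bit string); the fitness function, population size, and rates are illustrative only, not taken from any particular system:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 100

def fitness(genome):
    return sum(genome)                      # count of 1s; higher is better

def select(population):
    # tournament selection: keep the fitter of two random individuals
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent_a, parent_b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome, rate=0.01):
    # flip each bit with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))
```

Note there is no model and no learned parameters anywhere in there, just candidate solutions and a fitness score.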

GANs are neural networks trained in a specific adversarial process: a generator network proposes outputs while a discriminator network tries to tell them apart from real data, to put it simply.
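A minimal sketch of that adversarial loop, assuming PyTorch and a toy 1-D Gaussian as the "real" data distribution (layer sizes and hyperparameters here are purely illustrative):

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # samples from the "real" distribution
    fake = generator(torch.randn(64, 8))        # generator proposes samples from noise

    # Discriminator step: learn to label real as 1 and fake as 0
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to get fakes labelled as real
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Both networks are trained by gradient descent on differentiable losses, which is a very different mechanism from a GA's select/crossover/mutate loop.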

Reinforcement learning is a broad learning approach that covers a ton of different learning algorithms, all with their own secret sauce. It can be applied to decision-making agents of many kinds, including neural nets and AI systems, but also other things like simple robots with state machines.
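For instance, here is a minimal sketch of tabular Q-learning, one of the simplest RL algorithms, on a made-up five-state corridor where the agent is rewarded for reaching the right end (states, rewards, and parameters are all illustrative):

```python
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])

        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0

        # Q-learning update: bootstrap from the best action in the next state
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)
```

There is no population, no crossover, and no mutation here; the agent learns a value estimate from the rewards it observes while interacting with the environment.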

It would be incredibly disingenuous to say GAs are AI/ML, equivalent to GANs, or a kind of reinforcement learning, because those things are all specific and different in ways that make them incompatible ideas.

For example, some GA researchers use GAs to generate patches for buggy code. This has nothing to do with learning: there is never a model of the program, and the evolved solution is purely a patch to the code. It bears no resemblance to these other methods and has nothing to do with neural nets/AI/etc. It makes no sense to try to lump these things together when some are concepts, some are algorithms, and some are specific neural network designs, all with different components, purposes, and applications.

Now, they can be used in conjunction. If you have ever heard of NEAT, for example, it is a GA for evolving neural networks, and those neural networks are AI/ML. You can also evolve an agent for a reinforcement learning process, but those would be separate steps. Neither is a subset of the other.
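As a rough illustration of that combination (not NEAT itself, which also evolves the network topology), here is a sketch of a GA evolving only the weights of a fixed 2-2-1 network on XOR; every name and parameter here is made up for the example:

```python
import math, random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # fixed 2-2-1 network; w holds 9 weights including biases
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)   # higher is better

pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                        # selection
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        cut = random.randrange(9)
        child = a[:cut] + b[cut:]                             # crossover
        child = [g + random.gauss(0, 0.2) for g in child]     # mutation
        children.append(child)
    pop = parents + children

print(round(fitness(pop[0]), 3))
```

The GA is the outer optimizer and the neural net is just the thing being optimized; swap the genome for a code patch or a schedule and the GA part does not change.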

4