
visarga t1_iqsob64 wrote

Cool down. It's not as revolutionary as it sounds.

First of all, they reuse a code model.

> Our model is initialized with a standard encoder-decoder transformer model based on T5 (Raffel et al., 2020).

They use this model to randomly perturb the proposed model's source code.

> Given an initial source code snippet, the model is trained to generate a modified version of that code snippet. The specific modification applied is arbitrary

Then they use evolutionary methods: a population of candidates with a genetic mutation and selection process.

> Source code candidates that produce errors are discarded entirely, and the source code candidate with the lowest average training loss in extended few-shot evaluation is kept as the new query code

A few years ago we had black-box optimisation papers using sophisticated probability estimation to pick the next candidate; it was an interesting subfield. This paper just samples candidates at random.
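The mutate-discard-select loop quoted above can be sketched roughly like this. This is a minimal illustration, not the paper's actual implementation: `mutate` and `evaluate` are hypothetical stand-ins for the T5-based code model and the few-shot loss evaluation.

```python
import random
from typing import Optional

def mutate(code: str) -> str:
    """Hypothetical stand-in for the T5-based code model that
    generates an arbitrary modification of a code snippet."""
    return code + f"\npass  # mutation {random.randint(0, 9999)}"

def evaluate(code: str) -> Optional[float]:
    """Hypothetical evaluator: returns the average training loss in
    extended few-shot evaluation, or None if the candidate errors out."""
    try:
        compile(code, "<candidate>", "exec")  # reject broken source code
    except SyntaxError:
        return None
    return random.random()  # placeholder for a real training loss

def evolve(query_code: str, generations: int = 5, population: int = 8) -> str:
    for _ in range(generations):
        candidates = [mutate(query_code) for _ in range(population)]
        # Candidates that produce errors are discarded entirely
        scored = [(evaluate(c), c) for c in candidates]
        scored = [(loss, c) for loss, c in scored if loss is not None]
        if scored:
            # The candidate with the lowest loss becomes the new query code
            _, query_code = min(scored, key=lambda t: t[0])
    return query_code
```

Note there is no surrogate model or acquisition function anywhere in this loop, which is the commenter's point: the next candidate is drawn blindly, not chosen by estimating where a good mutation is likely to be.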

76

ThroawayBecauseIsuck t1_iqt8f30 wrote

If we had infinite computational power, random evolution would probably be good enough to create things smarter than us. Unfortunately, I believe we have to find something more focused.

24

GenoHuman t1_ir038nj wrote

That's assuming these NNs have the capability to be truly smart in the first place.

1

magistrate101 t1_iqt48ez wrote

So it's an unconscious evolutionary code generator, guided by an internal response to an external assessment. I suppose you could try to use it to generate a better version of itself and maybe, after years, come across something that thinks... You'd really have to stress it with a ton of different problem domains to make something that flexible, though.

10