
fluffy_assassins t1_jb6vmgz wrote

I have a kind of theory.

There used to be self-modifying code in assembler because computing power was more expensive than programmers' time, so programmers spent extra effort to squeeze more out of the expensive hardware.

I'm thinking that when transistors can't shrink anymore (quantum effects and all), we're going to need to squeeze out all the computing power we can get, to the point where we're right back to self-modifying code. Though it will probably be written by AI this time; I don't think a human could debug it!

3

visarga t1_jb79kst wrote

Back-propagation is self-modifying code. There is also meta-back-propagation for meta-learning, which is learning to modify a neural network to solve novel tasks.

At a higher level, language models trained on code can cultivate a population of models with evolutionary techniques.

*Evolution through Large Models* (Lehman et al., 2022)
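Very roughly, that loop could be sketched like this. `llm_mutate` is a hypothetical stand-in for asking a code-trained LLM to rewrite a candidate (in the paper it proposes diffs to programs); it's stubbed with a random tweak here so the toy actually runs:

```python
import random

def llm_mutate(candidate):
    """Hypothetical stand-in: in ELM, a code-trained LLM proposes a
    rewrite of the candidate program. Stubbed with a random tweak
    so this sketch is runnable."""
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.3)
    return child

def fitness(candidate):
    # Toy task: candidate holds coefficients of a polynomial that
    # should approximate f(x) = x^2 on a few sample points.
    return -sum((sum(c * x**k for k, c in enumerate(candidate)) - x**2) ** 2
                for x in [-2, -1, 0, 1, 2])

# Evolutionary loop: mutate, evaluate, keep the best performers.
population = [[random.gauss(0, 1) for _ in range(3)] for _ in range(20)]
for gen in range(200):
    population += [llm_mutate(p) for p in population]
    population.sort(key=fitness, reverse=True)
    population = population[:20]  # selection

print(population[0])  # should drift toward [0, 0, 1], i.e. x^2
```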

4

NothingVerySpecific t1_jb92tmn wrote

I understand some of those words

3

ahtoshkaa2 t1_jb9czan wrote

Same) Haha. Thank god for ChatGPT:

The comment refers to two machine learning concepts, back-propagation and meta-back-propagation, and to how they can be used to modify neural networks.

Back-propagation is a supervised learning algorithm used in training artificial neural networks. It is used to modify the weights and biases of the neurons in the network so that the network can produce the desired output for a given input. The algorithm computes the error between the predicted output and the actual output, propagates gradients of that error backward through the network, and then uses gradient descent to adjust the weights and biases accordingly.
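As a minimal sketch of that loop (a toy two-layer network learning XOR in plain numpy; all specifics here are illustrative):

```python
import numpy as np

# Toy 2-layer network learning XOR via back-propagation.
# The "self-modification" is the repeated update of W1, b1, W2, b2.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the prediction.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Error between predicted and actual output.
    err = pred - y

    # Backward pass: propagate gradients of the error.
    d_pred = err * pred * (1 - pred)
    d_h = (d_pred @ W2.T) * h * (1 - h)

    # Gradient-descent step: adjust weights and biases accordingly.
    W2 -= lr * h.T @ d_pred; b2 -= lr * d_pred.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(pred.round(3))  # should approach [[0], [1], [1], [0]]
```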

Meta-back-propagation is an extension of back-propagation that is used for meta-learning, which is learning to learn. It involves optimizing how the network learns, for example its initial weights, so that it can adapt to novel tasks more efficiently.
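Meta-back-propagation differentiates through the inner training loop itself, which is hairy to sketch; a simpler first-order relative, Reptile, gives the flavor. A toy version, assuming a family of tasks of the form y = a*x:

```python
import random

# Reptile-style meta-learning (a first-order relative of
# meta-back-propagation): learn an initialization w that adapts to
# new tasks in a few gradient steps. Toy tasks: fit y = a * x.

def inner_train(w, a, steps=5, lr=0.1):
    # Ordinary back-propagation/SGD on one task, starting from w.
    for _ in range(steps):
        x = random.uniform(-1, 1)
        grad = 2 * (w * x - a * x) * x   # d/dw of (w*x - a*x)^2
        w -= lr * grad
    return w

w = 0.0                        # meta-parameters (the initialization)
meta_lr = 0.1
for _ in range(2000):
    a = random.uniform(2, 4)   # sample a task
    w_task = inner_train(w, a) # adapt to it
    w += meta_lr * (w_task - w)  # Reptile outer update

print(w)  # drifts toward the centre of the task distribution (~3)
```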

The comment also mentions cultivating a population of models with evolutionary techniques. This refers to genetic-algorithm-style search: a population of candidates is evaluated, and the best performers are selected and recombined to create the next generation. In the Evolution through Large Models (ELM) approach, a language model trained on code acts as the mutation operator, proposing changes to candidate programs.
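And a bare-bones version of the select-and-combine loop that explanation describes (the "networks" here are just weight vectors; everything is illustrative):

```python
import random

# Toy neuroevolution: a population of tiny "networks" (weight
# vectors); the best are selected and recombined (crossover), with a
# little mutation, to form the next generation.

TARGET = [0.5, -1.0, 2.0]  # weights we hope evolution discovers

def fitness(w):
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def crossover(p1, p2):
    # Combine two parents gene-by-gene, then mutate slightly.
    child = [random.choice(pair) for pair in zip(p1, p2)]
    return [g + random.gauss(0, 0.05) for g in child]

population = [[random.gauss(0, 1) for _ in range(3)] for _ in range(30)]
for gen in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection
    population = parents + [
        crossover(random.choice(parents), random.choice(parents))
        for _ in range(20)
    ]

print(max(population, key=fitness))  # should approach TARGET
```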

7