
bgighjigftuik t1_je7if31 wrote

I told GPT-4 to write the code for a small 4-wheeled robot to act as a Roomba-like device. It wrote the MicroPython code for doing so (I did not know the project existed). I bought the board (I had been using an Arduino), re-hooked everything together, and got it working as expected on the second try. It even created what I believe is a kind of memory module for long-term storage of my dorm's layout, so the robot memorized and optimized its cleaning routes on its own.

Not bad for 3 mins of prompting
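The persistent-map idea described above can be sketched in plain (Micro)Python. This is a minimal illustration, not the commenter's actual generated code: the filename, the JSON occupancy-grid format, and the function names are all assumptions. On a MicroPython board the file would simply land on the device's flash filesystem and survive power cycles.

```python
import json
import os

MAP_FILE = "dorm_map.json"  # hypothetical filename; persists on flash under MicroPython

def load_map(path=MAP_FILE):
    """Load the saved occupancy grid, or start fresh if none exists."""
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, ValueError):
        return {"visited": []}  # list of [x, y] cells already cleaned

def mark_visited(grid_map, x, y):
    """Record a cleaned cell so future runs can skip or prioritize it."""
    cell = [x, y]
    if cell not in grid_map["visited"]:
        grid_map["visited"].append(cell)

def save_map(grid_map, path=MAP_FILE):
    """Write the grid back to storage so it survives power cycles."""
    with open(path, "w") as f:
        json.dump(grid_map, f)

# Demo: start from a clean slate, clean two cells, "reboot", reload.
try:
    os.remove(MAP_FILE)
except OSError:
    pass

m = load_map()
mark_visited(m, 0, 0)
mark_visited(m, 0, 1)
save_map(m)

m2 = load_map()  # a fresh load, as after a power cycle
print(m2["visited"])  # [[0, 0], [0, 1]]
```

A real robot would store richer data (obstacle cells, a route ordering), but the mechanism is the same: serialize the map to a file between runs.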

2

bgighjigftuik t1_iswonep wrote

Old-fashioned is a strong word.

When a research breakthrough happens (e.g., diffusion models nowadays), every lab jumps in to grab (or publish) the low-hanging fruit, so publications on that topic explode.

Then we hit the limits of the technology and the law of diminishing returns kicks in. All of a sudden, most researchers shift gears and move on to something more popular. The echo chambers around the former topic dissolve.

Research is still being done on those topics, but at another pace (probably for the better, as published improvements are more fundamental and less incremental).

Another good example of this is reinforcement learning. It was all the rage when DeepMind published the Atari paper, but as innovation slowed down, lots of researchers moved away from it to publish on generative models.

This is why I dislike research. It feels like it has less to do with conviction and researchers' real interests, and more with becoming an influencer and racking up citations in a short period of time.

1