
visarga t1_iy2uc9o wrote

Young'uns, I still remember 8-bit processors in the 1980s and loading programs from cassette tape. My father was still using IBM-style punch cards at work when I was a child; I messed up a whole stack playing with them. One card was one line of code, and he had to sort them back into order by hand.

I think the biggest driver of change in the last 20 years was the leap in computing and communication speed. It took us from the PC era into the internet era, which meant an explosion in online media and, indirectly, enabled the collection of the huge datasets used to train AI today.

The things I've seen. I remember Geoffrey Hinton presenting his pre-deep-learning work on Restricted Boltzmann Machines around 2005. It instantly got my attention and I started following the topic, back when ML was a pariah. Twelve years later I was working in AI. I've had a front-row seat for every step AI has made since 2012, when things heated up. I read the Residual Neural Network paper the day it was published and witnessed the birth of the transformer. I've seen GANs come and go, and even talked with their original author, Ian Goodfellow, right here on reddit before he got famous. I got to train many neural nets and play with even more. Much of what I learned is already obsolete; GPT-3 and Stable Diffusion are so open-ended that projects that once took years now take just weeks.

Funny thing: when Hinton published the RBM paper, he was using unsupervised learning, which I thought was very profound. But in 2012 the big breakthroughs came from supervised learning (ImageNet), and for about five years only supervised learning got the attention and admiration. In the last five, unsupervised learning has won the spotlight again. How the wheel turns.
