
visarga t1_ivipogr wrote

In 2012, neural NLP was in its infancy. We were using recurrent neural networks, chiefly LSTMs, but they struggled with long-range contextual dependencies and were difficult to scale up.

In 2017 came a breakthrough with the paper "Attention Is All You Need": suddenly long-range context and fast, scalable training were possible. By 2020 we had GPT-3, and this year there are over ten comparable models, some open-sourced. They are all trained on an enormous volume of text and show signs of generality in their abilities. Today NLP can solve difficult problems in code, math, and natural language.
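For anyone curious what that breakthrough actually was: the core of the paper is scaled dot-product attention, where every token attends to every other token directly instead of passing information step by step through a recurrence. Here's a minimal NumPy sketch of just that formula (not the paper's full multi-head transformer):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the 2017 paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every query to every key
    scores -= scores.max(axis=-1, keepdims=True)    # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # normalize over keys
    return weights @ V                              # weighted mix of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8) -- each token sees every other token in one step
```

Because that one matrix product compares all pairs of positions at once, it parallelizes across the whole sequence, which is exactly why transformers trained so much faster than LSTMs, where information has to flow through the sequence one step at a time.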


FrankDrakman t1_ivjjg3h wrote

Yes, I agree the recent breakthroughs are staggering, and NLP is moving along rapidly. But my point still stands: this guy's firm had it working well in 2012, and it was being secretly used by the US government.


visarga t1_ivkee3z wrote

I am sure he had something, but I don't believe it was comparable to what we have today. Lots of research has been done in the last 10 years.
