Submitted by singularpanda t3_1060gfk in MachineLearning
TeamRocketsSecretary t1_j3fmzxw wrote
Fusion of LLMs and vision models is something I’m noticing more work on. Also embodied feedback with a human in the loop, especially toward robotics applications. The vision field definitely seems to be co-opting language models, and there is research on making inference with them faster (recurrent transformers), i.e. bringing recurrence back into the transformer, which is interesting since transformers naturally superseded recurrent models once the power of attention came to light.
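To make the attention-vs-recurrence point concrete: a recurrent model processes tokens one step at a time, while attention lets every position mix information from every other position in a single parallel step. A minimal numpy sketch of scaled dot-product attention (all names and shapes here are my own illustration, not from any particular paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a weighted mix of all value rows, so every
    # position attends to every other position in one parallel step;
    # no sequential recurrence over the sequence is needed.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))  # 4 tokens, embedding dim 8
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8): one mixed vector per token
```

A recurrent model would need 4 sequential steps to let the last token see the first; here one matrix product does it, which is exactly why attention displaced recurrence for training throughput.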
There’s also a lot of work to be done on using them in mission-critical applications (e.g. healthcare), as well as on “robustifying” them (transformers operating on raw byte sequences show much more robustness to noise).
So I guess LLMs, originally built for native NLP tasks, are increasingly being applied to non-NLP tasks, especially now in reinforcement learning.