Submitted by WobblySilicon t3_zz0tua in MachineLearning
Hello Everyone!
Are there any research problems in language comprehension and summarization tasks which don't require much compute? I wish to play with NLP/NLU now, but the compute requirements are enormous. After reading around, I found that the text-to-video problem is being actively researched and may not require as much compute as bare language models do. Are there any novel ideas in the text-to-video domain that don't require much compute?
Mefaso t1_j29980m wrote
>i found that text to video problem is being actively researched and may not require as much compute as bare language models
There are always opportunities for research with little compute; usually this means your research has to avoid training new models, or at least avoid training them from scratch.
However, text-to-video models are typically very compute intensive.
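The "avoid training from scratch" advice above usually means freezing a pretrained model and training only a small task head on top. A minimal PyTorch sketch of that pattern is below; the `TinyEncoder` is a toy stand-in (not a real pretrained checkpoint), and the 2-class task and all sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained language encoder. In real low-compute work
# you would load an actual pretrained checkpoint instead of this class.
class TinyEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)

    def forward(self, ids):
        # Mean-pool token representations into one vector per sequence.
        return self.layer(self.embed(ids)).mean(dim=1)

encoder = TinyEncoder()

# Freeze the "pretrained" weights: no gradients are stored for them,
# so only the small head below is actually trained.
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(64, 2)  # e.g. a hypothetical 2-class comprehension task
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# One training step on a fake batch of token ids and labels.
ids = torch.randint(0, 1000, (8, 16))
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(head(encoder(ids)), labels)
loss.backward()
opt.step()

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in encoder.parameters())
print(f"training {trainable} of {total} parameters")
```

Because the frozen encoder is only run forward, this fits on very modest hardware; the trainable parameter count is a tiny fraction of the total, which is the whole point of the approach.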