localhost80 t1_jdcrfd0 wrote
Reply to comment by _Arsenie_Boca_ in [P] Open-source GPT4 & LangChain Chatbot for large PDF docs by radi-cho
GPT-4 charges per token, so the cost depends on the length of the document.
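A rough back-of-the-envelope sketch of that per-token cost. The words-per-token ratio and the price are hypothetical placeholders (real counts require the model's tokenizer, e.g. `tiktoken`, and current API pricing):

```python
def estimate_cost(text: str, price_per_1k_tokens: float) -> float:
    """Rough cost estimate for sending `text` to a per-token-priced API.

    Assumes ~0.75 words per token, a common English heuristic; a real
    estimate should tokenize the text with the model's own tokenizer.
    """
    words = len(text.split())
    est_tokens = words / 0.75  # heuristic, not an exact token count
    return est_tokens / 1000 * price_per_1k_tokens

# Example: a ~3000-word document at a hypothetical $0.03 per 1K tokens
doc = "word " * 3000
print(f"${estimate_cost(doc, 0.03):.2f}")  # roughly $0.12
```

So a long PDF split into many chunks multiplies this cost by the number of chunks sent per query.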
localhost80 t1_jdbmrtf wrote
Reply to [P] One of the best ChatGPT-like models (possibly better than OpenAssistant, Stanford Alpaca, ChatGLM and others) by [deleted]
Nice clickbait title!
I wish you the best of luck on your journey as a student but no need to hype up your project with insane claims.
localhost80 t1_it2pnor wrote
Reply to comment by livremente in [R] MIT releases all slides for efficient ML course by That_Violinist_18
It's perfectly efficient because the Python is mostly configuration code. Just because you call a function from Python doesn't mean the function executes in Python. GPU routines run in CUDA, for example.
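A small illustration of that point, using NumPy as a stand-in (the same dispatch pattern applies when PyTorch hands work to CUDA kernels): the Python line is just configuration, and the actual arithmetic runs in compiled BLAS routines outside the interpreter.

```python
import numpy as np

# Python here only sets up the computation...
a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

# ...while this single call dispatches to optimized compiled code
# (BLAS on CPU; the analogous PyTorch call would dispatch to CUDA).
c = a @ b

print(c.shape)  # (1000, 1000)
```

The interpreter overhead is a few function calls; the million-element matrix product itself never touches Python bytecode.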
localhost80 t1_jdct42q wrote
Reply to comment by Different_Prune_3529 in [P] Open-source GPT4 & LangChain Chatbot for large PDF docs by radi-cho
It will perform better on questions about the knowledge in the documents. It's a comparison of GPT-4 with global knowledge vs. GPT-4 with local knowledge.
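A minimal sketch of what "GPT-4 with local knowledge" means in practice: retrieve the document chunks most relevant to the question and prepend them to the prompt. The relevance score here is a naive word-overlap placeholder; a real pipeline (as in the LangChain project above) uses embeddings and a vector store.

```python
def build_prompt(question: str, chunks: list[str], top_k: int = 2) -> str:
    """Assemble a retrieval-augmented prompt from local document chunks.

    Naive relevance: count of shared lowercase words with the question.
    This is a hypothetical sketch, not the project's actual retriever.
    """
    q_words = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    context = "\n---\n".join(ranked[:top_k])
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

chunks = [
    "The warranty period is 24 months from purchase.",
    "Returns require the original receipt.",
    "Our office hours are 9am to 5pm.",
]
print(build_prompt("How long is the warranty period?", chunks, top_k=1))
```

The model then answers from this injected local context instead of (or in addition to) whatever it absorbed during pretraining, which is why answers about the PDFs improve.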