Submitted by TFenrir t3_10vm6fh in singularity
I have so many thoughts about what we are seeing, and I'm starting to think through the near future of a few different ways we interact with LLMs - I'm wondering what you all think!
Search
This is the most pressing one, and the one we will probably see more of in the next couple of days. The reality is, we already have some language model functionality in Google Search. BERT has played an increasingly relevant role in searches, generally by improving the quality of results and the 'answer' snippet Google now provides, through better understanding of both the query and the results.
The thing about search is that you generally don't want or need a conversation in this particular context. If I am looking for a website - Adept's, let's say - I go to Google and type in 'adept ai' or something to that effect, and the first result is going to be their website. Actions like this take up a significant portion of my current searches. Maybe a quick math question, or another boring question that doesn't need elaboration. In these cases I either get what I need from the snippet of text up top, or from the top link.
Both Bing and Google already do the 'summary snippet' thing, which I think we're all using at this point. Will LLMs improve this functionality? If so, how? What are the risks?
Now here's the thing - I am used to my searches being FAST. Like, really fast. I don't want these low-effort questions to take very long, and I think that the majority of answers I want delivered this way don't really need an LLM.
Honestly, for the things I traditionally use search for - instances like the ones I mentioned above - I don't think LLMs have a lot of opportunity. I use ChatGPT every day, and in these instances it's usually not only much slower than a Google search, it's often just incorrect, and its inability to search the web becomes an impediment.
Really, what I want is a chatbot that I can do search in.
Chatbots
So chatbots are currently quite impressive, even though they still have their issues. I primarily use mine to help me with work: summarizing notes, expanding on ideas I have, helping me write documentation, writing code (although Copilot covers a significant portion of my use cases there), creating mock data, and converting between data formats. That last one is the most amazing - converting a natural language document into formatted data that can generate things like graphs is really impressive (a sketch of what I mean is below).
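To make that last use case concrete, here's a minimal sketch using the old-style `openai` completion API. The prompt wording and JSON field names are my own assumptions for illustration, not anything the API prescribes:

```python
import json
import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

NOTES = """Q1 revenue came in at 1.2M, Q2 dipped to 0.9M after the
outage, and Q3 recovered to 1.5M on the back of the new launch."""

# Hypothetical prompt of my own design: ask the model to emit strict JSON
# that a charting library could consume directly.
prompt = f"""Convert the following notes into a JSON array of objects
with keys "quarter" and "revenue_millions". Output only JSON.

Notes:
{NOTES}

JSON:"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=200,
    temperature=0,  # deterministic output helps when you need to parse the result
)

rows = json.loads(response["choices"][0]["text"])
print(rows)  # e.g. [{"quarter": "Q1", "revenue_millions": 1.2}, ...]
```

From there, the parsed rows can be fed straight into any plotting library - the model does the messy natural-language-to-structure step, and ordinary code does the rest.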
The best chatbot we currently have is ChatGPT, and it's amazing. I use it all the time, and I tell everyone I know to use it as well. But for the sake of this post I'm thinking more about what we are missing and what we will start seeing soon.
I think we'll see search inside of chatbots. I can picture a couple of different designs for this, but I think Google is going to need to do this to keep their search relevant. I imagine something like the current ChatGPT interface, but where some questions return what is essentially an iframe inside the chat window. We see a lot of different designs being tried right now - You.com's implementation, for example - but I increasingly think the chat interface should be the primary interface for LLM integration, and Search should find its niche within that world (a rough sketch of the pattern is below).
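Under the hood, I imagine something like this: the chatbot first decides whether the query needs live results, runs a search if so, and then answers with those results in context. This is a hedged sketch - `web_search` is a hypothetical stand-in for whatever search API (Bing, Google, You.com) would actually back it:

```python
import openai


def web_search(query: str) -> list[str]:
    """Hypothetical stand-in for a real search API call."""
    raise NotImplementedError("wire this up to Bing / Google / You.com etc.")


def complete(prompt: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=300, temperature=0
    )
    return resp["choices"][0]["text"].strip()


def chat_with_search(user_message: str) -> str:
    # Step 1: let the model decide whether fresh web results are needed.
    decision = complete(
        f'Does answering "{user_message}" require up-to-date web results? '
        "Answer only YES or NO."
    )
    if decision.upper().startswith("YES"):
        # Step 2: ground the answer in retrieved snippets, which the UI
        # could also render directly (the "iframe in the chat" idea).
        snippets = "\n".join(web_search(user_message)[:3])
        prompt = f"Using these search results:\n{snippets}\n\nAnswer: {user_message}"
    else:
        prompt = user_message
    return complete(prompt)
```

The interesting design question is who makes the YES/NO call - a second model pass like here, a classifier, or the UI itself - but the shape of the loop stays the same.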
I think chatbots need to be faster. I feel in my gut that a faster back-and-forth, plus an embedded search engine, would suddenly make standalone search interfaces much less useful. I know Google is looking to deploy a small LaMDA model for their first foray into LLM chatbot territory (Google Bard), and my guess is they're hoping the speed and scalability of a model that small will outweigh the quality a larger model could provide. We'll see, and soon I think, whether that's true.
Chatbots are still not quite the ideal interface for me, of course - so in what ways do I think LLMs and LLM interfaces need to improve? I'm going to describe what I think we'll see over the next few years - like 1 or 2 - in terms of a personal assistant.
Personal Assistant
I think over the next few weeks, months, and years we'll see something that turns into the personal assistant of science fiction - and an incredibly disruptive tool that will shake the foundation of our relationship with technology. I sound crazy, but I think you might be the only audience who won't think I am. Let me make my case!
Quality is going to keep increasing. It will be genuinely confusing to track exactly how this plays out... are we going to see weekly 'updates'? Monthly? Who knows, but I'm pretty sure we'll see many jumps in 'general quality' over the next few years, as the learnings and techniques from today's research papers are implemented in our current and future models. I think this also includes improvements in context windows. It will be really interesting to see just how long an LLM can remain coherent, as research papers push context windows anywhere from 20k to 100k tokens. For comparison's sake, we currently see an upper limit of about 4k tokens in a model's context window, which is roughly 3k words (a token is about three-quarters of a word). What happens when we can hit even... 50k words? Seriously - share your thoughts!
After that, though, we're going to eventually cave, as a society, and connect these things to our private information. I don't know how they will do it - maybe an on-device LLM, some security promises, or maybe an open source effort will be the first brave enough. https://github.com/LAION-AI/Open-Assistant - I'm keeping an eye on this one, for example. Regardless, someone will do it, and I think it will be practically useful enough to become popular, but also very controversial. Your answers will be more personalized; your searches will be more personalized. And this information will be stored in some long-term way - maybe just a vector store, like we use now (a toy sketch of that kind of memory is below), or maybe a technique like RETRO (from a DeepMind paper that lets an LLM retrieve from a large external text database at inference time) will make its way into our assistants in the next year or so - and honestly I'd bank on that. Literally - how much would you pay for a language model that can talk to you about all your financial information and help you organize your life with that level of understanding? Your calendars, your expenses, your emails - all that private information is a lot of power to give to something else, and that means so many good and dangerous things.
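As a crude illustration of the vector-store idea: embed every fact the assistant learns about you, and at question time pull back the nearest facts to stuff into the prompt. This sketch uses OpenAI's `text-embedding-ada-002` endpoint, but any embedding model would do; the function names and the brute-force search are my own illustrative choices:

```python
import numpy as np
import openai

# Long-term "memory": just a list of (text, embedding) pairs.
memory: list[tuple[str, np.ndarray]] = []


def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])


def remember(fact: str) -> None:
    memory.append((fact, embed(fact)))


def recall(question: str, k: int = 3) -> list[str]:
    q = embed(question)
    # Cosine similarity against every stored fact; fine at small scale, though
    # a real assistant would use an approximate-nearest-neighbour index.
    scored = sorted(
        memory,
        key=lambda item: float(
            q @ item[1] / (np.linalg.norm(q) * np.linalg.norm(item[1]))
        ),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]


remember("Rent is $1,800/month, due on the 1st.")
remember("Dentist appointment on March 3rd at 2pm.")
print(recall("What are my recurring expenses?"))
```

The recalled facts would then be prepended to the assistant's prompt, which is how it could "know" your finances and calendar without retraining.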
I think these assistants will start to utilize 'actions' to interact with the world on our behalf, i.e. interacting directly with the world around you. This is discussed a lot in that LAION effort above, but essentially your smart home devices will eventually connect to something you can talk to, by voice or text, and it will be able to control them entirely. It's not just that you'll be able to tell it to turn off your lights. It'll be smart enough that you can ask it things like... hey, can you make the lights festive in here? And if you have smart bulbs that can change colour, it'll understand that it's Christmas, that you celebrate it, and set the lights to red and green (a toy sketch of this is below). That's a simple example, but it highlights the underlying power of language models that can interact with the world. I think we'll even see some efforts soon when Google releases the script editor. It won't be just smart homes - I think it will navigate browsers and web apps on our behalf.
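Here's roughly how the 'festive lights' request could work end to end: the model translates the utterance into a small JSON command, and a dispatcher applies it to the bulbs. The command schema and `set_bulb_colors` are hypothetical inventions of mine; a real integration would go through Home Assistant, Matter, or a vendor API:

```python
import json
import openai

# Hypothetical prompt: give the model the date so it can infer what
# "festive" means, and ask for a machine-readable command back.
COMMAND_PROMPT = """You control smart bulbs. Today's date is {date}.
Translate the user's request into JSON: {{"action": "set_colors", "colors": [...]}}.
Output only JSON.

Request: {request}
JSON:"""


def set_bulb_colors(colors: list[str]) -> None:
    """Hypothetical device call; a real one would hit Home Assistant or a vendor API."""
    print(f"Setting bulbs to {colors}")


def handle(request: str, date: str = "2022-12-24") -> None:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=COMMAND_PROMPT.format(date=date, request=request),
        max_tokens=100,
        temperature=0,
    )
    command = json.loads(resp["choices"][0]["text"])
    if command.get("action") == "set_colors":
        # Given the December date, the model can infer "festive" = red/green.
        set_bulb_colors(command["colors"])


handle("Hey, can you make the lights festive in here?")
```

The point isn't this particular schema - it's that once the model's output is structured, anything with an API (bulbs, browsers, web apps) becomes controllable by conversation.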
Conclusions
I think all of this is going to fundamentally change everything from our day-to-day jobs, to how we interact with our smart homes, to how we manage our lives. And soon - it will become clearer and clearer as we integrate these models and their successors into more of our tools. We have two events coming up over the next few days, and many companies are still waiting to release their own products - Anthropic's Claude, DeepMind's Sparrow, Adept's ACT-1 - all companies with a pedigree. What are your theories and predictions? What do you think is lacking? Which papers do you think will shape the models we use daily in the near future?
just-a-dreamer- t1_j7i9rxj wrote
Well, there is no such thing as a 'personal' assistant. That is why all the big companies will cut their employees off from using these tools.
An AI assistant is still connected to the provider; whatever information you give it ends up in their data bank.
Amazon is already ordering employees to stop using ChatGPT, because it doesn't want the inner workings of Amazon exposed to its competitor Microsoft.
In the grand scope of things, AI will probably not be used as a tool to aid humans in their jobs. It is aimed at replacing humans and, eventually, taking away all jobs.