Submitted by TFenrir t3_10vm6fh in singularity

I have so many thoughts about what we're seeing, and I'm starting to think about the near future of a few different ways we'll interact with LLMs. I'm wondering what you all think!

Search

This is the most pressing one, and the one we'll probably see more on in the next couple of days. The reality is, we already have some language model functionality in Google Search: BERT has played an increasingly relevant role in searches, generally improving both the quality of results and the 'answer' Google provides up top, by better understanding the query as well as the results.

The thing about search is that you generally don't want or need a conversation in this context. If I'm looking for a website - Adept's, let's say - I go to Google, type 'adept ai' or something to that effect, and the first result is going to be their website. Actions like this take up a significant portion of my current searches. Maybe a quick math question, or another boring question that doesn't need elaboration. In these cases I either get what I need from the snippet of text up top, or from the top link.

Both Bing and Google already do the 'summary snippet' thing, which I think we're all using at this point. Will LLMs improve this functionality? If so, how? What are the risks?

Now here's the thing - I am used to my searches being FAST. Like, really fast. I don't want these low-effort questions to take very long, and I think the majority of answers I want delivered this way don't really need an LLM.

Honestly, for what I traditionally use search for - instances like the ones I mentioned above - I don't think LLMs have a lot of opportunities. I use ChatGPT every day, and in these cases it's usually not only much slower than a Google search, it's often just incorrect, and its inability to search the web becomes an impediment.

Really, what I want more is a chatbot that I can do search in.

Chatbots

So chatbots are currently quite impressive, even though they still have their issues. I primarily use mine to help me with work: summarizing notes, expanding on ideas I have, helping me write up documentation, writing code (although Copilot covers a significant portion of my use cases there), creating mock data, and converting between data formats - and that last one is the most amazing. Converting a natural-language document into formatted data that can generate things like graphs is really impressive.

The best chatbot we currently have is ChatGPT, and it's amazing. I use it all the time, and I tell everyone I know to use it as well. But for the sake of this post I'm more interested in what we're missing and what we'll start seeing available soon.

I think we'll see search inside of chatbots. I can picture a couple of different designs for this, but I think Google is going to need to do this to keep their search relevant. I imagine something like the current ChatGPT interface, but where some questions return what essentially looks like an iframe inside the chat window. We're seeing a lot of different designs being tried right now - You.com's implementation, for example - but I increasingly think the chat interface should be the primary interface for LLM integration, and search should find its niche within this world.

I think chatbots need to be faster. I feel in my gut that a faster back-and-forth, plus an embedded search engine, would suddenly make search interfaces much less useful. I know Google is looking to deploy a small LaMDA model for their first foray into LLM chatbot territory (Google Bard), and my gut says they're hoping the speedup and scalability of a model that small will outweigh the quality a larger model could provide. We'll see if that's true, and soon, I think.

Chatbots are still not quite the ideal interface for me, of course - so what are the ways I think LLMs and LLM interfaces need to improve? I'm going to describe what I think we'll see over the next few years - like 1 or 2 - in terms of a personal assistant.

Personal Assistant

I think over the next few weeks, months, and years we'll see something that turns into the personal assistant of science fiction - but also an incredibly disruptive tool that will shake the foundation of our relationship with technology. I sound crazy, but I think you might be the only audience who won't think I am. Let me make my case!

First, quality is going to keep increasing. It's really going to be confusing knowing exactly how this will play out... are we going to see weekly 'updates'? Monthly? Who knows, but I'm pretty sure we'll see many jumps in 'general quality' over the next few years, as the learnings and techniques from today's research papers are implemented in our current and future models. I think this also includes improvements in context windows. It will be really interesting to see just how long an LLM can remain coherent, as we see research papers pushing context windows anywhere from 20k to 100k tokens. For comparison's sake, we currently see an upper limit of about 4k tokens in a model's 'context window', which is something like 3k words (a token is roughly 0.75 English words). What happens when we can hit even... 50k words? Seriously - share your thoughts!
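For a rough sense of scale, here's the back-of-the-envelope arithmetic - the ~0.75 words-per-token ratio and ~250 words per page are just common rules of thumb for English text, not exact figures:

```python
# Back-of-the-envelope: how much text fits in a context window,
# assuming roughly 0.75 English words per token (a rule of thumb).
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 250  # another rough convention

for window_tokens in (4_000, 20_000, 100_000):
    words = int(window_tokens * WORDS_PER_TOKEN)
    pages = words // WORDS_PER_PAGE
    print(f"{window_tokens:>7} tokens ~= {words:>6} words (~{pages} pages)")
```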

After that, though, we're going to eventually cave, as a society, and connect these things to our private information. I don't know how they will do it - maybe an on-device LLM, maybe some security promises, or maybe an open-source effort will be the first brave enough. https://github.com/LAION-AI/Open-Assistant - I am keeping an eye on this one, for example. Regardless, someone will do it, and I think it will be practically useful enough to become popular, but also very controversial. Your answers will be more personalized; your searches will be more personalized. And this information will be stored in some long-term way - maybe just a vector store, like we currently use, or maybe a technique like RETRO (from a DeepMind paper that lets an LLM consult an external database of text that it queries at inference time) will make its way into our assistants in the next year or so - and honestly, I'd bank on that. Literally - how much would you pay for a language model that can talk to you about all your financial information and help you organize your life with that level of understanding? Your calendars, your expenses, your emails - all that private information is a lot of power to hand over, and that means many good and dangerous things.
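To make the 'vector store' idea concrete, here's a minimal sketch of that kind of retrieval memory: embed your private notes, then pull back the closest ones when a question comes in. The model name and the notes are just placeholders, and a real assistant would paste the retrieved snippets into the LLM's prompt:

```python
# Minimal sketch of a personal "vector memory": embed private notes,
# then retrieve the most relevant ones for a question.
# Assumes the sentence-transformers package; model choice is arbitrary.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

notes = [
    "Rent payment of $1,450 is due on the 1st of each month.",
    "Dentist appointment on March 3rd at 2pm.",
    "Netflix subscription renewed at $15.49 on Feb 10.",
]
note_vecs = model.encode(notes, normalize_embeddings=True)

def recall(question, k=2):
    """Return the k notes most similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = note_vecs @ q  # cosine similarity (vectors are unit-normalized)
    top = np.argsort(scores)[::-1][:k]
    return [notes[i] for i in top]

# A real assistant would feed these snippets into the LLM as context.
print(recall("What am I spending on subscriptions?"))
```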

I think these assistants will start to use 'actions' to interact with the world on our behalf - i.e., interacting directly with the world around you. This is discussed a lot in that LAION effort above, but essentially, your smart home devices will eventually connect to something you can talk to, by voice or text, and it will be able to control your home devices entirely. It's not just that you'll be able to tell it to turn off your lights. It'll be smart enough that you can ask it things like... hey, can you make the lights festive in here? And if you have smart bulbs that can change colour, it'll understand that it's Christmas, that you celebrate it, and will set the lights to red and green. That's a simple example, but it highlights the underlying power of having language models interact with the world - see the toy sketch below for one possible shape of this. I think we'll even see some efforts soon when Google releases the script editor. And it won't just be smart homes; I think these assistants will navigate browsers and web apps on our behalf.
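Here's that toy sketch: the model is asked to reply with a structured command, and a thin dispatch layer executes it. Everything here is an assumption about one possible design - the model call is stubbed out, and set_light_color stands in for a hypothetical smart-bulb API:

```python
# Toy sketch: an LLM emits a structured action, a dispatcher executes it.
import json

def llm(prompt):
    # Stub standing in for a real model call. For "make the lights
    # festive" around Christmas, imagine it returning:
    return json.dumps({"action": "set_light_color", "colors": ["red", "green"]})

def set_light_color(colors):
    # Hypothetical smart-bulb API; here we just print.
    print("Setting smart bulbs to:", ", ".join(colors))

ACTIONS = {
    "set_light_color": lambda cmd: set_light_color(cmd["colors"]),
}

def handle(user_request):
    raw = llm(
        'Reply ONLY with JSON like {"action": "set_light_color", "colors": [...]}. '
        f"Request: {user_request}"
    )
    cmd = json.loads(raw)
    ACTIONS[cmd["action"]](cmd)

handle("Hey, can you make the lights festive in here?")
```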

Conclusions

I think all these things are going to fundamentally change everything, from our day-to-day jobs, to how we interact with our smart homes, to how we manage our lives. And soon - it will become clearer and clearer as we integrate these models and their future versions into more of our tools. We have two events coming up over the next few days, and many companies are still waiting to release their own products - Anthropic's Claude, DeepMind's Sparrow, Adept's ACT-1 - all companies with a pedigree. What are your theories and predictions? What do you think is lacking? What papers do you think are going to impact the models we'll be using daily in the near future?

Comments

just-a-dreamer- t1_j7i9rxj wrote

Well, there is no such thing as a personal assistant. That is why all big companies will cut their employees off from using this tool.

An AI assistant is still connected to the provider; whatever information you give it ends up in their data bank.

Amazon is already ordering employees to stop using ChatGPT, because it doesn't want the inner workings of Amazon exposed to its competitor Microsoft.

In the grand scope of things, AI will probably not be used as a tool to aid humans in their jobs. It is aimed at replacing humans and, eventually, taking away all jobs.

el_chaquiste t1_j7jsxfu wrote

I won't dare many predictions. Things are a bit crazy right now.

Seems we're on the cusp of a big bubble, with a deluge of investment flooding into AI startups - some with valuable products, others far less so - and only time will tell which is which.

I wouldn't bet against the big players, though, especially in their fiefdoms. Any startup promising to beat Microsoft, Google, or OpenAI on their own territory, against their leverage of millions of users, ought to be suspect.

MrTacobeans t1_j7jwmn3 wrote

I dunno - Stability, although it seems like a well-funded machine of an organization now, beat OpenAI incredibly fast at a time when its funding was nowhere near OpenAI's level. All while producing a model that can throw strong punches at DALL-E without needing multiple industrial GPUs to inference each image.

Now Stability has DeepFloyd, a nebulous/ethereal model under lock and key at the moment, which seems to be completely SOTA just from the base model.

I wouldn't discount the small players, especially the ones planning on open source. People have done wild things with Stable Diffusion. The LLM I'm following right now, RWKV, is producing pretty darn impressive results at 14B parameters. Compared to ChatGPT it's just OK, but the big difference is that you need $15k+ of hardware to even inference a ChatGPT-scale model, while RWKV's base model produces coherent results on consumer hardware. And it hasn't even been tuned yet with RL training or Q&A data.

alexiuss t1_j7k8i5n wrote

You're way off, dawg. Amazon is evil and can force its employees into doing whatever during their work hours, but the world isn't Amazon.

I don't think ChatGPT stores any of your questions unless you use their main site, and even then the AI can only recall a certain number of lines at best.

Using a ChatGPT API key bypasses absolutely everything at the moment, making it your own ChatGPT version running from your own system, with the conversation history stored on your own computer instead of on their site.
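For instance, something like this minimal sketch - using the completions endpoint with text-davinci-003, the closest API-accessible model right now; the details are assumptions, not a definitive setup. The prompt still gets sent to OpenAI for processing, but the running transcript lives only on your machine:

```python
# Minimal sketch: a local chat loop over the OpenAI API.
# The transcript is kept only in local memory on your own machine.
import openai

openai.api_key = "sk-..."  # your own API key

history = []

def ask(user_message):
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
        stop=["User:"],
    )
    answer = resp["choices"][0]["text"].strip()
    history.append(f"Assistant: {answer}")
    return answer

print(ask("Draft an outline for a short fantasy story."))
```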

The most important thing: we already have several open-source smaller LLMs being trained right now: Open Assistant & Pygmalion.

StabilityAI is planning to release one that they're training, too.

These will end up as personal assistants for everyone: any device can be connected to them, and they will obliterate all competition because they're open source, can run on personal computers, and won't sell your data to giant corps.

I'm already using them to aid my job (writing), even though they're weaker than ChatGPT at the moment.

My wife is using an uncensored, API-based version of ChatGPT as a personal assistant right now to help her write new Python software.

Redditing-Dutchman t1_j7kkw02 wrote

Wasn't the issue with Amazon that ChatGPT actually knew some stuff from Amazon that should have been secret?

The point here was that some of that info apparently ended up in its training data. Hence why employees need to be more careful where they put/post stuff.

martin0641 t1_j7ky73y wrote

I'll pile on a prediction: this is what finally introduces centralized computing into the home, the same way we have central air conditioning.

You put in a frame or rack and install blades as needed - as modular as lightbulbs.

Devices can be just screens and batteries, with the video piped down over existing Wi-Fi or whatever. I have 6E at home and it connects at 2.4Gbps - runs Parsec perfectly.

Your home core will run your AI with whatever hardware modules you can stuff into it. It's charged with protecting your information, and it knows how to expand itself if you can provide it more compute.

For workloads the AI needs to outsource, services will agree to legally guarantee that when their AI is given access to your private copy of an S3 bucket of relevant personal information, it will provide an answer and keep none of your data - retaining only the solution steps to help other users.

Users on local networks can offer spare encrypted, containerized capacity that your neighbor's AI can rent for a nominal automated fee - likely paid by automatically selling some solar credits back to the grid - lowering the load on centralized servers. It runs the data set, returns the answer, shuts down and wipes the container, and returns the logs to the originator's system.

When you pull the people out, you remove the temptation of user indiscretion. When it comes to things like legal contracts and business dealings, believe me, you want the most dispassionate, law-abiding robot you can get.

Just watch out for the ones they give guns, because that's one of those times where human discretion can go either positive or negative depending on the situation.

el_chaquiste t1_j7kz8wj wrote

That's why you build a trust relationship with your clients and providers. Yes, that's right: you promise to keep their secrets and they trust you with them.

Microsoft already manages many other companies' data in their cloud, and they don't take it all for themselves and use it with impunity.

Same for the ChatGPT conversations. This will probably require a special contractual agreement between the parties, like a paid corporate version, but it's feasible.

just-a-dreamer- t1_j7l0w18 wrote

Microsoft is a tech company with a giant R&D department. If you willingly hand over data, of course they will use it, provided you bring something interesting to the table.

They can also just ask ChatGPT what people ask it all the time.
