Hands0L0 t1_jckbm7h wrote

Reply to comment by Akimbo333 in Those who know... by Destiny_Knight

No, because the predictive text model needs the entire conversation history as context to predict what to say next, and the only place to store that history is in memory. If you run out of RAM, you run out of room for returns.
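Roughly what that trade-off looks like in code. This is a toy sketch, not any real model's API: token counting is faked with a whitespace split (real models use a proper tokenizer), and the function names are made up for illustration.

```python
# Toy illustration: conversation history competes with the reply
# for the same fixed context budget.

def count_tokens(text):
    # Stand-in for a real tokenizer; just counts words.
    return len(text.split())

def fit_history(history, max_context, reserve_for_reply):
    """Keep the most recent messages that fit in the context window,
    leaving `reserve_for_reply` tokens free for the model's response."""
    budget = max_context - reserve_for_reply
    kept, used = [], 0
    for msg in reversed(history):          # newest messages first
        cost = count_tokens(msg)
        if used + cost > budget:
            break                          # older messages get dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["hello there", "hi how are you", "tell me about llamas please"]
# With a 10-token window and 4 tokens reserved for the reply,
# only the newest message fits; the rest of the history is dropped.
print(fit_history(history, max_context=10, reserve_for_reply=4))
```

The point: the bigger the reply you want back, the less history you can keep, and vice versa.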


Hands0L0 t1_jck7ifi wrote

Reply to comment by Akimbo333 in Those who know... by Destiny_Knight

Not if there is a token limit.

I'm sorry, I don't think I was being clear. The token limit is tied to VRAM. You can load the 30b on a 3090, but it swallows up 20 of the 24 GB of VRAM for the model and prompt alone. That leaves you 4 GB for returns.
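Back-of-the-envelope math for why a 30b model crowds a 24 GB card. The numbers below are illustrative assumptions (4-bit quantization, weights only), not measurements; real usage adds KV cache and activation overhead on top.

```python
def vram_for_weights(n_params_billion, bits_per_param):
    """GB needed just to hold the model weights at a given quantization."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 30B-parameter model at 4-bit quantization needs ~15 GB for weights alone.
print(round(vram_for_weights(30, 4), 1))   # 15.0

# At 8-bit it's ~30 GB -- already over a 3090's 24 GB before any context.
print(round(vram_for_weights(30, 8), 1))   # 30.0
```

Whatever the weights and prompt don't consume is all that's left for generating the response, which is why the context/return budget is so tight.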


Hands0L0 t1_jck1kg0 wrote

Reply to comment by liright in Those who know... by Destiny_Knight

Llama is an LLM that you can download and run on your own hardware.

Alpaca is, apparently, a modification of the 7b version of Llama that is as strong as GPT-3.

This bodes well for having your own unfiltered LLM running locally. But there's still plenty of room for progress.


Hands0L0 t1_j9r49i6 wrote

I think you may be overstating human creativity. There are plenty of visionaries among us who create new concepts, but the vast majority of us are -boring-. We share the same memes, and when we try to make our own memes they fall flat. How many people do you know who have tried to write a book, only for it to end up rife with established tropes? How many hit songs use the same four-chord progression? When was the last time you experienced something -truly- unique? It's been a long time for me, that's for sure.

So I don't think "making something totally unique" is the best metric for AGI. Being able to infer things? That's where I'm at. But I'm not an expert, so don't take what I'm claiming as gospel


Hands0L0 t1_j9pzdya wrote

I feel like the best metric I can think of that is totally feasible is this: show an AI a video without dialogue, with all of the concepts being delivered strictly by how the human actors interact in the video. If the AI can tell you all about the video in precise detail, we're right there. I honestly think this isn't very far off (10-20 years). There are plenty of Python libraries that can detect what objects are in live video; the next step is understanding interactions. Once it can comprehend something that it itself can't ever reproduce, AGI is imminent.


Hands0L0 t1_j01xkig wrote

I think you are going to see a lot of companies releasing AI software that helps you do your job. It will provide suggestions, but AI won't be able to do the job for you. Let's say someone sends you an e-mail asking about a project. The AI will be able to read the e-mail, look at your calendar and the details of the project, and provide suggestions for an e-mail response.