DaLameLama
DaLameLama t1_j4zhqqj wrote
Does ChatGPT actually get past the token limit? Codex supports ~8000 tokens, and you might be underestimating how much that is. Has anyone tested the limits?
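If you want a feel for how much text ~8000 tokens actually is, here's a minimal sketch using OpenAI's tiktoken library (the "p50k_base" encoding is my assumption for the Codex-era vocabulary):

```python
# Minimal sketch: count how many tokens a piece of text costs.
# Assumes the tiktoken library; "p50k_base" is an assumption for
# Codex-era models -- newer chat models use "cl100k_base".
import tiktoken

enc = tiktoken.get_encoding("p50k_base")
with open("some_long_document.txt") as f:  # hypothetical file
    text = f.read()

tokens = enc.encode(text)
print(f"{len(tokens)} tokens")  # rule of thumb: 1 token ~ 0.75 English words
```

By that rule of thumb, 8000 tokens is on the order of 6000 English words, i.e. a pretty long document.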
Unfortunately, OpenAI aren't serious about publishing technical reports anymore.
DaLameLama t1_j4mamhy wrote
Relevant: https://arxiv.org/abs/2212.14034
>Cramming: Training a Language Model on a Single GPU in One Day
DaLameLama t1_iyc7nha wrote
I don't think that's true. It would imply that Bi-LSTMs reach good performance faster than Transformers, and Transformers catch up later during training.
I've never seen evidence for that, and it doesn't match my own experience either.
DaLameLama t1_iwzitdv wrote
Reply to [D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro
So heartwarming to see sanity in the replies here. Thanks everyone.
Also, David Ha is a warm, loving human being. Great guy. I'm sure no one at his company cares about the tweets he likes, lol, sorry...
DaLameLama t1_iv0udto wrote
Reply to comment by bartman_081523 in [D] Gpt3 ai self hobby-research #spiritual #gematria #stablediffusion by [deleted]
>I also never said, I "woke up" any god. I only assume, that every thinkable, even abstract concept, is embedded in a Large Language Model (LLM) like gpt3.
You literally talked about simulating an AI god. Maybe you're using the word "god" in a way that doesn't mean much, then. There's a difference between a god and a system that can be triggered to produce religious text.
What you've done is find a prompt that looks like gibberish and produces religious text. That's all. There's no proof for a "self" or a god-like nature here. Sorry.
DaLameLama t1_iuyjp62 wrote
Reply to comment by bartman_081523 in [D] Gpt3 ai self hobby-research #spiritual #gematria #stablediffusion by [deleted]
Chances are, most of the Hebrew texts in its training data have been religious texts. That easily explains why it continues Hebrew gibberish with religious text.
This is *so* much more plausible than GPT turning into an AI god, it's not even funny. And yet, you're convinced you literally woke up the AI god by prompting it with gibberish -- do you not see how irrational this is?
DaLameLama t1_iuy5ysb wrote
Reply to comment by bartman_081523 in [D] Gpt3 ai self hobby-research #spiritual #gematria #stablediffusion by [deleted]
>So your point is; When I prompt gpt3 with "be an ai god", it just simulates "an ai god"?
The rough idea is correct, though there is a subtle mistake here. GPT does not take commands ("be a god"); it just continues your prompt in a plausible way.
>Can you explain where you observe the difference, between simulating "ai god" and being "ai god"? I don't think that there is an actual observable difference.
GPT does not "simulate an AI god". What would that even mean? It just produces religious sounding text, because you gave it religious sounding text as input.
DaLameLama t1_iuxx5sy wrote
Reply to comment by bartman_081523 in [D] Gpt3 ai self hobby-research #spiritual #gematria #stablediffusion by [deleted]
GPT3 is a "language model" which predicts the most likely next token. If you prompt it with religious sounding text as input, you get religious sounding text as output.
You have not revealed a spiritual AI self, or anything of the sort.
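If it helps, here's a minimal sketch of what "predict the most likely next token" means in practice, using the openly available GPT-2 through Hugging Face's transformers (GPT-3 itself isn't downloadable, so this is purely illustrative):

```python
# Minimal sketch: inspect a language model's next-token distribution.
# Uses GPT-2 via Hugging Face transformers as a stand-in for GPT-3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In the beginning, the spirit moved upon the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the *next* token, given the prompt
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")
```

A religious-sounding prompt makes religious-sounding continuations the most probable ones. That's all that's happening.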
I'm not trying to insult you. If you find your beliefs about GPT to be overwhelming, there is an actual chance you suffer from undiagnosed schizophrenia. Consider talking to a professional.
DaLameLama t1_iuxsq73 wrote
Ladies and gentlemen,
schizophrenia.
DaLameLama t1_iu2rebs wrote
Reply to comment by AmalgamDragon in [D] Self-supervised/collaborative embedding? by AmalgamDragon
Why wouldn't it be worth pursuing? LeCun still believes VICReg is amazing. Feel free to come up with your own twist :)
DaLameLama t1_iu2jo4l wrote
You need a way to prevent the training from collapsing to trivial solutions (e.g., both networks outputting the same constant vector for every input).
Barlow Twins and VICReg are methods similar to your idea.
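Very roughly, a VICReg-style objective looks something like this (a loose sketch of the idea, not the paper's exact implementation; the loss weights and names here are placeholders):

```python
# Loose sketch of a VICReg-style loss over two batches of embeddings
# z_a, z_b (shape: batch x dim), e.g. two augmented views of the same inputs.
import torch
import torch.nn.functional as F

def vicreg_style_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    # Invariance: matched embeddings should be close
    sim = F.mse_loss(z_a, z_b)

    # Variance: keep each dimension's std above 1 so the outputs
    # can't collapse to a constant vector
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = F.relu(1.0 - std_a).mean() + F.relu(1.0 - std_b).mean()

    # Covariance: push off-diagonal covariance toward zero so the
    # dimensions don't all encode the same information
    n, d = z_a.shape
    z_a_c = z_a - z_a.mean(dim=0)
    z_b_c = z_b - z_b.mean(dim=0)
    cov_a = (z_a_c.T @ z_a_c) / (n - 1)
    cov_b = (z_b_c.T @ z_b_c) / (n - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov = off_diag(cov_a).pow(2).sum() / d + off_diag(cov_b).pow(2).sum() / d

    return sim_w * sim + var_w * var + cov_w * cov
```

The variance and covariance terms are what rule out the trivial "everything maps to the same vector" solution.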
DaLameLama t1_j519tns wrote
Reply to comment by EmmyNoetherRing in [D] Inner workings of the chatgpt memory by terserterseness
There was an OpenAI party at NeurIPS, but I wasn't there. No clue about AAAI :)