Pro_RazE t1_je8wgqs wrote

Wevolver App, 1X tech, Mira Murati, Nat Friedman, Clone Robotics, Mikhail Parakhin, MedARC, Adam.GPT, Jeff Dean (Google Research), DeepFloyd AI, John Carmack, Robert Scoble, Smoke-away, Wojciech Zaremba (OpenAI), roon, hardmaru, Joscha Bach, Nando de Freitas, Mustafa Suleyman, Andrej Karpathy, Ilya Sutskever, Greg Brockman, nearcyan, Runwayml, CarperAI, Emad Mostaque, Sam Altman, AI Breakfast, Aran Komatsuzaki, Jim Fan (Nvidia), Jack Clark (AnthropicAI), Bojan Tunguz, gfodor, Harmless AI, LAION, Stability AI, pro_raze (my account is focused on AI/Singularity).

I didn't list the big research labs here. Follow all of these, use the For You page to stay updated, and you will be good :)

Pro_RazE t1_je4u9o9 wrote

On Twitter, follow all the major research labs (as well as the ones you're interested in), and also follow people who work in the field and post about AI progress every day. Relying solely on subreddits may cause you to miss out on a lot. That's how I keep up.

Definitely follow ak's account for the best paper updates https://twitter.com/_akhaliq

If you want account suggestions let me know :)

Pro_RazE OP t1_j9u3sra wrote

Man announced it through Instagram channels lmao. There's no paper or anything else posted yet.

Edit: They posted. Here's the link: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/?utm_source=twitter&utm_medium=organic_social&utm_campaign=llama&utm_content=blog

"Today we're publicly releasing LLaMA, a state-of-the-art foundational LLM, as part of our ongoing commitment to open science, transparency and democratized access to new research.

We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens"

There are 4 foundation models ranging from 7B to 65B parameters. LLaMA-13B outperforms OPT and GPT-3 175B on most benchmarks. LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B.

From this tweet (if you want more info): https://twitter.com/GuillaumeLample/status/1629151231800115202?t=4cLD6Ko2Ld9Y3EIU72-M2g&s=19

Pro_RazE t1_j8x9wmn wrote

They did the right thing. It's a conversational agent that helps with search and isn't supposed to talk about falling in love with you or threatening you.

OpenAI announced a day ago that they will soon allow users to customize ChatGPT according to their own preferences. So anyone will be able to create their own version of "Sydney". When GPT-4 officially releases, they will upgrade ChatGPT to it anyway.

In a few months everyone will forget about this and the Sydney they liked will become outdated.

Pro_RazE OP t1_j8mpa4r wrote

This will be a huge success if companies like Google, Microsoft, Apple, etc. make it, because people have long-standing relationships with them. They already share a lot of personal data with these companies, so most will be okay using this feature too (not everyone, of course). They could simply integrate the bot into their own notes apps.

If a random startup launched an app that could do this today, I doubt many people would trust it with their personal data. Personally, I wouldn't either.

In the long-term scenario with multimodal AIs, Google could integrate it into Android. You could access all your life updates through natural conversation with the bot. It could scan photos, videos, sound, everything. Your phone becomes your second brain.

Pro_RazE OP t1_j8lzjt8 wrote

There is another solution, but it may take time: run the bot locally with no access to the Internet. You put your files in a folder the bot can learn from, and then use it like that. Better open-source models are currently being worked on, so we will see something like this in a few years (and by that I mean a bot you can feed information to that runs on a consumer GPU).
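The "folder of files" idea can be sketched as a tiny offline retrieval step, assuming plain-text notes; the file names and the simple word-overlap scoring here are hypothetical stand-ins, and a real local bot would replace the scoring with an actual language model:

```python
# Minimal offline sketch: index local .txt files in a folder and answer a
# question by returning the file whose word counts best overlap it.
# Everything runs locally with no Internet access.
import os
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into simple word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(folder):
    """Read every .txt file in the folder into (name, text, counts) tuples."""
    index = []
    for name in sorted(os.listdir(folder)):
        if name.endswith(".txt"):
            with open(os.path.join(folder, name), encoding="utf-8") as f:
                text = f.read()
            index.append((name, text, Counter(tokenize(text))))
    return index

def ask(index, question):
    """Return (filename, text) of the file most relevant to the question."""
    q = Counter(tokenize(question))
    # Counter & Counter is multiset intersection; its total is the overlap.
    best = max(index, key=lambda item: sum((item[2] & q).values()))
    return best[0], best[1]
```

This is keyword matching, not understanding, but it shows the shape of the pipeline: ingest a private folder, retrieve locally, never send data anywhere.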

Pro_RazE OP t1_j8ly19p wrote

Yes, that should be possible too. It could guide us in the future as it learns from past data. For example, if we made a mistake about something in the past, it could tell us how to do it better next time. A lot of possibilities!

I was just using Microsoft OneNote and thought: what if they integrated GPT into that? A very personalized bot that knows everything in your notes, so you can easily retrieve information. People who write about their daily life in it could benefit greatly.

Edit: this also solves the privacy problem. If you already trust Microsoft OneNote enough to write about your daily life in it, I think it wouldn't matter if a bot could learn from it and answer questions. Everything stays inside the app.

Pro_RazE OP t1_j7ox1lr wrote

Let's say you generate an image of a cat. CLIP maps images and text into a shared embedding space, so it can be used to find similar images in the LAION dataset that Stable Diffusion was trained on. If it was, say, an orange cat, it can find similar images of orange cats that were used in training. Without those original pictures, Stable Diffusion could not generate pictures of an orange cat (a poor example, I know, lol). It is not always accurate, and the generated images are almost always different from the originals. One recent paper kinda proved that wrong, though it happens very rarely.
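The lookup described above is essentially nearest-neighbour search over embeddings. A toy sketch, where the vectors are made up for illustration; a real pipeline would use actual CLIP embeddings of the generated image and of LAION images:

```python
# Toy sketch: given an embedding of a generated image, find the dataset
# entries with the highest cosine similarity. With real CLIP embeddings
# this is how similar training images can be retrieved.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query, dataset, k=2):
    """Return the k dataset keys whose embeddings are closest to the query."""
    scores = {name: cosine_sim(query, vec) for name, vec in dataset.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Because CLIP puts text and images in the same space, the same search also works with a text query like "an orange cat" in place of the image embedding.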

I hope this helps.
