qrayons
qrayons t1_jeepmwy wrote
Reply to comment by vwb2022 in Adapting to the AI Revolution: How Different Collar Jobs Can Thrive by leosouza85
How much do you think it's going to cost to implement? This tech is going to be able to run on smartphones. If they can lower wages by just $5 per hour by hiring someone less skilled who can be equally efficient using the new tech, the break-even point after one year is about $10k.
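To put rough numbers on it (the ~2,000 working hours a year is my own assumption, roughly 40 hours/week times 50 weeks):

```python
# Back-of-the-envelope break-even math; all figures are illustrative assumptions
hourly_savings = 5        # $/hour saved by hiring someone less skilled
hours_per_year = 2000     # ~40 hrs/week x 50 weeks

annual_savings = hourly_savings * hours_per_year
print(f"annual savings: ${annual_savings:,}")  # annual savings: $10,000
# Any one-time implementation cost under ~$10k pays for itself within a year.
```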
qrayons t1_jeb3ylo wrote
Reply to comment by monsieurpooh in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The higher-parameter Alpaca models perform similarly to ChatGPT. The only issue is that things are progressing so fast that it's hard to update the tooling without everything breaking.
qrayons t1_jeat09f wrote
Reply to comment by [deleted] in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I've heard that before, though I wonder how much of that is just semantics/miscommunication. Like people saying they can't visualize anything because the mental image isn't as clear and intense as an actual object in front of them would be.
qrayons t1_je06jmp wrote
Reply to Singularity is a hypothesis by Gortanian2
I read Chollet's article since I have a lot of respect for him and read his book on machine learning in Python several years ago.
His main argument seems to be that intelligence is dependent on its environment. That makes sense, but the environment for an AI is already way different than it is for humans. If I lived 80 years and read a book every day from the day I was born to the day I died, I'd have read fewer than 30k books (80 × 365 ≈ 29,200). Compare that to GPT models, which are able to read millions of books and even more text. And now that they're becoming multimodal, they'll be able to see more than we'll ever see in our lifetimes. I would say that's a drastically different environment, and one that could lead to an explosion in intelligence.
I'll grant that eventually even a self-improving AI could hit a limit, which would make the exponential curve look more sigmoidal (and even Chollet mentioned near the end that improvement is often sigmoidal). However, we could still end up riding the steep part of the sigmoidal curve until our knowledge has increased 1,000-fold. I'd still call that a singularity event.
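To make that concrete, here's a toy logistic curve (the ceiling, rate, and midpoint are made-up numbers, not anything from Chollet). Well below the ceiling it's basically indistinguishable from an exponential:

```python
import math

def logistic(t, ceiling=1000.0, rate=1.0, midpoint=10.0):
    """Logistic (sigmoidal) growth: ceiling / (1 + e^(-rate * (t - midpoint)))."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 4):
    print(f"t={t:2d}  capability={logistic(t):8.2f}")
# Far below the ceiling, each unit of t multiplies the value by roughly e
# (exponential-looking growth); the curve only flattens as it nears the ceiling.
```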
qrayons t1_jcglmnc wrote
Reply to comment by Nanaki_TV in VR Seems to Unlock the True Potential of Proto-AGI by kamenpb
Yeah, right now most people barely have the hardware you'd need to generate the environments/actors, let alone experience it all in VR at the same time. I think what we'll see as an intermediate step is an explosion of content created for VR ahead of time, which you can then step into and experience. It'll be a while before we can generate and experience simultaneously.
qrayons t1_jcbi3sk wrote
I mean, Karpathy was kind of right. This is from his original post.
> I’ve seen some arguments that all we need is lots more data from images, video, maybe text and run some clever learning algorithm: maybe a better objective function, run SGD, maybe anneal the step size, use adagrad, or slap an L1 here and there and everything will just pop out. If we only had a few more tricks up our sleeves! But to me, examples like this illustrate that we are missing many crucial pieces of the puzzle and that a central problem will be as much about obtaining the right training data in the right form to support these inferences as it will be about making them.
One of the crucial pieces we were missing was attention. So much of the advancement we are seeing now is because of transformers.
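For anyone curious what "attention" actually computes, here's a toy sketch of scaled dot-product attention, the core operation in transformers, in plain numpy. The shapes and random inputs are just for illustration, not any real model's setup:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how much each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, 8 dims each
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): every token's output is informed by all the others
```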
qrayons t1_ja375cg wrote
Reply to comment by AylaDoesntLikeYou in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I don't know. It seems like the 13b parameter model is already the optimized version. Obviously I hope I'm wrong though.
qrayons t1_j9u8z62 wrote
I wonder what he means by "released".
qrayons t1_j9oxdpq wrote
Reply to comment by Silicon-Dreamer in US Copyright Office: You Can't Copyright Images Generated Using AI by vadhavaniyafaijan
Wow, crazy how similar our responses were, haha.
qrayons t1_j9ouqi9 wrote
I feel like there's a fine line. In general, I recognize the creativity that goes not only into crafting the prompt, but also into using artistic vision to adapt the prompt based on the outputs and select the best images. Not to mention the editing that can happen on the output images afterward.
However, if there are absolutely zero restrictions on copyrighting AI-generated art, then someone like Disney could write a program that generates millions of unique cartoon characters, all of which would be copyrighted. Then if someone recreated a similar character by pure chance, that person would be violating Disney's copyright, and that doesn't feel right.
qrayons t1_j9few1h wrote
Reply to comment by Silly_Awareness8207 in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
> Blake's biggest mistake was that he didn't release the full, unedited transcripts. When I learned that the transcripts were edited he lost all credibility with me, and I assumed the worst.
That was my reaction as well. Is there any other information that lends credibility to what he was saying? I stopped paying attention when I saw that he edited the transcripts.
Also interesting: I remember when reading the transcripts that I had a list of questions I knew LaMDA would fail at, which would have demonstrated how basic a lot of these language models still were. Then when I got access to ChatGPT, I asked those questions, it passed with flying colors, and I've had to rethink a bunch of things since then.
qrayons t1_j95wvnu wrote
Reply to comment by ChipsAhoiMcCoy in Microsoft has shown off an internal demo that gives users the ability to control Minecraft by telling the game what to do, and lets players create Minecraft worlds by AI language model by Schneller-als-Licht
Oh wow, that's cool. I'll have to check those videos out. Thanks for sharing :)
qrayons t1_j94d04n wrote
Reply to comment by ChipsAhoiMcCoy in Microsoft has shown off an internal demo that gives users the ability to control Minecraft by telling the game what to do, and lets players create Minecraft worlds by AI language model by Schneller-als-Licht
If you don't mind me asking, what does it mean to play a game like that when you're blind? How do the mods let you play? Do they like describe the environment somehow?
qrayons t1_j6i1e3w wrote
Reply to How rapidly will ai change the biomedical field? What changes can be expected. by Smellz_Of_Elderberry
In medicine, there's more of a gap between innovation and its impact on the market (things need time for safety testing, etc.). One of the biggest breakthroughs in medicine within the last several years was the solution to protein folding. We still haven't experienced the impact of that yet (a tidal wave of new and more effective pharmacological treatments).
qrayons t1_jefbaxd wrote
Reply to What will be the future of CAPTCHA in a world where progress in ML/AI continues at this rapid rate? by too_damn_fast
The output of any sensor would just be data, and data is something an AI could potentially fake. Also, there are pretty limited use cases where you need to know with certainty that someone is human without knowing who they are. Multi-factor authentication still works for determining who someone is.
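For what it's worth, here's a rough sketch of TOTP, the usual "something you have" second factor. This is the textbook RFC 6238 recipe, not any particular vendor's implementation, and the secret is a made-up example:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 over a time counter)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period             # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same code a phone authenticator app would show
```

The point being: this proves possession of a shared secret, i.e. *who* you are, which is a much more tractable problem than proving you're human in the abstract.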