visarga t1_isozvm1 wrote
<offtopic> Where do I get a large-ish list of company names? Also, product names. </>
visarga t1_isij2xr wrote
Reply to [R] UL2: Unifying Language Learning Paradigms - Google Research 2022 - 20B parameters outperforming 175B GPT-3 and tripling the performance of T5-XXL on one-shot summarization. Public checkpoints! by Singularian2501
I'm wondering what the minimum hardware needed to run this model is. Is this really the portable alternative to GPT-3?
visarga t1_isbgl2o wrote
"anime girl with blue eyes" -> Generated image contains NSFW content
visarga t1_is9mzus wrote
Reply to comment by londons_explorer in [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
We need a learned physics model, there's so much video to train on, it's one of the most neglected modalities.
visarga t1_is9mk63 wrote
Reply to comment by [deleted] in [R] Mind's Eye: Grounded Language Model Reasoning through Simulation - Google Research 2022 by Singularian2501
Not just simulation, LLMs can also benefit from other toys: search, code execution/REPL, sub-requests, calling external APIs.
visarga t1_is0j3lv wrote
Reply to comment by polygon_lover in NovelAI Improvements on Stable Diffusion by Dr_Singularity
Even art students suck at hands.
visarga t1_is0fpyb wrote
I became aware of AI in 2007, when Hinton came out with Restricted Boltzmann Machines (RBMs, a dead end today). I've been following the field since, and started learning ML in 2010. I'm an ML engineer now, and I read lots of papers every day.
Ok, so my evaluation: I am surprised by the current batch of text and image generators. The game-playing agents and the protein folding work are also impressive. I didn't expect any of them, even though I was following closely. Two other surprises along the way were residual networks, which put the "deep" into deep learning, and the impact of scaling up to billions of parameters.
I think we still need 10,000x scaling to reach human level in both intelligence and efficiency, but we'll have expensive-to-use AGI in a lab sooner than that.
I predict the next big thing will be large video models - not the ones we see today, but really large ones, like GPT-3. They will be great for robotics and automation, games, and of course video generation. They have "procedural" knowledge - how we do things step by step - that is missing from text and images, and they align video/images with audio and language. Unfortunately, videos are very long, so they're hard to train on.
visarga t1_is0ek0y wrote
Reply to comment by BigMemeKing in Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
I agree, the problem is deeper. We have a low level of trust in each other, so we ignore things.
visarga t1_is0cb6i wrote
Reply to comment by onyxengine in Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
> Which will make it easy for people to write off the truth.
Wouldn't it be nice if there were a place where Truth was written down, so we could all look things up? Unfortunately that is not possible, so we're left with a continually evolving social truth.
visarga t1_is0c489 wrote
Reply to comment by BigMemeKing in Everyone seems so worried about mis/disinformation created by AI in the future and what it could cause people to believe, but I feel the opposite is true. by sidianmsjones
AI works both ways, it's a tool. You can use it to counter disinformation.
visarga t1_irzod3c wrote
Reply to comment by _Arsenie_Boca_ in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
Not a whole team, not even a whole job, but plenty of tasks can be automated. Averaged over many developers, there is a cumulative impact.
But on the other hand, software has been cannibalising itself for 70 years and we're still accelerating; there's always room at the top.
visarga t1_irznvdp wrote
Reply to comment by RobbinDeBank in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
The original PILE.
visarga t1_irzidqj wrote
Reply to comment by vman512 in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
> you'd need a gigantic dataset for this to work
If that's the problem, then OP can use Lexica.art to search its huge database with a picture (it uses CLIP), then lift the prompts from the top results. I think it even has an API. But the matching images can be quite different.
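The mechanism behind that kind of search is just nearest-neighbor lookup in embedding space. A minimal sketch with numpy, using random placeholder vectors in place of real CLIP embeddings (the names, shapes, and toy data here are illustrative, not Lexica's actual API):

```python
import numpy as np

def top_k_by_cosine(query_emb, db_embs, k=3):
    """Return indices of the k database embeddings most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity with every entry
    return np.argsort(-sims)[:k]        # indices of the k highest similarities

# Toy database: pretend these are CLIP image embeddings with known prompts.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 512))
prompts = [f"prompt {i}" for i in range(100)]

# A query image that is a near-duplicate of database entry 42.
query = db[42] + 0.01 * rng.normal(size=512)
best = top_k_by_cosine(query, db)
print(prompts[best[0]])  # -> prompt 42
```

With real CLIP embeddings the top hits can still look quite different from the query, which is the caveat above.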
visarga t1_irziac5 wrote
Reply to comment by milleniumsentry in [D] Reversing Image-to-text models to get the prompt by MohamedRashad
Now is the time to convince everyone to embed the prompt data in the generated images, since the trend is just starting. It could also be useful later, when we crawl the web, to separate real images from generated ones.
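For PNG output this is already easy - the format has text chunks for exactly this kind of metadata. A minimal sketch with Pillow (the key name "prompt" is my own convention, not a standard):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_prompt(img, path, prompt):
    """Embed the generation prompt in a PNG tEXt chunk."""
    meta = PngInfo()
    meta.add_text("prompt", prompt)
    img.save(path, pnginfo=meta)

def read_prompt(path):
    """Read the prompt back from the PNG's text chunks."""
    return Image.open(path).text.get("prompt")

img = Image.new("RGB", (64, 64), "blue")  # stand-in for a generated image
save_with_prompt(img, "sample.png", "anime girl with blue eyes")
print(read_prompt("sample.png"))  # -> anime girl with blue eyes
```

The metadata survives copying but not re-encoding to JPEG or screenshotting, so it's a convention, not a watermark.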
visarga t1_irzdrho wrote
Reply to comment by _Arsenie_Boca_ in [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
> if LSTMs would have received the amount of engineering attention that went into making transformers better and faster
There was a short period when people were trying to improve LSTMs using genetic algorithms or RL.
- An Empirical Exploration of Recurrent Network Architectures (2015, Sutskever)
- LSTM: A Search Space Odyssey (2015, Schmidhuber)
- Neural Architecture Search with Reinforcement Learning (2016, Quoc Le)
The conclusion was that the LSTM cell is somewhat arbitrary and many other architectures work just as well, but none much better. So people stuck with classic LSTMs.
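For reference, the "classic LSTM cell" those searches kept rediscovering is just four gates over the input and previous hidden state. A minimal numpy sketch of one time step (sizes and initialization chosen arbitrarily for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, U, b):
    """One step of a classic LSTM cell with stacked gate weights."""
    d = h.shape[0]
    z = W @ x + U @ h + b        # pre-activations for all four gates, shape (4*d,)
    i = sigmoid(z[:d])           # input gate
    f = sigmoid(z[d:2*d])        # forget gate
    o = sigmoid(z[2*d:3*d])      # output gate
    g = np.tanh(z[3*d:])         # candidate cell update
    c_new = f * c + i * g        # new cell state
    h_new = o * np.tanh(c_new)   # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
d, n = 4, 3                      # hidden size, input size
W = rng.normal(size=(4 * d, n))
U = rng.normal(size=(4 * d, d))
b = np.zeros(4 * d)
h, c = np.zeros(d), np.zeros(d)
h, c = lstm_cell(rng.normal(size=n), h, c, W, U, b)
print(h.shape)  # -> (4,)
```

The search papers above tried swapping, removing, or coupling these gates; most variants performed about the same.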
visarga t1_irta9lz wrote
Reply to comment by MassiveIndependence8 in Why does everyone assume that AI will be conscious? by Rumianti6
It's not just a matter of a different substrate. Yes, a neural net can approximate any continuous function, but not always in a practical or efficient way. The classic result only guarantees that a wide enough network exists, with no usable bound on the width; it says little about the finite networks we actually train.
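Stated loosely, the classical universal approximation theorem (Cybenko, 1989) says only that some finite width $N$ exists for any target accuracy, without bounding $N$:

```latex
% For any continuous f on [0,1]^n and any eps > 0, there exist N and
% parameters alpha_i, w_i, b_i such that
\[
  \sup_{x \in [0,1]^n}
  \Big|\, f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \,\Big|
  < \varepsilon
\]
```

The required $N$ can grow exponentially with the input dimension, which is why "can approximate" doesn't mean "can learn efficiently."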
But the major difference comes from the environment of the agent. Humans have human society, our cities, and nature as an environment. An AI agent, the kind we have today, has access to a few games and maybe a simulation of a robotic body. We are billions of complex agents, each more complex than the largest neural net; AI agents are few and alone, and their environment is not real but an approximation. We can do causal investigation by intervening in the environment and applying the scientific method; they can't do much of that, because they don't have access.
The more fundamental difference comes from the fact that biological agents are self-replicators and artificial agents usually are not (AlphaGo had an evolutionary thing going). Self-replication leads to competition, which leads to evolution and to goals aligned with survival. An AI agent would need something similar to be guided to evolve its own instincts; it needs to have "skin in the game", so to speak.
visarga t1_irt7w5u wrote
Reply to comment by HeinrichTheWolf_17 in Why does everyone assume that AI will be conscious? by Rumianti6
> Have you heard of Integrated Information Theory?
That was a wasted opportunity. It didn't lead anywhere, it's missing essential pieces, and it has been proven that "systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data" have high IIT (link).
A theory of consciousness should explain why consciousness exists in order to explain how it evolved. Consciousness has a purpose - to keep itself alive and to spread its genes. This purpose explains how it evolved, as part of the competition for resources among agents sharing the same environment. It also explains what consciousness does, why, and what the cost of failing to do so is.
I see consciousness and evolution as a two-part system in which consciousness is the inner loop and evolution the outer loop. There is no purpose here, except that agents who don't fight for survival disappear and are replaced by agents that do. So in time only agents aligned with survival can exist, and purpose is "learned" by natural selection, each species fitted to its own niche.
visarga t1_irt4m6q wrote
Reply to AI art 256x faster by Ezekiel_W
An important observation: it has only been demonstrated on 32x32 and 64x64 images, a long way from 512x512. Papers that only test on small datasets are usually avoiding a deficiency.
visarga t1_irloorr wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
> You can just split a large text to parts and feed each one of them
This won't capture long-range interactions between passages, or preserve their ordering.
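The naive splitting being described amounts to a sliding window; overlap recovers a bit of local context at the boundaries, but anything farther apart than one window is invisible to the model. A sketch (window and overlap sizes are arbitrary):

```python
def chunk_text(tokens, window=512, overlap=64):
    """Split a token list into fixed-size windows with some overlap.
    Each chunk is processed independently, so interactions beyond one
    window, and the ordering of chunks, are lost."""
    step = window - overlap
    return [tokens[i:i + window]
            for i in range(0, max(1, len(tokens) - overlap), step)]

tokens = list(range(1000))   # stand-in for tokenized text
chunks = chunk_text(tokens)
print(len(chunks), len(chunks[0]))  # -> 3 512
```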
visarga t1_irllww0 wrote
Reply to comment by [deleted] in [D] Why can't language models, like GPT-3, continuously learn once trained? by SejaGentil
Oh yes we do. A whole internet of expired text will not help when you need factually correct recent data.
visarga t1_irdctls wrote
Is the library specific to CV? How does it compare to https://sbert.net/docs/package_reference/losses.html
visarga t1_ir12jh9 wrote
Reply to comment by fignewtgingrich in AI Generated Movies/TV by fignewtgingrich
I agree it's gonna be "abused" by humans as well.
visarga t1_iqzerze wrote
Reply to AI Generated Movies/TV by fignewtgingrich
> If we assume AI can eventually create a movie that is oscar nomination worthy every 10 seconds for essentially no cost
It's not gonna be a "movie" but more like a sim or a game, and we're not going to make it for entertainment but as a training ground for AI. Simulation goes hand in hand with AI because real-world data is expensive and limited, while sims only cost electricity to run.
We are already seeing generative models as source of training data. link
visarga t1_iqxstx5 wrote
Reply to comment by dalledoeswalle in When will our lives get better collectively. The clock is ticking!! by ObjectiveDeal
Just remember how you use your phone and explain that to a person from 200 years ago, I bet they'll think you are already deep into the singularity by their standards.
Having food, water, a toilet, electricity and internet is nothing to brag about; even the poorest of us should have them. But just a couple of centuries ago these things would have been off the scale.
If you look back over decades or a couple of centuries life has been getting steadily better. It wasn't fake progress, but we're busier than ever.
Many people think that after the singularity we'll have nothing to do anymore. On the contrary, I think we'll have more to do than before. We'll still compete, and we'll often be unhappy, like before.
Who said the purpose of AI should be to improve our lives? The purpose of life is to expand and exist despite the challenges it meets. That means competition and exploration, not peace and detachment. We didn't come out on top of nature by being nice, we exploited every advantage and knowledge along the way.
visarga t1_isq5mvf wrote
Reply to comment by tooold4urcrap in Is this imagination? by Background-Loan681
I believe there is no substantial difference. Both the AI and the brain transform noise into some conditional output. AIs can be original in the way they recombine things - there's space for adding a bit of originality there - and humans can be pretty reliant on reusing other styles and concepts themselves, so not as original as we like to imagine. Both humans and AIs are standing on the shoulders of giants. Intelligence is in the culture, not in the brain or the AI.