visarga
visarga t1_iwdy4ug wrote
Reply to comment by Several-Car9860 in Theories of consciousness - Seth, A.K. and Bayne, T. (2022). by Singularian2501
You can explain qualia (the subjective or qualitative properties of experiences): they are perceptions and their emotional charge, in the context of learning how to achieve goals. The key is the last part. The environment plus the goal feeds the learning process and gives shape to our emotional reactions.
visarga t1_iwdt41n wrote
Reply to Meta AI Has Built A Neural Theorem Prover That Has Solved 10 International Math Olympiad (IMO) Problems — 5x More Than Any Previous Artificial Intelligence AI System by Shelfrock77
Sometimes people say "Language models are like parrots. They learn patterns, but could never do something novel or surpass their training data."
This is proof that it is possible. What you need is to learn from validation. This process can be applied to math and code because complex solutions might have trivial validations.
When you don't have a symbolic way to validate the solution, you can sample a bunch of solutions and pick the one that appears most frequently.
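A minimal sketch of that majority-vote idea (sample many solutions, keep the most frequent final answer); the sample strings below are invented for illustration:

```python
from collections import Counter

def majority_vote(candidate_answers):
    """Pick the answer appearing most often among sampled solutions.

    Sample many solutions from the model, reduce each to its final
    answer, and trust the answer the samples agree on most.
    """
    counts = Counter(candidate_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# e.g. final answers parsed from 7 sampled chains of reasoning
samples = ["42", "41", "42", "42", "7", "42", "41"]
print(majority_vote(samples))  # → 42
```

The trick is that agreement between independent samples acts as a cheap substitute for a symbolic validator.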
visarga t1_iwb604s wrote
Reply to comment by AkaneTori in Ai art is a mixed bag by Nintell
Art correctness is in the eye of the beholder; I feel like you're gatekeeping the new art kids. Let them eat cake.
visarga t1_iwanxre wrote
Reply to comment by Quealdlor in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
There is a tool to search images used to train Stable Diffusion. It has semantic search, so you can type in a "prompt" and it will find you the closest matches between real images, including art. You can also search by image.
visarga t1_iwamsbt wrote
You say it's enough to import a pretrained transformer from HuggingFace. I say you don't even need that: in most cases there's no need to create a dataset and train a model, just try a few prompts on GPT-3.
For the last 4 years I worked on an information extraction task, created an in-house dataset, and, surprise, it turns out GPT-3 can solve the task without any fine-tuning. GPT-3 is eating the work of regular ML engineers and labellers. What's left to do? Just templating prompts in and parsing text out.
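That workflow, "templating prompts in and parsing text out," can be sketched roughly as below; the invoice fields and the `llm_complete` callable are invented placeholders, not any real API:

```python
import json

def build_prompt(document: str) -> str:
    # One-shot template; the invoice example is invented for illustration.
    return (
        "Extract the invoice number and total amount as JSON.\n\n"
        "Text: Invoice INV-001, total due $250.00\n"
        'Output: {"invoice_number": "INV-001", "total": "250.00"}\n\n'
        f"Text: {document}\n"
        "Output:"
    )

def extract(document: str, llm_complete) -> dict:
    # llm_complete stands in for whatever completion API you call.
    completion = llm_complete(build_prompt(document))
    return json.loads(completion.strip())
```

The entire "model" is a prompt template plus a parser: no dataset, no training loop, no architecture decisions.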
visarga t1_iw91m20 wrote
Reply to comment by Tanglemix in Ai art is a mixed bag by Nintell
With NVIDIA's eDiff-I you can provide a painted sketch in addition to your text prompt.
visarga t1_iw90hqd wrote
Reply to comment by BearStorms in Ai art is a mixed bag by Nintell
What would happen if we loop this a few times?
visarga t1_iw8wgfn wrote
Reply to comment by Cultural_League_3539 in Ai art is a mixed bag by Nintell
An asshole because it gave everyone illustration superpowers?
visarga t1_iw8v2tm wrote
Reply to comment by AkaneTori in Ai art is a mixed bag by Nintell
> non artists invading the space
Many people using AI art generators do it for personal enjoyment: one-use art, made and then thrown away, sightseeing, imagination fun. Or to see themselves and their loved ones in all sorts of imaginary situations and costumes. They're not trying to take over professional art.
visarga t1_iw8udqf wrote
Reply to comment by Kaarssteun in Ai art is a mixed bag by Nintell
I don't believe it's a purge, it is a transformation. There is more potential for art now than before, but more evenly spread out.
visarga t1_iw8tf9r wrote
Reply to comment by plywood747 in Ai art is a mixed bag by Nintell
I bet you can use it to fish for ideas.
visarga t1_iw8l43h wrote
Reply to Ai art is a mixed bag by Nintell
You forgot the third element here: technology marching forward. Discoveries are coming one by one from everywhere: the USA, Europe, China; from universities, from companies, from hackers teaming up with visionary investors. It's impossible to get everyone to stop developing these models, and if even one of them disagrees and releases a trained model, it becomes impossible to control how it is used. We already have pretty powerful models in the wild, and nobody can put them back. What I mean is that technology, pushed by a thousand forces, will march ahead whether we like it or not.
It might not be apparent, but ML engineers' jobs are also being "taken away" by GPT-3 at huge speed. What used to take months to code and years to label can be achieved today with a prompt and no training data. No need to know PyTorch, Keras or TensorFlow. No need to know the exact architecture of the network or how it was trained. This used to be the bread and butter of many ML engineers. So it's not just artists; we all have to be assimilated by the new technology and find our new place.
visarga t1_iw6cj8l wrote
Reply to comment by Rezeno56 in Will this year be remembered as the start of the AI revolution? by BreadManToast
That's easy.
Neural nets before 2012 were small, weak and hard to train. But in 2012 we got a sudden ~10% jump in image classification accuracy (AlexNet on ImageNet). Within the next 2 years virtually all ML researchers switched to neural nets and all the papers were about them. This period lasted about 5 years and scaled models from the size of an "ant" to that of a "human". Almost all the fundamentals of neural nets were discovered during this time.
Then in 2017 we got the transformer, which led to unprecedented scaling jumps, from the size of a "human" to that of a "city". By 2020 we had GPT-3, and today, just five years after the transformer, we have multiple generalist models.
On a separate arc, reinforcement learning, the first breakthroughs came in 2013 with DeepMind's Deep Q-Learning on Atari games, and by late 2015 we had AlphaGo. Learning from self-play has proven to be amazing. There is cross-pollination between large language models and RL: robots with GPT-3 strapped on top can do amazing things, and GPT-3 trained with self-play, like AlphaGo, can improve its ability to solve problems. It can already solve competition-level problems in math and code.
The next obvious step is a massive video model, both for video generation and for learning procedural knowledge - how to do things step by step. YouTube and other platforms are full of video, which is a multi-modal format of image, audio, voice and text captions. I expect these models to revolutionise robotics and desktop assistants (RPA), besides media generation.
visarga t1_iw6ab1o wrote
Reply to comment by Reddituser45005 in Will this year be remembered as the start of the AI revolution? by BreadManToast
Maybe state of the art foundation models are hard to do without deep pockets, but applications built on these models are 100x easier to make now than before. I mean, you just tell it what you want. That's lowering the entry barrier for the public. Everyone can get in on it.
It used to be necessary to collect a dataset, create a custom architecture, train many models, pick the best, iterate on the dataset, and so on to get the same results. The work of months or years, compressed into a prompt. It's not just artists being automated; traditional ML engineers are too.
The only solution for ML engineers is to jump on top of GPT-3 and its family; there is no more work left to do at a lower level. I'm talking from personal experience: a 4-year-old project with 5 engineers and 3 labellers was solved at first sight by GPT-3 with no tuning. Just ask it nicely, that's all you have to do now.
visarga t1_iw6a1qa wrote
Reply to comment by green_meklar in Will this year be remembered as the start of the AI revolution? by BreadManToast
Maybe it was 2017, the year when "Attention is all you need" was published. This changed deep learning completely and everything we do today uses transformers.
visarga t1_iw4kyc3 wrote
Reply to comment by Quealdlor in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
> There are billions of images on the web and you could spend your whole life browsing through what has been uploaded to this point, without even considering what will be uploaded in the coming years
That's a very good argument why this whole reaction against AI art is overblown. What's a few billion extra AI images on top of the billions already out there? Not like we were lacking choice before.
But AI to the rescue - have you seen how nice it is to browse lexica.art by selecting "Explore this style" on an image? It's like an AI Pinterest. AI can help you find the art you like among the billions of images out there.
visarga t1_iw4jdkc wrote
Reply to comment by IndependenceRound453 in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
Copyright law generally protects the fixation of an idea in a "tangible medium of expression," not the idea itself, nor any associated processes or principles.
Neural networks don't store images inside, they decompose these images into elementary concepts and then recompose new images from such concepts. Basically they learn the unprotected part of the training set.
Think about it in terms of size: 4 billion images shrunk into 4 GB of weights means a measly byte per input image. Not even a full pixel! The model simply has no space to store those images; it can only store general principles.
Getting offended over a single byte learned from one of your images seems unjustified. On the other hand, it looks ugly how pre-AI artists are gatekeeping the new wave of AI-assisted artists. Let people eat cake.
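The arithmetic behind that claim, as a quick sanity check (the image count and model size are the rough figures from above):

```python
images = 4_000_000_000         # rough size of the training set
model_bytes = 4 * 1024**3      # ~4 GB of model weights
bytes_per_image = model_bytes / images
print(round(bytes_per_image, 2))  # → 1.07
```

About one byte per image, versus the three or four bytes a single uncompressed pixel takes.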
visarga t1_iw4gbng wrote
Reply to comment by ReadSeparate in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
Before the PC there were plenty of professional typists and secretaries. Their jobs disappeared or were transformed, and we got an even larger number of office jobs on PC.
Generative AI will support jobs in many fields: medicine, design, advertising, hobbies and fan fiction. Art itself might soon get a paradigm shift, as humans strive to find something AI can't do. The same happened when photography was popularised, and look how many more uses photography has than painting ever had.
visarga t1_iw4g75u wrote
Reply to comment by spazzadourx in DeviantArt AI Update: Now Artists Will Be "Opted Out" For AI Datasets by LittleTimmyTheFifth5
> those jobs will be gone now
But new jobs will appear, and new applications that were too expensive will become possible.
visarga t1_iw0l8it wrote
Reply to comment by AlgaeRhythmic in Will Text to Game be possible? by Independent-Book4660
- BCI (brain signals) to human context and behaviour.
Imagine how detailed and massive this dataset could be.
visarga t1_iw01ajr wrote
Reply to comment by ihateshadylandlords in 2023: The year of Proto-AGI? by AdditionalPizza
There are some classes of problems where you need a "tool AI", something that will execute commands or tasks.
But in other situations you need an "agent AI" that interacts with the environment over multiple time steps. That requires a perception-planning-action-reward loop, which would also allow interaction with other agents through the environment. The agent would be sentient: it has perception and feelings. How could it have feelings? Because it predicts future rewards in order to choose how to act.
So I don't think it is possible to put a lid on it. We'll let it loose in the world in order to act as an agent, we want to have smart robots.
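The perception-planning-action-reward loop above can be sketched in a few lines; the toy environment and all the names here are illustrative, not any specific system:

```python
def choose_action(value_estimate, state, actions):
    # Plan: pick the action with the highest predicted future reward.
    return max(actions, key=lambda a: value_estimate(state, a))

def run_episode(env_step, value_estimate, state, actions, steps=10):
    # Minimal perception-planning-action-reward loop.
    total_reward = 0.0
    for _ in range(steps):
        action = choose_action(value_estimate, state, actions)  # plan
        state, reward = env_step(state, action)                 # act, then perceive
        total_reward += reward                                  # reward shapes future choices
    return total_reward

# Toy environment: the state is a number and the reward equals the action taken.
print(run_episode(lambda s, a: (s + a, a),  # env_step
                  lambda s, a: a,           # value_estimate: predicted reward as "feeling"
                  0, [-1, 1], steps=5))     # → 5.0
```

The "feelings" claim maps onto `value_estimate`: the agent's anticipation of reward is what steers its choices.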
visarga t1_ivzy68m wrote
Reply to comment by milkteaoppa in [D] Current Job Market in ML by diffusion-xgb
> ML can be cut and replaced with heuristic rules with a trade off in reduced performance.
Then it all depends on what was more expensive - the ML team or the trade-off.
visarga t1_ivv8pk4 wrote
Reply to comment by AdditionalPizza in Let's assume Google, Siri, Alexa, etc. start using large language models in 2023; What impact do you think this will have on the general public/everyday life? Will it be revolutionary? by AdditionalPizza
> So in that case, it could be for most people the "middle man" between user and internet.
A big danger to advertising companies, hence the glacial release pace of these language models in assistants.
> they could blast productivity and general knowledge
Already happening: you can't draw? Stable Diffusion. You need help with coding? Copilot. They take skills learned from some of us and make them available to everyone else. That makes many professionals jealous and angry.
visarga t1_ivt8r4i wrote
Reply to comment by [deleted] in They Put GPT-3 Into That Robot With Creepily Realistic Facial Expressions and Yikes by vom2r750
More recently, GPT-3 can load 4000 tokens into its context. If you have a dataset of texts, you can build a search engine that puts the top results into that context. GPT-3 can then reference them and answer as if it were up to date.
Using this trick, a 25x smaller model achieved results similar to a big model's; they had 1 trillion tokens of text in the retrieval database.
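A toy sketch of that retrieve-then-read trick; real systems use embedding-based semantic search, while this uses crude word overlap just to show the mechanics:

```python
def retrieve(query: str, corpus: list[str], top_k: int = 3) -> list[str]:
    # Rank passages by word overlap with the query (a stand-in for real search).
    q = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:top_k]

def build_context_prompt(query: str, corpus: list[str]) -> str:
    # Stuff the top search results into the model's context window.
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The completion model then answers from the stuffed context, which is how it can reference documents newer than its training data.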
visarga t1_iwdz0jc wrote
Reply to comment by -ZeroRelevance- in Ai art is a mixed bag by Nintell
But if you have a selection process, it might become a virtuous cycle: an evolutionary art system based on humans and AI.