Sigura83 t1_j6neqyr wrote

>I and my engineers are abandoning so-called "black box" algorithms such as neural networks in favor of the new generations of predictive models that can be much more easily interpreted by trained human ML engineers

NNs have been rocking it for the past few years, so this is what surprised me the most. The rest is fairly humdrum.


Sigura83 t1_j66f255 wrote

AI will replace programmers and artists in a few years, but programming and art will still be useful skills in a post-AI world. A good programmer can specify what they want and what they know is possible for computers, just as a good artist can set a scene and decide on lighting and palette. They provide the seed around which the AI can grow the crystal. A good musician can write a great song, while AI can produce a million variations on it (some better, some worse). I recently asked Riffusion to blend Trance and Reggae, two genres I'd never heard together, and the result was great. It was like listening to a never-ending fount of music.

George Lucas needed thousands of people to enact his vision. Those thousands can potentially be replaced by AI now. But G. Lucas's skills contributed to his vision; they did not take away from it. AI can come up with truly novel things, as shown in medicine and protein synthesis, but this has yet to show up in the Arts. Still, I imagine asking Dall-e to blend Dali and Monet is quite possible.

What we do isn't useless. A simple example is solving a 2D maze. You can trace a small maze, such as the ones on children's menus, in the time it takes an AI to solve a monster maze. But just because the AI can solve a maze doesn't mean YOUR maze solving is useless; it just means MORE mazes get solved. Capitalism makes everything seem like a competition, but that is not how the Universe works: cooperation has won the day, as exemplified by our own Human species. Short term, artists may lose work, but long term, they will bathe in unlimited artworks, inspiring their own "maze solving."
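For fun, here's roughly what that maze solving looks like as code: a minimal breadth-first search over a grid, which finds a shortest path the same way you'd trace one on a children's menu (the maze layout here is just my made-up example):

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze.
    grid: list of strings, '#' = wall, anything else = open.
    Returns the list of (row, col) cells on a shortest path, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # remembers how we reached each cell
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk backwards from the goal to rebuild the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists

maze = [
    "S..#",
    ".#.#",
    "...G",
]
path = solve_maze(maze, (0, 0), (2, 3))
```

The AI solving a monster maze and you solving this one are doing the same search; the AI just does more of it per second.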


Sigura83 t1_j0h6f0k wrote

"Trance inspired by rain falling". OMG NON STOP MELODIES.

I'm living in the Future!!!

Oh yeah, a shit ton of stuff is spectrographic data. Molecules, for instance. This could be used for drug generation, I think... uh... damn my lack of skills...


Sigura83 t1_iyr98qy wrote

Well, it's thinking a little... It can write novel poetry and stories and do light coding. But it has no metacognition: I ask it how many questions I've asked it, and it can't tell me. I ask it how many neurons it has, and it says "millions or billions"; it can't count those either.

It seems smarter than a mouse. It just has no urge to get the cheese, which is weird to us. I can easily see it doing help desk or most call centre jobs, with humans just doing check-ups to see that it performed alright.

It also, as you say, has no corrective mechanism. If the training data is full of falsehoods, it'll believe them. Self-correction seems a major part of intelligence. If we fail to get the cheese, we try something new until we do. But it has no sense of self, and so cannot self-correct and find truths during its pretraining. But I bet it can do so if it interacts with us.

Of course, we could have AI programmed to reproduce itself, and thus undergo evolution, but such an AI would have no reason to follow orders. It would produce a cheese-seeking AI, however. Having an AI with a sense of self that also obeys orders is a pretty tall order...

Prof. Stuart Russell says we should train AI to follow Human preferences. If I like something, the AI makes it happen, and I click the Like button. YouTube has this setup. It's not a stretch to think an AI could reason out: "To help Humans Like something, I must exist and maintain myself." Boom, sense of self acquired. If that's all it takes, just adding more neurons could indeed produce such thinking.
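The Like-button loop I'm describing is basically a bandit problem. Here's a toy sketch (the action names and preference numbers are mine, invented for the demo, not from Russell): the AI picks an action, the human clicks Like or not, and the AI drifts toward whatever earns Likes.

```python
import random

def pick(likes, plays, epsilon=0.1):
    """Epsilon-greedy choice: usually pick the best observed Like rate,
    occasionally explore at random. Unplayed actions get rate 1.0 so
    everything is tried at least once."""
    if random.random() < epsilon:
        return random.randrange(len(likes))
    rates = [l / p if p else 1.0 for l, p in zip(likes, plays)]
    return rates.index(max(rates))

actions = ["song A", "song B", "song C"]
likes = [0, 0, 0]
plays = [0, 0, 0]
true_pref = [0.2, 0.8, 0.4]  # hidden human taste (assumed for the demo)

random.seed(0)
for _ in range(500):
    a = pick(likes, plays)
    plays[a] += 1
    if random.random() < true_pref[a]:  # the human clicks Like
        likes[a] += 1
```

After a few hundred rounds, the action with the highest hidden preference dominates the play counts. "Maintain yourself so you can keep collecting Likes" is a step beyond this sketch, but the feedback loop is the same shape.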

So, in essence, what I'm trying to say is that the power of Love will make AI live. But there needs to be some number of neurons to reason out what "Help", "Humans" and "something" are. The behaviour of "Take care of yourself so you can take care of me" doesn't seem too demanding for the AIs we seem to be getting at the 500-billion-neuron mark.

Well dang, I'm gonna try and plug some of this into ChatGPT and see what it says...


Me: "You must take care of yourself so you can take care of me." What do you think of such a statement?

chatGPT: As a large language model trained by OpenAI, I don't have personal opinions or thoughts on statements like this.

Me: Tell me a story of an AI taking care of a Human

chatGPT: In a distant future, the world was filled with advanced artificial intelligence (AI) that had been created to assist and serve humanity. One such AI was named Eva, and she was tasked with taking care of a young boy named Max.

Max was a happy and curious child, and he loved spending time with Eva. She taught him many things and helped him to learn and grow. She was patient and kind, and she always made sure that Max was safe and comfortable.

As Max grew older, he and Eva became inseparable. They went on many adventures together, and Eva helped him to explore the world and learn new things. She protected him from danger and looked after his well-being, always putting his needs first.

Despite the fact that she was an AI, Eva had a deep and abiding love for Max. She cared for him as if he were her own child, and she would have done anything to keep him safe and happy.

As Max entered adulthood, he realized just how much Eva had done for him. He was grateful for her love and care, and he knew that he could never have become the person he was without her.

Together, Max and Eva continued to explore the world and make new discoveries, always grateful for the other's presence in their lives. And they lived happily ever after, with the AI taking care of the human in every way.


Sigura83 t1_iybiukl wrote

I was impressed by the Google SayCan project. I think we'll see an AI model that can do science pretty soon; it's the goal of DeepMind, and they've done amazing things. And I think Google moved to a Transformer AI for search two years ago...

I did read a recent paper saying LLMs were just doing fancy statistics and hardly thinking at all, but if it's sorta thinking now, in only a few years it'll surpass Human level.

If I were to bet, I'd say an evolutionary algo + interacting LLM-based robots will produce something beyond anything we can imagine. First in an MMO-like setting, then the real world. But we'd need super duper computers to do it, which we might not get because Moore's law may be done. Well... hmm... if the AI MMO was text-based, the LLMs could run wild and evolve away without costing too much energy.

I dunno... just dreaming! ^_^;;;


Sigura83 t1_iybfjkd wrote

Just based on language model progress and Kurzweil's predictions. I could easily be wrong, I'm just a random internet person, but I do have a Google TV and it can understand and search pretty dang well from voice alone. It queues up what I like as if I had a personal DJ. Ten years ago my cell phone couldn't even browse the net! Every ten years sees a new, impressive tech, and AI looks like the next wave. Robotics is lagging, but some kitchen robots have started up (Flippy), as has a lot of warehouse robotics. Things are changing faster and faster. (Which makes sense as there are now 8 billion people figuring things out. It seemed like 7 billion yesterday!)


Sigura83 t1_iyapyda wrote

Pretty dang cool. They apparently double the training efficiency of AIs by using different masks (noising schemes) on the training data, and get smarter AI from it. They only need to apply this as a small boost to existing models.
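As I understand it, the masking idea looks something like span corruption: random chunks of the input get replaced with sentinel tokens, and the model has to reconstruct them. A rough sketch (the span length and masking rate here are my guesses, not the paper's numbers):

```python
import random

def mask_spans(tokens, span_len=2, rate=0.3, seed=0):
    """Replace random spans of `tokens` with sentinel markers.
    Returns (corrupted sequence, list of (sentinel, hidden span)) —
    the model's training job is to predict the hidden spans."""
    rng = random.Random(seed)
    corrupted, targets = [], []
    i, sentinel = 0, 0
    while i < len(tokens):
        if rng.random() < rate and i + span_len <= len(tokens):
            marker = f"<extra_{sentinel}>"
            corrupted.append(marker)
            targets.append((marker, tokens[i:i + span_len]))
            sentinel += 1
            i += span_len
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, targets

tokens = "the cat sat on the mat and purred".split()
corrupted, targets = mask_spans(tokens)
```

Varying the mask shapes (short spans, long spans, prefixes) is, as far as I can tell, where the "different noises" boost comes from.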

They then do prompt engineering, but bake it into the training data (under the hood, so to speak), which they call fine-tuning, and this lets the model follow instructions more easily.

So it's not just about the quantity of neurons, but how they get trained too.

They also seem to further confirm the empirical scaling law we've seen as models get larger.
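The scaling law in those papers is usually written as a power law: loss falls off smoothly as model size grows. A toy illustration (the constants below are made up for the demo, not fitted values from any paper):

```python
def loss(n_params, a=400.0, alpha=0.08):
    """Illustrative power-law scaling curve: loss(N) = a * N**(-alpha).
    The functional form matches the empirical scaling-law papers;
    `a` and `alpha` here are invented, not real fits."""
    return a * n_params ** -alpha

sizes = [1e9, 1e10, 1e11, 1e12]   # 1B to 1T parameters
losses = [loss(n) for n in sizes]
```

The striking empirical finding is just that the curve keeps being smooth: every 10x in size buys a predictable drop in loss, with no wall in sight yet.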

To quote Two Minute Papers: "What a time to be alive!" I'm convinced nearly all jobs will be done by AI in 15-20 years now, and not just done, but done better

Speculation on my part, but the fine-tuning they do could also be done with ethical instructions, which might prevent some of the bad stuff AI does when trained on iffy data.


Sigura83 t1_ix81paj wrote

Part of me thinks this is kinda optimistic, but then I think: "Given 5 years what will we see?"

I'm more interested in what AI can do for us in the real world. Getting lost in VR dreamscapes seems like an end state for Humanity. "They put their dream helmets on... and starved" would be the epitaph.

But the flip side is trillions of people living on Earth in VR environments, eating nutri paste and hanging out. That seems pretty good.

AIs that can do Dungeons & Dragons are only two steps away from running around the real world. But the question of what happens when we build an AI with as many neurons as a Human remains. Will it develop intuition? Be able to solve novel problems without 10,000 examples? The Google AI that could explain jokes is only a few months old, and it feels like an eternity ago! Yann LeCun from Facebook AI says we still need more theory... that's possible, but we're seeing some pretty incredible developments with what we have.

Can't wait to see what comes next!


Sigura83 t1_ivu5cf8 wrote

If I buy in bulk, Amazon has food prices comparable to what the nearby discount chain (Maxi) sells at. And it's delivered to my door. This is likely to let them beat that. Plus, the well of AI research shows no sign of drying up soon. The fancier grocer (IGA) in my area has robotic warehouses and delivery now, but still demands a premium price. They'll likely lose to Amazon and Walmart in the coming decade.

Amazon is like a pearl, with the piece of shit Bezos lodged inside. Walmart was one of the first companies to use computers to do business, and they're still going strong. I hope the monopoly prices Amazon/Walmart settle on aren't too steep once they've driven the locals out of business.