No_Ninja3309_NoNoYes

No_Ninja3309_NoNoYes t1_jegpb5t wrote

Well, in the digital world you have analog-to-digital converters. This gives you binary and machine language. Higher up you have assembly language, which consists of basic instructions like load byte and store byte. Very tedious but simple. And then you have high-level programming languages, where you don't have to worry about the low-level details, the ones at the bits-and-bytes level.

I suspect that we operate at a high level too, but that the language we use in public, written and spoken, is lower level than what we use in our heads. It's like assembly language or even machine language. I think it would be really hard to translate from English to French if that weren't the case. Or from Python to Java. Obviously programming languages have some resemblance to mathematics, for instance the concept of functions. If you never learned the pure concept of a function, it's hard to pick it up while juggling everything else you have to deal with: programming tools, editors, assignments. So I think there's a more abstract language inside us, but it's part of our hardware, so we can't express it.
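To make the 'pure function' point concrete, here's a minimal Python sketch; the function name and the values are just illustrative:

    # Same concept as the mathematical f(x) = x**2 + 1, written in Python.
    # The function maps inputs to outputs and does nothing else.
    def f(x):
        return x**2 + 1

    assert f(3) == 10  # behaves exactly like its mathematical counterpart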

1

No_Ninja3309_NoNoYes t1_jegeqgy wrote

There's one thing you learn pretty quickly about programming: programs almost never do what you want on the first try. So we can expect AI to fail in ways we can't predict, too. If it's a simple program with nothing at stake, it's no big deal. But if you expose young children, or adults with issues that are known or unknown at the time, it could lead to bad outcomes. Furthermore, organised crime and terrorist groups are a threat we shouldn't underestimate.

If history has taught us anything, it's that almost anything can be turned into a weapon, and every weapon gets used sooner or later. Personally, I need AI, but not at any cost. For example, if third-world countries suffer because they can't compete, I think we have to fix that issue first.

1

No_Ninja3309_NoNoYes t1_jeef1v1 wrote

For me the singularity equals simulation. If the singularity is possible, then simulation is possible. And if something is possible, you can't rule it out. So I hope we don't get the literal Singularity, because I don't want to be an NPC. There's a chance AI will be banned in several countries, which could slow progress considerably.

5

No_Ninja3309_NoNoYes t1_jebobu3 wrote

If we all sign the LAION petition, we will get whatever we want. But someone will disrupt the game industry soon, for sure. I hope they get rid of text prompts too. Why can't we just upload some rough sketches instead? Or other kinds of files.

1

No_Ninja3309_NoNoYes t1_je90yip wrote

Formally it means minimizing error, like curve fitting. For example, fitting a line to data (a minimal sketch follows the list below). There are roughly these steps:

  1. Defining the problem

  2. Choosing architecture

  3. Getting data

  4. Exploring the data

  5. Cleaning the data

  6. Coding up some experiments

  7. Splitting the data into training, validation, and test sets. The test set is only used to evaluate errors at the end, like an exam, and the validation set is the data you use to tweak hyperparameters. The training set is bigger than the other two.

  8. Setting up the infrastructure

  9. Doing a dry run that is close to the real training project for a while, like a rehearsal, just to make sure everything works.
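Here's the minimal curve-fitting sketch mentioned above: fitting a line y = w*x + b by least squares, with a train/test split as in step 7. numpy is assumed to be available, and the data is made up for illustration.

    import numpy as np

    # Fake noisy data along a known line, just for illustration.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.shape)

    # Step 7: hold out some data; the test set is only touched at the end.
    x_train, x_test = x[:40], x[40:]
    y_train, y_test = y[:40], y[40:]

    # Least-squares fit of a degree-1 polynomial, i.e. a line.
    w, b = np.polyfit(x_train, y_train, deg=1)

    # Evaluate the error on held-out data, like an exam.
    test_mse = np.mean((w * x_test + b - y_test) ** 2)
    print(f"w={w:.2f}, b={b:.2f}, test MSE={test_mse:.3f}")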

Once the training starts, you have to be able to monitor it through logs and diagnostic plots, and you need to be able to take snapshots of the system. It's a bit like running a Google search, except one that takes a very long time: Google has internal systems that actually execute the search, and no single person knows all the details.

Adding more machines does help, but the gains are limited by network latency and Amdahl's law.
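A quick illustration of Amdahl's law, in case the limit isn't obvious; the 95% figure is just an assumed example:

    # If a fraction p of the work is parallelizable, n machines give at
    # most 1 / ((1 - p) + p / n) speedup over one machine.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 1))
    # Prints roughly 1.9, 5.9, 15.4, 19.6: with p = 0.95 the speedup
    # flattens out near 20x no matter how many machines you add.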

1

No_Ninja3309_NoNoYes t1_je8rjyg wrote

Well, you have to consider the fact that many jobs, including mine, are not strictly necessary in an 'if I don't do it, people will die' way. There are many nice-to-have products and services; the must-haves are actually few. But here's a list of possible newish jobs of the future:

  1. Prompt engineers

  2. Prompt testers

  3. Prompt architects

  4. Prompt teachers

  5. Gladiators

  6. Gladiator cheerleaders

  7. Gladiator coaches

  8. AI testers

  9. Testers of AI-generated drugs

  10. AI babysitters

  11. Government AI inspectors

  12. Government AI policy makers

So I think the jobs will be related to our inability to trust AI, and they will come and go as AI advances. The whole prompt industry might disappear once AI has digested enough prompts to know what we really want.

1

No_Ninja3309_NoNoYes t1_je6rd7x wrote

This is wrong on so many levels that I don't even know where to start. The obsession with reducing humans to computers leads to absurdity. It dehumanises us and leads to modeling the world as though we're all automatons. If a person becomes nothing more than a value producer, and groups are just factories, what's the point of having humans around? Just get yourself a truckload of GPUs!

−4

No_Ninja3309_NoNoYes t1_je1yvlu wrote

The usual onboarding process I'm familiar with is a combination of reading and listening, plus doing a basic project to understand how everything works. Obviously the prerequisite is some general technical knowledge; otherwise all the jargon would be a mystery. But if AGI has that, and the ability to listen and read, what else would it require? I mean, AGI doesn't go on holidays. It doesn't need to rest. It can learn and ask for information without feeling embarrassed or upset. Having no ego has its benefits. Oh, and it does help to be able to physically manipulate objects, because no one is going to open doors for you at work.

1

No_Ninja3309_NoNoYes t1_je1shvw wrote

Nothing. I'm the one getting advice. Someone told me to invest in Nvidia. I'm not doing that, because my money is managed and I'm not prepared to follow the markets closely myself. My friend Fred wants me to work for him, but he can't offer me much except the promise of billions at a one-in-a-trillion chance.

I'm worried that poor countries will be left behind. Eventually people will try to leave, and then who knows what will happen? I think the situation will escalate quickly. If anyone needs to do something, it's the politicians in those countries.

3

No_Ninja3309_NoNoYes t1_je1at2h wrote

  1. Education

  2. Prompt engineers, AI babysitters

  3. All of them

  4. We can, but they won't take it seriously.

I only know about Andrew Yang. Life without a job is not something most people are actively preparing for. Society can handle a short-lived famine, but I'm more worried about riots and civil unrest. Andrew Yang thinks UBI is possible. We'll have to wait and see...

3

No_Ninja3309_NoNoYes t1_je188qg wrote

Corruption is widespread amongst the elite. Self-managed teams are not such a big deal; most open source communities operate fine with limited leadership. The business hierarchy is modeled on the military, but it's not very agile. If people know what they're doing, and with good AI that should be the case, you have no need for strict discipline.

3

No_Ninja3309_NoNoYes t1_je15b34 wrote

Well, right now you can sort of tell if something was written by a skilled person. If AI can reach that level, does it matter? After all, in capitalism it doesn't matter how something was produced, as long as it was legal.

But the elite won't be interested in custom products. If everyone can get whatever they want just by asking, the elite will want things the old-fashioned way. So for them it will matter. They won't read AI news. Paper newspapers delivered by humans are what they want. No robots for them, but human servants, etc.

1

No_Ninja3309_NoNoYes t1_je0s2o9 wrote

Imagine if you can run AI on a CPU instead of a GPU, or even on a Raspberry Pi. Gerganov is trying to make local voice chat. If he adds images, and it works on cheap hardware, hang onto your stuff, because drones will be zooming around and little robots will be constantly bumping into you. I mean, R2-D2...

6

No_Ninja3309_NoNoYes t1_jdz3zf1 wrote

Well, AFAIK there's nothing in history that resembles the singularity. We have evolution, but it took a long time. We can claim that computers are faster than evolution, but that's just something we find plausible. Humans reproduce every 20+ years, so if computers can iterate faster than that, we're going to witness a cyber evolution.

Then add quantum computers. Quantum computers can potentially search an exponentially growing space just by adding logical qubits. There's no equivalent of quantum computers in nature, AFAIK, but that doesn't rule them out.

The other option is biological computers using neural tissue. That doesn't seem as spectacular as quantum computing, but it could still potentially beat human evolution. I mean, this is not a religious argument; I'm not trying to prove the existence of God. There could be a path to AGI or not. It's more of an engineering question, like: can you break the sound barrier?

By the way, I don't believe in the literal Singularity. There are many hard limits, IMO, that would prevent it.

TL;DR: something like the singularity might be achievable, but probably not the literal Singularity. Technology could be developed to get us close.

2

No_Ninja3309_NoNoYes t1_jdur0m2 wrote

I'm doing this thing everyone is recommending: rating your prompts and getting feedback. It kind of works, because you get suggestions on how to make prompts more specific. I know a few people who dropped out of school, but they compensate by being passionate about their job. I can't really help you with that through Reddit, but maybe Chattie can.

You could say 'Rate this prompt: how do I become passionate about my job/field/X', then ask it how to improve the prompt. Maybe ask multiple times, so you have several suggestions to choose from. Just to be safe, I would save the session in a local doc, because you never know with Chattie.
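If you want to script that loop instead of doing it by hand, here's a rough sketch using the openai Python package; the model name, prompt wording, and file name are all just assumptions:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
    prompt = "How do I become passionate about my job?"

    suggestions = []
    for _ in range(3):  # ask multiple times to collect several suggestions
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{
                "role": "user",
                "content": f"Rate this prompt and suggest a more specific version: {prompt}",
            }],
        )
        suggestions.append(resp.choices[0].message.content)

    # Save the session locally, because you never know with Chattie.
    with open("prompt_session.txt", "w") as f:
        f.write("\n\n---\n\n".join(suggestions))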

2

No_Ninja3309_NoNoYes t1_jdr6b85 wrote

LLMs are statistical models, whereas maths uses symbols. It's a different approach altogether. If we write an add function, we need two inputs, a and b:

    def add(a, b):
        return a + b

We see two symbols or variables, a and b, plus the function definition and the plus operation. An LLM, by contrast, sees a sequence of tokens, a dozen or so, which is also completely different from what a compiler or interpreter sees. There's neurosymbolic AI, which combines deep learning as in current LLMs with symbolic AI, but AFAIK it's not that good yet, I guess because it's hard to mix the two approaches.
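To make the tokens point concrete, here's a small sketch using the tiktoken package (my choice of tokenizer is an assumption; any tokenizer makes the same point):

    import tiktoken

    # Tokenize the add function the way an LLM would see it.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("def add(a, b): return a + b")
    print(len(tokens))                        # roughly a dozen token IDs
    print([enc.decode([t]) for t in tokens])  # text pieces, not symbols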

2

No_Ninja3309_NoNoYes t1_jdqrqpf wrote

Yes, well, I have no problem with violence against violent criminals, and obviously the same goes for genocidal individuals. However, who is qualified to make that call? I don't think AI should do it, not even ASI, although an AGI by some definitions would be able to. I find the idea unacceptable. Not that humans do such a great job, but you have to draw the line somewhere.

1

No_Ninja3309_NoNoYes t1_jdq2wvm wrote

IDK where you get your information from, but I agree with the title. There's no clear reason why dolphins or squids could not have evolved further. The size of the universe is not that well understood, IMO. The best theory is that shortly after the Big Bang a period of inflation happened, and the universe has been expanding at a lower rate ever since. But apparently there's recent evidence that this is unlikely.

There might be evidence of alien intelligent life out there, but we might not be smart or lucky enough to find it. It's a big universe, after all. I don't understand what you're trying to say with quantum entanglement. If you mean that we can communicate with it, I'm not sure that would work.

You can't have literal infinite density, at least theoretically; the laws of physics break down at that point. On the quantum scale, where the Planck constant h becomes meaningful, single photons have enough energy to be disruptive. The energy of a photon is h * f. When photons are the equivalent of cannonballs, observing is the same as bombardment. There's nothing mystical behind that.
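For a sense of scale, here's the E = h * f arithmetic for a single photon of visible light; the wavelength is just an illustrative value:

    # Energy of a green-light photon, E = h * f, with f = c / wavelength.
    h = 6.626e-34        # Planck constant, J*s
    c = 3.0e8            # speed of light, m/s
    wavelength = 500e-9  # green light, m (illustrative)

    f = c / wavelength
    E = h * f
    print(f"E = {E:.2e} J")  # about 4e-19 J per photon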

Photons have no mass, but they have energy, as I said above. They can't move at infinite speed, because if you think about it, that wouldn't be movement: they would be everywhere and nowhere at the same time. Also, v = d/t, so the only ways to make v infinite are letting d approach infinity or t approach zero. Kinetic energy is not free; a photon's energy is fixed by its frequency through the Planck constant, so it's finite. And because of the Heisenberg uncertainty principle, time has a minimum meaningful scale, so zero duration is meaningless.

TL;DR: reality is bizarre, but some of your information needs checking.

3

No_Ninja3309_NoNoYes t1_jdprlpp wrote

80% sounds like a wild stab. I second that current systems are not original. Sure, they can stumble on something unique, but anyone can if they try hard enough, and computers can combine items faster than we can. Some of the combinations might be meaningful, but the AI doesn't really know which ones, because it has no model of the world.

I don't think we can say much about GPT-4, because OpenAI is secretive about it. But it can't be AGI unless OpenAI invented something extraordinary, and if they did, they would be fools to expose it to the world just like that.

It sounds like he's talking about neurosymbolic systems or RNNs. IMO we need spiking neural network hardware. The architecture would probably be something novel that we don't even have a name for yet.

1