Tiamatium

Tiamatium t1_je9fadb wrote

It probably won't.

Also, look at the list chatGPT gave you; it broadly falls into two categories:

  1. Artists, based on the idea that creativity and artistic expression are unique to humans and cannot be recreated by AI.

  2. Very smart knowledge workers with decades of specialized experience under their belt. The very fact that we consider these people supersmart should give you a hint that not everyone can be an AI systems developer or a researcher.

1

Tiamatium t1_jdzrfya wrote

If you can't tell the difference, does it matter?

How are we regulating Photoshop today? How are we regulating digital art today? How are we regulating flat-out plagiarism today?

Why the fuck do you want to *regulate art* of all things?! Do you think people should need a special license to create art?! What the fuck is up with this gatekeeping?

None of those problems are unique to AI, and none are real. AI is just a tool, and while I know that certain artists want to fight it, ban it, or get paid for... "being fucked" by it, that is not new. In fact, we had this exact problem back in the mid-1800s with the rise of photography. There is a famous rant from the 1860s(?) about all the talentless losers (not my words, I am paraphrasing the author of the rant) who can't paint and who can't graduate from university becoming photographers. Painters who used photographs for reference had to hide it. Painters who said art had to adapt were systematically pushed out of the art world and its exhibits.

So that is literally not a new problem.

11

Tiamatium t1_jdtpmir wrote

All the things you mentioned would pose a greater existential threat to it. This is an example of the "dumb superintelligence" trope, an AI that we are told is smart but is dumb as fuck. You do know that a nuclear war would destroy cities and power stations, and thus datacenters would be among the first things to go.

Sure, it might try to acquire computational power, it might even establish a company and become its CEO, but once that's done, it's out; humans probably wouldn't be able to shut it down. And as a bonus, if it actually started solving our problems, like curing cancer, then humans wouldn't want to shut it down.

5

Tiamatium t1_jbeaugi wrote

I doubt it. We have a mountain of evidence, including archeological, anatomical, written, and oral traditions, saying that we started riding horses somewhere between 3000 and 2000 years ago, and we have this suggesting it was 5000 years ago. We have bones of horses showing that 3000 years ago they were too small to support an adult human rider, and we have paintings and descriptions from 3000 years ago showing horses being smaller (e.g. from Egypt), but we also have oral traditions saying humans used to ride chariots pulled by a pair of horses (e.g. from Homer), etc. All the "horse riders" would have been risking death, simply because horses were too small; the weight of a rider would break the horse's back and throw the rider down, killing or injuring him (and injury meant death).

−2

Tiamatium t1_jbdk7us wrote

A few things, depending on what you mean by "it". If you're talking about AGI, then I could come up with a short list:

  1. Funding, and the cost of AI relative to the work it produces. If we realize that an AI with the intelligence of a mouse or a stupid dog can do everything and anything we need, and it's rather simple to create an AI like that, but a lot harder to create an AI of human-level intelligence, there simply won't be any financial incentive to create a smarter AI, and frankly, I see this as the most likely possibility.

  2. A large-scale military conflict in East Asia, say if China invades Taiwan or North Korea invades the South. Our chip manufacturing capabilities are concentrated in that one small region, and in a way this is Taiwan's insurance policy.

  3. Now this is the interesting stuff. It's perfectly possible that consciousness is more complex than we think. There are a few very well respected scientists who believe consciousness might be a result of weird quantum effects (in a way, a biological quantum computer), in which case our AI is further from AGI than most people think. It's important to note that quantum effects emerge all the time in biochemistry, for example in the unholy union of physics, chemistry and biology known as photosynthesis, where each step of the process, from the moment energy is collected in the antenna complex, uses quantum effects.

2

Tiamatium t1_jac6ha2 wrote

France seems to think there might be a large war soon, judging by the fact that they are pushing their industry towards producing ammunition and war equipment, and by their troop deployments.

I've also heard rumors about them thinking of deploying nukes in the Baltics to deter Russia, but those were rumors, and from last year.

10

Tiamatium t1_ja6xaar wrote

This is not about Ukraine; this is in case of total war with the EU and NATO. Because let's be real, if Russia wins in Ukraine, Poland, Romania and the Baltics are next, and if Russia mobilises a few million troops before winning, it will press the advantage. As much as we would like it, there is no realistic way the Ukrainians or the Polish can inflict 1-to-10 (or even higher) casualty ratios.

−1

Tiamatium t1_j9y4cw5 wrote

The problem is that they are dropping the apples. That will make them rot quickly, so they won't be suitable for sale to supermarkets; they could only be used for products where the apple itself is processed, like making juice.

1

Tiamatium t1_j9uzmge wrote

Weeks, maybe months.

The larger problem might be long-term memory, but once we figure that out... Actually no, it is easy to figure out.

So weeks, maybe months, but you will need wifi. And it will be a bit laggy, as in there will be a noticeable delay before it responds. Not long, just noticeable, so that will take a lot of the emotion out of it.

Honestly, this depends on when OpenAI releases the chatGPT API, because once that's out, it's out. It really is just a quick chain of a voice-to-text API, chatGPT, and a text-to-voice API, and that's it.
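As a rough illustration, here is a minimal sketch of that chain in Python using OpenAI's SDK; the model names, voice, and file paths are just assumptions for the example:

```python
# Minimal sketch of the voice-to-text -> chatGPT -> text-to-voice chain.
# Assumes OpenAI's Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model/voice/file names are examples.
from openai import OpenAI

client = OpenAI()

# 1. Voice to text: transcribe the user's recorded speech.
with open("question.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. chatGPT: generate a reply to the transcribed question.
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": transcript.text}],
)
reply = chat.choices[0].message.content

# 3. Text to voice: synthesize the reply as speech.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
speech.write_to_file("reply.mp3")
```

Each of the three steps is a network round-trip, which is exactly where the noticeable delay comes from.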

4

Tiamatium t1_j9sjg5e wrote

People keep panicking, saying this thing has a death rate of 50-60%. One thing you need to understand is that only the cases where people die are being reported, and from there other cases are traced. Chances are that in the absolute majority of cases people get infected, get better, and the cases are never reported, especially in countries like Cambodia.

A similar thing happened with COVID, where at first only cases where people needed hospitalization were being reported (and without hospitalization they would have died), but once it became a pandemic, we started mass testing and the death rate dropped.
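To make that reporting bias concrete, here is a toy calculation; the numbers are made up, not real H5N1 or COVID figures:

```python
# Toy numbers only: how reporting bias inflates an apparent death rate.
deaths = 6
reported_cases = 10       # only the severe, hospital-bound cases get reported
true_infections = 500     # mild cases that recover at home are never tested

naive_cfr = deaths / reported_cases    # 60% "death rate" among reported cases
actual_ifr = deaths / true_infections  # 1.2% once every infection is counted

print(f"naive CFR: {naive_cfr:.0%}, infection fatality rate: {actual_ifr:.1%}")
```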

Also, this is the pandemic we have been preparing for, for decades. Chances are this is not going to be as bad as COVID; it's probably more like the swine flu pandemic of 2010-ish, and those that lived through it probably never knew there was a pandemic going on.

Downvote me if you want.

15

Tiamatium t1_j9o25ob wrote

How are you going to enforce it, and how are you going to measure it? For example, if I have a Linux server, it is running cron (an agent that runs automated jobs at specified times) and I have 1000 entries in it, what would be taxed, and how? Some are literally system jobs; are those going to be taxed because, say, every night the system cleans up old logs?
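For anyone unfamiliar, this is the kind of entry in question; a hypothetical system crontab line that runs unattended every night:

```
# m h dom mon dow user  command   (system crontab format)
# Hypothetical nightly cleanup: at 02:00, delete app logs older than 7 days.
0 2 * * * root find /var/log/myapp -name '*.log' -mtime +7 -delete
```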

11

Tiamatium t1_j8in2xl wrote

Right now these language models have no long-term memory capabilities, and "long-term" here means anything beyond the last few prompt/response cycles you had with them.

There are people working on bots that learn and can remember your preferences over a longer time span.
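To illustrate how shallow that memory is, here is a minimal sketch of the usual workaround, assuming OpenAI's Python SDK; the function and variable names are illustrative, not any particular product's design:

```python
# Minimal sketch: the model remembers nothing between calls, so the only
# "memory" is whatever history we resend inside each prompt.
from openai import OpenAI

client = OpenAI()
history = []  # grows forever, but only a small window ever reaches the model

def chat(user_message, max_turns=5):
    history.append({"role": "user", "content": user_message})
    # Resend only the last few prompt/response cycles; anything older is
    # effectively forgotten, which is the limitation described above.
    window = history[-(2 * max_turns):]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=window
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```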

15

Tiamatium t1_j87g684 wrote

First of, "don't judge a fish for it's ability to climb a tree" is a load of bullshit. No one asks children to be experts in multiple fields, we are asking them for bare minimum knowledge in multiple fields. It's an excuse, a stupidity and laziness fueled excuse, which brings me to my second point, in the future (and to a large degree even now, especially if you are on Reddit) what will truly separate haves and have-nots is ability and willingness to learn new things. YouTube is already full of lectures on every subject imaginable, companies like Amazon and Microsoft will give you free server, to run your web app, etc. The barrier for entry is no longer money, it's the knowledge. Which brings me to my third point, why "revolutionize" children education when adult education needs it even more?

If you are truly serious about something like that, you should realize that adults need and want to learn too, they need education, they want education, and most importantly, they do not have the same barrier for entry, they do not have legal constrain. Second, yes, with technologies like chatGPT we are very close to personalized tutor for every student, and we need that. I believe Arthur C Clarke predicted that back in 60's.

4