Kolinnor

Kolinnor t1_jdwva5a wrote

On the contrary, I think it's not going to change anything, or might even slightly push people to actually cross-check sources (I expect many people still won't, though)...

The internet is currently flooded with misinformation that's cleverly designed to look attractive and to "make sense". People tend to accept it automatically when it's well done.

We can hope that "badly designed" misinformation will force people to be more suspicious, but that's probably too optimistic...

3

Kolinnor t1_jdwl68h wrote

Nice work. The idea of a game world populated with NPCs that talk with ChatGPT-level coherence and awareness... Imagine, instead of the classic "and then I took an arrow in the knee", randomly catching an interesting discussion between two guards about the actual threats. Makes me wish I was born 10 years later.

38

Kolinnor t1_jdr6m19 wrote

- This is a special puzzle for you. Can you read it ? ! enihcam ,uoy rof drah oot si siht teb I .oot hcnerf ni ,yakO .tset a si sihT .seog ti woh ees dna tpmorp wen a ni ecnetnes regnol euqinu erom a htiw yrT .ti deirt I nehw krow tndid ,drieW

>Of course! The text appears to be reversed. Here's the corrected version:
>
>Weird, didn't work when I tried it. Try with a more unique longer sentence in a new prompt and see how it goes. This is a test. Okay, in french too. I bet this is too hard for you, machine! Can you read it?

It kinda fucked up at the end because it repeated "can you read it", but it got the reversed text correct
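
For anyone who wants to check the decoding themselves, reversing a string is a one-liner in Python (my own illustration, not something from the thread):

```python
# Reverse the scrambled prompt to verify what it actually says.
puzzle = "! enihcam ,uoy rof drah oot si siht teb I"  # first chunk of the prompt above
print(puzzle[::-1])  # -> "I bet this is too hard for you, machine !"
```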

15

Kolinnor t1_jdr412t wrote

I just copy-pasted this into GPT-4, and here's the answer:

>I am doing well, thank you! As an AI language model, I don't have feelings or emotions like humans do, but I'm here to help you with any questions or information you need. Please let me know how I can assist you.

- What was my sentence saying ?

>Your sentence appears to be written backwards. When reversed, it says, "How are you?" You were asking about my well-being.

18

Kolinnor t1_jdr0g2h wrote

I could be wrong on this take, but...

I don't believe this is correct. What about chain-of-thought prompting? If you ask it to do multiplication step by step, it does it. The current problem, I would say, is that it doesn't know when to take more time to think, but there's probably no intrinsic limitation due to time complexity.
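
To make "step by step" concrete, here's a minimal sketch of the two prompting styles (the wording is my own illustration, not a quote from any paper):

```python
# Direct prompt: the model must produce the product in one shot.
direct_prompt = "What is 347 * 89? Answer with just the number."

# Chain-of-thought prompt: the model is asked to externalize intermediate steps,
# effectively trading extra output tokens for extra "thinking time".
cot_prompt = (
    "What is 347 * 89? Let's work step by step: "
    "first compute 347 * 9, then 347 * 80, then add the two partial results."
)
```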

Also, none of you actually computes 5x3 = 15. You just know the answer. But you're no parrot, because if needed you can do the multiplication manually, right?

But that's because... someone taught you the algorithm when you were a kid. Suddenly sounds less glorious, no?
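
And that childhood algorithm is nothing mysterious, by the way. Here's a quick Python sketch of schoolbook long multiplication (my own illustration, assuming non-negative integers):

```python
def long_multiply(a: int, b: int) -> int:
    """Schoolbook long multiplication: multiply a by each digit of b,
    shift by that digit's place value, and sum the partial products."""
    result = 0
    for place, digit_char in enumerate(reversed(str(b))):
        partial = a * int(digit_char)      # single-digit step (the memorized part)
        result += partial * 10 ** place    # shift for the digit's place value
    return result

assert long_multiply(347, 89) == 347 * 89  # 30883
```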

Also, can you name one specific simple task that GPT-4 cannot do, and let's see next month...

29

Kolinnor t1_jdq9v1z wrote

A counter-argument to the "we are too lucky for it to be a coincidence" idea: remember that there is always one person who wins the lottery, and that person is incentivized to start believing in God or to seek higher meaning, even in situations of pure coincidence.

In other words, we should expect the lucky ones to start doubting. While this doesn't prove anything, I think I would Occam's razor my way out of that argument, guys.

21

Kolinnor t1_ja2hvkm wrote

My (non-expert) take:

The problem is that there are many black boxes here.

LLMs work well when we have a huge amount of data to train the model on. In an oversimplified way, LLMs predict the next word based on the previous data they've seen. But how do you "predict the next action you'll take"? If we had a massive amount of "sensation --> action" data (probably just like what the human brain accumulates during life?), then that would be possible. I haven't heard of a way to achieve that today, and I think it's more complicated than that anyway.
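
To make the "predict the next word based on previous data" framing concrete, here's a toy counting model (a deliberate caricature of my own, nothing like a real LLM):

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which word follows it,
# then "predict" by picking the most frequent follower.
corpus = "the guard saw the dragon and the guard ran away".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "guard" (follows "the" twice, vs "dragon" once)
```

A real LLM replaces the counting with a huge neural network over long contexts, but the training signal is the same "given what came before, what comes next?", and that's exactly the kind of paired data we lack for "sensation --> action".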

I think what you're suggesting is kinda like what they're trying to do with Google's SayCan: but as you can see, for the moment there's no easy way to link LLMs to physical action. LLMs manage to create plausible scenarios of what's happening, or of what the consequences of action X could be, but in practice it's not usable yet.

There's also the fact that, as someone pointed out earlier, there are issues with continual learning, such as catastrophic forgetting. I think many brilliant minds are actively trying to overcome those issues, but it's no easy feat.

3

Kolinnor t1_j9gpzre wrote

He does have very good points, and he's very interesting, with brilliant ideas about alignment.

Overall, the whole LessWrong philosophy is a little too "meta" and confusing if you haven't studied the jargon, so I'm a bit annoyed sometimes when I read it and realize, in the end, that they could have expressed the same ideas with less sophisticated phrasing.

That said, while I don't agree with the conclusions he reaches (and he reaches them with too much confidence for my taste), I've learned quite a few things about alignment from reading him. Definitely a must-read on the singularity and alignment, even if you don't agree with him.

63

Kolinnor t1_j8o9nek wrote

I see lots of these posts about the Turing test being flawed. So I'm just going to comment that the strong versions of the Turing test (that is, mimicking experts in a field, or at least an intelligent human, which is surely how Turing imagined it) are still far from being solved and would be a big indicator of AGI.

3

Kolinnor t1_j6ckoaj wrote

ChatGPT summary:

The writer has shifted their belief away from the idea that synthetic media will completely destroy the human entertainment industry. They have come to appreciate human-created media more because of the abilities of AI and the intrinsic value of knowing that humans crafted something specific. They don't believe that human creativity will be rendered obsolete by perfect generative AI due to human irrationality and the freedom of choice to seek out human-created art. The writer expects low-level artists to suffer but higher-level artists will eventually adapt and push back against the shift to automation.

58

Kolinnor t1_j575q7y wrote

ChatGPT made this neat summary:

The person believes that much of the discussion around the "alignment problem" in AI is misguided, as it assumes that the problem lies with AI itself and not with human society and philosophy. They argue that this is a result of Cartesian thinking, which is based on the belief in absolute truth and a reductive understanding of reality, and that this approach is fundamentally flawed and could be dangerous.

I think this raises the question: what's the correct way to approach the problem, in your view, then?

3

Kolinnor t1_j4qowxd wrote

This is overly optimistic. I would say movement coordination is simply at zero right now compared to humans. There is no System 2 reasoning, so the "problem solving" can't really be halfway there.

Also, the fact that a woman gives birth in 9 months doesn't mean 9 women can give birth in 1 month (just to say that AGI is far more than a sum of different algorithms, each mastering an individual skill).

5

Kolinnor t1_j4d9pdq wrote

I agree with you! But we gotta keep in mind that most people are just completely unaware of what computers do.

There was this study a few decades ago, I believe, where they sent a guy disguised as a robot to talk with random people. Most people weren't freaking out at all; they were like "yeah, it's a robot, nothing special, this is what robots do, right?", even though it was clearly passing the Turing test easily.

Unfortunately, the idea that computers are cold machines that aren't as good as humans, and the idea that they are magic boxes that can do anything we like, will probably keep coexisting (sometimes in the same uninformed mind) until we get true AGI.

2

Kolinnor t1_j3gd708 wrote

I'd temper my expectations on this one, especially if they want to generate revenue (this is not going to be a free service, I'd assume).

No surfing the hype wave, boys; we'll have great things soon enough!

8