LarsPensjo t1_j643g3d wrote

I also asked ChatGPT what could be criticized in this text, and got the answer:

>One potential criticism of the text is that it appears to present a biased view of the Microsoft investment in OpenAI, suggesting that the CEO of OpenAI, Sam Altman, has orchestrated a "coup of the decade" and that the specifics of the deal tell a different story from the community's frustration about OpenAI moving away from its ethos of developing AI for everyone, free of economic pressures. Additionally, the text also presents a rosy picture of Sam Altman's background, giving the impression that he is a strategic mastermind and influential figure in Silicon Valley without providing any counterarguments or criticism of his actions or decisions.

5

LarsPensjo t1_j642rfy wrote

Just ask ChatGPT. I got:

> Microsoft is investing $10 billion into OpenAI, an AI research company founded in 2015 by Elon Musk, Sam Altman, and other prominent figures in the tech industry. However, many in the community are frustrated with OpenAI's shift away from its original ethos of developing AI for everyone, free of economic pressures. There are fears that OpenAI's models will become fancy Microsoft Office plugins, leading to a loss of open research and innovation. The specifics of the deal suggest that there is more going on behind the scenes, and that Sam Altman, the CEO of OpenAI, may have orchestrated a major strategic move to secure the company's future.

1

LarsPensjo t1_j35yrqx wrote

> That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

I definitely agree. IMO, you see a lot of "AI is not true intelligence" claims, which don't really matter.

Eliezer Yudkowsky had an interesting observation:

> Words aren't thoughts, they're log files generated by thoughts.

I believe he meant the written word.

2

LarsPensjo t1_j33nt4v wrote

One definition of "reflection" is

> serious thought or consideration.

Can you give an example of something a human can reflect on that ChatGPT can't? And more crucially, what method would you use to detect this?

What I am aiming at is that these are borderline philosophical questions, without clear definitions.

2

LarsPensjo t1_j33hla5 wrote

I asked ChatGPT "Are you sentient?" and got the (expected) answer

> I am a machine learning model trained to simulate conversation, so I don't have consciousness or sentience in the same way that humans do.

Then I asked "Please explain why you say that." and got the answer

> As a machine learning model, I have been programmed to process and generate text based on patterns in data...

Doesn't that mean it reflected on its own thinking?

6

LarsPensjo t1_j33enxl wrote

I saw an example where someone asked ChatGPT for a Python program to solve a task. ChatGPT produced such a program, but it contained an error, so the person pointed out the error and asked for a fix.

ChatGPT then produced a correct program.

Isn't this an example of self-improvement? There was external input, but that is beside the point. The improvement is also forgotten if you restart with a new prompt, but that too is beside the point: there was an improvement while the session lasted.

Notice also that ChatGPT made the improvement itself; the person writing the prompts did not explicitly say how to fix the error. A rough sketch of that feedback loop is below.
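
Here is a minimal sketch of that kind of session, assuming the OpenAI Python client (openai>=1.0). The model name, prompts, and error message are illustrative, not from the original exchange:

```python
# Sketch of the feedback loop described above: ask for a program,
# report an error, and let the model correct itself within the session.
# Model name, prompts, and error text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Turn 1: ask for a program.
messages = [
    {"role": "user",
     "content": "Write a Python function that returns the n-th Fibonacci number."}
]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Turn 2: point out the error. The model sees its earlier answer in the
# conversation history, so it can produce a corrected program.
messages.append({"role": "user",
                 "content": "That raises a RecursionError for n=0. Please fix it."})
fixed = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(fixed.choices[0].message.content)
```

Note that the "memory" here is just the `messages` list: start a fresh list and the fix is gone, which matches the point above about restarting with a new prompt.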

2