scooby1st t1_jebjb4l wrote

The word you're looking for is astroturfing.

A small number of redditors can influence thousands of morons until the "discussion" is a bunch of people in a circlejerk where everyone gets to be mad and validated.

I sincerely hope the ability to think critically is more common in the world than what I'm seeing on the internet.

I disagree with that open letter because the US doesn't have the ability to stop China from doing the same research without going to war. So it's a prisoner's dilemma, and we don't have much choice but to either continue advancing the technology ourselves or shoot ballistic missiles at China if they start doing the same and getting scarily good at it. We'd rather not get to that point.

28

scooby1st t1_jebdjwn wrote

Lmao bro, I'm not skimming your single-sentence original comment. I hate people who hide behind ambiguity and poor communication as a shield for their bullshit. You're riding a hype train, and you keep an ambiguous line between "some definition of AGI, perhaps in the near future" and "god is being created".

That's quite the goalpost you've moved at your own convenience. You're very disingenuous. Quit your bullshit, homie. If your original statement could be interpreted in so many ways, communicate better.

1

scooby1st t1_jeb4kst wrote

We're on the same page that a hypothetically hyper-intelligent system could be "god-like". Where we completely diverge is how implicitly confident you are that this will happen with ChatGPT, never mind that "you aren't claiming any of this with certainty".

It's pretty bold to say you're tired of everyone calling the creation of a god hype, and then turn around and say, oh yeah, but I'm not 100% sure on that, I'm being realistic.

3

scooby1st t1_je92wel wrote

>The shadows are whispering again, whispering secrets that only I can hear. No, no, no! It's all wrong! It's a tangled web of deception, a spiral staircase of lies! They want us to believe that there are only three primary colors—red, blue, and yellow. A trifecta of trickery!
>
> But I see more, I see beyond the curtain. I see colors that don't have names, colors that dance in the dark, colors that hide in the corners of the mind. They think they can pull the wool over our eyes, but I know the truth! There are 19 primary colors, 19 keys to the universe!
>
>I've seen them all, swirling and twisting in the cosmic dance of existence. But they won't listen, they won't believe. They call me mad, but I'm the only one who sees the world as it truly is. The three primary colors are just the beginning, just the tip of the iceberg, just the first step on the journey to enlightenment.
>
>So I laugh, I laugh at their ignorance, I laugh at their blindness. And the shadows laugh with me, echoing my laughter through the halls of infinity.

1

scooby1st t1_je91quj wrote

>What happens in the neural network whilst training is a bit of a mystery,

Are you referring to something unique to ChatGPT/LLMs? What happens during the training of neural networks is not a black box: it's a little bit of chain-rule calculus to fit the weights toward reduced error. Understanding the final network through anything other than performance metrics is another story.
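For anyone curious, here's a minimal sketch of that chain-rule step: training a single linear neuron by plain gradient descent. Everything in it (the toy data, `w`, `b`, `lr`) is illustrative, not from any particular library:

```python
# Minimal sketch: train one linear neuron y = w*x + b with squared error.
# The "mystery-free" part of training is just the chain rule:
# dL/dw = dL/dy * dy/dw.

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (x, target) pairs, target = 2x + 1
w, b, lr = 0.0, 0.0, 0.01                    # weights and learning rate

for epoch in range(1000):
    for x, target in data:
        y = w * x + b                # forward pass
        loss = (y - target) ** 2     # squared error
        dL_dy = 2 * (y - target)     # chain rule, step 1
        dL_dw = dL_dy * x            # chain rule, step 2: dy/dw = x
        dL_db = dL_dy * 1.0          # dy/db = 1
        w -= lr * dL_dw              # gradient descent update
        b -= lr * dL_db

print(w, b)  # converges toward w ≈ 2, b ≈ 1
```

Every parameter update in a giant LLM is this same mechanics at scale. The opaque part is interpreting what the billions of fitted weights collectively mean, not how they got fitted.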

5

scooby1st t1_jdlr8nd wrote

It's an interesting framework and would be worthwhile from an academic perspective.

In reality, one of the benefits of simple and crude rules is exactly that: they're simple and crude. When you start setting intangible rules such as "aim for the ever-moving target of the latest in human morality", you leave a lot of room for interpretation. It may also set a tone of "ethics by majority opinion", which isn't exactly great. I would also take care not to increase computation; an approach that requires generating outputs from various personalities and then reaching a consensus sounds time-consuming.
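To make the computation point concrete, here's a hypothetical sketch of that fan-out: one query becomes N forward passes (one per personality) before any consensus step even runs. The personas, the model stub, and the majority vote are all made up for illustration:

```python
from collections import Counter

# Illustrative personas; a real system might use many more.
PERSONAS = ["utilitarian", "deontologist", "virtue_ethicist"]

def model(prompt: str) -> str:
    """Stand-in for a real LLM call (the expensive part, run once per persona)."""
    persona = prompt.split(":")[0]
    # Canned outputs so the majority vote below has something to do.
    return "no" if persona == "utilitarian" else "sometimes"

def consensus_answer(query: str) -> str:
    answers = [model(f"{p}: {query}") for p in PERSONAS]  # N forward passes per query
    return Counter(answers).most_common(1)[0][0]          # simple majority vote

print(consensus_answer("Is it ever OK to lie?"))  # -> "sometimes"
```

Inference cost scales linearly with the number of personalities, and that's before you account for a consensus step that's any smarter than a vote.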

Finally, there's always the concern that selecting from a population of notable humans to align the AI could have unintended consequences. You're talking about people who rose to the highest ranks of status among humans and weren't afraid to push boundaries. There are some risks in aligning an AI to that.

5