ExasperatedEE t1_j9k3ibt wrote

Well, in that case you could argue the AI cheated. It didn't take a photo. It PAINTED the image. If a human used Photoshop to create a photorealistic image and won a photography competition with it, they would also be cheating, and would lose if caught.

> or is the very best we can do also garbage?

It's photography. It's a hobby where, if you're wealthy enough to afford the equipment and the travel to exotic places, and you hang around long enough to spot a cool-looking animal, you can win prizes by pointing, adjusting focus, and clicking a button at the right time. It doesn't require a huge amount of skill. Someone can be a naturally talented photographer with almost no training, whereas becoming a highly skilled artist takes decades of practice. Don't tell me that almost any mall portrait photographer couldn't have managed to snag the award-winning photo of the Afghan girl, had they been in the right place at the right time.

So maybe the problem really is that we're giving wealthy people awards for mediocrity? Even art is not immune to this. There is a hell of a lot of "art" that sells for a lot of money which is literally just a pile of garbage in a corner. But hey, the AI can't produce that, yet, right? That's a physical thing.

So maybe the solution here is for artists to go back to mediums that are physical, like acrylic paint on canvas, and then sell those works for a lot of money instead of just mass-printing their stuff on a laser printer? I know I wouldn't buy a laser-printed image, human or AI generated, but something with acrylic or oil that has depth to the brush strokes? That's something worth hanging on your wall and paying for.


ExasperatedEE t1_j9ijpww wrote

There's nothing fake about it.

And garbage is garbage, whether it's created by humans or created by AI.

If ChatGPT is generating garbage, and this magazine can't tell the difference between the work an AI is spitting out and what humans are putting out, then what the humans were putting out was also garbage.


ExasperatedEE t1_j6keh1v wrote


What do you mean when you say they manipulate the whole prompt? Are they secretly adding something to my prompts when I submit them through the ChatGPT web interface, something that would not be added when using the API backend?

If so, you'd think they'd tell people that if they want developers to try this out and then pay to use it!

Also, is there even a way to access the API version from the web, or is the only way to try that out by accessing it through code? I've seen some Twitch streamers using it to make AI chat bots on their streams. Those are fun.
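The "manipulating the prompt" idea above refers to the chat frontend wrapping your text in extra instructions before the model ever sees it. Here is a minimal sketch of that mechanism; `build_messages` is an invented helper, and the actual hidden preamble OpenAI's web UI uses is not public, so the system text below is purely illustrative:

```python
from typing import Optional

def build_messages(user_prompt: str, system_prompt: Optional[str] = None) -> list:
    """Assemble the message list a chat-completion API client sends.

    A web frontend effectively does this for you, prepending its own
    hidden system message; with the raw API you choose (or omit) it.
    """
    messages = []
    if system_prompt is not None:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# Hypothetical hidden instruction a web UI might prepend:
web_style = build_messages("Write a short story.", "You are a helpful assistant.")
api_style = build_messages("Write a short story.")
print(len(web_style), len(api_style))  # 2 1
```

With the raw API you control (or omit) the system message yourself, which is why output through the API can differ from the same prompt typed into the web interface.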


ExasperatedEE t1_j6k8htj wrote

Iteration 1 of something that is not a general intelligence and is not the beginnings of a general intelligence will not become a general intelligence by iteration 2, 3, 4, or 5.

It's a text generator. Nothing more. It cannot perform complex tasks. It can generate simple code based on probabilities, but if you ask it to do anything remotely complex, like write an optimized Minecraft-style block-based rendering engine, it can't do it. It might try, but the code isn't going to work. A lot of the time it generates pseudocode too. Like it might generate code with a function renderminecraftterrain() with no actual code for that function, and then after the code it will tell you in plain English that the function should contain the code to render the terrain.


ExasperatedEE t1_j6k7sns wrote

Spoken like a paranoid fool who has not used it and has no idea what it's actually good and bad at, what its limitations are, or how it works.

I've been using ChatGPT for weeks to generate short stories. Let me tell you what I've learned:

ChatGPT is not a general intelligence. It is a text generator, which generates text by figuring out the next most likely word to appear after what it has said previously, weighting those probabilities by the input you provide it.
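That next-most-likely-word process can be sketched with a toy example. The probabilities below are invented for illustration, and real models use a neural network over tokens rather than a lookup table, but the sampling loop is the same idea:

```python
import random

# Invented bigram table: each word maps to candidate next words and weights.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_words=5, seed=0):
    rng = random.Random(seed)  # fixed seed so the output is repeatable
    words = [start]
    while len(words) < max_words and words[-1] in BIGRAMS:
        options = BIGRAMS[words[-1]]
        # Sample the next word, weighted by its probability.
        next_word = rng.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

Your prompt plays the role of the start word here: it doesn't give the model goals or understanding, it just constrains which continuations are likely.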

It has a limited ability to remember what it was talking about. For example, if you have three characters in a story, and you ask it to generate interactions between two of them, and you don't mention the third character continuing to exist and interact with them, it will basically forget that character exists.

It will also tend to forget important details about characters and things. For example, if your characters are in a holodeck, and that holodeck becomes a forest, ChatGPT will eventually forget they were supposed to actually be in a holodeck.

You also need to be extremely precise in your descriptions of what is happening. It is easily confused, because English is not a precise language. For example, if you say "Jim is walking with Bob, and he turns and asks him for a cigarette," ChatGPT is just as likely to assume "he" refers to Bob as to Jim, so it is safer to write "Jim is walking with Bob, and Jim turns and asks Bob for a cigarette," forgoing personal pronouns entirely.

The filters they have on the AI to make it "good" also constantly mess up the stories you're trying to generate, because it will endeavour to make every character behave in ways which are good, and it will often refuse entirely to generate content where one character would harm another. Even if you trick it into generating an evil character, by telling it at the start that all the characters know none of the others will harm them, that they are invincible and immortal but will pretend they are not, and that none of them will comment on this and will all act as if it is real... it will still tend to make the evil character regret their actions and release the person they had imprisoned, or whatever.

The filters ultimately make trying to write stories with the AI a real chore. You constantly need to ask it to rewrite sections, and getting it to write the story a bit at a time, as multiple responses, instead of having it try to cram a whole story onto a single page with "The End" at the bottom, is also difficult.

That said, it's a valuable creative tool. Even though the language it produces can be rather simplistic, straightforward, and dull, unless you ask it to write in the style of famous fantasy authors or whatever and tell it to change perspectives between characters, it's still good enough to be of use to creatives.

But what it is not is good enough to REPLACE creatives. ChatGPT can no more replace writers or programmers than Photoshop's magic fill tool can replace artists, or Visual Studio's recommendation feature can replace coders. It's just another tool in one's arsenal.

Maybe it'll get a lot better in the near future, but I still foresee it having problems. Another issue is that it does not understand spatial relations. If you tell it thing A is inside thing B, which is inside thing C, it will get confused. For example, if you tell it a person is in a car with the doors and windows closed, it will still have them talk to characters outside the car as if their voice is not muffled at all.

In short, it's still a long long ways away from being something to be worried about.


ExasperatedEE t1_j1etbco wrote

> All because he said COVID lockdowns were bad for children (which turned out to be 100% true).

Prove it.

Also prove that the supposed harm to children greatly outweighed the lives saved among adults.

Your kid being slightly more likely to get the flu does not outweigh the lives of 100 elderly teachers at their school.


ExasperatedEE t1_j1et2sa wrote

Am I supposed to be upset at them protecting people from medical hoaxsters pushing ivermectin, and protecting LGBT people from right wing extremists?

Filtering trending tags is also not "blacklisting." Anyone who subscribed to those people would still see their posts. Twitter is under no obligation to boost people's posts on the front page if their views do not align with its values.

As Elon said, freedom of speech is not freedom of reach.


ExasperatedEE t1_j1esicg wrote

> What does "liberal bias" have to do with this discussion?

What doesn't it have to do with it?

Conservatives believe Twitter is liberally biased, therefore they are bad.

This story is being reported because they believe the government was working with Twitter to suppress conservative views.

They take this report of US-led psyops as proof of that, even though it is, of course, not proof that the US government was ever using Twitter to push liberal ideas.

In the absence of that, what's the problem here? Surely the US conducting psyops in the interest of protecting our nation is not something of concern to conservatives, unless they hate America?