Comments


RandomMandarin t1_izv99vg wrote

Picture 4:

"I've seen things you kids wouldn't believe... Grimace's car on fire off the shoulder of I-5... I watched McRibs glitter in the dark near the Fryolator. All those moments will be lost in time, like cotton candy in rain... Time to make the fries."

88

User1539 t1_izuq5tr wrote

Now try to explain to people that ChatGPT is the image on the far left right now.

66

Sieventer OP t1_izutb0f wrote

Will it be as exponential as pic generation?

8

User1539 t1_izv2sai wrote

I doubt anyone knows for sure. OpenAI is already telling people not to take this iteration seriously, because what they're working on is so much better. Meanwhile, you've got Google telling everyone this is nothing compared to what they're working on.

So, I'd say it's certainly possible we'll see that kind of rapid improvement at least over the short term.

But then you've got spaces like self-driving cars, where 5 years ago it seemed very realistic that we'd have that problem solved by now.

We'll just have to wait and see.

32

Artanthos t1_izxjwhf wrote

People are willing to accept tens of thousands of human-caused vehicle deaths per year, just in the US.

They demand perfection from autonomous vehicles.

9

User1539 t1_izy1t6f wrote

I'm not sure what the hold-up is, honestly. I'm sure that's part of it, but we've all seen the tech demos of Teslas pulling into oncoming traffic, so it's tough to argue that it's ready for prime time, and no one is willing to pull the trigger.

I'm sure we'll get there, but we are definitely behind the imagined timeline of Elon Musk, who's really proven that he's mostly full of shit at this point, and shouldn't be listened to or trusted.

I think there was a lot of hype, and frankly lies, that clouded our judgement on that one, and now I'm hesitant to say that I feel like I know what the state of things really is.

I'm not sure if we're in a similar bubble with other things or not?

Things are definitely moving along at breakneck speed. 5 months or 5 years probably doesn't matter in the long run.

5

Artanthos t1_izy45bf wrote

We have driverless taxis in active use in a small number of cities around the world.

My feeling is that most of the current hurdles are regulatory. Government regulation takes years to develop and implement.

1

User1539 t1_izyrhvl wrote

But, those taxis prove that the regulations have been met. There are licensed trials of driverless taxis.

So, why aren't we using them all the time, everywhere?

The answer seems to be that the driverless taxis are still only used when there's not a lot of traffic, and in very specific areas where the AI has been trained and the roads are well maintained.

So, in certain circumstances that favor the AI, the technology seems pretty much ready. Even the government is allowing it.

I think it really is a technical hurdle to get the AI driving well enough that it can handle every real-world driving situation.

2

Artanthos t1_izyxgdy wrote

Why aren’t we using them everywhere?

Because it’s going to take time to get regulatory approval.

Government is a slow process. Years will be spent gathering data, addressing public concerns, carefully evaluating regulations, etc.

Only after the process is complete and regulations are in place will fully autonomous vehicles be allowed outside their current test cities.

China is already ahead of the US with this regulatory framework.

https://amp.cnn.com/cnn/2022/08/08/tech/baidu-robotaxi-permits-china/index.html

1

User1539 t1_izz0sqa wrote

But, again, people who have the beta of the self-driving Tesla all seem to agree it's not ready for prime time. I've ridden in one in the past 6 months, and the owner told me he won't use it because it's 'like riding with a teenager: you never know when it's just going to do something stupid and you have to panic and slam the brakes'.

They're still running limited hours on the ones being used as driverless taxis (not Teslas, so who knows how far ahead those are?), but I don't think this is entirely regulatory.

If we had video after video of beta users saying 'I just put my hands on the wheel and fall asleep in NYC traffic', I'd be there with you, but that's not what I'm hearing.

1

Artanthos t1_izzzpuk wrote

Tesla is not the market leader for self driving vehicles, and has not been for a long time.

Stop fixating on Tesla and Musk and go look at the rest of the world.

2

prodoosh t1_izy5sgq wrote

Idk if you can say that when Autopilot is already 10x safer than the average human driver

1

User1539 t1_izyr47x wrote

I actually looked that up, and ... well, kind of, but mostly no.

That claim was actually 9x safer, it was based on a tiny sample of accidents, and it didn't take into account that a person basically has to be driving the car alongside Autopilot (so there's no accounting for the number of times the human took over to prevent an accident).

Also, almost no one is using autopilot in congested cities, and the tests that have been done weren't promising.

So, 9x safer, with sparse, cherry-picked data?

For areas without a ton of traffic, that are well known to the AI? It seems to do a pretty good job.

I'm not saying we don't nearly have it, or that we won't have it very soon. I'm just not sure it's as good as some people think it is.
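A rough sketch of that small-sample problem (all numbers here are invented purely for illustration, not from any actual Tesla report):

```python
import math

def poisson_ci(count: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% interval for a Poisson-distributed count
    (normal approximation; crude for small counts, which is the point)."""
    half = z * math.sqrt(count)
    return max(count - half, 0.0), count + half

# Invented numbers: say Autopilot logged 5 crashes over some mileage,
# versus 45 expected from the average human rate over the same mileage,
# giving a naive "9x safer" ratio.
lo, hi = poisson_ci(5)
# lo is roughly 0.6 and hi roughly 9.4: the crash count alone is
# consistent with anything from "far safer than 9x" to "barely safer
# at all", before even accounting for human takeovers.
```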

2

prodoosh t1_j00n57w wrote

Thanks, good write-up of the issue. Most Autopilot use is in the easiest scenarios, so it makes sense that the numbers look too good to be true.

1

mirror_truth t1_izy8bpp wrote

All it would take is one bad crash (like killing a kid) to create a tsunami of bad PR that would set the field back a decade. Not to mention the bad PR that would come first from the mass layoffs of commercial drivers (truckers, cabbies, bus drivers etc).

2

hydraofwar t1_izymadp wrote

Damn, where did Google say "this is nothing compared to what they're working on"? Imagine if LaMDA actually sounds exactly like a human

1

User1539 t1_izyqe02 wrote

I've been playing with ChatGPT quite a bit, and you can kind of catch it not really understanding what it's talking about.

I was testing if it could write code, and it's pretty good at spitting out example code that's 90% of what I want it to be. I'm not saying that isn't impressive as hell, especially for the easy boilerplate stuff I'd otherwise google an answer for.

That said, its summary of what it did was sometimes wrong. Usually just little things, like 'This opens an HTTP server on port 80' when the example it wrote actually opened it on port 8080.
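For example, the kind of mismatch I mean (a hypothetical reconstruction, not the actual transcript) would be a snippet like this, where the model's summary claimed port 80:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# The generated code bound port 8080, while the model's summary
# claimed "this opens an HTTP server on port 80".
PORT = 8080

# Create the server (this binds the port immediately) without
# starting its request loop.
server = HTTPServer(("127.0.0.1", PORT), SimpleHTTPRequestHandler)
print("Listening on port", server.server_port)  # 8080, not 80
server.server_close()
```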

It was like talking to a kid who'd diligently copied their homework from another kid, but didn't quite understand what it said.

Still, as a tool it would be useful as-is, and as an AI it's impressive as hell. But, if you play with it long enough you'll catch it contradicting itself and clearly not quite understanding what it's telling you.

I have seen other PhD-level experiments with AI where you're able to talk to a virtual bot about its surroundings, and it will respond in a way that suggests it really does know what's going on around it, and can help you find and do things in its virtual world.

I think that level of 'understanding' of the text it's producing is still a ways off from what ChatGPT is doing today. Maybe that's what they're excited about in the next version already, or what Google is talking about?

Either way, I'm prepared to have my mind blown by AI's progress on a weekly basis.

1

Kaarssteun t1_izuxmvu wrote

Doubt it. LLMs are quite mature - not in their infancy like AI imagery was at the beginning of 2022. There are still improvements to make, though!

19

OralOperator t1_izuzkdq wrote

Lol you stated 2022 as past tense, it’s still 2022

11

Kaarssteun t1_izuzwpd wrote

right, meant to say beginning of 2022 - hard to adjust to this pace of progress :P Thanks

5

OralOperator t1_izuzzsg wrote

Nah, I’m on board man, fuck 2022, let’s just move on

10

genshiryoku t1_izxlmiv wrote

Short answer: No.

The big innovation with ChatGPT wasn't the LLM itself (which is still GPT-3). It was the interpreter and memory system at the front end that better understood what people asked of it.
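To make "memory at the front end" concrete (a simplified sketch of the general idea, not OpenAI's actual implementation): the model itself is stateless, so a chat wrapper can fake memory just by replaying earlier turns in the prompt.

```python
# Simplified sketch: stateless model + prompt-replay "memory".
# Not OpenAI's actual implementation -- just the general idea.
def build_prompt(history: list[tuple[str, str]], user_msg: str) -> str:
    """Concatenate prior (role, text) turns so the model 'remembers' them."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"User: {user_msg}")
    lines.append("Assistant:")  # the model completes from here
    return "\n".join(lines)

history = [("User", "My name is Sam."), ("Assistant", "Nice to meet you, Sam!")]
prompt = build_prompt(history, "What's my name?")
# Every earlier turn is inside the prompt, so the model can answer
# "Sam" without any real memory of its own.
```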

LLMs have also been trained on the vast majority of publicly available text. It's only going to get harder to train them as the data to train on becomes the bottleneck.

2

bildramer t1_izwvgtr wrote

Not really. We're actually close to some rather hard limits. However, "close" means "there are still orders of magnitude of improvement up for grabs for anyone who wants to try and has millions of dollars to spare" - we already know, today, that we can make much better models, and how to do that. Look up "scaling laws" maybe.
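For a concrete sense of what those scaling laws predict, here's a sketch using the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022); the coefficients below are the commonly cited fitted values, quoted approximately:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted LM loss from the Chinchilla parametric fit (approximate)."""
    E = 1.69                 # irreducible loss of natural text
    A, B = 406.4, 410.7      # fitted constants
    alpha, beta = 0.34, 0.28 # fitted exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both parameters and data keeps buying loss reductions --
# the "orders of magnitude of improvement" still on the table.
small = chinchilla_loss(1e9, 20e9)     # ~1B params, 20B tokens
large = chinchilla_loss(70e9, 1.4e12)  # Chinchilla scale: 70B, 1.4T
```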

1

User99942 t1_izwpn90 wrote

Nah, the “ladies” that used to offer me casual sex on Yahoo chat were the ones on the left

1

cuyler72 t1_izv6tx4 wrote

That's not even including google's non-public models which are even better.

12

varkarrus t1_izvkstk wrote

And yet the first one looks the most like the real Ronald McDonald.

12

Clevererer t1_izvvfed wrote

Right? "Improvement" seems to be defined very loosely here.

5

MillipedeMenace t1_izvyqqx wrote

Right? If this is the new and improved Ronald, maybe we should be slightly afraid?

3

Sieventer OP t1_izvtr0d wrote

It is somewhat curious, but yes: it's the least detailed and at the same time the most faithful to the original design.

3

woahdudechil t1_izw4uu4 wrote

The term "improvement" is a bit subjective here lmao

4

overlordpotatoe t1_izvrxx9 wrote

Midjourney's recent update to V4 was when AI images started to truly feel like usable art for me. You still have to generate a bunch to get what you want, but you can get there.

3

ChampagneJordan23 t1_izxagoh wrote

I'd say that in a year or two AI will have the power to build GIFs in good quality, and then videos. That's the point where it gets scary, because you could possibly create movies, and good movies, in 4K and things like that.

3

Hot_Comment_6052 t1_izyg3kf wrote

Hits blunt… Maybe this is the AI's progression of learning: at first it only knew the surface-level McDonald's, so we got a "normal" Ronald, but as it learned more it started making its pics more accurate to what McDonald's food really is, and Ronald's image got darker each time, the more the AI learned about the real McDonald's lmao

2

mantasVid t1_izuy8w0 wrote

Creating the first one, it stumbled upon RackaRacka...

1

Loud-Mathematician76 t1_izwh7bb wrote

say whaaat ?

@OP, very cool artwork.
But I have a question: is there no limitation on asking the AI to draw Ronald, basically a copyrighted logo/mascot? I would have expected the AI to reject this on copyright grounds... I'm happy it didn't, but still amazed!

Can you make it do things with other brands? Eventually even brand logos? Like "a Coca-Cola artsy logo with real coca leaves and sugar mountains flowing like a fountain from the red logo"?

1

Ziggote t1_izx0qnb wrote

In those 6 months, and 4 pictures, how many different tools did you use?
Also keep in mind that your skill at using and prompting these tools has increased as well.

1

Sieventer OP t1_izx9nob wrote

I used the tools I show there: Craiyon, DALL-E 2, and two different versions of Midjourney.
And this isn't exclusively about prompting. I mean, no matter how hard you try to craft the best prompt with Craiyon, you're not going to get even 1% of Midjourney v4's quality.

3

Ziggote t1_izxdjs8 wrote

i feel ya, i was just curious. great job

1

purple_hamster66 t1_izx46ux wrote

I’ve heard people claim that the generated DALL-E mini images can’t be found in the training set. I think this clearly shows the opposite for that model, but supports the claim more and more for the images on the right side.

1