Submitted by Dr_Singularity t3_y0rij3 in singularity
Comments
Mr_Hu-Man t1_irv0otr wrote
I agree, these things look like how dreams feel
petalidas t1_irvzns7 wrote
If we're living in a simulation maybe they're closer than we think lol
Taintfacts t1_irux5jt wrote
The disturbing stuff is so creepy.
Something deeply unsettling about it, like staring literally into the abyss.
Saerain t1_irvyhcz wrote
The warp stirs and the Eye opens, brother.
overlordpotatoe t1_irv2jh2 wrote
The text adventure story ones remind me of dreams, too, in the way they jump around and have an element of randomness but are still semi-coherent.
Murky-Garden-9967 t1_iry9bgu wrote
If you’ve ever taken shrooms or LSD it will be very familiar to you. Both arise from endless connections being made between concepts, displayed either as computer imagery or as closed-eye visuals.
SuperSpaceEye t1_iruxijm wrote
The video generator only creates video at 24x48 pixel resolution and 3 fps.
Mr_Hu-Man t1_irv0ny6 wrote
….what’s that got to do with the original comment?
SuperSpaceEye t1_irv0w3d wrote
It's "dreamlike" because it originally generates at such low resolution.
kasiotuo t1_irv6lx4 wrote
The low resolution isn't the only thing tho, it's also the morphing imagery
-ZeroRelevance- t1_irvla8f wrote
That probably comes from the temporal upscaling. As they said, the initial video is only 3fps, so they’re basically synthesising 7 frames for each actual frame given. It’s no wonder it’s going to morph. If it began with a higher temporal resolution (initial fps), then it would likely be much more coherent.
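To get a rough feel for why that morphing happens, here's a toy illustration in Python of the naive version of temporal upsampling: just cross-fading between 3 fps keyframes to fill out 24 fps. (Purely illustrative; Imagen Video uses learned temporal super-resolution models rather than linear blending.)

```python
import numpy as np

def linear_temporal_upsample(keyframes, factor=8):
    """Naively fill in frames by cross-fading between consecutive keyframes.

    keyframes: array of shape (num_frames, H, W, 3), e.g. a 3 fps clip.
    factor: output frames per input frame (8 turns 3 fps into 24 fps).
    Linear blending like this is exactly what produces a morphing look;
    a learned temporal super-resolution model instead predicts real in-between motion.
    """
    out = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for i in range(factor):
            t = i / factor
            out.append((1 - t) * a + t * b)  # plain cross-fade between neighbouring frames
    out.append(keyframes[-1])
    return np.stack(out)

# e.g. 16 random "frames" at the 24x48 base resolution mentioned above
low_fps = np.random.rand(16, 24, 48, 3)
high_fps = linear_temporal_upsample(low_fps, factor=8)  # roughly 24 fps worth of frames
```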
darthdiablo t1_irvot21 wrote
All tech start somewhere. This will improve over time.
Saerain t1_irvzawu wrote
Yeah, that may be an important point here actually. I'm reminded of how the imagery we are able to currently read from a live brain was about 12x12 last I checked, and dreams themselves are essentially upscaled interpretations of low resolution noise already.
Smoke-away t1_irtwx51 wrote
Computer, load up Celery Man, please.
Kibubik t1_irueoak wrote
Perfect comment
eve_of_distraction t1_irwwtno wrote
NUDE. TAYNE.
ihateshadylandlords t1_irtz1ls wrote
Is there an ETA on when we’ll be able to use Imagen Video?
Saerain t1_iruanux wrote
I think we'll have to hope for the sort of leaks we've seen with Stable Diffusion. Otherwise they're undoubtedly gonna want to run a tactically restricted instance with content fully owned and monitored by them, and then license out to big bidders.
phazei t1_iruev66 wrote
Leaks we've seen with Stable Diffusion? What are you smoking? It's an open project that uses its own methods built on top of other published papers. Stable Diffusion will also run easily on a GTX card locally. The open source crowd is continually building upon it.
Now I've no clue if Google will release their work, it's maybe even unlikely, but I have zero doubt that relatively soon an open version of something equivalent, likely using Stable Diffusion in some fashion, will be developed and released for everyone to use. I would absolutely find it insulting and disingenuous to the community to call it a leak, though.
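For what it's worth, running it locally is about this simple these days. Just a minimal sketch using the Hugging Face diffusers package; the model ID and settings are one common setup, not anything specific from this thread:

```python
# Minimal local Stable Diffusion sketch using Hugging Face's diffusers package.
# The model ID and options below are illustrative, not an endorsement of any particular build.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision so it fits in the VRAM of a mid-range card
)
pipe = pipe.to("cuda")

image = pipe("a dreamlike landscape, morphing colors").images[0]
image.save("out.png")
```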
Saerain t1_irul39p wrote
Oh I know that much, my PC is burning it up daily. But I mean, for instance, the now heavily used stable-diffusion-webui uses code from the NovelAI leak, right? That's a big reason so many people have easy access at this moment.
Granted I may be garbling memories here, the last couple months have moved fast.
phazei t1_irusp2f wrote
True, it's insane how fast new models are coming out, and how easy they are to use on sites like replicate.com
Twismyer t1_irvr2nf wrote
I believe it does not, and it was heavily in use before the leak occurred; if anything, NovelAI is using Automatic1111's code. As for the referenced leak, Automatic1111's SD UI was made compatible with NovelAI's leaked model.
TemetN t1_iruc1gb wrote
I feel as if I read that they didn't intend to release it publicly any time soon, with the usual 'risks' bit. Give me a minute... yes, it was from Ars Technica.
Akimbo333 t1_iruzfgi wrote
I hear that Stability.AI is working on a text to video model
-ZeroRelevance- t1_irvle6s wrote
Yeah, Google AI put it at the end of their blogpost iirc
LexVex02 t1_irxlqpf wrote
I wish all code was open source. I'm tired of corporations keeping their best from the rest. They are just slowing down progress at this point.
conconcraft t1_irwagtm wrote
In just a few years, I wonder what this will look like.
Smoke-away t1_irxqtc3 wrote
Indistinguishable from reality.
_Alkahestus_ t1_iruybvm wrote
What the fuck. How. This is magic to me.
Akimbo333 t1_iru44ju wrote
Cool!
SufficientPie t1_irwhdb7 wrote
"high fidelity"
Saerain t1_iru9psd wrote
It continues to arrest me how dreamlike AI imagery is. Much, much closer to the dreaming experience than any other media has ever accomplished.