Comments

TFenrir t1_j7ziwlh wrote

Not far. There are a lot of people working on getting the appropriate training data for this right now. One of the most prominent groups is Adept.ai - their v1 model is trained to use browser-based apps, though. You can see examples and sign up for the waitlist on their website.

If I were to ballpark when a regular Joe will have access to tech like that (without commenting on proficiency, and specifically for Blender)... 50% certain within 1 year, 80% within 3?

24

FusionRocketsPlease OP t1_j83grcr wrote

AAAAAAAAAAAAAAAAH Damn bro, I'm excited to know that cool people have thought the same as me and are putting it into practice. I can't wait for what's next!

1

Substantial_Space478 t1_j809tc3 wrote

tl;dr: how long until i don't have to learn blender

10

FusionRocketsPlease OP t1_j80ashd wrote

Imagine not having to spend hundreds of hours tinkering with that unintuitive stuff.

7

Cold-Ad2729 t1_j80082f wrote

I've seen Stable Diffusion plug-ins already developed and incorporated into Photoshop. I'd imagine the same will go for Blender. If the creators of, say, Blender went into partnership with someone like Midjourney or Stability AI, it'd move pretty fast. I had fun today cloning a friend's voice from an interview and creating a satirical video of an avatar of him speaking to camera about his (fake) career in pornography. We both cracked up 😂. All these media uses for AI are going to integrate very quickly.

5

wntersnw t1_j800wuu wrote

Was thinking about something like this the other day, but for Nvidia's RTX Remix when it gets released officially. Train the AI as you remaster games using the program; eventually it knows enough to remaster games by itself.

I think these types of systems will start to appear not long after AI assistants are integrated into operating systems.

3

duffmanhb t1_j80ku0z wrote

Google's LaMDA already has this theoretically figured out. Over a year ago they showed its ability to perform general tasks like this. And I'm sure they're much further ahead by now.

3

thePsychonautDad t1_j826gnr wrote

The model wouldn't learn to use Blender; that'd be inefficient. It would generate a voxel object, upscale the voxels into a proper 3D model, then export it in a format Blender can import.

The bits are already there in various research papers. We know how to take a text prompt and generate a small voxel model. We know how to take a voxel model and upscale it into a large voxel model... All that's missing is somebody to assemble the entire thing, plus enough budget for the 7-to-8-figure training cost.
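
As a rough illustration of the back half of that pipeline, here is a minimal runnable sketch: a hard-coded voxel sphere stands in for the text-to-voxel model, a nearest-neighbour repeat stands in for the learned upscaler, and the result is written out as a Wavefront .obj that Blender can import. None of this comes from the papers mentioned; it is purely illustrative.

```python
import numpy as np

def upscale_voxels(grid: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upscale; a stand-in for a learned super-resolution model."""
    return grid.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

def voxels_to_obj(grid: np.ndarray, path: str) -> None:
    """Write one unit cube per occupied voxel as a Wavefront .obj file.

    Naive: interior faces are emitted too, and winding may be inconsistent
    (Blender can recalculate normals on import). Good enough for a sketch.
    """
    corners = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
    quads = [(0,3,2,1), (4,5,6,7), (0,1,5,4), (1,2,6,5), (2,3,7,6), (3,0,4,7)]
    verts, faces = [], []
    for x, y, z in zip(*np.nonzero(grid)):
        base = len(verts) + 1  # .obj vertex indices are 1-based
        verts += [(x + dx, y + dy, z + dz) for dx, dy, dz in corners]
        faces += [tuple(base + i for i in quad) for quad in quads]
    with open(path, "w") as f:
        for vx, vy, vz in verts:
            f.write(f"v {vx} {vy} {vz}\n")
        for a, b, c, d in faces:
            f.write(f"f {a} {b} {c} {d}\n")

# A text-to-voxel model would produce this grid; a hard-coded sphere stands in.
idx = np.indices((16, 16, 16)) - 8
coarse = (idx ** 2).sum(axis=0) <= 36  # boolean occupancy grid
voxels_to_obj(upscale_voxels(coarse), "model.obj")
```

The two pieces this leaves out, the trained text-to-voxel and voxel-upscaling models, are exactly the expensive part the comment is talking about.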

3

imlaggingsobad t1_j81aj9z wrote

I think in about 2-3 years there will be an AI that can control/use any software or app we have. Companies are working on making AI that can navigate a browser, book flights, etc.

2

Financial_Drinker t1_j81p1qb wrote

So an AI trained to create custom macros? The tech is already past that. This project itself would be doable if Blender had an API to create and run macros and there were a database of macros for it to train on. It wouldn't be very different from GitHub Copilot. But even then it just wouldn't be worth the trouble; its application would be too narrow. It's better to invest in AI that can just render stuff on the fly.
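
For reference, Blender does ship a Python API (bpy) that scripts and add-ons already drive, so the harder half is the macro database. To make the Copilot comparison concrete, here is a hypothetical sketch of what a single prompt-to-macro training record might look like; the record format and field names are invented purely for illustration:

```python
# Hypothetical training record pairing a natural-language request with a
# Blender Python (bpy) "macro". Field names and format are illustrative only.
example_record = {
    "prompt": "array 10 copies of the active object along the X axis",
    "completion": (
        "import bpy\n"
        "mod = bpy.context.active_object.modifiers.new('Array', type='ARRAY')\n"
        "mod.count = 10\n"
        "mod.relative_offset_displace = (1.0, 0.0, 0.0)\n"
    ),
}
```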

2

FC4945 t1_j81s3tz wrote

This would be awesome. And maybe for Unity and Daz as well. I was thinking today about how these companies are going to be left behind if they don't find a way to integrate AI into these programs.

2

Tiamatium t1_j82rf2e wrote

Have you asked if ChatGPT knows Blender commands? I haven't, but I know it knows Graphviz, so it can draw graphs.
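
If anyone wants to sanity-check that themselves, here is a minimal sketch using the `graphviz` Python package (assumed installed via `pip install graphviz`, with the Graphviz binaries on your PATH); swap in whatever DOT source the chatbot emits:

```python
from graphviz import Source  # pip install graphviz; needs Graphviz binaries on PATH

# Paste whatever DOT source the chatbot produced in place of this string.
dot = """
digraph pipeline {
    prompt -> "language model" -> "DOT source" -> image;
}
"""
Source(dot).render("chatbot_graph", format="png", cleanup=True)  # writes chatbot_graph.png
```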

2

FusionRocketsPlease OP t1_j83ii0i wrote

I asked some generic questions and it gave some answers that seem correct based on the tutorial.

1

blueSGL t1_j800g8l wrote

It depends what level of abstraction you take from the raw actions within the program.

A lot of 3D stuff that can be automated already is; you can write scripts.

Having an AI 'script writer' helper that takes in natural language and produces a Python script can already be done. It's my go-to test for chatbots: asking them to generate simple scripts for Maya. (The you.com one got a bit better at that recently.)
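
For example, given a request like "add a cube with beveled edges and smooth it", this is the kind of script such a helper might emit for Blender; a minimal sketch using Blender's bpy API, which only runs inside Blender's bundled Python:

```python
import bpy  # only available inside Blender's scripting environment

# "add a 2m cube, bevel its edges, and smooth it with a subdivision surface"
bpy.ops.mesh.primitive_cube_add(size=2, location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

bevel = cube.modifiers.new(name="Bevel", type='BEVEL')
bevel.width = 0.1  # bevel width in metres

subsurf = cube.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2  # viewport subdivision level
```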

If, however, you are asking for something like 'create me a full sci-fi environment' or 'rig this model' or 'animate this armature like this' and have it just do it, well, we are not there yet. There are scripts, asset libraries, etc. that streamline these processes, but nothing end-to-end driven by natural language with zero manual input from a human.

1

FusionRocketsPlease OP t1_j83idfn wrote

I don't want one that creates something so generalized from a prompt. I want one where I can create every last detail without needing those ultra-complex menus and years of practice.

2

ninjasaid13 t1_j83e6kl wrote

>They train a language model with all Blender commands and all possible outcomes. Then the model learns to control Blender, allowing the user to be guided through it, or the user can ask for what they want through a text prompt. How far?

Can probably be done this year. Does it require expensive hardware? Is it slow? Is it bad? Most likely.

1