Submitted by onformative t3_zivy00 in MachineLearning


https://i.redd.it/nuqdy9e3815a1.gif

Hey Reddit,

I just wanted to share an art & research project we've been working on that uses reinforcement learning to teach an agent how to carve 3D sculptures. It's been really cool to see the agent learn new strategies and create unique pieces of art, and to watch how we as designers start to take on the role of curators.

The project started early last year after we began thinking about the future of art and the role of humans in the creative process, something that seems more relevant than ever these days. With AI becoming more advanced, what does that mean for us as creatives? We also wrote about this on the project page. It's an interesting question and I'd love to hear your thoughts on it.

Check out the project here: https://onformative.com/work/ai-sculpting/ and let's discuss in the comments.


https://preview.redd.it/ikymqhl7815a1.jpg?width=1280&format=pjpg&auto=webp&s=f375991649ecb3070740471133ddccb6af9e58d4

163

Comments


ReginaldIII t1_izt74cb wrote

Beautiful. Really beautiful work.

I love the whole presentation too. Great design and composition.

16

RomanRiesen t1_iztyj2q wrote

I'm much more impressed by the rendering of the sculptures (lighting & angles & compositions) than by the statues themselves tbh.

Not that the idea itself isn't also really cool though.

7

onformative OP t1_izw1ksy wrote

Agreed, the statues definitely have a high level of imperfection. What we found interesting in the end, though, is that you can clearly see the traces of the different tools used. Even after the sculpting is done, you can still see, or at least get an idea of, the agent's behaviour from the traces it left on the sculpture itself.

3

ReginaldIII t1_izwn6zb wrote

I like them, flaws and all. My doctorate was computer graphics and my flat is full of 3D printed sculptures of generative art and canvas prints of visualisations and slime moulds and all sorts.

The imperfections are what gives them character.

2

onformative OP t1_izwydiq wrote

Nice! Yeah, generative art is exactly where we're coming from. If you're into this topic, you might have heard of the "Generative Design" book.

I totally agree with you about the imperfection, especially from a creative perspective; that's when interesting things happen. We wanted to keep this rough state, partly to make the process visible. When I talk about it in lectures, I often show the image below, where you can clearly see the use of the tool through the traces it leaves. That's something we wanted to keep and transfer to the digital realm.

https://backend.onformative.com/assets/work/tools.jpg

1

RomanRiesen t1_izxysue wrote

I think you might have misunderstood me.

I do like the sculptures themselves quite a bit!

But with the striking lighting, they look like pieces I could see hanging in my room, which is very rare. I like my walls blank and cold.

1

onformative OP t1_izw1f9q wrote

Hello, thank you very much. As a design studio with a strong focus on digital art and technology, visual representation plays an important role in our process, especially because we are interested in what all these technological advancements mean for us as designers.

2

shadowknight094 t1_iztqfwo wrote

Just curious, what program is this? Unity? I see a lot of examples like this on the Two Minute Papers channel. What do researchers use for such graphics demos?

3

onformative OP t1_izw1oki wrote

This uses Unity for the machine learning part, Houdini to replay the sculpting and translate it into higher-resolution meshes, and then Cinema 4D to place and render them in the different environments. You can find a more detailed technical breakdown here: https://bit.ly/3hdqxeU
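For anyone curious what "RL sculpting" means mechanically: the thread doesn't publish the studio's actual environment or reward function, so this is purely an illustrative toy in plain Python, not their implementation. It sketches a subtractive voxel environment where an agent removes one voxel per step and is rewarded by the change in overlap (IoU) with a target shape; the grid size, target shape, and reward are all arbitrary choices for the sketch.

```python
import numpy as np

class VoxelSculptEnv:
    """Toy subtractive-sculpting environment (illustrative only).

    The agent starts from a solid block and removes voxels; the
    reward for each removal is the change in IoU with a target shape.
    """

    def __init__(self, size=8):
        self.size = size
        # Hypothetical target: a centered cube carved out of the block
        self.target = np.zeros((size, size, size), dtype=bool)
        self.target[2:6, 2:6, 2:6] = True
        self.reset()

    def reset(self):
        # Start from a full block of raw material
        self.block = np.ones((self.size,) * 3, dtype=bool)
        return self.block.copy()

    def _iou(self):
        inter = np.logical_and(self.block, self.target).sum()
        union = np.logical_or(self.block, self.target).sum()
        return inter / union

    def step(self, voxel):
        """Remove one voxel; reward is the resulting change in IoU."""
        before = self._iou()
        self.block[voxel] = False
        after = self._iou()
        done = after >= 0.999
        return self.block.copy(), after - before, done
```

Carving away material outside the target yields a positive reward, while removing a voxel the target needs is penalized; that sign structure is what lets a policy-gradient or Q-learning trainer (e.g. Unity's ML-Agents on the real project) discover carving strategies.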

2

JFHermes t1_izung0h wrote

They did the ML part in Unity, so I'm assuming the renders are done in Unity too. Perhaps they bring the voxel model into something like Blender for the final renders, but that seems like an unnecessary step; once you know how to use Unreal/Unity for rendering, it's arguably better than Blender anyway.

1

onformative OP t1_izw1uim wrote

Good point. We thought about doing the rendering in real time as well, but in the end decided to go with 3D software. In our case we used Houdini and C4D, though Blender would have been an option too. I'm not saying the same visual quality wouldn't be achievable in Unreal, but since the project was planned as a linear animation, 3D software gave us a bit more flexibility to explore different ways of visualizing the results.

1

JFHermes t1_izwjp36 wrote

I certainly think running the Houdini/C4D pipeline has given you great flexibility for presentation aesthetics. As long as you don't exceed your hardware capabilities, it makes sense to have the options these programs provide to make it look as excellent as it does.

It would be very cool to see you develop a game environment with interactivity, though. Having these kinds of platforms available for easy access on the web will be great for designers and visualisation experts.

Looks great, really nice job.

1

ComplexBus7725 t1_iztt05d wrote

Looks really cool, but I would've expected a little more technical detail: model architecture, reward function, software/framework used for representing 3D space, rig used for training, training time, maybe a chart showing how the quality of the sculptures changes over iterations... I don't know, SOMETHING...

2

panduhbean t1_izux2x3 wrote

I took a look at the site and the article does include more technical details including a link to a white paper...

3

onformative OP t1_izw1y5v wrote

Hi, yes, good point and I totally agree. It was important to us not just to create a good-looking piece, but something that explains how it was technically done. That's why we wrote the 12-page paper explaining the process behind it. I posted it in one of the earlier comments, but you might have missed it. Here it is: https://bit.ly/3hdqxeU

1

RomanRiesen t1_iztz82z wrote

Seconds before opening Reddit, I had the idea of trying to create impasto depth maps from photos / diffusion model outputs.

What I came up with in the few seconds of thought I gave it was pretty close to the additive approach, as far as I can tell.

Unfortunately I don't know the first thing about painting so I'll never implement that.
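The "additive" idea in this comment can be prototyped without knowing anything about painting. Here's a rough toy sketch: treat per-pixel paint thickness as a height field derived from image luminance, then soften it. The luminance heuristic, blur kernel, and thickness scale are all arbitrary assumptions for illustration, not anything proposed in the thread.

```python
import numpy as np

def impasto_depth_map(image, max_thickness=1.0, blur_passes=2):
    """Build a crude impasto-style height map from an RGB image.

    Heuristic (assumed, not canonical): brighter paint is laid on
    thicker; a repeated 3x3 box blur softens the resulting strokes.
    """
    img = np.asarray(image, dtype=np.float64)
    # Luminance from RGB (Rec. 601 weights), normalized to [0, 1]
    lum = img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114
    height = lum / 255.0
    # Simple 3x3 box blur via edge-padded neighbour averaging
    for _ in range(blur_passes):
        padded = np.pad(height, 1, mode="edge")
        height = sum(
            padded[1 + dy : padded.shape[0] - 1 + dy,
                   1 + dx : padded.shape[1] - 1 + dx]
            for dy in (-1, 0, 1)
            for dx in (-1, 0, 1)
        ) / 9.0
    return height * max_thickness
```

The returned array could then be fed to any displacement or mesh-extrusion step; a learned model would replace the luminance heuristic with something that actually understands brushstrokes.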

1

MisterManuscript t1_izwkkoj wrote

If this is an ML model, what is the input? I can't find the format of the input in the article here. Is it 2D images? Text prompt?

1