AlgaeRhythmic t1_ivuvzx1 wrote

Yeah, the "input-to-output" progression I see playing out over time is:
* Text-to-text
* Text-to-image/audio
* Text-to-3D model
* Text-to-animation/video
* Text-to-interface/game
* Text-to-physical object (through 3D printing and other means)

There will be some overlap, though, so this is not necessarily a strict ordering. And the "text" input could also be replaced with voice or some other modality: image-to-game, 3Dmodel-to-physicalObject, etc.

22

AI_Enjoyer87 t1_ivuxg3a wrote

Crazy that 4 of those already exist. Literally just ticking off boxes every month.

16

AlgaeRhythmic t1_ivvc4hp wrote

Yeah, pretty exciting! 3D models and animations are coming along faster than I expected.

Also, I like to think of some of these as groupings of lower-level ones.

* A video/animation is a coordinated sequence of images.
* An interface/game is a coordinated sequence of audio and image/model/animation output that responds to user input, plus underlying game logic and state managed through code. And code is just text, which means it too can be generated (rough sketch of this framing below).
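
To make that framing concrete, here's a tiny hypothetical sketch (Python, all names made up): a "game" reduced to a loop where code-defined state plus user input produces coordinated output each frame.

    # Hypothetical sketch: a game as a loop mapping user input plus stored
    # state (plain code/data, i.e. text) to coordinated output each frame.
    # All names and functions here are illustrative stubs, not a real engine.
    import dataclasses

    @dataclasses.dataclass
    class GameState:
        score: int = 0
        player_x: float = 0.0

    def update(state: GameState, user_input: str) -> GameState:
        # "Game logic and values managed through code" -- just text,
        # so a text generator could in principle write this function too.
        if user_input == "right":
            state.player_x += 1.0
        state.score += 1
        return state

    def render(state: GameState) -> None:
        # Stand-in for the image/model/animation and audio outputs;
        # a real engine would rasterize a frame and mix audio here.
        print(f"frame: player at x={state.player_x}, score={state.score}")

    def game_loop(inputs: list[str]) -> None:
        state = GameState()
        for user_input in inputs:  # one iteration per frame / input event
            state = update(state, user_input)
            render(state)

    if __name__ == "__main__":
        game_loop(["right", "right", "idle"])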

At higher levels, I'm imagining things like going from simple code to more complex information architectures and business models, from static objects to complex machinery (image-to-physicalObject is analogous to animation-to-machine), and on to systems of interacting machines like manufacturing and transportation systems.

Ultimate goal: automated economy and natural resource management.

4

visarga t1_iw0l8it wrote

* BCI (brain signals) to human context and behaviour.

Imagine how detailed and massive this dataset could be.

1