milleniumsentry

milleniumsentry t1_j95v74v wrote

I disagree. They are completely related, and directly to the black box problem.

I wish I had found this article a month ago, because it sums up a lot of the 'AIs are unknowable' nonsense.

Being a black box is not an inherent quality of an AI. It's an inherent quality of a badly designed AI. Eventually, we will have methods that let us query why a particular result was given.

They are unknowable because we have not designed them to be knowable. The tech is in its infancy. Give it time.
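To be clear, primitive versions of that kind of querying already exist. Here's a minimal sketch (my own illustration, not anything from the article) of input-gradient saliency in PyTorch, which asks "which input pixels most influenced this answer?" — the model and tensor names are placeholders.

```python
import torch

def saliency(model, x, target_class):
    """Return per-pixel |d score / d input| for one image tensor (C, H, W)."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x.unsqueeze(0))[0, target_class]  # logit of the class we're asking about
    score.backward()
    return x.grad.abs()  # large values = pixels that most influenced the result
```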


milleniumsentry t1_is13giv wrote

I honestly think this will be a step in the right direction. Not for prompt sharing so much as for refinement. These networks will start off great at telling you "that's a hippo," "that's a potato"... but what happens when someone wants to create a hippotato?

I think without some sort of tagging/self-reference, the data runs the risk of self-reinforcement, since the main function of the task is to bash a few things together into something else. At what point will it need extra information so that it knows: yes, this is what they wanted, this is a good representation of the task?

A tag-back loop would be phenomenal. Imagine you ask for a robotic cow with an astronaut friend. Some of those images will be lacking robot features, some won't look like cows, etc. Ideally, your finished piece would be tagged as well, but perhaps the astronaut is missing, or another part of the initial prompt request. By filtering out tags that weren't part of the prompt, the two sets can be compared for a soft 'success' rate, something like the sketch below.
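A hedged sketch of that soft success rate in Python: it assumes you've already run some tagger/classifier over the output image to get `detected_tags` (that tagger isn't shown here), and the example tag lists are made up for the robotic-cow prompt.

```python
def soft_success_rate(prompt_tags, detected_tags):
    """Compare tags requested in the prompt with tags detected in the output."""
    requested = {t.lower() for t in prompt_tags}
    found = {t.lower() for t in detected_tags}
    missing = requested - found  # e.g. the astronaut that never showed up
    return 1.0 - len(missing) / len(requested), missing

# Tags pulled from the prompt vs. tags a tagger saw in the generated image.
rate, missing = soft_success_rate(
    ["robot", "cow", "astronaut"],
    ["robot", "cow", "field"],
)
print(rate, missing)  # 0.67 and {'astronaut'}
```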


milleniumsentry t1_irxa3sg wrote

No no. It only tells you what prompts it would use to generate a similar image. There is no actual prompt data accessible in the image/metadata. With millions of seeds and billions of word combinations, you wouldn't be able to reverse-engineer it.

I think having an embed for those interested would be a great step. Then you could just read the file and go from there.
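For what that embed could look like, here's a sketch using Pillow's PNG text chunks: write the prompt and seed into the file on save, read them back later. The key names "prompt" and "seed" are just examples, not an existing standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_prompt(img, path, prompt, seed):
    """Save a PNG with the prompt and seed embedded as text chunks."""
    meta = PngInfo()
    meta.add_text("prompt", prompt)
    meta.add_text("seed", str(seed))
    img.save(path, pnginfo=meta)

def read_prompt(path):
    """Return the embedded prompt, or None if nothing was embedded."""
    return Image.open(path).info.get("prompt")
```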
