Comments

Username912773 t1_j94x2hm wrote

Ok what’s your point? How do you propose to fix this?

28

A1-Delta t1_j955sgx wrote

I’m not sure I’m following you. Are you concerned that machine learning models aren’t easily customizable?

Is your trouble with the fundamental concept of transfer learning, that data selection and preparation is difficult, that convolutional neural networks are “black boxes”, or something else?

26

I_like_sources OP t1_j9565gb wrote

Good questions. Machine learning models are usually black boxes that either work as expected or don't.

There is no fine-grained tweaking possible, only retraining, and the specification for good training data is vague at best.

That causes unnecessary frustration, wastes time, and amounts to the blind leading the blind.

The attitude is "just offer more and more data and hope the AI will figure things out; if not, offer even more." I am sure I am not the only one who sees the fault in this approach.

−32

A1-Delta t1_j9572gt wrote

Tweaking a CNN without retraining makes it sound like you want a no-code option for your machine learning.

Totally agree that model interpretability is a challenge, but there is a whole subsection of our field working on that. The fundamental design of deep learning sort of precludes what you’re talking about - at least given our current understanding of model interpretation. At best, a model may be trained to give options on certain aspects based on its input (we see this all the time), but that doesn’t sound like what you want. It sounds like you want to be able to target specific and arbitrary components of an output and intuitively modify the weights of all nodes contributing to that part of the output - presumably in isolation.
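
To give a flavor of what that subsection of the field actually does, here's a minimal gradient-saliency sketch (just an illustrative sketch in plain PyTorch with recent torchvision, not from any particular paper; the random tensor stands in for a real image):

```python
import torch
import torchvision.models as models

# Which input pixels most influenced the top prediction?
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image

logits = model(x)
logits[0, logits[0].argmax()].backward()  # backprop the top class score to the pixels

# Collapse the channel dimension into a per-pixel importance map, shape (1, 224, 224)
saliency = x.grad.abs().max(dim=1).values
```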

I think your challenge might lie with a fundamental lack of understanding of how these models actually work. I don’t mean that as a dig - they’re complicated. I just want to help bring you to a place of understanding about why the field is how you’re experiencing it.

Not a huge fan of massive edits to original posts after people have started responding. Your newly added recommendations put an onerous responsibility on any open source authors who might make their work public as a hobby rather than a career.

16

I_like_sources OP t1_j957v1f wrote

What are your contributions to enabling user customization of results without retraining?

>Not a huge fan of massive edits to original posts after people have started responding.

I am not here to make you happy.

−35

sharockys t1_j95agar wrote

It seems that someone is upset about not having a free open-box tool and demands that others work more, for free, for their own purposes.

17

I_like_sources OP t1_j95ckjv wrote

It's not about wanting a free open-box tool. It's about the lack of transparency and accountability in the AI community. Developers need to take responsibility for their creations and provide support and feedback to users, particularly in critical applications like healthcare or finance. By providing more transparency and support, we can improve the quality and reliability of AI systems, which benefits everyone in the field.

−19

I_like_sources OP t1_j95d7e4 wrote

The two are not closely related. You seem to argue that we should let AI do its thing, what it's good at, without interfering; yet keep in mind that results are for humans, not for computers.

−5

gopher9 t1_j95hafv wrote

Neural networks are black boxes by design; you get great performance in exchange for explainability. That does not mean, though, that you have no control over the result.

> Example Stable Diffusion. You don't like what the eyes look like, yet you don't know how to make them more realistic.

ControlNet lets you guide image generation: https://github.com/lllyasviel/ControlNet.
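
With the diffusers integration it looks roughly like this (an untested sketch; the checkpoint names are the commonly used ones and the edge map is a hypothetical pre-computed file):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Condition generation on an edge map so the composition (e.g. a face) stays fixed
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

edges = load_image("portrait_canny.png")  # hypothetical pre-computed Canny edge map
image = pipe("a portrait, detailed realistic eyes", image=edges).images[0]
image.save("out.png")
```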

> Example NLP. The chatbot does not give you logical answers? Try another random model.

Or give it a few examples and ask it to reason step by step. Alternatively, fine-tune it on your own examples. You can also teach the LLM to use external tools, avoiding the LLM for reasoning altogether.
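
A minimal sketch of the few-shot, step-by-step approach; the actual model call is left out since any chat/completion endpoint will do:

```python
# Few-shot + "think step by step" prompting: the worked example shows the
# model the reasoning format you want before asking your real question.
prompt = """Q: A shop has 3 apples and buys 5 more. How many does it have?
A: Let's think step by step. It starts with 3 and adds 5, so 3 + 5 = 8. Answer: 8.

Q: I read 12 pages a day for 4 days. How many pages in total?
A: Let's think step by step."""

# Send `prompt` to whichever LLM endpoint you use; the completion should
# continue the pattern and show its arithmetic instead of guessing.
```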

18

milleniumsentry t1_j95v74v wrote

I disagree. They are completely related, and relate directly to the black-box problem.

I wish I had found this article a month ago, because it sums up a lot of the 'AIs are unknowable' nonsense.

Being a black box is not an inherent quality of an AI; it's an inherent quality of a badly designed AI. Eventually, we will have methods that allow us to query why a particular result was given.

They are unknowable only because we have not designed them to be knowable. The tech is in its infancy. Give it time.

2

derek_ml t1_j960t8q wrote

> You seem to argue that we should let AI do its thing, what it's good at, without interfering

Not necessarily; it's just that we have seen good results by letting compute dominate over human interference. If other approaches worked better, we would be using them.

Maybe in a parallel universe creative approaches and human interference were valued over quick results, and eventually produced far better outcomes, but not in this one.

1

TransitoryPhilosophy t1_j969v46 wrote

These examples are just wrong, OP. For SD (example 1) there are multiple avenues for fine-tuning the model (textual inversion, DreamBooth, LoRA, and so on). I guess I think your base premise is incorrect.

12

Tawa-online t1_j96bpcn wrote

I agree with you somewhat, but this isn't Windows 11. We're working on some of the more experimental tech, rather than stable tech that has been out for years. All of the things you are asking for take time to implement, and without predefined systems for how to create these interactions it's pretty difficult to do.

6

photosandphotons t1_j96doro wrote

“Critical applications” will be paying for the type of support you expect, so you don’t need to worry about them. Taking feedback, maintaining FAQs, and documenting code updates are completely un-novel ideas that exist in industry SaaS products today, and they will exist for ML models. They just require you to, you know, actually pay for the developers’ time.

5

Battleagainstentropy t1_j96iltm wrote

Yes, this reads like an undergrad, or someone entirely outside the field, who thinks no one has thought of these concepts before. They are trying a little Cunningham’s Law: stating that nothing is being done in these areas and hoping that someone provides the correct information, rather than simply asking what is being done to address these issues.

12

W_O_H t1_j97zesm wrote

You can fine-tune Stable Diffusion, TTS, and NLP models. You can't expect authors to tend to every user's need; they gave you the tool and have no obligation to teach you how to use it. Yes, some models can't be fine-tuned, but in 99% of cases there is a different one that can be.
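
Fine-tuning doesn't even have to mean touching every weight; with something like the peft library you can train small adapters instead (a sketch, assuming a causal LM; the hyperparameters are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Wrap a pretrained model so only small low-rank adapter matrices get trained
model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # adapter rank (placeholder)
    lora_alpha=16,              # adapter scaling factor (placeholder)
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the weights
# ...then train on your own examples with an ordinary training loop or Trainer.
```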

If you really don't like what's out there, make your own; the papers exist.

7

Ferocious_Armadillo t1_j98sbqm wrote

I think I’m gonna have to respectfully disagree on a lot of this. You’re right that it largely comes down to the training data used. What jumps out at me in the examples you give and in your point (1), though, is that while you need a large amount of data, especially for networks as large as those you suggest, you also want to avoid overfitting your model to that data in the pursuit of accuracy, reliability, or whatever metric you use to judge how “good” your model is against some ground truth.

And while on the surface NNs can definitely seem like “black boxes” whose structure and workings we can’t accurately describe, that’s largely untrue. In fact, I would claim it’s precisely because we can design a NN’s structure (the number of layers, the connections between them, the inputs, weights, biases, activation functions, etc.) to best suit a given purpose that the field has come as far as it has and produced the NNs in your examples in the first place.
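
To make that concrete, here’s a toy sketch of those design levers in PyTorch; the sizes are arbitrary, and the dropout is one standard guard against the overfitting mentioned above:

```python
import torch.nn as nn

# Every line here is a deliberate design choice, not an unknowable black box:
# layer count, layer widths, activation functions, and regularization.
model = nn.Sequential(
    nn.Linear(784, 256),  # input width chosen to match the data (e.g. 28x28 images)
    nn.ReLU(),            # activation function: one design lever
    nn.Dropout(p=0.5),    # regularization to curb overfitting
    nn.Linear(256, 64),
    nn.ReLU(),
    nn.Linear(64, 10),    # output width = number of classes
)
```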

Sorry about the rant… I didn’t realize I get so passionate about NNs.

1

XecutionStyle t1_j99vve7 wrote

You point out very specific issues; count them all up, and that is influence.

2