
Ulfgardleo t1_j976icn wrote

we can do image segmentation, but segmentation uncertainties are a bit iffy. We can compute pixel-wise uncertainties, but that is not really what we want, because neighbouring pixels are not independent. E.g., if you have a detect-and-segment task, then with an uncertain detection your segmentation masks should reflect that sometimes "nothing" is detected and thus there is nothing to segment. I think we have not progressed there beyond Ising-model variations.
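For what it's worth, the pixel-wise version is easy to sketch. A toy example (shapes and the random "network output" are made up) of per-pixel predictive entropy, which by construction treats each pixel independently and therefore cannot express correlated uncertainty such as "the whole object may be absent":

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for a segmentation net's output: per-pixel class logits (H, W, C)
logits = rng.normal(size=(4, 4, 3))

# softmax over the class axis
p = np.exp(logits - logits.max(axis=-1, keepdims=True))
p /= p.sum(axis=-1, keepdims=True)

# pixel-wise predictive entropy: one uncertainty value per pixel,
# computed independently of its neighbours -- correlated events like
# "the detection failed, so the whole mask is empty" are invisible here
entropy = -(p * np.log(p)).sum(axis=-1)
print(entropy.shape)  # (4, 4)
```

Each value lies between 0 (one class certain) and log(C) (uniform over classes), but a joint, mask-level uncertainty would need a distribution over whole segmentations.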

9

Ulfgardleo t1_j91e648 wrote

Reply to comment by goolulusaurs in [D] Please stop by [deleted]

Due to the way their training works, LLMs cannot be sentient. They lack any way to interact with the real world outside of text prediction, and they have no way to commit knowledge to memory. They also have no sense of time or order of events, because they cannot remember anything between sessions.

If something cannot be sentient, one does not need to measure it.

2

Ulfgardleo t1_j84fokp wrote

Legally, the data is not public, and the fact that Facebook is actively trying to prevent scraping makes it very difficult to argue otherwise.

Legally, the data cannot be public. Users grant Facebook a non-exclusive license with limited rights to store and process the data. It does not follow that anyone who sees the shared images (for example) has the right to process them as well. If that were the case, the terms (https://www.facebook.com/terms.php 3.1) would have to state under which license the works are redistributed by Facebook.

2

Ulfgardleo t1_j84fdfl wrote

If it is illegal now, it would be even more clearly illegal then, because removing watermarks on its own typically violates the license of the material.

The question is essentially the same as "can I include GPLv3 code in my commercial closed-source repository if I remove the license headers and ensure that the code is never published?"

0

Ulfgardleo t1_j7yd02x wrote

You are right, but the point I was making is that in ML in general those are not of high importance, and this already holds for rather basic questions like:

"For your chosen learning algorithm, under which conditions does it hold that the expected risk, taken over all training datasets of size n, is monotonically non-increasing in n?"

One would think that this question is of rather central importance. Yet no one cares, and answering it is non-trivial already for linear classification. Statistics cares a lot about this question. While the math behind both fields is the same (all applied math is a subset of math, unless you ask people who identify with one of the two), the communities have different goals.
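To illustrate why this is non-trivial even for linear classification, here is a toy Monte-Carlo sketch (the setup and constants are my own choices, not from the thread) of the classical "peaking" phenomenon: the expected test error of a minimum-norm least-squares classifier is not monotone in the training-set size n and spikes around n = d:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                          # input dimension
mu = np.ones(d) / np.sqrt(d)    # class mean with ||mu|| = 1 (toy setup)

def risk(n, trials=300, n_test=500):
    """Monte-Carlo estimate of the expected test error of the
    minimum-norm least-squares classifier trained on n samples."""
    errs = []
    for _ in range(trials):
        y = rng.choice([-1.0, 1.0], size=n)
        X = y[:, None] * mu + rng.normal(size=(n, d))
        w = np.linalg.pinv(X) @ y           # min-norm least squares
        yt = rng.choice([-1.0, 1.0], size=n_test)
        Xt = yt[:, None] * mu + rng.normal(size=(n_test, d))
        errs.append(np.mean(np.sign(Xt @ w) != yt))
    return float(np.mean(errs))

r_small, r_peak, r_large = risk(5), risk(d), risk(4 * d)
# the expected risk peaks around n = d: adding data can make things worse
print(r_small, r_peak, r_large)
```

So even for this simple linear setup, the expected risk is not monotone in n; characterising exactly when monotonicity holds is the hard part.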

6

Ulfgardleo t1_j7y8hdg wrote

The difference between stats and ML is as large as the one between math and applied math. They aim to answer vastly different questions. In ML you don't care about identifiability, because you don't care which gene among 2 million causes a specific type of cancer; that is not what ML is about. In ML you also very rarely care about tail risk (you should) and almost never about calibration (you really should). And identifiability goes out of the window as soon as you use neural networks, which prevents you from interpreting your models.

22

Ulfgardleo t1_j77rx53 wrote

How should it plan? It does not have persistent memory, so it has no form of time-consistency: its memory starts at the beginning of a session and ends with the end of that session, and the next session knows nothing about the previous one.

It lacks everything necessary to have something like a plan.

3

Ulfgardleo t1_j6vzpgz wrote

There is very little research. They are a nice theoretical idea, but the concept is very constraining, and numerical difficulties make experimenting with them hell.

I am not aware of any active research, and I don't think they were ever really big to begin with.

−4

Ulfgardleo t1_j603u8t wrote

In my experience, this is never the bottleneck. Rastrigin does not cost much to evaluate; the real functions on which you would consider evolutionary algorithms do. I did research on speeding up CMA-ES, and in the end it felt like a useless exercise in matrix algebra for that reason.

Yes, in theory being able to speed up the matrix operations is nice, but doing stuff in higher dimensions (80 is kind of irrelevant computationally, even on a CPU) always has to fight against the O(1/n) convergence rate, in the dimension n, of all evolutionary algorithms.

So all this is likely good for is benchmarking these algorithms in a regime that is practically irrelevant for evolution.
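The dimension dependence shows up already in the simplest evolution strategy. A toy (1+1)-ES sketch (the step-size constants are ad hoc, not from any reference implementation) on the sphere function, whose per-evaluation progress rate shrinks roughly like 1/n with the dimension:

```python
import numpy as np

def one_plus_one_es(d, evals, seed=0):
    """(1+1)-ES with a crude 1/5th-success-rule flavour on the sphere
    function f(x) = ||x||^2. Returns log(f_start / f_end) / evals,
    i.e. the per-evaluation progress rate."""
    rng = np.random.default_rng(seed)
    x = np.ones(d)
    sigma = 0.3
    f = f0 = float(x @ x)
    for _ in range(evals):
        y = x + sigma * rng.normal(size=d)
        fy = float(y @ y)
        if fy <= f:
            x, f = y, fy
            sigma *= 1.5   # success: enlarge step size
        else:
            sigma *= 0.9   # failure: shrink step size
    return np.log(f0 / f) / evals

# progress per evaluation shrinks roughly like 1/d with the dimension:
r5, r40 = one_plus_one_es(5, 4000), one_plus_one_es(40, 4000)
print(r5, r40)  # r5 is noticeably larger than r40
```

This is why faster covariance updates only help so much: the number of function evaluations needed grows with the dimension regardless.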

5

Ulfgardleo t1_j1w3qhc wrote

I am not sure about you, but I can identify a significant fraction of AI paintings with high confidence. There are still significant errors in paper/material texture, like "the model has not understood that canvas threads do not swirl", "this hand looks off", or "this eye looks wrong".

(All three examples are visible in the painting test above.)

1