danja t1_j8tes8q wrote
Reply to comment by crimson1206 in Physics-Informed Neural Networks by vadhavaniyafaijan
I don't quite see how approximation theorems aren't relevant to approximation problems. I'm not criticising the post; I just thought your response was a bit wide of the mark. Not much fun.
danja t1_j8nerv0 wrote
Reply to comment by crimson1206 in Physics-Informed Neural Networks by vadhavaniyafaijan
What's a normal NN? How about https://en.wikipedia.org/wiki/Universal_approximation_theorem ?
How efficiently it can is another matter. Perhaps there's potential for an activation function based somewhere around Chebyshev polynomials that would predispose the net toward sinusoids.
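To illustrate why Chebyshev polynomials and sinusoids are such close relatives (toy code, all names my own): the identity T_n(cos θ) = cos(nθ) means a unit with T_n as its activation is, on [-1, 1], effectively computing a cosine of n times the "angle" of its input.

```python
import math

def chebyshev_T(n, x):
    """Chebyshev polynomial of the first kind, via the recurrence
    T_0 = 1, T_1 = x, T_{n+1} = 2x*T_n - T_{n-1}."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# The identity T_n(cos theta) = cos(n*theta) is the link to sinusoids:
theta = 0.3
for n in range(6):
    assert abs(chebyshev_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-9
```

So in principle a layer of such units already "speaks" cosine; whether that helps training in practice is the open question.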
danja t1_j26n81j wrote
https://add.org/adhd-test/ - it's only a rough screening tool. If you tick enough boxes, you should maybe talk to a medical professional. There's a lot of research behind it, to the extent that the WHO published it.
danja t1_j225vqs wrote
Try the Adult ADHD test.
danja t1_j1vytz5 wrote
Reply to [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Really, really good work doing this.
Again, make it stop at 10.
I did slightly better on literature, even though I'm less familiar with the writers than the painters. I've never read any Nathaniel Hawthorne.
I didn't take notes, but one - maybe Hemingway - was grammatically awful yet worked really well as a 'poetic' statement. Obviously not AI.
danja t1_j1vwpoq wrote
Reply to [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Crit first - make it stop at 10!
Good work.
I was very surprised; I only tried the paintings. I'm a fan of art history, relatively familiar with styles, and can identify some painters because I recognise them. Or so I thought - wrong! I was closer to 50/50.
danja t1_j121q19 wrote
Reply to How to train a model to distinguish images of class 'A' from images of class 'B'. The model can only be trained on images of class 'A'. by 1kay7
You're a bit stuck, surely...? As far as the model is concerned, there is no class 'B'. Would mashing up images from 'A' be allowed? Random images? Noise?
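For what it's worth, this is the sort of mashing-up I mean - a toy sketch (the function name and numbers are my own), where a synthetic "not-A" sample is made by blending two class-'A' images and adding noise:

```python
import random

def synthetic_negative(img_a, img_b, noise=0.2):
    """Hypothetical 'not-A' sample: blend two class-'A' images
    (flat lists of floats in [0, 1]) and perturb with noise,
    clamping the result back into [0, 1]."""
    alpha = random.random()  # random mixing coefficient
    return [
        min(1.0, max(0.0,
            alpha * p + (1 - alpha) * q + random.uniform(-noise, noise)))
        for p, q in zip(img_a, img_b)
    ]
```

Whether such samples stand in usefully for a real class 'B' is exactly the open question, of course.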
danja t1_iuswrq7 wrote
Get a spoonful of sugar and dump it in a pile on the left-hand plate. Close the lid. Wait until the smoke has subsided, lift, and see the wonder! (My mother has a virtually identical one.)
danja t1_iusuuhr wrote
Reply to comment by DeepGamingAI in [D] What are the benefits of being a reviewer? by Signal-Mixture-4046
Nicely put!
danja t1_iusumf1 wrote
It gives you different perspectives.
I've only been on a handful of program committees, but I spent a couple of years reviewing tech books for a publisher. You use the word 'responsibility' - yeah, hold that. Having to look hard and critically at what people have done is really challenging but very educational. The material may cover things you're not sure about, so you have to get yourself up to speed to do it justice.
It can be a nightmare - borderline cases are painful.
You see what works and what doesn't, for the subject matter but also for the write-up, which is your main interface.
I'm not in academia so it isn't necessary for me, but I'm pretty sure I'd make a better paper now than before reviewing.
Also, it's good for the CV :)
danja t1_isvn3bv wrote
Reply to comment by Alpacaofvengeance in How is the human gut microbiome established in infancy or earlier on? by molllymaybe
Nah, that doesn't make any sense. Why should the first things in your gut be the best?
A course of antibiotics will hammer the bacteria, and a different set will surely grow back. Over the course of, say, a year, you're bound to encounter critters better suited than the last lot.
Also, faecal transplants.
danja t1_isfxrfl wrote
Reply to [D] Could diffusion models be succesfully trained to reverse distortions other than noise? by zergling103
Seems like there are maybe three or more distinct problems here. Noise is one. Then, for your 'simple' list: most of those are the result of direct non-linear transformations, and I'd imagine an old-school mix of convolution and traditional neural nets could come up with their inverses fairly efficiently. The 'complex' list - hmm, the word Deep springs to mind...
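To illustrate the 'direct non-linear transformation' point with a toy example of my own choosing (gamma compression): a pointwise distortion like this has an exact closed-form inverse, which is the kind of mapping a small net should be able to approximate cheaply.

```python
def gamma_distort(x, g=2.2):
    """Pointwise non-linear distortion: gamma compression of a
    pixel value x in [0, 1]."""
    return x ** g

def gamma_restore(y, g=2.2):
    """Exact inverse of gamma_distort - the kind of target a small
    network could learn directly from (distorted, clean) pairs."""
    return y ** (1.0 / g)
```

Blur or JPEG artifacts are harder because they throw information away; that's where the diffusion-style machinery earns its keep.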
danja t1_j8tgfpo wrote
Reply to Physics-Informed Neural Networks by vadhavaniyafaijan
I like it. On a meta level, giving the machine a bit of a priori knowledge about the shape of things to come makes a lot of sense.
When a self-driving car hits an obstacle, both will be obeying mostly Newtonian mechanics.
Effectively embedding that knowledge (the differential equations) might make the system less useful for other applications, but it should very cheaply improve its chances on a lot of real-world problems.
Robotics is largely done with PID feedback loops. A bit more understanding of the behaviour of springs etc. should help a lot. Quite possibly in other domains too, though it's hard to know where such things apply.
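For context, a PID controller really is just three terms on the error signal - here's a minimal sketch (gains and names are illustrative, not from any particular library):

```python
class PID:
    """Minimal PID controller: proportional + integral + derivative
    terms on the error between setpoint and measurement."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """One control step; dt is the time since the last step."""
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

The point being: there's no model of the plant in there at all, which is exactly the gap a bit of physics-informed learning could fill.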