danja

danja t1_j8tgfpo wrote

I like it. On a meta level, giving the machine a bit of a priori knowledge about the shape of things to come makes a lot of sense.

When a self-driving car hits an obstacle, both car and obstacle will mostly obey Newtonian mechanics.

Effectively embedding that knowledge (the differential equations) might make the system less useful for other applications, but should very cheaply improve its chances on a lot of real-world problems.
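
To make that concrete, here's a minimal sketch of one way the embedding can work: a physics residual added to the data loss, PINN-style. I'm assuming PyTorch; the network shape, the free-fall ODE and the loss weighting are illustrative choices on my part, not anything from the paper.

```python
import torch

# Hypothetical sketch: fit a network x(t) to trajectory data while also
# penalising violation of Newton's second law for free fall, x''(t) = -g.
# Network size, constants and loss weighting are illustrative assumptions.
g = 9.81
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

def physics_residual(t):
    # t: (N, 1) collocation points where the ODE should hold
    t = t.clone().requires_grad_(True)
    x = net(t)
    dx = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    d2x = torch.autograd.grad(dx.sum(), t, create_graph=True)[0]
    return d2x + g  # ~0 wherever the network respects x'' = -g

def loss(t_data, x_data, t_colloc):
    data_term = torch.mean((net(t_data) - x_data) ** 2)
    physics_term = torch.mean(physics_residual(t_colloc) ** 2)
    return data_term + 0.1 * physics_term  # weighting chosen arbitrarily
```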

Robotics is largely done with PID feedback loops, as in the sketch below. A bit more built-in understanding of the behaviour of springs and the like should help a lot, and quite possibly in other domains too - it's hard to know in advance where such things apply.
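
For comparison, the PID loop is about as simple as control gets - a rough sketch, with placeholder gains and timestep:

```python
class PID:
    """Minimal PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. pid = PID(kp=1.0, ki=0.1, kd=0.05, dt=0.01)
#      u = pid.update(target, sensor_reading)   # gains here are not tuned
```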

1

danja t1_j1vytz5 wrote

Really, really good work doing this.

Again, make it stop at 10.

I did slightly better on literature, even though I am less familiar with the writers than with the painters. Never read any Nathaniel Hawthorne.

I didn't take a note - maybe it was Hemingway - but one was grammatically awful, yet worked really well as a 'poetic' statement. Obviously not AI.

1

danja t1_iusumf1 wrote

It gives you different perspectives.

I've only been on a handful of program committees but spent a couple of years reviewing tech books for a publisher. You use the word 'responsibility' - yeah, hold that. Having to look hard at what people have done, critically, is really challenging but very educational. The work may cover things you aren't sure about, so you have to get yourself up to speed to do it justice.

It can be a nightmare - borderline cases are painful.

You see what works and what doesn't - for the subject matter, but also for the write-up, which is your main interface.

I'm not in academia so it isn't necessary for me, but I'm pretty sure I'd write a better paper now than before reviewing.

Also, it looks good on the CV :)

5

danja t1_isvn3bv wrote

Nah, that doesn't make any sense. Why should the first things in your gut be the best?

A course of antibiotics will hammer the bacteria, and a different set will surely grow back. Over the course of, call it, a year, you're bound to encounter critters that are better suited than the last lot.

Also, faecal transplants.

2

danja t1_isfxrfl wrote

Seems like there are maybe three or more tangential problems here. Noise is one. Then, for your 'simple' list, most of those are the result of direct non-linear transformations; I would imagine an old-school mix of convolution and traditional neural nets could come up with their inverses fairly efficiently (see the sketch below). For the 'complex' list - hmm, the word Deep springs to mind ...
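
To give a rough idea of that 'old-school mix': a small 1-D conv front end feeding a dense layer, trained to map the transformed signal back to the original. Assuming PyTorch; the layer sizes and signal length (256 samples) are guesses, not something I've tested on these particular problems.

```python
import torch

# Rough sketch: conv layers to pick up local structure, then a plain
# linear layer, trained to invert a non-linear distortion of a signal.
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Conv1d(16, 16, kernel_size=5, padding=2),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(16 * 256, 256),
)

def train_step(optimizer, y_transformed, x_original):
    # y_transformed: (batch, 1, 256) distorted signal
    # x_original:    (batch, 256) clean target to recover
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(y_transformed), x_original)
    loss.backward()
    optimizer.step()
    return loss.item()
```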

0