ktpr
ktpr t1_jdnjj49 wrote
Reply to comment by Aristocrafied in The two retinas are tied/linked together in the brain. Are they tied 1:1, so that each retinal point corresponds to the same retinal point in the other eye? I.e., each retinal point from one eye shares the same binocular neuron with its counterpoint in the other eye? by ch1214ch
To add on to this data point: my color perception in my right and left eyes is slightly different, particularly around red hues. So there isn't a 1:1 overlap between the same retinal points.
ktpr t1_j9tj52p wrote
Reply to [D] A funny story from my interview by nobody0014
Did you get the job?
ktpr t1_j9jmqq7 wrote
I feel like ML boosters have recently been coming to this subreddit, making large claims, and then using the ensuing discussion, time, and energy of others to correct their clickbait content at our expense.
ktpr t1_j859ruq wrote
Reply to comment by endless_sea_of_stars in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Or, click here to auto-cite this paper to learn more about number 14!
ktpr t1_j859ne1 wrote
Reply to comment by Trakeen in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Imagine that!
ktpr t1_j8082yu wrote
Reply to comment by mhornberger in North American companies notch another record year for robot orders by darth_nadoma
There are many proposals for funding UBI; you just have to look. One big idea is to separate the concept of wealth from money and use negative taxation. See: https://citizen-network.org/library/how-to-fund-a-universal-basic-income.html
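A minimal sketch of the arithmetic behind a negative income tax (the threshold and rate here are hypothetical illustrations, not figures from the linked proposal):

```python
def net_income(gross: float, threshold: float = 30_000, rate: float = 0.5) -> float:
    """Negative income tax: below the threshold you receive a subsidy
    proportional to the shortfall; above it you pay tax on the excess.
    Threshold and rate are made-up example values."""
    if gross < threshold:
        return gross + rate * (threshold - gross)  # subsidy tops up low incomes
    return gross - rate * (gross - threshold)      # ordinary tax above threshold

print(net_income(0))       # 15000.0 -- a guaranteed floor, i.e. the basic income
print(net_income(30_000))  # 30000.0 -- break-even point
print(net_income(60_000))  # 45000.0 -- net payer
```

The subsidy side of the same schedule acts as the UBI floor, funded by the tax side of the formula.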
ktpr t1_j807wt8 wrote
Reply to comment by arckeid in North American companies notch another record year for robot orders by darth_nadoma
We'll have something, or else we'll all end up with nothing.
ktpr t1_j7ho1vo wrote
Reply to comment by st8ic in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
They don't care that much about what ChatGPT will do to search. They care about the advertising that ChatGPT users won't be seeing.
ktpr t1_j6t8124 wrote
It'll look like something that you can't start preparing for right now because a lot of it hasn't been invented yet.
ktpr t1_j6nmsol wrote
Reply to comment by beanhead0321 in [D] Have researchers given up on traditional machine learning methods? by fujidaiti
Did they claim the traditional ML explained the features engineered by the DL? If so, how did they explain the units of the feature variables?
ktpr t1_j3usyfe wrote
Reply to comment by Cheap_Meeting in [D] Found very similar paper to my submitted paper on Arxiv by [deleted]
That raises the question: what are good ways to use a preprint server to further your academic career?
ktpr t1_j3rxi2v wrote
Reply to comment by DevFRus in [D] Found very similar paper to my submitted paper on Arxiv by [deleted]
What's the right way to use arXiv?
ktpr t1_j2pmszg wrote
Post to /r/PhD. There's a lot more knowledge there that's specific to your case.
ktpr t1_j2d9hre wrote
Reply to comment by designer1one in [R] 2022 Top Papers in AI — A Year of Generative Models by designer1one
I'd like to see more progress on data-to-text generation.
ktpr t1_j1na3la wrote
What is a reviewer blacklist?
ktpr t1_j1cb1nd wrote
Reply to comment by sanman in [D] When chatGPT stops being free: Run SOTA LLM in cloud by _underlines_
I suspect they’ll move towards paid tiers when the popularity goes down. Right now they’re getting a ton of interesting and rich data for free from going viral. But when that eventually fades they’ll want to continue generating some kind of value from it.
ktpr t1_j19lwj0 wrote
Reply to comment by Craksy in [D] Different types of pooling in Neural Nets by Difficult-Race-1188
Should the subreddit ban Medium links?
ktpr t1_j14ga2m wrote
Why don't you provide paper references for these types of pooling so that we can see them in context and in action?
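For context, a minimal sketch of two common pooling variants (max and average pooling over non-overlapping 1D windows). This is an illustrative toy with a hypothetical `pool1d` helper, not code from the linked post:

```python
import numpy as np

def pool1d(x: np.ndarray, window: int, mode: str = "max") -> np.ndarray:
    """Slide a non-overlapping window over x and reduce each chunk."""
    chunks = [x[i:i + window] for i in range(0, len(x) - window + 1, window)]
    reduce = np.max if mode == "max" else np.mean
    return np.array([reduce(c) for c in chunks])

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0])
print(pool1d(x, 2, "max"))   # [3. 5. 4.]   -- keeps the strongest activation
print(pool1d(x, 2, "mean"))  # [2.  3.5 2.] -- smooths the signal
```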
ktpr t1_j0eo896 wrote
Since doctors also have to "trace the[ir] final answer back to the original sources" and the contexts of the case, how does this help doctors who must do the same due diligence either way?
ktpr t1_j09064e wrote
Just making sure here, this isn’t a published conference paper that went through peer review, correct?
ktpr t1_iz2te98 wrote
Reply to comment by Nameless1995 in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
Impressive. Also, the latest multi-month appointment was nearly 40 years ago. Boulder of salt here.
ktpr t1_iz2eya1 wrote
Reply to comment by katprop in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
If he presented those extrapolations at a psychology or neuroscience conference, he would be laughed out of the room. World-class expertise in one area does not translate to informed speculation in another.
ktpr t1_iy3v2i1 wrote
Reply to comment by imyourzer0 in [D] What method is state of the art dimensionality reduction by olmec-akeru
Unfortunately, deeply understanding your problem and its relationship to prior algorithms is a lot more work than just telling someone that you applied a SOTA algorithm and got decent results.
edit - an apostrophe
ktpr t1_ixhgc37 wrote
Thanks for sharing! Are you in a PhD program?
ktpr t1_jeco4so wrote
Reply to comment by cathie_burry in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
I feel like a lot of folks are missing this point. They're retraining on ChatGPT output or LLaMA-derived output and assuming they can license the result as MIT or some such.