extracoffeeplease t1_j54rlkh wrote
When I took my first job at a small startup working on food pairing, they were doing exactly the same thing. Flavor pairing by "has the same aromas" sucks.
It's much better to look at which ingredients are used together a lot in recipes, much like "users who watched video X also watched video Y".
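A minimal sketch of that co-occurrence idea, using invented toy recipes rather than any real pairing dataset: count how often two ingredients appear in the same recipe, then rank partners for a given ingredient, the same pattern as "users who watched X also watched Y".

```python
# Toy co-occurrence-based ingredient pairing (assumed example data,
# not the startup's actual pipeline).
from collections import Counter
from itertools import combinations

recipes = [
    {"tomato", "basil", "mozzarella"},
    {"tomato", "basil", "garlic"},
    {"strawberry", "basil", "cream"},
    {"tomato", "garlic", "onion"},
]

# Count every unordered ingredient pair that co-occurs in a recipe.
pair_counts = Counter()
for recipe in recipes:
    for a, b in combinations(sorted(recipe), 2):
        pair_counts[(a, b)] += 1

def top_pairings(ingredient, k=3):
    """Rank partner ingredients by how often they co-occur with `ingredient`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == ingredient:
            scores[b] += count
        elif b == ingredient:
            scores[a] += count
    return scores.most_common(k)

print(top_pairings("tomato"))  # e.g. [('basil', 2), ('garlic', 2), ('mozzarella', 1)]
```

A real system would normalize for ingredient popularity (e.g. PMI) so that staples like salt don't dominate every ranking, but the raw counts already capture the "used together a lot" signal.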
extracoffeeplease t1_iy0pkp7 wrote
Reply to comment by Deep-Station-1746 in [R] QUALCOMM demos 3D reconstruction on AR glasses — monocular depth estimation with self supervised neural network processed on glasses and smartphone in realtime by SpatialComputing
Yeah, they totally didn't show the application? People have been doing 3D mesh reconstruction with deep learning for a while now.
extracoffeeplease t1_j8nqdl8 wrote
Reply to [P] From “iron manual” to “Iron Man” — Augmenting GPT for fast editable memory to enable context aware question & answering by skeltzyboiii
So IIUC this searches text first, then adds the results to the prompt as input to the LLM. Now, for the text search, why use vector search and not Elasticsearch, or both? The reason I'm asking is that I've seen vector search issues pop up when your data is uncommon and hence badly embedded, for example when searching for a unique name or a weird token (for example, P5.22.a.03), whereas classic text search can find that exact token.
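A minimal sketch of the hybrid idea behind the question, with toy documents and a stand-in hashing "embedding" (not the post's actual implementation or a real model): blend vector similarity with an exact-token match so that rare tokens like P5.22.a.03 are still found even when their embeddings are poor.

```python
# Hybrid retrieval sketch: dense (vector) score + sparse (exact token) score.
# Assumed toy data; embed() is a hashing-trick stand-in for a real embedding model.
import numpy as np

documents = [
    "Install procedure for valve P5.22.a.03 in the cooling loop",
    "General overview of the cooling system architecture",
    "Maintenance schedule for all valves and pumps",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: bag-of-hashed-tokens, unit-normalised.
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_vectors = np.stack([embed(d) for d in documents])

def hybrid_search(query: str, alpha: float = 0.5):
    """Score = alpha * cosine similarity + (1 - alpha) * exact-token overlap."""
    q_vec = embed(query)
    dense = doc_vectors @ q_vec  # cosine similarity, since vectors are unit-normalised
    q_tokens = set(query.lower().split())
    sparse = np.array([
        len(q_tokens & set(d.lower().split())) / max(len(q_tokens), 1)
        for d in documents
    ])
    scores = alpha * dense + (1 - alpha) * sparse
    return sorted(zip(scores, documents), reverse=True)

for score, doc in hybrid_search("P5.22.a.03"):
    print(f"{score:.2f}  {doc}")
```

In practice the sparse side would be BM25 from something like Elasticsearch rather than raw token overlap, but the point is the same: the exact-match path rescues queries whose tokens the embedding model has effectively never seen.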