SaifKhayoon t1_j6n9kb2 wrote
Nah, researchers haven't given up on traditional machine learning methods! They combine them with deep learning in lots of places, like image classification, speech recognition, and recommender systems.
Plus, traditional methods can be a better fit for some tasks, like when you have a small dataset, need an explainable model, or want real-time predictions.
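For instance, here's a minimal scikit-learn sketch (my own illustration, not from the comment; the iris dataset is just a stand-in for any small dataset) of a traditional model that trains almost instantly and produces rules you can actually read:

```python
# A small "traditional ML" example: a shallow decision tree on a tiny dataset.
# It needs little data, trains in milliseconds, and its learned rules are
# directly inspectable, unlike most deep nets.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)          # only 150 samples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree))                   # human-readable if/else rules
```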
SaifKhayoon OP t1_j6g4dg3 wrote
Reply to comment by Leonos in Ever wanted to learn AI and machine learning? (It's really statistics in disguise) - Guide for visual learners presented in the form of a reactive web application by SaifKhayoon
Here's a list of example sets to try (a small Python sketch of these identities follows the list):
>A ∪ B
A union B combines both circles
>A' = {1, 2, 3} complement
Demonstrates the complement of a set (A' = all elements of the universal set not in A)
>(A ∪ B)' = complement of union of A and B
De Morgan's law: the complement of a union is the intersection of the complements, (A ∪ B)' = A' ∩ B'
>A ∩ B = ∅
Demonstrates disjoint sets (sets with no common elements)
>A ∪ B = U
A and B together make up the universal set U
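To check these identities concretely, here's a minimal Python sketch using the built-in `set` type; the specific elements of U, A, and B are made-up examples, not taken from the app:

```python
# Made-up example sets for illustration only.
U = {1, 2, 3, 4, 5, 6}          # universal set
A = {1, 2, 3}
B = {3, 4, 5, 6}

union = A | B                    # A ∪ B combines both sets
complement_A = U - A             # A' = elements of U not in A
assert complement_A == {4, 5, 6}

# De Morgan's law: (A ∪ B)' == A' ∩ B'
assert U - (A | B) == (U - A) & (U - B)

# Disjoint sets: A ∩ B = ∅ means no common elements (false here, they share 3)
print(A & B == set())            # False

# A and B together make up the universal set: A ∪ B = U
print(A | B == U)                # True
```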
SaifKhayoon OP t1_j6dfnuz wrote
Reply to Ever wanted to learn AI and machine learning? (It's really statistics in disguise) - Guide for visual learners presented in the form of a reactive web application by SaifKhayoon
The title is a bit inaccurate: besides simplifying the whole field down to just "statistics", I grouped AI in with machine learning, when AI is actually a humongous field encompassing everything from video game enemies to Roombas. Machine learning is a field within AI, and deep learning, which is the most impressive part, is a subfield of that.
SaifKhayoon t1_j69e65n wrote
They had a problem sourcing labeled 3D video training data; you can tell this tech is still early from the shield in the bottom-right example.
They could generate labeled 3D environments from 2D images using InstantNGP and GET3D together with LAION's labeled dataset of 5.85 billion CLIP-filtered image-text pairs to build a useful training set, because the current approach relies on a workaround: training only on text-image pairs and unlabeled videos, due to the lack of labeled 3D training data.
SaifKhayoon t1_jb54pnw wrote
Reply to [R] We found nearly half a billion duplicated images on LAION-2B-en. by von-hust
Is this why some checkpoints / safetensors produce better results than Stable Diffusion's 1.5 and 2.1 weights?
Was LAION-2B used to train the base model shared by all other "models"/weights?