NovelspaceOnly

NovelspaceOnly t1_jdqx9t2 wrote

This might sound a bit corny. I try to keep a sparse, BFS-style understanding of the field at any given time and go DFS on the topics I'm interested in, like interpretability, NLP, and GNNs.

Four things I think are important: contributing to open source, joining Discord communities, at the very minimum "skimming" papers (reading abstracts, conclusions, and charts), and topic modeling researchers' GitHub repos that I find on Papers with Code. As a fifth: ML Twitter, if you can maintain your sanity.
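To make the "topic model the repos" idea concrete: a real pipeline would run LDA over README texts (e.g. with gensim or scikit-learn), but a stdlib-only sketch of the basic idea, using plain term frequencies over made-up placeholder READMEs, looks like this:

```python
# Crude, stdlib-only stand-in for topic modeling a set of repo READMEs.
# A real pipeline would use LDA (gensim or scikit-learn); here we just
# surface each document's most frequent non-stopword terms.
# The README strings are hypothetical placeholders.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "with", "is"}

def top_terms(text, k=3):
    """Return the k most frequent non-stopword tokens in a document."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

readmes = {
    "repo-a": "Graph neural networks for molecule property prediction. GNN layers, message passing.",
    "repo-b": "Transformer interpretability tools: attention maps, probing, attention rollout.",
}

for name, text in readmes.items():
    print(name, top_terms(text))
```

Even this naive version is enough to cluster repos by rough subject before deciding which ones deserve a deep read.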

The sixth, and the most important one, is a strong math background; IMO it's the single biggest factor in being able to generalize to new research. Grok linear algebra, probability, and calculus. Mostly linear algebra, though, because tensor notation really helps with probability and functional analysis, and a lot of physics can be understood through the lens of tensor analysis and probability.
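The point about tensor notation is quite literal in code: index expressions like y_i = A_ij x_j translate one-to-one into NumPy's `einsum` (a small illustration, not tied to any particular paper):

```python
# Index (tensor) notation mapping directly into code via numpy.einsum.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, -1.0])

# y_i = A_ij x_j  (matrix-vector product; repeated index j is summed)
y = np.einsum("ij,j->i", A, x)

# t = A_ii  (trace: repeated index i is summed)
t = np.einsum("ii->", A)

# C_ij = x_i x_j  (outer product; no repeated index, no summation)
C = np.einsum("i,j->ij", x, x)
```

Once the Einstein summation convention clicks, most linear-algebra-heavy derivations in papers can be transcribed this way almost mechanically.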

1

NovelspaceOnly OP t1_jbsdr55 wrote

I have some preliminary generation scripts for SMILES chemical graphs, Feynman diagrams, and storytelling with interleaved images, plus scripts for testing compilation rates. Sorry for switching accounts; this one is logged in on my laptop lol.
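For the "compilation rate" idea applied to generated SMILES, a real check would parse each string with RDKit (`Chem.MolFromSmiles`) and count successes; as a rough, stdlib-only proxy (my own stand-in, not the actual scripts), one can at least reject strings with unbalanced brackets or unpaired ring-closure digits. The sample SMILES below are made-up examples:

```python
# Crude stdlib proxy for SMILES validity: check that () and [] are
# balanced and ring-closure digits appear an even number of times.
# A real validity check would use RDKit's Chem.MolFromSmiles; this
# only catches gross syntax errors.
from collections import Counter

def roughly_valid(smiles: str) -> bool:
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in smiles:
        if ch in "([":
            stack.append(ch)
        elif ch in ")]":
            if not stack or stack.pop() != pairs[ch]:
                return False
    if stack:
        return False
    ring_digits = Counter(ch for ch in smiles if ch.isdigit())
    return all(n % 2 == 0 for n in ring_digits.values())

def validity_rate(samples):
    """Fraction of generated strings passing the crude syntax check."""
    return sum(roughly_valid(s) for s in samples) / len(samples)

generated = ["c1ccccc1", "CC(=O)O", "CC(=O)O)", "C1CC"]  # last two malformed
print(validity_rate(generated))  # 0.5
```

The same pattern (generate, parse, count the fraction that parses) carries over to Feynman diagrams or any other structured output.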

1

NovelspaceOnly t1_j3a8jb6 wrote

IMHO absolutely not.

In your opinion, would you rather have an open-source community building AI through open debate and collaboration, or a closed system where only a small number of people have access to advanced AI technology and its development, potentially giving them an unfair advantage? Capitalism can also create divides between those who have access to resources and knowledge and those who do not, but an open and collaborative approach to AI research could help level the playing field and promote transparency and accountability.

1