evanthebouncy t1_ja8qwgw wrote

Hi, I have a PhD. Let me try an answer.

The tldr is that an undergraduate degree focuses on learning, while a graduate degree/PhD focuses on discovering.

Refer to this diagram that others have linked: https://www.reddit.com/r/PhD/comments/u65rnp/a_phd_explained_in_a_few_diagrams/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button

Long version below:

In an undergraduate degree, your primary job is learning. For the first 2 years, think of it as more difficult high school -- a general, broad education. In the last 2 years, you pick a major and focus heavily on its courses -- imagine taking 4 hours of biology classes every day. As in your previous education, your performance is evaluated on whether you score well on tests -- things the teacher knows the answers to ahead of time. You're acquiring the accumulated knowledge of humanity.

A graduate degree has two levels. You either do a masters (typically 2 yrs), or you continue after the masters to do a PhD (3+ more years on top). I'll explain the PhD first.

In a PhD program, your primary job is discovering. Unlike all previous education, in a PhD program nobody knows the answer ahead of time. Your "exam" is more like a class project on steroids: years of research testing a hypothesis (that nobody has thought of before) by running experiments, then writing up your findings in a scientific paper. Your paper is evaluated by a group of scientists who are experts in the field (this is called peer review). Once you've done enough original research, preferably with published papers, you write a thesis summarizing your original work and graduate.

You know the term "scientist," right? During and after a PhD is when you get to call yourself a scientist, as you'll have experienced what it's like to push the boundaries of human knowledge.


evanthebouncy t1_j6wpf34 wrote

I made a bet in 2019 to _not_ learn anything more about fiddling with NN architectures. It paid off. Now I just send data to a huggingface API and it figures out the rest.
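To make that concrete, here's a minimal sketch of what "just send data to a huggingface API" looks like, assuming the `transformers` library's `pipeline` interface with its default checkpoint (the specific task and input string are illustrative, not from my actual work):

```python
# A sentiment task handed to a pretrained model via the Hugging Face
# `pipeline` API -- no architecture fiddling required.
from transformers import pipeline

clf = pipeline("sentiment-analysis")  # downloads a default pretrained model
result = clf("This bet paid off nicely.")[0]
print(result["label"])  # "POSITIVE" or "NEGATIVE", with a confidence score
```

The point is that model choice, tokenization, and architecture are all hidden behind one call.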

What will change? What are my thoughts?

All well-identified problems become rat races. If there's a metric you can put on a problem, engineers will optimize it away. The comfort of knowing that what you're doing has a well-defined metric is paid for with the anxiety of a rat race where everyone is optimizing the same metric.

What do we do with this?

Work on problems that don't have a well-defined metric. Work with people. Work with the real world. Work with things that defy quantification, that are difficult to reduce to a mere number everyone agrees on. That way you have some longevity in the field.


evanthebouncy t1_j1zo0rp wrote

hey, I work on program synthesis, which is a form of neuro-symbolic reasoning. here's my take.

the word "neuro-symbolic" is thrown around a lot, so we first need to clarify which kind of work we're talking about. broadly speaking there are 2 kinds:

  1. neuro-symbolic systems where the symbolic system is _pre-established_, and the neural network is tasked with constructing symbols that can be interpreted in this preexisting system. program synthesis falls under this category: when you ask chatgpt/copilot to generate code, it generates python code, which is a) symbolic and b) readily interpretable in python.
  2. neuro-symbolic systems where the neural network is tasked with _inventing the system_. take for instance the ARC task ( https://github.com/fchollet/ARC ): when humans do these tasks, (it appears that) we first invent a set of symbolic rules appropriate for the task at hand, then apply those rules.
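to give a feel for category (1), here's a toy sketch of program synthesis: the symbolic system (python arithmetic expressions over `x`) is fixed in advance, and we search for a program consistent with input-output examples. real synthesizers replace the brute-force enumeration below with neural guidance or clever pruning; all names here are illustrative.

```python
# Toy enumerative program synthesis: find a small arithmetic expression
# over x that matches every given (input, output) example.
import itertools

def synthesize(examples, max_depth=2):
    """Enumerate small expressions until one fits all examples."""
    terms = ["x", "1", "2"]
    ops = ["+", "*", "-"]
    candidates = list(terms)
    for _ in range(max_depth):
        # grow the candidate pool by combining existing expressions with terms
        candidates += [f"({a} {op} {b})"
                       for a, b in itertools.product(candidates, terms)
                       for op in ops]
    for expr in candidates:
        # the "interpreter" for our symbolic system is just python's eval
        if all(eval(expr, {"x": x}) == y for x, y in examples):
            return expr
    return None

# find a program mapping 1->3, 2->5, 3->7 (i.e., something equivalent to 2*x + 1)
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```

the key property of category (1) is visible here: the generated artifact is a symbol string that a preexisting system (python) can interpret and check.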

I'm betting Demis is interested in (2), as the ability to invent and reason about symbols is crucial to intelligence. that said, we shouldn't understate the value of (1): reasoning in an existing symbolic system is immediately valuable (e.g. copilot).

a bit of self-plug: my recent paper studies how people invent and communicate symbolic rules using natural language https://arxiv.org/abs/2106.07824


evanthebouncy t1_j0d1op6 wrote

I work a lot on human-AI communication, here's my take.

The issue is our judgement (think value function) of what's good. It has less to do with what the AI can actually do, and more with how its output is judged by people.

Random blotches of color, shaped in an interesting way on a canvas, can pass as modern art. It's non-intrusive and fun to look at. A painting with less-than-perfect details, such as goblin hands with 6 fingers (as they often have in AI-generated art), isn't a big deal as long as the overall painting looks cool.

A musical phrase with 1 wrong note, one missed beat, one sound out of the groove sounds absolutely terrible. We expect music to uphold this high quality all the way through, all 5 minutes of it. No 'mistakes' are allowed, so any detail the AI gets 'wrong' is particularly jarring. You can mitigate some of the low-level errors by forcing the AI to produce music within a DSL such as MIDI, but the overall issue of cohesion remains.
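A quick sketch of what "forcing the AI into a DSL" buys you. Here music is a list of discrete (MIDI pitch, onset-in-beats) events, and we snap raw model output onto a scale and a tempo grid. The scale choice, grid size, and event format are my own illustrative assumptions, not a real system. This rules out low-level 'wrong note' errors, but says nothing about whether the piece coheres overall:

```python
# Snap raw (pitch, onset) events onto a C-major scale and a 16th-note grid.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes allowed in C major

def snap_pitch(midi_pitch):
    """Move a MIDI pitch to the nearest pitch belonging to C major."""
    for delta in (0, -1, 1, -2, 2):
        if (midi_pitch + delta) % 12 in C_MAJOR:
            return midi_pitch + delta
    return midi_pitch

def snap_onset(onset, grid=0.25):
    """Quantize an onset time (in beats) to a 16th-note grid."""
    return round(onset / grid) * grid

raw = [(60, 0.0), (61, 1.02), (66, 2.1)]  # model output: off-key, off-grid
clean = [(snap_pitch(p), snap_onset(t)) for p, t in raw]
print(clean)  # → [(60, 0.0), (60, 1.0), (65, 2.0)]
```

Every note is now 'legal', yet the phrase can still be musically incoherent -- which is exactly the remaining problem.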

Overall, generative AI lacks control and finesse over details, and it lacks logical cohesion. These are much bigger problems for music than for paintings.