
evanthebouncy t1_j1zo0rp wrote

hey, I work on program synthesis, which is a form of neuro-symbolic reasoning. here's my take.

the word "neuro-symbolic" is thrown around a lot, so we first need to clarify which kinds of work we're talking about. broadly speaking, there are two kinds.

  1. neuro-symbolic systems where the symbolic system is _pre-established_ and the neural network is tasked with constructing symbols that can be interpreted in this preexisting system. program synthesis falls under this category: when you ask chatgpt/copilot to generate code, they'll produce python code, which is (a) symbolic and (b) readily interpretable in python
  2. neuro-symbolic systems where the neural network is tasked with _inventing the system_. take, for instance, the ARC tasks ( https://github.com/fchollet/ARC ): when humans do them, it appears we first invent a set of symbolic rules appropriate for the task at hand, then apply those rules
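to make (1) concrete, here's a minimal sketch of program synthesis against a fixed symbolic system (python itself as the interpreter). the candidate grammar and examples are my own toy choices, not from any real system:

```python
# Category (1) sketch: the symbolic system (Python) is pre-established;
# we just search for a program the interpreter accepts that fits the
# given input-output examples. Toy illustration, not a real synthesizer.

def synthesize(examples, candidates):
    """Return the first candidate expression consistent with all examples."""
    for expr in candidates:
        f = eval("lambda x: " + expr)  # interpret in the preexisting system
        if all(f(x) == y for x, y in examples):
            return expr
    return None

# Hypothetical search space: small arithmetic expressions over x.
candidates = ["x + 1", "x * 2", "x * x", "x - 1"]
examples = [(2, 4), (3, 9)]  # want f(2)=4 and f(3)=9

print(synthesize(examples, candidates))  # "x * x" fits both examples
```

real synthesizers enumerate far larger grammars with pruning, but the key point is the same: correctness is checked by an interpreter that existed before the search started.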

I'm betting Demis is interested in (2); the ability to invent and reason about symbols is crucial to intelligence. that said, we shouldn't understate the value of (1): reasoning in an existing symbolic system is immediately valuable (e.g. copilot).
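for flavor, here's a toy version of (2) on an ARC-style task: search over compositions of candidate primitive rules until one explains the input/output grids. the primitives here are my own illustrative picks, not how humans (or any published system) actually do it:

```python
from itertools import product

# Category (2) sketch: "invent" a rule for an ARC-style task by searching
# compositions of primitive grid transforms until one explains the examples.
# The primitive set is an illustrative assumption.

def flip_h(g): return [row[::-1] for row in g]   # mirror left-right
def flip_v(g): return g[::-1]                    # mirror top-bottom
def rotate(g): return [list(r) for r in zip(*g[::-1])]  # 90 deg clockwise

PRIMITIVES = {"flip_h": flip_h, "flip_v": flip_v, "rotate": rotate}

def invent_rule(examples, max_depth=2):
    """Find a sequence of primitive names mapping each input to its output."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def apply(g, names=names):
                for n in names:
                    g = PRIMITIVES[n](g)
                return g
            if all(apply(i) == o for i, o in examples):
                return names
    return None

grid_in  = [[1, 2], [3, 4]]
grid_out = [[2, 1], [4, 3]]                # mirrored left-right
print(invent_rule([(grid_in, grid_out)]))  # ('flip_h',)
```

the hard part ARC is probing is exactly what this sketch assumes away: coming up with the right primitives in the first place.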

some self-plug: my recent paper studies how people invent and communicate symbolic rules using natural language: https://arxiv.org/abs/2106.07824


yazriel0 t1_j237x73 wrote

> we shouldn't understate the value of (1): reasoning in an existing symbolic system

of course. and (1) may be a good way to bootstrap (2)...

why aren't we seeing more (un)supervised learning on code, perhaps with handcrafted auxiliary tasks?

when will this loop exit? how much memory will this function allocate? etc. code annotated with these properties seems like a huge, underutilized dataset.
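one way to harvest such auxiliary labels is to just run the code: instrument snippets and record the property of interest as the training target. the instrumentation scheme and snippets below are hypothetical, purely to illustrate the idea:

```python
# Sketch: build (code, label) pairs for an auxiliary task by executing
# snippets, here labeling each with how many times its loop runs.
# Snippets are instrumented with a hypothetical tick() at the loop head.

def loop_iteration_label(snippet, budget=10_000):
    """Run the snippet; return its loop-iteration count, or None if it
    exceeds the budget (treated as 'may not terminate')."""
    counter = {"n": 0}
    def tick():
        counter["n"] += 1
        if counter["n"] > budget:
            raise TimeoutError
    try:
        exec(snippet, {"tick": tick})
    except TimeoutError:
        return None
    return counter["n"]

dataset = [
    "i = 0\nwhile i < 5:\n    tick()\n    i += 1",
    "while True:\n    tick()",  # never exits within the budget
]
labels = [loop_iteration_label(s) for s in dataset]
print(labels)  # [5, None]
```

a model trained to predict these labels from the raw source would be doing a crude form of the "when will this loop exit?" task above, with supervision generated for free.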

am i missing something? (yes, it's a lot of compute)
