
turnip_burrito t1_ja5y3hb wrote

I agree with all of this, but just to be a bit over-pedantic on one bit:

> Models can't speak or hear when they want to. It's just not part of their programming.

As you said, it's not part of their programming in today's models. In general, though, it wouldn't be too difficult to construct a new model that judges at each timestep, based on both the external stimuli and its internal hidden state, whether to speak/interrupt or keep listening. At first glance such a thing actually sounds trivial, something like the rough sketch below.
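Here's a toy PyTorch sketch of what I mean (all names hypothetical, not any existing system): a recurrent gate that, at every timestep, combines the incoming stimulus with its own hidden state and outputs a speak-or-listen decision.

```python
import torch
import torch.nn as nn

class TurnTakingGate(nn.Module):
    """Hypothetical sketch: per-timestep speak/listen decision driven by
    the external stimulus plus an internal recurrent hidden state."""

    def __init__(self, stimulus_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRUCell(stimulus_dim, hidden_dim)
        # Two logits: 0 = keep listening, 1 = speak/interrupt
        self.decision_head = nn.Linear(hidden_dim, 2)

    def forward(self, stimulus: torch.Tensor, hidden: torch.Tensor):
        hidden = self.rnn(stimulus, hidden)   # update internal state
        logits = self.decision_head(hidden)   # judge: speak or listen?
        return logits, hidden


# Toy usage: one decision per incoming stimulus frame (e.g. audio features).
if __name__ == "__main__":
    gate = TurnTakingGate(stimulus_dim=64)
    hidden = torch.zeros(1, 128)
    for _ in range(10):
        frame = torch.randn(1, 64)            # stand-in for a real stimulus
        logits, hidden = gate(frame, hidden)
        action = "speak" if logits.argmax(-1).item() == 1 else "listen"
```

Obviously the hard part is training it on real conversational turn-taking data, but the architecture itself is nothing exotic.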
