
he_who_floats_amogus t1_j893bfl wrote

Basically the answer is that it’s OpenAI’s tool and it’s their prerogative to implement it as they see fit. You don’t have any bargaining power to demand additional features or removal of constraints. Even if we take your perspective on safety as axiomatically correct, if the tool meets OpenAI’s goals despite excessive safety impositions, then it is working as designed. Abundance of caution is only a problem if it hampers OpenAI in fulfilling their own goals.

There are many possibilities as to the “why” here. It’s possible that the system is difficult to control with fine granularity, and that it’s logistically simpler for OpenAI to apply constraints in broad brush strokes to make sure they capture the behavior they want to restrict. That’s one high-level explanation among many.

5