
blueSGL t1_ivvarfl wrote

I think companies are going to be scared to be the first out of the gate with LLM tech for the public. Heavily neutered versions will go out first. Everyone is going to be scared of "personal assistant writes N word poem"* or similar ridiculous Gawker-type headlines, and some people will take access to such models as a challenge to create exactly those headlines. This is why I say it will be an upgrade on what we have now, but it will still have massive limitations in the name of 'safety' even though unfettered models can do much, much more.

This is why I think it will first 'go large' in business use cases, where companies can really put the blinkers on the thing and instruct employees that if it comes out with something incorrect, that's expected, it's 'beta', and there's a procedure to follow when it happens.

There will be a rise in very narrowly targeted walled-garden services with an LLM back end, but I doubt they will be used for anything 'general' because the scale and scope of the 'safety' problem will prevent it.

Edit: * shortened the title; no news outlet would go with something as verbose as "the personal assistant that writes poems about the N word". That would be a subheading at best.
