Fmeson t1_j57jmv9 wrote

I suppose you doubt that they have interesting models to release, not that they are willing to relax AI safety rules, but Google undoubtedly has interesting models. They have demonstrated their capability in the past with things like AlphaGo, and they have enormous amounts of computing resources, data, and brain power.

GPT-3 is not notable because OpenAI has ML tech no one else has, but because the resources needed to train such a model are beyond smaller labs. Google can undoubtedly manage it.


Fmeson t1_j49rakv wrote

Humans still had to tell AlphaZero what it was supposed to optimize for. We defined what the rules were and told it what outcomes we wanted.

If we want an AI moral system, we'll similarly have to define the rules and what outcomes we want.
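The point above can be sketched concretely. In a minimal toy setup (all names here are hypothetical, not AlphaZero's actual code), the "rules" and the "outcome we want" are both hand-written by humans; the agent only learns *how* to achieve the outcome, never *what* the outcome should be:

```python
# Toy sketch of what humans hand-specify for a game-playing agent.
# The reward function below encodes the human-chosen objective:
# winning is good (+1), losing is bad (-1), a draw is neutral (0).

def reward(winner, player):
    """Human-defined outcome signal for a finished game."""
    if winner is None:          # draw
        return 0
    return 1 if winner == player else -1

# The rules of the game are likewise hand-coded (toy stand-in):
def is_legal(move, occupied_squares):
    """Humans define legality; the agent never invents the rules."""
    return move not in occupied_squares

print(reward("white", "white"))   # winning player
print(reward("black", "white"))   # losing player
print(reward(None, "white"))      # draw
```

An AI "moral system" would face the same structure: someone has to write the analogue of `reward`, i.e., decide which outcomes count as good.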


Fmeson t1_j1o2q9k wrote

Google already has an AI division with some chops. Big corps are risk averse, but I do think Google can and will learn from OpenAI. They just need to find a way to do it that doesn't hurt their ad revenue.


Fmeson t1_ixd7op7 wrote

I don't personally think so. It's not obvious to me at what level similar personalities should show physical similarities, and it's very interesting that, for example, gender doesn't result in as much similarity.

It's also interesting to consider the cases where looking at imagery didn't result in similar fMRI scans despite similar personality.

With this sort of research, they may eventually be able to do a brain scan and tell you about your own personality, and that's wild.


Fmeson t1_iww9y5v wrote

If it's genuine, it won't tend to come across as patronizing. Think of it less as "making someone feel 'appreciated'" and more "give credit where credit is due".

After all, you can't force someone to feel a certain way; you can only control your own actions. If someone has some issue where they distrust even genuine expressions of appreciation, that's on them to work on.