Thorusss t1_jeavgqn wrote

Even if it is against the terms of service of ChatGPT, what are they going to do about it? There are no legal judgments on whether AI output is even copyrightable, and none on whether training on copyrighted material is fair use.

And OpenAI trained on a lot of copyrighted material, so they had better think twice about opening that can of worms.

The only thing they can try is to limit Google's access to ChatGPT's output, but good luck with that if they want it to remain available to the general public.


Thorusss t1_j9np66w wrote

There is something ironic about a Sci-Fi magazine rejecting a new technology.

I would prefer they simply choose the stories that are good, be they human-written, AI-assisted, or purely AI-written.

For now, human curation is still necessary for good results, on both the author and the publisher side.

With the next AI versions, it will probably be impossible to tell anyway.


Thorusss t1_j5xnzqv wrote

We allow humans to drive; we allow possibly emotionally upset, drugged, sensory-challenged, tired people distracted by a person/call/whatever to make judgments about the value of human life. Are we willing to go there?


Thorusss t1_izfuj6u wrote

>Central to many of our strategic planning techniques in Cicero is the idea of regularization towards human-like behavioral policies, to ensure CICERO's play remains roughly compatible with human play

That implies there could be more optimal strategies even within alliances with human players? Is there interest in exploring this and evolving the strategies beyond what humans have found so far, as happened with chess and Go? To see where a Cicero 2 could move the metagame?
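For readers unfamiliar with the quoted idea: one common way to regularize a policy toward human-like play is to penalize its KL divergence from a human "anchor" policy. This is only a generic sketch of that trade-off (the function names, toy values, and the specific λ parameterization here are my own illustration, not the paper's exact formulation): maximizing expected value minus λ times the KL divergence to the anchor has the closed-form solution π(a) ∝ anchor(a)·exp(Q(a)/λ).

```python
import math

def kl_regularized_policy(q_values, anchor, lam):
    """Closed-form maximizer of  E_pi[Q] - lam * KL(pi || anchor):
    pi(a) is proportional to anchor(a) * exp(Q(a) / lam)."""
    weights = [t * math.exp(q / lam) for q, t in zip(q_values, anchor)]
    total = sum(weights)
    return [w / total for w in weights]

# Toy example: action 0 has the higher value, but the human-like anchor
# strongly prefers action 1. A small lam mostly follows the values; a
# large lam keeps play close to the human anchor.
q = [1.0, 0.0]
anchor = [0.2, 0.8]
print(kl_regularized_policy(q, anchor, lam=0.1))   # close to value-maximizing
print(kl_regularized_policy(q, anchor, lam=10.0))  # close to the anchor
```

Turning λ down is exactly the "evolve beyond human play" direction the question asks about: the regularizer's pull toward human-like behavior weakens, and pure value maximization takes over.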


Thorusss t1_izfsiut wrote

>We're not really interested in building lying AIs

Why? Child psychology sees lying as an important developmental step in the theory of mind: the insight that knowledge is not universal.

In real-world applications, AI might encounter lies. Do you think these systems can deal with that as well when they are not themselves capable of lying? E.g., for planning you have to model the other side; how do you model lying successfully when you cannot lie yourself?


Thorusss t1_izfrw0p wrote

Another answer said that the chat history is not preserved beyond a certain length. Does Cicero track past cooperation/betrayal by other players somewhere else?


Thorusss t1_izda0mm wrote

Players have felt that Cicero is far more forgiving (cooperating after a recent betrayal) than human players when it serves its purpose for the next turn. Is that your observation as well?

Does Cicero have full memory of the whole game and chat, and can e.g. remember a betrayal from many turns ago?

I also understand that it reevaluates all plans each turn. Does that basically mean it does not have/need an internal long-term strategy beyond its current optimization of the long-term results of the next move?