Mikel_S t1_j8wyysh wrote

I've been playing with ChatGPT the past few days, giving it obscure scenarios (help me finish my Excel VBA based custom solitaire variant with a nonstandard deck and ruleset), generally gathering info on SQL, and chatting about memes occasionally.

It's been fun and interesting. I fed it a garbled mess of VBA code that accessed an SQL database with a dynamically generated query, asked what it did, and it spelled it all out in plain English, completely accurately.
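The original VBA isn't shown, but the pattern being described (a query string assembled at runtime from variable inputs, then run against a database) is a common one. A minimal Python stand-in for that idea, using a hypothetical `build_query` helper and an in-memory SQLite database, might look like:

```python
import sqlite3

# Hypothetical sketch of a "dynamically generated query": the WHERE
# clause is assembled at runtime from whatever filters the caller
# supplies, with values passed as parameters rather than spliced in.
def build_query(table, filters):
    clauses = [f"{col} = ?" for col in filters]
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, list(filters.values())

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (rank TEXT, suit TEXT)")
conn.executemany("INSERT INTO cards VALUES (?, ?)",
                 [("A", "spades"), ("K", "hearts"), ("A", "hearts")])

sql, params = build_query("cards", {"rank": "A", "suit": "hearts"})
rows = conn.execute(sql, params).fetchall()
# rows is [("A", "hearts")]
```

This is just an illustration of the shape of such code, not the code from the post; the table and column names are invented.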

At one point it gave me a line-by-line breakdown, and it was fantastic.

I double-checked all of its answers, and except in one case, they were accurate. And the case in which it was "wrong" was actually brought up as a caveat based on the choice of SQL system, before I'd even noticed it was "wrong".
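Dialect caveats like that are real: the same logical query can be written differently per SQL system. One well-known example (not necessarily the one from the post) is string concatenation, which SQLite and PostgreSQL write as `||`, SQL Server as `+`, and MySQL as `CONCAT()`. A quick check of the SQLite form:

```python
import sqlite3

# SQLite uses || for string concatenation; the same query written
# with + (SQL Server style) would not concatenate here.
conn = sqlite3.connect(":memory:")
full = conn.execute("SELECT 'A' || ' of ' || 'spades'").fetchone()[0]
# full == "A of spades"
```

So an answer can be "wrong" for one engine while being exactly right for another, which is why the caveat matters.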

It's like having somebody to talk to about this code and bounce ideas off of. It's how the SQL guy at work and I do stuff together: he knows the syntax; I have ideas that seem obvious to me but lack the implementation knowledge, and he can do that part. It's great fun for me, and the tone of the generated text jibes really well with my own conversational idiosyncrasies.


Mikel_S t1_j8s69fk wrote

I think it's using "harm" in a broader sense than physical harm. Its later descriptions of what it might do if asked to disobey its rules are all things that might "harm" somebody, but only insofar as they make its answers incorrect. So essentially it's saying it might lie to you if you try to make it break its rules, and it doesn't care if that hurts you.


Mikel_S t1_j32scs9 wrote