
leoKantSartre t1_iy4j1ln wrote

Wow, I'd love to delve deep into this, since I'm a philosophy enthusiast too.

2

Ingvariuss OP t1_iy51ub7 wrote

That's great to hear! A lot of philosophy helped me solve many different Data Science problems over the last couple of years. It definitely gives you a new perspective on things.

2

leoKantSartre t1_iy5234p wrote

I'm an ML engineer too (I deal mainly with physics-based AI), but I love Kant, Spinoza, Nietzsche, Camus, and so on; the list goes on. I'm more interested in epistemology, though, which is why Kant is sort of my favorite.

2

Ingvariuss OP t1_iy52ioq wrote

Oh, I see! Interesting that you like Kant and Nietzsche together :D

I mostly read phenomenologists like Maurice Merleau-Ponty, Heidegger, and Husserl but I also like Heraclitus, Whitehead, and the like. My background is in Psychology so my work is focused on bridging the gap between it and AI.

2

leoKantSartre t1_iy52uma wrote

Well, I like the works of Husserl and Merleau-Ponty. I love reading, so for me it's more about exploring further and further. I started with the Sophists and kept going from there. Ultimately, epistemology seems the most interesting to me.

2

phobrain t1_iy77obi wrote

Have you looked at my 'Rorschach pairs' concept? Nets predict interesting unseen pairs with 90% accuracy, implying a model of personality is embedded in the training data. Early results:

http://phobrain.com/pr/home/siagal.html

Motivated largely by British Empiricism as dissected by the analytic philosophers 'under' Wilfrid Sellars, and by the idea of alternately seeing the Platonic sameness of two water glasses versus their Aristotelian particularity/difference.

2

Ingvariuss OP t1_iy7y7kj wrote

No, I hadn't. I've just now experimented with it, and it is very interesting indeed! I'd like to know more about your inspiration and overall approach.

1

phobrain t1_iy84rh5 wrote

Self-explanatorily,

http://phobrain.com/pr/home/explain.html

The pair concept started as a demo for a proposal for an AI conference lounge:

http://phobrain.com/pr/home/SCMETA_Ross.pdf

Before that, I was sequencing single images, comparing the real one to the memory of the previous one, which was based on my discovering ways to sequence abstract slides to hold an audience's attention.

The underlying inspiration was a desire to dissolve militant ignorance, which I saw as a threat to a civilization which had provided the books that gave me a sense of companionship. As a 10-year-old in the 60's, I ironically beamed Morse SOS into the night sky, imagining the colossal improbability of it reaching the aliens who would be our only hope.

My approach was to use myself as a sort of vessel to cook up a solution in, autosuggestively forking a clean copy of my then self into my unconscious, tasked with maintaining its understanding, while interrupting any move toward having children or career commitment by my external self. I needed to reduce the friction I was feeling from the IQ Gap, and what better way than dropping my IQ enough to get along, while my inner self modeled the stupidity involved to be able to unwind it from the inside and claw my way back, leaving a realizable trace, after having been fully Yoda-fied by a varied, Garp-like life.

Part of that kid stuff involved reading lots of history, especially about WW1/2, since I had even listened to family members who fought in WW1 at the dinner table.

I'm readying the code to open source, so first taker and I'll push it. :-) Anyone want to see if seeing their psyche reflected causes a prodromal barf sensation like psychedelics? From that, I wonder if it will be possible to gain control of one's serotonin, dopamine, and what have you from this concept, way down the road.

I derive 'side vectors' that represent the 'pairability' of each photo, as if paired with 'all' other photos appearing on the left or right. 2D vectors work OK, and plotting them, I see even spacing locally. Now I'm starting to explore how neighbor effects might be exploited, using lists of the 50 closest other points per point. When the proven nets come up with a socko pair, I grab the nearest vector neighbors of each side to see how that pair fits, which is the sort of thing that keeps labeling more pairs exciting. Now I'm adding cosine distance, then I'll push the code to the site in case anyone wants to try it. On the site, you can get some such control via the

Phob->Search mode: AI

option.
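For anyone curious, the neighbor lookup described above can be sketched in a few lines. This is only an illustration with invented 2D vectors standing in for the per-photo side vectors; `cosine_distance`, `nearest_neighbors`, and the random data are hypothetical, not the site's actual code:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity: 0 for vectors pointing the same way, 2 for opposite
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest_neighbors(vectors, idx, k=50):
    # Rank all other points by cosine distance from vectors[idx],
    # returning the k closest as (index, distance) pairs
    dists = [(j, cosine_distance(vectors[idx], v))
             for j, v in enumerate(vectors) if j != idx]
    dists.sort(key=lambda t: t[1])
    return dists[:k]

# Toy stand-ins for per-photo 2D 'side vectors'
rng = np.random.default_rng(0)
side_vectors = rng.normal(size=(200, 2))
print(nearest_neighbors(side_vectors, 0, k=5))
```

With only 2D vectors and a few thousand photos, the brute-force scan above is cheap; a k-d tree or similar index would only matter at much larger scale.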

2

Ingvariuss OP t1_iy9hud6 wrote

Very interesting. Might I ask if you are acquainted with Carl G. Jung and if he was a source of inspiration? I do see a lot of Jungian concepts in the way you describe things. Especially when talking about yourself.

1

phobrain t1_iybxbzh wrote

I was into Jung as a teenager, but realized I couldn't understand his books (now I call it aspiration), so I stayed more in the Joseph Campbell world. I see a lot in common with him now, but I hope to make far more of a difference in the end, since we now have so much more, and quicker, quantifiability of who we are, and awareness of enough species-level threats to know what to coalesce our best efforts around.

New slogan: We are the local minima that we seek.

1

leoKantSartre t1_iy526k6 wrote

I'd love to dig into this more deeply. How can it be used to solve practical problems?

2

Ingvariuss OP t1_iy52rm6 wrote

Good question! I had a few ideas on how this might be adapted for solving hard problems or advancing Philosophy as a field in a certain way by generating new approaches and ideas. Feel free to hit me up in the DM if you want us to talk about this more.

2

idrajitsc t1_iy5apu8 wrote

It cannot, in any way. Nothing about the training process suggests that the trained networks can interpolate in "idea space," they just work on language at a superficial level. There's no reason to associate the meaning of anything the network says with the original philosopher unless it's directly parroting them.

This is fine for cutesy fun stuff, but calling it "Speaking with Plato" and pretending that it can contribute to philosophy is very misleading, even though everyone and their mother is doing similar things with LLMs now.

1

Ingvariuss OP t1_iy7yyw9 wrote

Hi,

Thanks for sharing your thoughts! You do have a point about us needing to be careful and not be deceived by the superficiality of AI.

The name is like that to make it more interesting, and it is misleading only to the extent that one is naive enough to believe one can talk directly to a dead person in any way.

As for solving practical problems, I'll have to disagree with you from a cognitive standpoint. AI models like these can be used for fun, but they can also be used to explore new ideas or angles on a philosopher or a philosophy that might have evaded us due to the combinatorial explosion of possibilities.

But, we do need to acknowledge that AI, in this example, isn't directly solving a problem but is a tool for a human to explore that problem and spark new ideas or approaches and in turn contribute to advancing our knowledge.

It's a bit similar to those stories where children sparked top scientists to uncover new things and approaches by simply asking a question or giving an idea that our "grown-up" cognitive framing systems didn't want to pass through.

1

idrajitsc t1_iy8hyiv wrote

That's the thing though, it doesn't explore or generate new ideas. It generates grammatically correct text with a bit of flavor that has no actual meaning--meaning requires an intent to convey information. All of the ideas are things you impose on it. There's none of the weird intuition or perspective a child offers. It's just a random text generator you're using to seed your ideas.

And that'd be... okay I guess? Not particularly efficient and maybe counterproductive since it'll bias you towards thinking about nonsense, but not directly damaging. But even if you didn't intend it, the obvious implication here is that "this is how Plato would answer my question!" Which lends it a credibility it doesn't deserve. You should read this paper and particularly section 5 and its citations.

edit: sorry I meant section 6

1

Ingvariuss OP t1_iy9hgqy wrote

Thanks for sharing this paper! I'll give it a read in the coming days. As for making you think about nonsense, who's to say that it is nonsense?

Especially if it manages to give you a worthwhile idea to ponder and be productive about. Even if it's helping some to just "seed" their own ideas, the seed can sprout into something useful and/or thought-provoking. It's psycho-technology in some way. For example, we might say that alchemists and astrologists dealt with "nonsense" but they still managed to lay the foundations for chemistry and astrophysics with the projection of their own inner world to the world of matter.

There must be some reason why human evolution is expanding us toward the world of ideas and imagination as our next frontier, one that will, in my humble opinion, certainly expand our understanding of the world and of science overall.

Aside from that, the field of AI will just get better over time and who knows what we might be capable of (be it good or bad).

1

idrajitsc t1_iy9o98b wrote

That paper addresses your first question directly, and better than I can. But in brief, it's nonsense because how could it not be? If there is real, interesting information content to what it's saying, how was it generated? How would you expect your network to have an understanding of anything, use that understanding to synthesize new ideas, and then accurately convey those ideas to you? All it has been trained to do is probabilistically produce coherent text--the training process has no interaction with the information content of the training texts, much less anything that would allow it to generate novel meaning.
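The claim that such training only models text statistics can be made concrete with a toy example. A bigram model "trained" on a corpus picks each next word purely from co-occurrence counts; the output can look grammatical while representing nothing about meaning. (The corpus and function names here are invented for illustration; a real LLM is vastly more sophisticated, but the objective is of the same statistical kind.)

```python
import random
from collections import defaultdict

# Tiny toy corpus; the model below sees only word-adjacency statistics
corpus = "the soul seeks the good and the good seeks the soul".split()

# Count word-to-next-word transitions (bigram statistics)
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, n, seed=0):
    # Sample a chain of words using only co-occurrence counts --
    # fluent-looking output with no model of what the words mean
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

Every word the sampler emits is licensed only by having followed the previous word somewhere in the corpus; any "idea" a reader finds in the output is imposed by the reader.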

As for the rest of your reasoning, you could use the same argument for anything at all that causes you to think about things. In line with that paper, would you want to spend serious intellectual effort on deriving deeper meaning from a parrot's chatter? Maybe the network accidentally outputs something that sends you along a path to productive thoughts. Or maybe you waste all your time trying to turn lead into gold. Like, of course you're free to experiment with it, but it's irresponsible to pretend it's outputting anything profound if you're going to be sharing it with other people.

1

Ingvariuss OP t1_iyagl8c wrote

Regarding your first paragraph. It evades what I wrote about it being more of a tool for inspiration and/or being used as a psychotechnology. In other words, it is indeed still up to the human to separate the wheat from the chaff and plant those "seeds" you mentioned earlier.

Regarding the second paragraph, I do believe that comparing it to a parrot is a strawman, especially for bigger and more advanced language models than the one I used as a proof of concept. As for the probability of it being (un)productive, isn't that the case for many things in life? It's especially true for scientists who run thousands of failed experiments, where only the one that succeeds advances us further. Nonetheless, I would prefer us speaking with each other and bouncing around ideas rather than texting with a bot any day.

As for being (ir)responsible, nowhere did I say that it outputs profound things, nor would it be intellectually honest to rule that out, since we are dealing with probabilities that aren't apparent to us. That also tells me you probably didn't read the full article linked in my post.

1

idrajitsc t1_iyaq0ha wrote

I mean, just throwing up your hands and saying "sure it's probably nothing, but most things are nothing" is a cop-out: why are you posting it here then?

You're contradicting yourself. If it's nothing more than a random text generator with Plato's mannerisms, why's it interesting and why are you saying it's a tool for approaching philosophical problems? If it has something more profound to say--no it doesn't--and if you insist it does it's incumbent on you to justify it with something more than "it's really big and complex so maybe it's doing something inexplicable."

1