Ingvariuss OP t1_iyagl8c wrote

Regarding your first paragraph: it sidesteps what I wrote about this being more of a tool for inspiration and/or a psychotechnology. In other words, it is indeed still up to the human to separate the wheat from the chaff and plant those "seeds" you mentioned earlier.

Regarding the second paragraph, I do believe that comparing it to a parrot is a straw man, especially for bigger and more advanced language models than the one I used as a proof of concept. As for the probability of it being (un)productive, isn't that the case for many things in life? It's especially true for scientists, who run thousands of failed experiments before the one successful experiment that advances us further. Nonetheless, I would prefer speaking with each other and bouncing ideas around over texting with a bot any day.

As for being (ir)responsible, nowhere did I say that it outputs profound things, nor would it be intellectually honest to deny that it could, since we are dealing with probabilities that aren't apparent to us. That also tells me that you probably didn't read the full article linked in my post.

Ingvariuss OP t1_iy9hgqy wrote

Thanks for sharing this paper! I'll give it a read in the coming days. As for making you think about nonsense, who's to say that it is nonsense?

Especially if it manages to give you a worthwhile idea to ponder and be productive with. Even if it only helps some people "seed" their own ideas, the seed can sprout into something useful and/or thought-provoking. It's a psychotechnology in some way. For example, we might say that alchemists and astrologers dealt with "nonsense", but they still managed to lay the foundations for chemistry and astrophysics by projecting their own inner world onto the world of matter.

There must be some reason why human evolution is pushing us toward the world of ideas and imagination as our next frontier, one that will, in my humble opinion, certainly expand our understanding of the world and of science overall.

Aside from that, the field of AI will only get better over time, and who knows what we might be capable of (be it good or bad).

Ingvariuss OP t1_iy8o51o wrote

The rest of the dialogue goes in the direction of the bot saying this:

....

User: And what is the knowledge of virtue?
Socrates: Certainly. And what is knowledge?

User: I don’t know, what is it?
Socrates: I mean to say that there is no such thing as knowledge. Socrates, who is the interpreter of the truth, may be said to have knowledge of things which he knows, and that he does not know them; but he may be supposed to know them. The interpreter of what he knows may be also a true and false? The same may be illustrated by the dialectical method of speaking.

Ingvariuss OP t1_iy7yyw9 wrote

Hi,

Thanks for sharing your thoughts! You do have a point about us needing to be careful not to be deceived by the superficiality of AI.

The name is like that to make it more interesting, and it is misleading only to the extent that one is naive enough to believe one can literally talk to a dead person.

As for solving practical problems, I'll have to disagree with you from a cognitive standpoint. AI models like these can be used for fun, but they can also be used to explore new ideas or new angles on a philosopher or a philosophy that might have evaded us due to the combinatorial explosion of possibilities.

But we do need to acknowledge that the AI, in this example, isn't directly solving a problem; it is a tool that lets a human explore the problem, spark new ideas or approaches, and in turn contribute to advancing our knowledge.

It's a bit similar to those stories where children spark top scientists to uncover new things and approaches simply by asking a question or offering an idea that our "grown-up" cognitive framing systems would have filtered out.

Ingvariuss OP t1_iy7xskk wrote

Yes, you're right. JavaScript would be the natural solution; I've seen others do it and have had my scikit-learn models running in a production environment that way. That said, I'm not too experienced with deploying ML solutions through JS, as that part was handled by a software engineer at my previous workplace.

I'll definitely consider building something in JS if I get to building part 2 of this article.
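
For what it's worth, a common middle ground is to keep the model in Python and put it behind a small HTTP API so a JS frontend only has to make fetch calls. A minimal sketch with Flask and the Hugging Face transformers pipeline could look like this (the "gpt2" model name is just a placeholder for whatever fine-tuned checkpoint is actually used):

    # Minimal sketch: serve a text-generation model behind an HTTP endpoint
    # that a JavaScript frontend can call. The model name is a placeholder.
    from flask import Flask, request, jsonify
    from transformers import pipeline

    app = Flask(__name__)
    generator = pipeline("text-generation", model="gpt2")  # swap in the fine-tuned checkpoint

    @app.route("/generate", methods=["POST"])
    def generate():
        prompt = request.json.get("prompt", "")
        result = generator(prompt, max_length=100, num_return_sequences=1)
        return jsonify({"text": result[0]["generated_text"]})

    if __name__ == "__main__":
        app.run(port=5000)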

As for the idea of HAL, that is very interesting! I don't imagine this bot going down that path, as it is built on a small model and is trained on Socrates. It can get "annoying" with the Socratic method and with turning questions back on you, though. :D

Ingvariuss OP t1_iy524i9 wrote

Hi,

No, it's not a dumb question. I've used models that are easily available on the Hugging Face Hub. As far as I'm aware, GPT-3 is still an invite-only ML model, and it comes with constraints on the environments in which, and the purposes for which, you can use it. I didn't really have the time to explore that in detail, so I went with GPT-2 as a proof of concept.
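
For anyone wondering how low the barrier is: pulling a stock GPT-2 checkpoint from the Hugging Face Hub takes only a few lines. This is a generic sketch with the base "gpt2" model, not the fine-tuned one from the article:

    # Generic sketch: load the stock GPT-2 checkpoint from the Hugging Face Hub
    # and sample a continuation. Not the fine-tuned Plato/Socrates model.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer("What is virtue?", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))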

Ingvariuss t1_iy41fzu wrote

I've been reading philosophy since I was a child, and I've always imagined how awe-inspiring it would be to converse with philosophers from the past.

Well, advances in deep learning and natural language processing have made this possible in some ways, and I set a goal for myself to create a small project as a proof of concept.

This project is titled "Speaking with Plato - A Deep Learning Approach to Philosophy." Plato is a favorite philosopher of mine, and his philosophy is still very relevant today.

Plato's Theory of Forms can be seen at work in the field of pattern recognition, where we run into problems training AI algorithms on tasks that are easy for humans.

When it comes to image pattern recognition, for example, we can easily train a child to recognize a tree. We can also train an AI to perform this task, but it will fail when presented with a fake tree.

The above problem, which AI revealed to us, was masked by our meaning-making capabilities. Namely, our embodied brain, which is dynamically coupled with the environment, renders things obvious to us, and so we mistake that obviousness for an explanation.

Two deep-learning models are used in the project. One is a chatbot that simulates a conversation with Socrates, while the other is more creative and generates text in an attempt to imitate Plato. All of Plato's works are also explored as part of an EDA (Exploratory Data Analysis).

Here's a sneak peek:

User: What is virtue?

Socrates: A thing which is taught by a certain master, and which is rightly taught by him; and he who taught it, and has taught it also, is good in so far as it is taught?
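
For readers who want to see the general shape of how a turn like that is produced, here is a rough sketch; it is not the exact code from the blog, and the local checkpoint path and the "User:"/"Socrates:" prompt format are assumptions for illustration:

    # Rough sketch of one chatbot turn with a fine-tuned GPT-2 checkpoint.
    # The checkpoint path and prompt format are assumptions, not the blog's exact setup.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("./socrates-gpt2")
    model = AutoModelForCausalLM.from_pretrained("./socrates-gpt2")

    prompt = "User: What is virtue?\nSocrates:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=True,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Keep only what was generated after the prompt, up to the next "User:" turn.
    print(reply[len(prompt):].split("User:")[0].strip())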

More can be read in this blog: https://dataspiral.blog/speaking-with-plato/
