dmarchall491 t1_j0p729w wrote
Reply to comment by intent_joy_love in What Plato Would Say About ChatGPT: Zeynep Tufekci argues that A.I. can be a learning tool for schools with enough teachers and resources to use it well. (The New York Times) by darrenjyc
It is true that ChatGPT has received a lot of attention on Reddit and other online platforms, and there may be some individuals who are promoting it heavily in order to benefit financially or otherwise. However, it is also important to recognize that ChatGPT is a highly advanced language model developed by OpenAI, and it has the potential to be a valuable tool in a wide range of applications.
ChatGPT is trained on a massive dataset of human conversation, and it is capable of generating human-like text that is often difficult to distinguish from text written by a person. This makes it a useful tool for generating realistic dialogue for chatbots, virtual assistants, and other types of language-based systems. It can also be used for tasks such as translation, summarization, and text generation.
In terms of its potential impact, ChatGPT has the potential to revolutionize the way we interact with computers and machines, allowing us to communicate with them more naturally and easily. It could also be used to improve customer service, reduce the need for manual data entry, and assist with tasks such as language translation.
Overall, while it is important to be cautious about the hype and potential financial motivations behind some of the positive coverage of ChatGPT, it is also important to recognize the significant potential that this technology has to offer.
-- written by chatGPT
dmarchall491 t1_j0p6vjb wrote
Reply to comment by k0ik in What Plato Would Say About ChatGPT: Zeynep Tufekci argues that A.I. can be a learning tool for schools with enough teachers and resources to use it well. (The New York Times) by darrenjyc
We can also just ask chatGPT to write what it thinks Plato would have argued:
> It is difficult to say for certain how Plato would argue for the use of AI in schools, as he lived over 2,000 years ago and did not have the benefit of modern technology. However, based on his philosophical writings, it is possible to speculate on how he might approach the subject.
>
> Plato was a strong believer in the power of education and the importance of seeking truth and knowledge. In his dialogue "The Republic," he outlines his vision for an ideal society, in which the education of young people is a central concern. He emphasizes the importance of teaching students to think critically and independently, and to develop their mental and moral capacities to the fullest.
>
> Given this focus on education and the cultivation of wisdom and virtue, it is likely that Plato would see the potential for AI to enhance and support the learning process. For example, he might argue that AI can be used to provide personalized and tailored instruction to students, helping them to progress at their own pace and to better understand the material they are learning. AI could also be used to provide a greater variety of educational resources and opportunities, allowing students to explore different subjects and areas of interest more deeply.
>
> Additionally, Plato might see the use of AI as a way to help students develop important skills for the future, such as problem-solving, critical thinking, and adaptability. These skills are likely to be increasingly important in the rapidly changing world of the 21st century, and the use of AI could provide students with valuable experience and practice in using these skills.
>
> Overall, while it is impossible to say with certainty how Plato would view the use of AI in schools, it is likely that he would see it as a valuable tool for supporting and enhancing the learning and development of young people.
>
> -- written by chatGPT
dmarchall491 t1_j0liyh1 wrote
Reply to comment by FartOfGenius in What Plato Would Say About ChatGPT: Zeynep Tufekci argues that A.I. can be a learning tool for schools with enough teachers and resources to use it well. (The New York Times) by darrenjyc
Using AI to write essays or complete assignments has the potential to undermine the educational process. It's important for students to develop their own writing skills and demonstrate their understanding of the material. While using AI may save time, it does not provide the opportunity for students to learn and grow as writers and critical thinkers.
Additionally, relying on AI to complete assignments can lead to a lack of original thought and creativity. It is important for students to be able to express their own ideas and perspectives in their work, rather than simply regurgitating information that they have obtained from an AI.
Furthermore, while AI may be able to produce coherent text, it may not fully understand the context or meaning of the words it is using. This can result in the production of nonsensical or factually incorrect information, which could be detrimental to the student's grades and overall learning experience.
Overall, while AI may be able to assist with certain aspects of writing, it is not a replacement for the hard work and critical thinking that is required to produce high quality academic work.
-- written by chatGPT
dmarchall491 t1_iz41p3n wrote
Reply to comment by Gmroo in The hard problem of metaphysics: figuring out if other phenomena exist in our universe that like consciousness require we bear a specific metaphysical relation to them - i.e. you can't know of consciousness without being conscious. by Gmroo
> The central point being that without subjective experience, you can't from its description infer it even exists. That it can feel like something.
A philosopher writes lengthy texts about consciousness. Why does he do that? Answer that question and you have an understanding of consciousness. There is no reason to assume the question is unanswerable, since the behavior itself is observable.
dmarchall491 t1_ivevobb wrote
Reply to comment by theartificialkid in Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
> But clearly science has no way to dispute the claims of someone who says “it is inherently good to make others suffer”.
The crux there is that the claim is already non-scientific to begin with. "Good" or "bad" are meaningless terms without context. Good/bad for what and for whom? What might be bad for the slave, might be good for the owner.
You can very much do science on morality, but you can't do it in generic, unspecific good-or-bad terms. That not only doesn't work, it completely overlooks that morality is group behavior, not some overarching absolute value system. What's good for some is bad for others. And how people treat group members will be completely different from how they treat strangers.
dmarchall491 t1_iverr1i wrote
Reply to Michael Shermer argues that science can determine many of our moral values. Morality is aimed at protecting certain human desires, like avoidance of harm (e.g. torture, slavery). Science helps us determine what these desires are and how to best achieve them. by Ma3Ke4Li3
This dreadful discussion again. I am all for science'ing morality, but it seems people keep forgetting what science is about. Science is first and foremost about describing and predicting the natural world. It's not about telling you what to do. It's about telling you what will happen when you do a thing.
If you want to do a science of morality, you have to observe how actual people behave and react in the real world. Forget whatever morality systems and thought experiments you heard about in philosophy class; nobody behaves that way. Forget the Bible as well, as nobody behaves according to that either, even if they claim to. Look at how people actually act. Look at how indoctrination can influence them. Look at how in-group/out-group dynamics drastically change things. All that stuff.
dmarchall491 t1_it3u2sd wrote
Reply to comment by iiioiia in [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
> Is this to say that it is a fact that numbers are never fudged within the practice of science?
You are free to verify and question them. You don't have to take them for granted. It's not that science is never wrong, it's that you are allowed to correct it and many people have done so before you, so it's pretty good most of the time.
> By what means have you acquired comprehensive knowledge of the entirety of reality? Science?
Mostly heuristics. They ain't as good as science and are often wrong, but they get "good enough" results faster.
> Do you know for a fact (as opposed to believe) that deeper understandings of reality are not available via religion?
Yes. If you can't poke it with a stick, then it's not part of this reality. Your deeper understanding is meaningless when it can't interact with this reality. And when it does interact with this reality, you can just do science on it.
Also, the level of understanding that science provides is already so insanely more detailed than anything you can ever hope to find in a religious text that even calling it "deeper understanding" is just nonsense. Religion doesn't even give you a really basic understanding of how the world works.
> And are you asserting as a fact that the entirety of the content of all religion is pure fantasy?
Some of it might be "based on a true story", but largely fantasy, yes. That's why we call it religion, not history.
dmarchall491 t1_it3k199 wrote
Reply to comment by memoryballhs in [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
> Science is not a religion it's a method to approach problems. There is nothing to be advertised.
The method is what needs advertisement. When it comes to something like conversion therapy or abstinence-only sex education the issue is not if it's the morally right thing to do or not, but that it flat out doesn't work to begin with. It fails to accomplish the stated goal.
> Emmanuel Kant was against the vaccine because he thought it further increases the already big population. It's very cruel but based purely on facts.
Doubtful. High risk of child death tends to lead to more children, not less. This is exactly what happens when you don't follow the science, but instead cherry pick your science facts to drive your ideology.
Few problems are understood well enough that it's only the ideology that makes the difference. Most of the time people are either willfully ignorant of the science, or the science simply doesn't yet understand the issue at hand well enough.
dmarchall491 t1_it2nzk7 wrote
Reply to comment by sismetic in [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
> Again, what is the relevance of that to truth, and how do you understand truth?
I consider truth seeking a colossal waste of time, since not only is there good reason to assume we'll never find it, there is also very good reason to assume it is fundamentally impossible to find. If we all live in a simulation, how can you ever hope to find that out? All we can do is describe the rules of that simulation, since that's what we can observe and interact with. What's outside that simulation is completely out of our reach.
And besides, it's not like any other form of knowledge seeking will ever bring you truth either. Most of them can't even describe the rules of this simulation.
dmarchall491 t1_it2li1i wrote
Reply to comment by anarchietzsche in [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
> And even then, if we take your position, why is science so much more valid than any other form of truth if we know that it's probably wrong?
Because it works. Simple as that. You don't even have to believe in it for it to work. Just look around you, look at the computer you are currently typing on. How do you think that came into existence? I have yet to see any other form of knowledge seeking produce anything even remotely as impressive as that. Heck, even if you take the Bible as literally true, there is nothing in there half as impressive as what science has produced. Having Jesus running around and making some blind people see is a pretty unimpressive accomplishment compared to, say, the discovery of the germ theory of disease.
dmarchall491 t1_it2hqqh wrote
Reply to comment by anarchietzsche in [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
> "science is the way for humans to gain an objective understanding of the world" is equally an ideological position?
No, because it works. It's a completely pragmatic position. If you don't believe in it, you are free to try to replicate and falsify it. Science does not claim to know the truth, quite the opposite, science being wrong is a fundamental part of it, but it has the mechanisms to slowly filter out all the things it gets wrong and replace them with something more accurate.
dmarchall491 t1_it2g872 wrote
Reply to comment by sismetic in [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
> What do you mean by truth?
Here is the experiment I ran, here is how to reproduce it, and here are the numbers I got. This formula is the best way to approximate the results and this is how tall my error bars are. That kind of stuff. The numbers aren't fudged, the math doesn't contain any deliberate mistakes, stuff like that. That doesn't mean the formula always gives the right predictions or that the experiment was free of mistakes. But it means you can go and try to replicate it. You don't have to believe the gospel. It's all just experiments, predictions, and replication, and you are free to join in.
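To make that concrete, here is a minimal sketch of what "publish the numbers and the formula, and anyone can re-fit them and check the error bars" looks like in practice. The data and the formula are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical published measurements, made up for this example.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

def model(x, a, b):
    # The proposed formula that "best approximates the results".
    return a * x + b

# Re-fit the formula to the published numbers.
params, cov = curve_fit(model, x, y)
errors = np.sqrt(np.diag(cov))  # roughly, "how tall the error bars are"

# Anyone can run this and compare against the reported values.
print(params, errors)
```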
> But humans seek a deeper understanding of reality, not accessible or relevant to science.
That is utter bollocks. Humans like to hear pleasurable stories; they don't care about gaining an understanding. If they wanted a deeper understanding of reality, they'd do science. But the stories science tells might not be the ones they want to hear, and those stories can get a little complicated, as they are based in reality, not fantasy.
dmarchall491 t1_it282oy wrote
Reply to [Peter Harrison] Why religion is not going away and science will not destroy it by BasketCase0024
Isn't that mostly a marketing issue? It's easier to sell lies than to sell the truth. And science isn't even doing a good job of marketing itself. Meanwhile religion has hardcore indoctrination from a young age. On top of that, humans seem to have a hard time shaking off that indoctrination later in life, so you have to wait a couple of generations before science and technology can have any real effect.
That said, I have a hard time seeing religion continuing without major changes. The wonders science and technology have brought us far outpace anything religion could even imagine. And that's going to become a lot clearer in the near future with the rise of AI systems, when your magic human soul turns into something your iPhone can run. The last bit of magic will vanish from this universe and we'll have a reasonably good explanation of almost everything. Religion just isn't compatible with that: it needs some mystical unknown, and science has been pushing that further and further away.
The art world is already facing that problem, where magical human creativity is not just getting replaced by AI but completely outpaced. Paintings that take a human hours, an AI can crank out in 10 seconds. Give it another few years and we'll have completely AI-generated movies, with AI-written scripts, AI-generated voices, and AI-generated video.
dmarchall491 t1_j6p00sx wrote
Reply to comment by muzukashidesuyo in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
> Or perhaps we overestimate what exactly consciousness is?
Certainly, but that's not the issue here. The problem with a language model is simply that it completely lacks many fundamental aspects of consciousness, like being aware of its environment, having memory, and stuff like that.
The language model is a static bit of code that gets some text as input and produces some output. That's all it does. It can't remember past conversations. It can't learn. It will produce the same output for the same input all the time.
That doesn't mean that it couldn't be extended to have something we might call consciousness, but as is, there are just way too many important bits missing.
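As a rough illustration of that statelessness, here is a minimal sketch assuming the Hugging Face `transformers` library, with the small `gpt2` model standing in for a chat model: with sampling disabled, the same prompt yields the same output every time, and nothing carries over between calls.

```python
from transformers import pipeline  # assumes the Hugging Face transformers library is installed

# "gpt2" is just a small stand-in model, not ChatGPT itself.
generator = pipeline("text-generation", model="gpt2")

prompt = "What did we talk about yesterday?"
first = generator(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]
second = generator(prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]

# No memory persists between calls: the model has no notion of "yesterday",
# and with greedy decoding the two outputs are identical.
print(first == second)  # True
```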