AUFunmacy

AUFunmacy OP t1_j6pfm5y wrote

😂😂

Please tell me which degrees you have completed, mate. It's not self-congratulatory, it's providing credentials to back up the statements I make. What is self-congratulatory is you saying "I have completed superior degrees to yours".

Show me my mistake. I am so confused about what you are all hung up on; where did I claim neural networks were the only trading strategy?

In general, the instigator of a debate is required to present their argument; you have no argument if you provide no evidence. You haven't shown me what you are talking about, and I don't believe you have "superior degrees of either". Get over yourself, mate 😅

−1

AUFunmacy OP t1_j6peiqq wrote

I'm so confused, do you know what "pragmatic" means? Because it just seems like you compliment my way of thinking and then say that I am ignorant, along with everyone else who learns neuroscience and, god forbid, chooses to believe it.

No idea what you mean by atoms shifting 98%; that's just complete nonsense you wrote to make yourself seem more credible. At least give context to the things you say, or provide some evidence? Either would be great.

1

AUFunmacy OP t1_j6otxi7 wrote

Yes, as a programmer with experience in machine learning I know there are different approaches; however, ChatGPT uses a parameterised, deep-learning (neural network) approach. And it certainly imitates, loosely, how central nervous system neurons communicate in the brain (I'm in med school as a neuroscience major). That isn't to say that just because AI imitates human neuronal activity it has the same properties, because it doesn't.
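
To be concrete about the analogy: each artificial "neuron" just sums weighted inputs and passes the result through a nonlinearity, a loose software analogue of dendritic summation followed by a firing threshold. A minimal sketch (function name and numbers are mine, purely for illustration, not ChatGPT's actual internals):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed by a sigmoid --
    # loosely analogous to dendritic summation and a firing threshold.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # "firing rate" between 0 and 1

# Illustrative numbers only:
print(artificial_neuron([0.5, 0.2], [0.9, -0.4], bias=0.1))
```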

We should discuss this, instead of you offering vague rebuttals that provide zero evidence and zero explanation.

−1

AUFunmacy OP t1_j6ophx4 wrote

I'm sorry, but if you think you're going to persuade me that I'm wrong with this pseudo-intellectual jargon - you need to rethink your approach. All you've said is that consciousness cannot occur without complex neuronal activity (but not vice versa), which I never implied to be false anyway. The rest of your speech was some weird trip you and a thesaurus had together.

Either that, or you used an AI to write your comment, which I suspect because you said, "but we have some broad hints and directions to follow". Unless there is a lead-up to that odd sentence, it is just such a non sequitur thing to say.

1

AUFunmacy OP t1_j6nss96 wrote

I understand the response, as I have experience programming neural networks. You mean that the AI we have runs on software and may superficially resemble a model of neuronal activity, but that physically, at the hardware and compilation level, it is very different. In essence, though, it still represents steps of thought that navigate toward a likely solution - which is exactly what our brains do in that sense.

I don't mean to say that AI will gain consciousness and suddenly be able to deviate from its programming, but just maybe the complex neuronal activity conjures a subjective experience. Consider a single-celled organism 3.8 billion years ago, with no organs or any plausible mechanism for consciousness: it is easy to say that thing can't develop consciousness. And as you evolve that single cell into multicellular organisms it still seems impossible, until you see a sign of intelligent thought and think to yourself, "when the hell did that begin?" No one knows the nature of consciousness, and we have to stop pretending we do.

Let it be known I think a submarine would win the Olympics for swimming, and I also think you are naive to consider your consciousness anything more than a language model with some inbuilt sensory features.

−1

AUFunmacy OP t1_j6nqi4t wrote

As I am studying neuroscience in medical school, I feel I am semi-qualified to answer this.

I don't think we are anything more than the electrical and chemical signals in our brains, simply because there isn't anything else we can point to yet. The fundamental fact is that all human processes, what you could call the entirety of human physiology, act via communication between neurons in the nervous system, which is pretty well understood.

You would be dead the very moment (one Planck time) after your neurons stopped conducting - because at that point everything stops, literally everything.

10

AUFunmacy OP t1_j6nn7yf wrote

The entire post is a take on the definition of consciousness? And that's apart from the first half of the post, which goes over the definition of consciousness from a number of different perspectives. Would love to hear your definition!

The distinctions I made between human and AI consciousness are all natural inferences based on the leading explanations for both; "dubious" is an odd word to pin on something that outright claims "nobody knows the answer".

I never claimed AI had achieved consciousness; please let me know which claims you are referring to.

Not sure what you mean by your last point.

2

AUFunmacy OP t1_j6njsgh wrote

As a neuroscience major who is currently in medical school, and someone with machine learning experience (albeit not as much as you), I respectfully disagree.

Let's assume we have a neural network with 2 hidden layers, structured like this: input layer n=400, first hidden layer n=120, second hidden layer n=30, output layer n=10. The number of connections in this network is 400*120 + 120*30 + 30*10 = 51,900. This neural network could already do some impressive things if trained properly. I read somewhere that GPT-3 (the recent, very similar predecessor to ChatGPT, which is only slightly optimised for "chat") uses around 175 billion parameters - the analogue of these connections - and GPT-4 will reportedly use 100 trillion.
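
That arithmetic as a quick sketch (layer sizes as above; this counts only the fully connected weights between adjacent layers, ignoring biases):

```python
# Each pair of adjacent fully connected layers contributes
# n_in * n_out connections.
layers = [400, 120, 30, 10]  # input, two hidden layers, output

connections = sum(a * b for a, b in zip(layers, layers[1:]))
print(connections)  # 400*120 + 120*30 + 30*10 = 51900
```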

Now, the human brain also has around 100 trillion neuronal connections, and not even close to all of them are used for thought, perception or experience - "conscious experiences". I know that counting connections is a poor way to measure a neural network's performance, but I just wanted a way to compare where we are with AI relative to the brain. So we are not yet at the stage where you would even theorise that AI could pass a Turing test - but increase the number of connections these neurons can communicate through by 500 times (175 billion × 500 ≈ 87.5 trillion, on the order of the brain's 100 trillion) and you approach, and I think surpass, human intelligence. At that point an AI will probably do any intellectual task better.

I simply think you are naive if you think AI won't replace humans in a number of industries, in a number of different ways and to a large extent. Whether or not artificial intelligence will gain consciousness is a question you should ask yourself as an observer of the Earth, watching single-celled organisms evolve into complex and intelligent life. At what point did humans, or if we weren't the first then our ancestor species, gain consciousness? The leading biological theory is that consciousness is a phenomenon that emerges from highly complex brain activity and is merely a perception. So who is to say that AI will not evolve the same consciousness we did? That doesn't mean it isn't bound by its programming, just as we are always bound by physics, but maybe it will have a subjectively conscious experience.


Edit: I will note that I have left out a lot of important neuroanatomy that would be essential to explaining the difference between a neural network in an AI vs a brain. The take-home message is that the machine-learning analogy is not a far-fetched take whatsoever. But it is important to drive home that software cannot come close to the physical anatomy of the brain.

2

AUFunmacy OP t1_j6nb810 wrote

Who said we were designing machines to do sequences of simple things? Complex neuronal activity is the leading biological explanation for what creates the subjective experience we call consciousness. AI is constructed in a way that resembles how our neurons communicate - there is very little abstraction in that sense. I challenge you to tell me why that is absolute nonsense.

I find it purely logical to discuss these things; nowhere in the post will you find me claiming to know anything, or claiming to believe any one thing.

4

AUFunmacy OP t1_j6mktyu wrote

No, I mean I really don't use ChatGPT. I wrote that because my stories are mostly aimed at people in AI, who might suspect that practice - as well as programming and crypto, which you conveniently omitted. It's also due to the vast number of posts on the platform that are AI-generated (which people are well aware of); I wanted to assure readers that I do not artificially generate my content.

Please tell me which excerpts? I wrote all of this based on research and my own ideas.

0

AUFunmacy OP t1_j6dbhyu wrote

Wow, I didn't know that; I'd have to fact-check to fully believe you, haha. It's remarkable that Asimov (or his editor, I suppose) lived in a time when "robots" did very simple, deterministic mechanical tasks, yet still had the foresight he had. I love accurate cautionary tales, especially when we reach the part where we get to follow the plot of the tale.

1

AUFunmacy OP t1_j6cbsq3 wrote

General Article TL;DR:

It is important to remember that Asimov's laws were written as a cautionary tale, not as a blueprint for how AI should be treated. By being conscious of the potential consequences of our actions, and by striving to create a symbiotic relationship with AI in which we respect and value its autonomy, we can avoid the mistakes we were cautioned against in the past, much as we have tried to do with Orwell's tales, and pave the way for a brighter future where humanity and AI coexist in harmony. The key is to remember that as we continue to push the boundaries of technology, we must also push the boundaries of our own morality and ethics to ensure that we do not fall victim to our own creations.

3

AUFunmacy OP t1_j6ca94e wrote

I love that you brought that up; I explained this in my article but didn't use the trolley problem as an example.

I referenced self-driving cars, and talked about what happens if they face a situation where the options are either hitting a pedestrian to save the passengers, or recklessly evading the pedestrian and potentially killing the passengers. Either choice breaks the First Law, but there is a crucial flaw in this argument: it assumes from the beginning that there are only two choices.

The trolley problem is a hypothetical ultimatum that can create a paradoxical inability to make an ethical choice; in other words, the trolley problem is fixed in the number of choices or actions available. In real life there is, if we get very technical, an effectively infinite number of choices. So, for example, the self-driving car might be able to produce a manoeuvre so technically perfect that it evades all danger for both the passengers and the pedestrian: maybe the AI in the self-driving car can see 360 degrees, sense pedestrians through heat sensors, spot humans far away with some other technology, and make appropriate adjustments to its driving - as in the sketch below.
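
To illustrate the point that a real controller can weigh more than two options, here is a deliberately toy sketch; the candidate manoeuvres and risk numbers are invented for illustration, not how any real self-driving stack works:

```python
# Toy planner: score each candidate manoeuvre by predicted harm and
# pick the least harmful one. All values are invented for illustration.
candidates = {
    "brake_hard":       {"pedestrian_risk": 0.30, "passenger_risk": 0.05},
    "swerve_left":      {"pedestrian_risk": 0.02, "passenger_risk": 0.40},
    "brake_and_swerve": {"pedestrian_risk": 0.01, "passenger_risk": 0.03},
}

def total_risk(risks):
    return risks["pedestrian_risk"] + risks["passenger_risk"]

best = min(candidates, key=lambda name: total_risk(candidates[name]))
print(best)  # "brake_and_swerve" -- the option the two-choice framing hides
```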

It is possible to accommodate the First Law, but it requires effort and a commitment to ensuring that the technology you create is not going to cause, intentionally or otherwise, the death of a human ("cause" being the key word). I believe it would be effort well spent.

8

AUFunmacy OP t1_j6c7ozd wrote

Isaac Asimov's 3 laws of robotics are a set of guidelines for the ethical treatment and behavior of robots, first introduced in his science fiction stories in the 1940s. These laws state that a robot must not harm a human being, must obey human orders, and must protect its own existence, as long as doing so does not conflict with the first two laws. These laws were intended as a cautionary tale, highlighting the potential dangers of artificially intelligent beings and the importance of ethical considerations in their development and use.

As artificial intelligence (AI) continues to evolve at a rapid pace, Asimov's 3 laws of robotics have become increasingly relevant to society. The advancements in AI have led to the development of autonomous systems that can make decisions and take actions without human intervention. This has raised a number of ethical concerns, particularly in areas such as the military and law enforcement, where robots can be used to make life-and-death decisions.

Asimov's laws serve as a reminder of the potential consequences of creating intelligent machines that are not programmed with a strong ethical framework. Without proper safeguards, robots could potentially harm human beings, disobey orders, or even cause harm to themselves. This is particularly relevant in today's society, where AI is being integrated into more and more aspects of our lives, from self-driving cars to medical diagnosis and treatment.

Furthermore, Asimov's laws are important to consider in the context of AI's ability to learn and adapt. As a mutable AI learns and adapts, it can change its programming and make decisions that go beyond human understanding and control. This makes it even more important to have a set of ethical and technical guidelines in place to ensure that the AI's actions align with human values and ethical principles.

The laws serve to remind us of the possible consequences if we do not consider the ethical implications of AI. If we do not take the time to instill a sense of "empathy" into our super AIs, how will they ever have the framework to make moral decisions? If we do not think about the ethical and moral implications of AI, we risk creating machines that could cause harm to human beings, or even lead to the destruction of our society.

Asimov's 3 laws of robotics are not just science fiction; they are a reminder of the potential consequences of creating intelligent machines without proper ethical guidelines. As AI continues to evolve at a tremendous rate, it is increasingly important to consider the ethical implications of these technologies and to ensure that they are programmed with a strong ethical framework. Only by doing so can we ensure that the benefits of AI are realized while minimizing the risks and negative consequences.

5