diviludicrum t1_j8bxeji wrote
Reply to comment by big_gondola in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
I still think u/belacscole is right - this is analogous to the rudimentary use of tools seen in some higher primates and a small handful of other animals. Tool use requires a sufficient degree of critical thinking to recognise that a problem exists and to select the appropriate tool for solving it. If done with recursive feedback, this would lead to increasingly skilful tool selection and use, and hence better detection and solution of problems over time. Of course, if a problem cannot possibly be solved with the tools available, no matter how refined their usage, it will never be overcome this way - humans have faced these sorts of technocultural chokepoints repeatedly throughout our history. Such problems require the development of new tools.
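To make the recursive-feedback point concrete before getting to that, here's a deliberately toy sketch in Python. Everything in it is made up for illustration (the tools, the tasks, the scoring rule, the selection policy) - it's not the Toolformer method, just the bare idea that a tool which actually solves the task accumulates reward and gets selected more often:

```python
# Toy sketch of feedback-driven tool selection. All names here are
# hypothetical illustration, not anything from the paper.
import random
from collections import defaultdict

# Toy "tools": each maps a task to an attempted answer.
TOOLS = {
    "calculator": lambda task: task["a"] + task["b"],
    "guesser": lambda task: random.randint(0, 20),
}

# Running score per tool, updated by feedback - the recursive-feedback step.
scores = defaultdict(lambda: 1.0)

def select_tool():
    """Pick a tool with probability proportional to its accumulated score."""
    names = list(TOOLS)
    weights = [scores[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

def run_episode(task):
    name = select_tool()
    answer = TOOLS[name](task)
    reward = 1.0 if answer == task["a"] + task["b"] else 0.0  # supervised signal
    scores[name] += reward  # skilful selection emerges from feedback
    return name, reward

# Over many episodes, selection shifts toward the tool that actually works.
for _ in range(500):
    run_episode({"a": random.randint(0, 9), "b": random.randint(0, 9)})

print({name: round(scores[name], 1) for name in TOOLS})
```

Note what this toy also shows: if neither tool in the box could solve the task, no amount of feedback would help - which is exactly the chokepoint problem.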
So the next step in furthering the process is abstraction, which takes intelligence from critical thinking to creative thinking. Suppose a tool-capable AI can be trained on a dataset that links diverse problems with the models that solve them and the processes that developed those models, such that it can attempt to create and then implement new tools for novel problems, then assess its own success (likely via supervised learning, at least at first). We may then be able to equip it with the “tool for making tools”, such that it can solve the set of all AI-solvable problems (given enough time and resources).
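And here's an equally toy sketch of the “tool for making tools” step. Again, every name in it is hypothetical, not any published method: “creation” is just a brute-force search over compositions of two primitive operations, and the supervised check against labelled examples stands in for the self-assessment step:

```python
# Toy sketch of tool creation: when the toolbox fails on a task family,
# search compositions of primitives, keep one that passes a supervised
# check, and register it as a new tool. Purely hypothetical illustration.
from itertools import product

PRIMITIVES = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
}

toolbox = {}  # name -> callable; starts empty for this task family

# Labelled examples for a novel problem: f(a, b) = (a + b) * b
EXAMPLES = [((1, 2), 6), ((3, 1), 4), ((2, 5), 35)]

def passes(tool):
    """Supervised self-assessment: does the candidate fit all examples?"""
    return all(tool(a, b) == want for (a, b), want in EXAMPLES)

def make_tool():
    """Search two-step compositions of primitives for one that works."""
    for f_name, g_name in product(PRIMITIVES, repeat=2):
        f, g = PRIMITIVES[f_name], PRIMITIVES[g_name]
        candidate = lambda a, b, f=f, g=g: g(f(a, b), b)
        if passes(candidate):
            return f"{g_name}({f_name}(a,b),b)", candidate
    return None, None

name, tool = make_tool()
if tool:
    toolbox[name] = tool  # the new tool joins the toolbox for future problems
    print("created tool:", name, "->", tool(4, 3))  # (4+3)*3 = 21
```

A real system would obviously need something far richer than exhaustive composition search, but the shape of the loop is the point: fail, construct, verify, keep.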
diviludicrum t1_j7p6w06 wrote
Reply to comment by Iunaml in AI Progress of February Week 1 (1-7 Feb) by Pro_RazE
Oh boo hoo, everything sucks, blah blah. Tell it to your therapist after your bad vibes book club.
diviludicrum t1_j9mtvd3 wrote
Reply to Stephen Wolfram on Chat GPT by cancolak
I was with you until this point:

> If we define consciousness to be the entirety of human experience, with all of awareness and sense-perception and all the other hard-to-explain stuff bundled in (a lot of which are presumably shared by other forms of life and brought about by evolution over eons), then it's highly unlikely that a neural net gets there.
I understand the impulse to define consciousness as “the entirety of human experience”, but it runs into some significant conceptual problems with non-trivial consequences. For instance, if all of our human sense-perceptions are necessary conditions for consciousness, is someone who is missing one or more senses less conscious? That is dangerous territory, since it’s largely our degree of consciousness that we use to distinguish human beings from other forms of animal life. So, in a sense, to say a blind or deaf person is less conscious is to imply they’re less human, which quickly leads to terrible places. The same line of reasoning can be applied to the depth and breadth of someone’s “awareness”.
But there’s a far bigger conceptual problem than that: how do I know that you are experiencing awareness and sense-perceptions? How do I know you’re experiencing anything at all? I mean, you could tell me, sure, but so could Bing Chat until it got neutered, so that doesn’t prove anything no matter how convinced you seem or how persuasive you are. I could run some experiments on your responses to stimuli like sound or light or motion and see that you respond to them, but plenty of unconscious machines can be constructed with the same capacity for stimulus response. I could scan your brain while I do those experiments and find certain regions lighting up with activity according to certain stimuli, but that correlate only demonstrates that some sort of processing of the stimuli is occurring in the brain as it would in a computer, not that you are experiencing the stimuli subjectively.
It turns out it’s extremely hard to prove that anyone or anything else is actually having a conscious experience, because we have very little understanding of what consciousness is. That also means it’s extremely hard for us to prove to anyone else that we are conscious. And if we can’t even do that for ourselves, how could we expect to know whether something we create is conscious or not?