
Zer0pede t1_j9us962 wrote

Have you tried to teach before? Every student learns so differently, and the best teachers are learning while they teach (think of professors who both research in their field and teach, but it applies to other teachers too). Students ask fascinating questions. Feynman talks in many places about how much teaching helped his research. Also, office hours are partly for befriending students: elements of their home life affect how they learn, and you really need to get an idea of how they think.

There are lots of ways AIs could assist learning and self-education: I’d love to have something that could pre-read books and papers with an eye to subjects I’m interested in and bring those to me. It would also be great if students had something at home that could point them to the parts of a textbook relevant to a homework problem they didn’t understand and suggest supplementary materials like videos, but you’re always going to need a human teacher in the mix. Maybe you could save that teacher a lot of stress by having an AI build rough lesson plans and materials that they only had to tweak (teachers can spend a whole vacation writing a lesson plan), and then the teacher could focus their time on helping students one on one.

7

Zer0pede t1_j9szh15 wrote

I think it depends: What do you consider knowledge? Critical thinking skills for instance are different from facts—those take practice, not information. Arguably all knowledge is more than just information—the process of learning figures into it.

Even with a language: you could maybe download grammar rules, but everybody develops a feel for a language that’s intimate and unique, and that sense develops because you learn it slowly over time. Could you download the fact that many people find the word “moist” uncomfortable, but only in English? I memorized the case systems in German and Russian long before I could use them in conversation correctly and instinctively. Knowing the facts was only part of learning the language.

Scientific insight is often described as coming in a flash after years of familiarization with a subject. That’s more than just the information—that’s years of turning a subject around and around in your head until you feel things about it instinctively, connected with other things in your life. There’s reason to believe dreams forge unique connections between subjects and experiences in your brain, almost like metaphors, and those would be entirely unique to how you learned a subject as opposed to what you learned. That kind of complex learning and interconnecting of subjects as you go is very different from a “download,” but that’s what we mean when we say “knowledge.”

2

Zer0pede t1_j9ix7nj wrote

Yeah, before, if a bad writer wanted to submit something, they’d actually have to take the time and effort to write it. That slowed them down and weeded out the lazy ones. Now they just have to write a prompt. There’s nothing to slow them down and nothing to weed out the laziest. Having to read the first several paragraphs of hundreds of submissions just sounds miserable, literally more work than it took them to “write” it. I would absolutely ban everyone who wasted my time like that.

8

Zer0pede t1_j7vuk59 wrote

You can have the same effect without direct wiring into the brain. You just need an AI that always wants to “check in” to make sure it’s aligned with human goals and values. There’s a good discussion of that in Stuart Russell’s “Human Compatible.” It’s more game theory added on top of neural networks than biotech (but it will probably be the basis for more direct wiring once we have a better idea of how the human brain works).

(Also, no “consciousness” would be involved, because nobody is even trying for that.)

1

Zer0pede t1_j7vtopj wrote

Not if the “safeguards” are structured like a value system. I like the approach in Stuart Russell’s “Human Compatible,” which is that we start now on making AI have the same “goals”* as humans (including checking with humans to confirm).

*I put “goals” in quotes because it makes the AI sound conscious, but literally no AI researcher is working on consciousness, so we’re really just talking about a system that “checks in” with humans to make sure it doesn’t achieve a minor human-assigned goal at the expense of more important, abstract human values (e.g., the paperclip maximizer or the Facebook algorithm).
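To make the “check in” idea concrete, here’s a minimal toy sketch (my own illustration, not anything from the book): the agent only has an uncertain estimate of what the human values, and when that uncertainty is high for its best-looking action, it asks before acting instead of just optimizing its own guess. All the action names and numbers below are made up.

```python
import random

# Toy numbers: the agent never sees the human's true values, only uncertain
# estimates of them. Everything here is illustrative, not from the book.
ESTIMATED_VALUE = {"tidy_the_desk": 1.0, "melt_desk_for_parts": 5.0, "do_nothing": 0.0}
UNCERTAINTY = {"tidy_the_desk": 0.2, "melt_desk_for_parts": 3.0, "do_nothing": 0.0}

def ask_human(action: str) -> bool:
    # Deferring to the human is built into the decision procedure.
    return input(f"May I '{action}'? [y/n] ").strip().lower() == "y"

def choose_action(actions, check_in_threshold: float = 1.0) -> str:
    # Pick whatever looks best under the agent's (noisy) value estimate.
    best = max(actions, key=lambda a: ESTIMATED_VALUE[a] + random.gauss(0, 0.1))
    # The key move: if the agent is too unsure the human actually wants this,
    # it checks in, and a "no" means standing down rather than routing around
    # the human to hit its assigned goal anyway.
    if UNCERTAINTY[best] > check_in_threshold and not ask_human(best):
        return "do_nothing"
    return best

print(choose_action(["tidy_the_desk", "melt_desk_for_parts", "do_nothing"]))
```

The point isn’t the code, it’s the structure: because the agent treats its estimate of human values as uncertain, asking becomes the rational move rather than a safeguard bolted on afterward.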

2

Zer0pede t1_j7vrwvc wrote

Only thing is, human values are pretty arbitrary, so there’s no reason a rational AI would have them.

Humans want to save whales because we think they look cool. It’s mostly pareidolia and projection that make us like cute animals, and the same goes for trees and sunlight.

An AI wouldn’t need any of that—it could just as easily decide to incorporate every squirrel on earth into its giant auto-spellchecker.

1