Comments

glonq t1_j65g2s9 wrote

Yeah, I'm sure that will solve the problem

/s

59

wockyman t1_j65hp6x wrote

It's a great brainstorming tool, and it's described some specifics of complicated subjects to me more clearly than my profs did. Universities need to adapt to this, not blanket ban it. Unfortunately (based on how poorly universities have adapted to other challenges in the last couple decades) it'll likely just be one more thing that drives them into the same category as newspapers and broadcast tv.

19

Pilferjynx t1_j66189h wrote

It's like the calculator. You use it by first knowing the key ideas and then expressing the values with the tool's computational power.

5

w-g t1_j66fo2e wrote

It's not that simple -- it's of course natural to ask whether the teachers are requiring rote tasks or memorizing data. But it's likely that in the future AI systems will be able to produce meaningful texts with somewhat credible argumentation. I know several teachers who do want students to think (instead of doing rote tasks) and who are also worried about ChatGPT. For example, you may want to assess by asking students to write an essay with the specific goal of making a point, or a rebuttal of something that was already read in class, or whatever. The problem is that ChatGPT can do that -- although a crappy job. But the crappy job may be just enough for the student to pass.

So the question is how to do assessment, knowing that students will have access to AI tools -- not just ChatGPT, but its evolved versions and the other AI tools yet to come. Because we're not supposed to just accept that people will stop thinking for themselves...

4

wockyman t1_j66oado wrote

Oh I know it's not simple, but I do believe it's required. We're going to have to reconsider some of our long-held assumptions about what education is for and where assessment fits into that. ChatGPT does a C+ job if given a blind prompt. But if you talk to it for a while about a subject and get it to define and clarify first principles, it can do an A- job of producing meaningful analysis. It will blindly concoct falsehoods sometimes. It'll give you a list of general sources, but it won't cite them. But I agree, we'll likely blow past those limitations in a couple of years or less. So when everyone can basically talk to the computer from Star Trek: TNG, we're going to have to change the curriculum. I expect to see more practical, project-based classes that result in a complex final product. Like a bunch of mini-dissertations.

−2

BlindProphet0 t1_j66tfq8 wrote

I love using it for brainstorming. It helped me in my current ethics class by rephrasing some of the more complex concepts.

2

danielalvesrosel t1_j65lztg wrote

Just goes to show how poorly some assignments are designed. If we have the means to do something better, why not build on top of it?

4

Old_comfy_shoes t1_j671km9 wrote

"ChatGPT, how does one continue to use chatgpt, without any universities finding out."

3

jmbirn t1_j6690cz wrote

> "Without transparent referencing, students are forbidden to use the software for the production of any written work or presentations, except for specific course purposes, with the supervision of a course leader,"

In other words, the University issued reasonable guidelines, such as that you should label ChatGPT output accurately. Hardly a "ban."

37

drossbots t1_j672tq4 wrote

Redditors actually read the article challenge (impossible)

16

GoodRedd t1_j693wg9 wrote

I would love for you to explain what "transparent referencing" looks like when using a tool like ChatGPT.

I'm fairly confident that they're not referring to referencing ChatGPT. They're referring to referencing the material ChatGPT trains from. Which is opaque, and therefore makes the tool unusable.

The stupid part is that no human is expected to provide a transparent reference list of every piece of writing that they train themselves from. Which would be like keeping a history of everything you had ever read, and every conversation you had ever had with any person... Or yourself.

−1

jmbirn t1_j6ippwj wrote

A good first step towards transparency is that, if you're going to quote ChatGPT, you should say that you are quoting ChatGPT's output, provide the context of what prompt or question it was responding to, and say when you asked. Just like quoting a person, the quote can be an accurate quote, even if the person being quoted was wrong about something.

2

UntiedStatMarinCrops t1_j65wcd7 wrote

Just do in-person testing on paper lol. In-class essays and take-home essays should not be a problem; I've had ChatGPT write an essay and it was pretty bad.

7

w-g t1_j66ft80 wrote

> and it was pretty bad

But it won't be pretty bad for too long. Give it 2 or 5 years...

6

[deleted] t1_j67cif5 wrote

[deleted]

5

Reggo91 t1_j67gw48 wrote

This. It’s actually simple: tools like ChatGPT erode the business model of higher education. Why study to learn skills that ChatGPT can readily reproduce? I think we will see more bans outside academia soon.

2

Null_Simplex t1_j6a0oln wrote

I would argue that the internet is similar in that fashion. It made sense for universities to be centered around memorizing facts before you could look up those facts in your pocket. Unfortunately, university is still heavily focused on memorization despite the fact that you don’t really need to memorize things as much as you used to, and understanding those things is far more important for a good education. I fear their adoption of AI into education as a tool will be just as slow.

1

Dr-Gorbachev t1_j683cwy wrote

University: Do not use ChatGPT for your essays. Students: We will not definitely.

5

EmbarrassedHelp t1_j66mkk6 wrote

> The university said on Friday the school had emailed all students and faculty announcing a ban on ChatGPT and all other AI-based tools at Sciences Po.

> Sciences Po, whose main campus is in Paris, added that punishment for using the software may go as far as exclusion from the institution, or even from French higher education as a whole.

This seems a lot worse than the title implies. They are banning all AI tools, which is completely insane.

4

aidenr t1_j670i8h wrote

It's there to punish plagiarism; it doesn't help to catch it.

3

lasher7628 t1_j666yq8 wrote

Make the students pinky swear. Show them you're serious.

2

Apocalypsox t1_j67fd6n wrote

That's not how this works. You dumb fucks should have asked ChatGPT how to prevent students from using ChatGPT, it might have given you some legitimate fucking strategies beyond "protectionism for our dated education model".

2

Reggo91 t1_j67gnbj wrote

Why does the conversation about ChatGPT revolve around academic plagiarism? It’s the perfect distraction!

This technology might eliminate a lot of professions. Not just copywriters, journalists and the like, who already have a hard time earning a living from their jobs. It’s coming for lawyers and programmers too - or really anything that isn’t manual labour. In some perverted twist of fate, robotics proved to be a harder problem than replicating the skills of 6-figure+ jobs. Which is why self-driving cars aren’t a thing just yet, and why you won’t have a robot fixing your plumbing anytime soon.

2

rotzak t1_j67v2of wrote

That ought to do it

2

saxbophone t1_j68eoff wrote

I would've thought using it to directly produce academic work falls under plagiarism or academic dishonesty regulations anyway, so I don't really know what they hope to achieve with this. If it's a blanket ban, it seems pretty ham-fisted. How is one supposed to research such novel AIs if one can't bloody use them?

2

frodosbitch t1_j68w85a wrote

Bring back Google Glass with ChatGPT embedded into it. I’d love it to remind me of people’s names so I can say: how is your daughter Jane? She was studying piano, wasn’t she? Or metadata, like looking at a stadium and it tells me what concert is playing there tonight.

1

gojiro0 t1_j6bhrf0 wrote

Good luck mon amie

1

uncomplexity t1_j6br714 wrote

What an idiotic attempt to stop the tide!

1