Submitted by Neo-Geo1839 t3_124ksm4 in singularity

I believe that this issue, while not as important today, will become pretty important in the near future as artificial intelligence evolves and its image generation, text generation, etc. appear more human and less distinguishable from typical human-made images or words.

How will we go about regulating AI art (which will be used by humans for their own gain), deep fakes, AI-generated writing, etc., and how would we be able to enforce those rules? Like, can it really be as simple as stamping the name of the software used to create the images/videos? But what about the words? The deep fakes? How will we be able to fact-check whether a political figure actually said or did something and that it isn't AI generated?
How would we (in the future) be able to tell a human-made essay from an AI-written one? Like, how will we know whether a student with low grades intentionally produced an average-tier essay with AI or wrote it on his own, beyond pure subjective judgment? I would really like to hear your thoughts on this, as it could have profound consequences for human society. Not just with images, but also with deep fakes, which can be used to sway public opinion and potentially hurt or improve the popularity of a political figure, or any public figure.

11

Comments


Surur t1_jdzodsu wrote

If something is impossible, it may not be worth doing badly.

Maybe instead of testing a student's ability to write essays, we should be testing their ability to have fun and maintain stable mental health.

I mean, we no longer teach kids how to shoe horses or whatever other skill has become redundant with time.

4

Tiamatium t1_jdzrfya wrote

If you can't tell the difference, does it matter?

How are we regulating Photoshop today? How are we regulating digital art today? How are we regulating flat out plagiarism today?

Why the fuck do you want to *regulate art* of all things?! Do you think people should need a special license to create art?! What the fuck is up with this gatekeeping?

None of those problems are unique to AI and none are real. AI is just a tool, and while I know that certain artists want to fight it, ban it or get paid for... "being fucked" by it, that is not new. In fact, we had this exact problem back in the mid-1800s with the rise of photography. There is a famous rant from the 1860s(?) about all the talentless losers (not my words, I am paraphrasing the author of the rant) who can't paint and can't graduate from university becoming photographers. Painters who used photographs for reference had to hide it. Painters who said art had to adapt were systematically pushed out of the art world and exhibits.

So that is literally not a new problem

11

sideways t1_jdzvvus wrote

AI-made content will be faster, cheaper and higher quality.

16

SkyeandJett t1_jdzvx84 wrote

Even if you stopped AI advancement right now, at this exact moment in time, the traditional classroom instruction model is completely fucked. You'd be much better off using GPT as a one-on-one tutor.

3

Neo-Geo1839 OP t1_jdzxiig wrote

The thing is, the arguments you just listed completely ignore the political side of things, as AI technology can potentially sway opinions and may destroy the reputation of a politician even if he didn't do or say the thing the deep fake shows him doing (or about to do). Deep fakes will become so accurate that you won't be able to tell whether something is faked or real. Elections could be decided by these deep fakes (in the near future). Like, people immediately reacted to a fake Trump arrest image on Twitter; just imagine that in the future.

If there were no political side to this, I would agree that this is not really a new problem. But the fact that there is one concerns me. This isn't just about the silly little artist mad that an AI can do something better than him. No, this can be used, and inevitably will be used, by politicians to attack each other and divide the populace even further.

−3

Neo-Geo1839 OP t1_jdzxzy0 wrote

While yes, that is the ideal policy, it doesn't seem likely that schools will implement it; they will probably continue to have students write essays, even if AI is a thing. Private schools would probably lobby the government to regulate AI and prevent it from writing essays and all of that, rather than just do what you said.

1

Flimsy-Wolverine4825 t1_jdzzr2c wrote

I believe that most of the media is partially fake or at least not very accurate, and already a powerful tool of propaganda. It's already very hard to distinguish the veracity of news nowadays, and it will only get worse, you're right.

Maybe books will remain a good and probably better source of knowledge and information; after all, we already have thousands of years of human knowledge in books.

In the end, I think it will actually be worth detaching from the internet sphere, because as the tech gets better, the tools to influence and manipulate us will only get stronger, and we know they are already doing very well.

It's just my opinion, but I really feel it will be very important for our sake, for our mental health, to be detached from this new area of tech. At least I plan on detaching myself from all this.

2

Thatingles t1_je03c07 wrote

In the future, you will type your essay into a chatbot that evaluates your writing as you progress, helping you improve your essay-writing skill and encouraging you to think about the intellectual value of the exercise. This will be a huge relief to tutors, as they won't have to plow through marking homework.

AI will be absolutely revolutionary in education, in all areas.
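A rough sketch of what that kind of in-progress essay feedback could look like with today's tooling, assuming the OpenAI Python client; the rubric prompt and function name are made-up illustrations, not anything the commenter specified:

```python
# Rough sketch: an essay-feedback loop in the spirit of the comment above.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the rubric wording is invented.
from openai import OpenAI

client = OpenAI()

RUBRIC_PROMPT = (
    "You are a writing tutor. For the draft below, give feedback on "
    "structure, clarity, and argument strength, point out one concrete "
    "improvement for the next revision, and do not rewrite the essay "
    "for the student."
)

def feedback_on_draft(draft: str) -> str:
    """Return tutor-style feedback for one revision of an essay."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# A student would call this after each revision and act on the feedback:
# print(feedback_on_draft(open("essay_draft.txt").read()))
```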

12

hunterseeker1 t1_je047d4 wrote

On a long enough timeline we really won’t, that’s the pickle. What we’re signing up for will most likely not turn out to be the future we want.

2

errllu t1_je05sx1 wrote

Why the fuck should we test ppl for the ability to have fun? I don't give a flaming fuck whether an engineer knows how to have fun or not, nor anyone in any other job.

−1

errllu t1_je06z8k wrote

Killswitch engineer. Or 'engineer of horizontal surfaces' aka 'cleaning lady'.

There was a paper from JP Morgan recently: 7% of jobs are gonna be automated over the next 10 years. The singularity won't take all jobs, at once, on the entire planet. Nor build the fusion reactors to power itself out of thin air. Chill

0

errllu t1_je07tru wrote

Maybe it is, maybe it is not. School system reform is in order regardless, don't get me wrong, but sure af not directed at teaching kids to 'have fun'. That's like the one thing they already know pretty well how to do.

1

Gaudrix t1_je08kxu wrote

This technology makes tutors nearly obsolete. Only small improvements need to be made in reliability and consistency. It can already be configured to approach text from the perspective of a certain level of skill or education. GPT-4 won't remove that many jobs, but GPT-5 will be able to fill in almost all of the gaps that prevent it now.
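As an illustration of that kind of configuration, here is a minimal sketch assuming the OpenAI Python client; the skill-level wording and function name are hypothetical examples:

```python
# Minimal sketch of "configuring" a model to pitch explanations at a given
# education level, as described above. Assumes the OpenAI Python client
# and an OPENAI_API_KEY in the environment; the prompt is invented.
from openai import OpenAI

client = OpenAI()

def tutor_reply(question: str, level: str = "9th-grade") -> str:
    """Answer as a one-on-one tutor aimed at the given reading level."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    f"You are a patient one-on-one tutor. Explain at a "
                    f"{level} level, check understanding with a short "
                    f"follow-up question, and avoid giving homework "
                    f"answers verbatim."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("Why can't we divide by zero?"))
```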

6

ptxtra t1_je0ax2w wrote

We won't be able to. AI will be the next lesson in humility to humanity that we're not the center of the universe and we're not special.

3

Mokebe890 t1_je0cvcp wrote

Literally, things made by AI will be of higher quality than human-made ones, that's all.

1

audioen t1_je0ioev wrote

Let me show you my Squid web proxy. It runs all the content of the Internet through an AI that rewrites it so that everything agrees exactly with what I like. I appreciate your positive and encouraging words where you are enthusiastic, like so many of us, about the potential and possibilities afforded by new technologies, and are looking forward to near-limitless access to machine labor and assistance in all things. As an optimist, like you, I am sure that it is certain to boost the intelligence of the average member of our species by something like 15 IQ points, if not more.

In all seriousness, though, it is a new world now. The rules that applied to the old one are fading. You can't usually roll back technology, and this one promises to boost worker productivity in intellectual work by a factor of around 10. The words of caution are: I will not call up that which I cannot put down. However, this cat is out of the bag, well and truly. All we can do now is adapt to it.

Iain M. Banks once wrote in one of his Culture novels something to the effect that, in a world where everyone can fake anything, the generally accepted standard for authenticity is a high-fidelity real-time recording performed by a machine that can ascertain that what it is seeing is real.

Your watermark solution won't work. Outlawing it won't work. Anything can be fake news now. Soon it will come with AI-written articles, AI-generated videos, AI-supplied photographic evidence, and AI chatbots pushing it all around on social media. If that is not a signal for the average person to just disconnect from the madhouse that is media in general, I don't know what is. Go outside and look at the Sun, and feel the breeze -- that is real. Let the machines worry about the future of the world; it seems they are poised to do that anyway.

5

czk_21 t1_je0nhbe wrote

True, even now GPT-4 could be a better teacher in subjects like psychology, history, economics, medicine, law or biology; it scores very high in these fields. For example, on the biology olympiad it scored in the 99.5th percentile, on par with the best humans or better.

Factuality needs to be improved, but you know humans make mistakes too, and GPT-4 is already on a similar level to experts.

Imagine when GPT-5 is better in these subjects than most university professors: what point will there be in attending lower-level education? Even university would not be that good for the humanities...

2

No_Ninja3309_NoNoYes t1_je15b34 wrote

Well, right now you can sort of tell if something was written by a skilled person. If AI can reach that level, does it matter? After all, in capitalism it doesn't matter how something was produced, as long as it was legal.

But the elite won't be interested in custom products. If everyone can get whatever they want just by asking, the elite will want things the old-fashioned way. So for them it will matter. They won't read AI news. Paper newspapers delivered by little humans are what they want. No robots for them, but human servants, etc...

1

datsmamail12 t1_je37k6s wrote

Don't worry, these problems won't even last a year.

1

isthiswhereiputmy t1_je51yur wrote

'Regulation of art' isn't really regulation so much as a dynamic that hinges on the cost of production and niche market interest.

Take, for example, a new site that wants to install a public sculpture: the leads for the site are not going to just prompt 'Public Art' and 3D-print it. The stakeholders and relationships involved are a real thing and have influence.

Personally, as someone who loves playing around with AI art, I find its potential variety still very small compared to the scope of creative work that humans get up to. Interested people will maintain their sensitivity to the difference.

1