fjaoaoaoao t1_j6oxlb1 wrote

Yep. Especially at that low accuracy, there can be alternative interventions (e.g. resubmission), and perhaps even secondary backup checks. Hopefully the detection methods can improve, although it is a moving target….

Ultimately it’s better for universities to hire more faculty and look to evaluation methods that are better for students and can’t simply be replicated by AI.


fjaoaoaoao t1_j6ovn2s wrote

Half of the sub is kool-aid. Could be misremembering, but a similar post that covered this was well upvoted recently. I believe they were saying there will be greater appreciation for human-touched endeavors, much like the rise of Etsy and so forth.

I agree with you. Artists will adapt. A lot of their art will be fueled by AI, with AI used as a tool. What is currently recognized as AI art will be even more recognizable as machine/AI-based in the future as human artists evolve with AI. Sort of like how trained ears can recognize autotune. Of course, I am only talking about the near-term future - too hard to say about the far.

Another point: AI is already used in a lot of interactive or simulation art. Humans are using AI to create new forms of art that AI cannot create on its own. Of course, in a true singularity, that could change, but people in this sub are underestimating what a real singularity is.


fjaoaoaoao t1_j6kitys wrote

I don’t think this is the approach to take. There are so many possibilities for what could happen and how it could impact workers. Additionally, college is just college; while your major will impact your career significantly, it’s very common to change paths in your life.

That being said, it’s good to stay abreast of more immediate trends and make smaller career adjustments based on that.

If you really want an answer to your OG question though, some form of entrepreneur will always be around ;)


fjaoaoaoao t1_j5tyc8c wrote

Sometimes depressing thoughts have come from people spouting excitement, though - excited by the tech while depressed by the potential changes.

So I wouldn’t say that everyone having a more tempered reaction is depressed or not trying to be uplifting -> it could be that they are trying to be rational.


fjaoaoaoao t1_j4v1so9 wrote

Yes. Our mind regularly occludes information in order to operate; it would be too cognitively taxing otherwise. And in a social world, sometimes trying to be more rational has zero payoff other than your own pursuit of truth or whatever.

Basically saying what’s already known but I appreciate the different angle.


fjaoaoaoao t1_j4lp97z wrote

I don't think that's the point though. There are heaps of cases each year and, as the article points out, incredibly complex documents that most people cannot be bothered to review. It's easy for AI to make subtler choices or decisions in morally grey areas, depending on the values and morals it's trained on. Of course, it's not as though we have significantly better systems now, but the level of faith placed in any particular system should always be scrutinized. This is why I suggest a practical solution for now is to develop an AI that reviews cases or offers policy examples.


fjaoaoaoao t1_j4lmml5 wrote

I wouldn't agree that the party primarily cares about profitable citizenry, unless you think of profits also as votes and attention.

But otherwise I agree with the spirit of your post. Parties and elections have their good side, but they can also encourage a focus on public image and more immediately gratifying change. That discourages long-term goals and sustained, ongoing collaboration, unless the public can broadly agree on what those goals are. The problem is that if the public is manipulated, kept ignorant, or simply living largely different lifestyles (e.g. rural versus urban), agreement on long-term goals can be even harder to come by.


fjaoaoaoao t1_j4llk6m wrote

You are right that the author is talking about an ideal that doesn't exist. Right now that AI would be heavily influenced by a "cabal of intellectuals" (but probably not even intellectuals).

But it's still an interesting thought to deshrine democracy, or at least point out its flaws. Not anything completely new, but I do think the piece adds to the conversation. As the author points out, majorities often don't reflect proper application of ethical principles. Democracy places some degree of faith in the ability of the people and their human nature to govern. People are fallible, but democracy intends to be self-correcting.

An AI-ocracy would place faith in the ability of a rational, impartial AI to reflect proper application of ethical principles. In theory that would be nice, but it would obviously need a code of values and morals to build from in order to decide what's more rational and ethical in the grey areas, and those values, morals, and working definitions change over time. If it skips the greyer areas, then its usefulness as a governing body is diminished.

Perhaps an AI-ocracy is not feasible or better overall, but blending AI with other forms of governance, using AI as a tool, might be.

For right now, maybe a practical solution is for AI to review cases or applications of law and offer an opinion on whether they reflect proper application of ethical principles. Its code base should be open and public so anyone can have a look-see. Running a consistent review like that might be a good testing ground to see how AI could be used in other governance contexts.


fjaoaoaoao t1_j1cd05v wrote

Just to point out something…

Metacognition doesn’t necessarily lead to awareness or admission of deficit or wrongness, as that requires some judgment against a standard.

Also, metacognition can be cognitively taxing and inefficient in some tasks.


fjaoaoaoao t1_j1ccr3n wrote

Epistemic trespassing is necessary for interdisciplinary work or the creation of new fields. But it is only beneficial if the expert is purposeful about establishing expertise in the interdisciplinary space or the new field. Epistemic trespassing can create significant problems for fields that have been undergoing significant change, or that have been perennially perceived by non-experts as loosely defined, such as race relations.


fjaoaoaoao t1_iu0coiw wrote

What is this article? World population is not really expected to decline in the next 40 years, barring massive, sudden, and unexpected events.

Will population decline as it has started to in some of the wealthiest nations? Yes, but that is much different from the overall world population declining.