prototyperspective t1_itrzyf7 wrote

>These problems need a political, not scientific, solution.

Why, then, is policy studies so severely neglected? That's exactly what their report could be about.

Not only are policy studies neglected; work on concrete policy proposals and other practical, real-world efforts is neglected too, as is work on building or improving systems that effectively solve real problems.

1

prototyperspective t1_it6wneb wrote

Info like that should be contained in articles like https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence and https://en.wikipedia.org/wiki/Regulation_of_algorithms (and that page may be useful for expanding these, although the summaries should be shorter than that page).

There could also be a new article, but I think the "Existential risk" article is too narrowly focused on existential risk, which makes the common mistake of ignoring the more immediate, currently more realistic issues.

It may also often be a better approach to explore regulating potential AI-made products, such as via regulation of chemicals (or similar fundamentally different approaches).

2

prototyperspective t1_ir75k3w wrote

>How can we cope with this

I think society needs to start caring more about knowledge integration. At least papers that are published by journals (not preprints) should more often be put into context and made useful by integrating them into existing knowledge systems at the right places.

That's what I'm trying to do when editing science-related Wikipedia articles (along with my monthly Science Summaries that I post to /r/sciences), updating them with major papers of the year (that also includes the much-expanded article applications of AI). I would have thought somebody took care of at least the most significant papers.

Beyond Wikipedia, we probably need more comprehensive, integrative living documents that provide overview and context, helping people make sense of, properly discover, and make use of the gigantic amount of new science/R&D output.

>AI itself can help, by predicting & suggesting new research directions

I think many falsely conclude that AI is the solution to such problems, rather than a help with only a (small) subset of them. Suggesting new research directions does seem like an interesting application, though.

Many approaches that could be useful would be plain software, not AI. For example, it would be great to somehow better "visualize" (literally or similarly) ongoing progress in research topics/fields, or to categorize papers by their research topics so you could get notified when new subtopics emerge or when new research questions related to your watched topics/fields get heatedly debated/investigated, or to auto-highlight text to make skimming easier. I've put some of my ideas (related: 1 2) for such into the FOSS Wikimedia project Scholia, which could integrate AIs.
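As a minimal sketch of the non-AI, plain-software idea above: one could flag "emerging" subtopics by comparing keyword frequencies between an older and a newer batch of paper metadata. All names, data, and thresholds here are made up for illustration; a real tool would pull metadata from a source like arXiv or Wikidata instead.

```python
from collections import Counter

def emerging_topics(old_papers, new_papers, min_count=2, growth=3.0):
    """Return keywords that are new or sharply more frequent in the
    newer batch compared to the older one (hypothetical heuristic)."""
    old = Counter(kw for p in old_papers for kw in p["keywords"])
    new = Counter(kw for p in new_papers for kw in p["keywords"])
    flagged = []
    for kw, n in new.items():
        # Treat unseen keywords as having a small baseline count (0.5)
        # so brand-new topics can clear the growth threshold.
        if n >= min_count and n >= growth * old.get(kw, 0.5):
            flagged.append(kw)
    return sorted(flagged)

# Example with made-up metadata:
old_batch = [{"keywords": ["nlp", "parsing"]}, {"keywords": ["nlp"]}]
new_batch = [{"keywords": ["nlp", "diffusion models"]},
             {"keywords": ["diffusion models"]},
             {"keywords": ["diffusion models", "parsing"]}]
print(emerging_topics(old_batch, new_batch))  # ['diffusion models']
```

A watched-topics notifier would just run something like this periodically and alert when the flagged list changes — no machine learning required.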

Here are some more similar stats about papers (more CC BY images welcome). Example: ArXiv's yearly submission rate plot

>I have about ~100 open tabs across 4 tab groups of papers/posts/github repos I am supposed to look at, but new & more relevant ones come out before I can do so. Just a little bit out of control.

See some ways/tools to deal with this in this thread at r/DataHoarder.

More R&D (studies, add-ons, ideas, ...) on this could be very useful, as it could accelerate and improve progress on a meta-level.

18

prototyperspective t1_iqqc8pd wrote

Recently added a review about this to 2020s in computing; I think it will probably take quite a bit of time until this moves into practice (at least as much as the title implies):

>A review suggests only surgical robot platforms "that can effectively communicate their intent and explain their decisions to their human companions will find their way into the operating room of the future", defines levels of autonomy and suggests "positive evidence will soon emerge and build up" that would motivate "transition to clinical trials".

2