Submitted by redditguyjustinp t3_zvvae8 in singularity

Hey everyone,

I think we can all agree that the quality of the major news networks has really taken a nosedive in recent years, so I built an AI system to replace them. Specifically, I think today's mainstream media tends to suffer from two problems:

  1. Political bias
  2. Emotional manipulation to drive outrage and clicks

I'm building a system called ANN (Artificial News Network) to produce balanced, well-researched news stories 24/7. You can see my initial prototype, which is focused on tech news, here: Twitter.com/FutureNewsAI

It can currently analyze thousands of news stories per day, compile balanced investigative reports using AI, automatically generate memes that summarize each article's content, and generate AI forecasts of future technologies.

Over time I'm going to expand this functionality significantly until it is the single most reliable source of news across a wide range of topics (business, politics, current events, law, etc.).

What kind of stories do y'all want to see supported by my system? I'm really interested to hear your feedback.

108

Comments


Onlymediumsteak t1_j1rf0nc wrote

Good luck developing AGI

38

redditguyjustinp OP t1_j1rre6k wrote

haha yeah, do you mean this is such a difficult problem that it can only be solved by AGI?

9

Onlymediumsteak t1_j1rsqcj wrote

To be honest, yes. Proper journalism is an extremely complex field and probably the one most likely to suffer from bias, as there is no objective "truth" and countless participants compete for public perception/opinion. It's also a field that requires (some of) the most stringent factual accuracy, something current models lack. You could easily create opinion pieces, and people are using those features already, but proper journalism and investigations by AI are still some years in the future. AI will definitely help journalists analyse data and write mundane articles about stock and sports news, though.

13

TonyTalksBackPodcast t1_j1rula7 wrote

I disagree for a number of reasons. ChatGPT is nowhere remotely close to AGI, yet it is giving remarkably accurate summaries of complex topics in history and philosophy. Covering current events is not particularly different from covering past events, with the exception of ongoing developments and possibly a more incomplete picture.

I bring up philosophy in particular because, while there may or may not be an "objective" truth to the universe, there are certainly more and less correct statements you can make. AI is beginning to get things right more often than it gets them wrong, which I'm not sure humans can claim. And if AI can actually some day do philosophy, there is nothing it cannot do.

1

AndromedaAnimated t1_j1rxpy8 wrote

ChatGPT doesn't always provide accurate answers though. It can "lie" and hallucinate, like every LLM so far.

5

TonyTalksBackPodcast t1_j1ry3i0 wrote

Of course. If you read what I’ve said, I think you’ll find your comment totally compatible with mine

1

AndromedaAnimated t1_j1rzbfp wrote

I read it, but I probably misunderstood something 🤔 Thought you were saying it was accurate, sorry 🤭

2

sweatierorc t1_j1t4eja wrote

There is a big difference between covering current news and past events. Look at how WikiLeaks changed the 2016 election.

1

brain_overclocked t1_j1tq74l wrote

Truth and objectivity have certainly been long-debated topics, at least since antiquity. But while I understand that there is still much to discuss, aren't skepticism and scientific inquiry founded on the idea that objective truth exists and is discernible?

1

4e_65_6f t1_j1rfuvn wrote

I appreciate the effort but I think it's impossible to have news without bias.

To me that sounds like saying "I'm gonna build an algorithm to find the objectively best color".

9

chimmercritter t1_j1rpz0n wrote

There's already a Max Tegmark project that is trying to do something along these lines: https://www.improvethenews.org/

6

4e_65_6f t1_j1rqng7 wrote

>There's already a Max Tegmark project that is trying to do something along these lines

IDK who Max Tegmark is but I bet my left nut his personal biases are included in whatever method he's using to sort through the news. Even if it's AI.

1

chimmercritter t1_j1rszl3 wrote

He's a professor who wrote Life 3.0: Being Human in the Age of Artificial Intelligence, and he founded the Future of Life Institute: https://futureoflife.org/person/max-tegmark/ He's not sorting the news himself; his team trained an algorithm to find the news in common between sources from many different biases and report the commonalities.
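As a rough illustration of what "report the commonalities" could mean in code (this is just a toy sketch, not the actual Improve the News algorithm; the outlet names and claims are made up):

```python
def common_claims(coverage: dict[str, set[str]]) -> set[str]:
    """Keep only the claims reported by every outlet, whatever its leaning."""
    if not coverage:
        return set()
    claim_sets = list(coverage.values())
    shared = set(claim_sets[0])
    for claims in claim_sets[1:]:
        shared &= claims  # drop anything not corroborated by this outlet
    return shared

# Hypothetical coverage of the same story by three outlets:
coverage = {
    "left_outlet":  {"bill passed 52-48", "protests outside capitol"},
    "right_outlet": {"bill passed 52-48", "cost estimated at $1T"},
    "wire_service": {"bill passed 52-48"},
}
shared = common_claims(coverage)  # only the claim all three report survives
```

The real system presumably matches claims by meaning rather than by literal string equality, but the set-intersection idea is the core of reporting only what the differently biased sources agree on.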

3

4e_65_6f t1_j1rw0lg wrote

>his team trained an algorithm to find the news in common between sources from many different biases and report the commonalities

The problem many news sources have nowadays is that, in the effort to be impartial, they end up elevating opinions that aren't supposed to even be considered, making it seem like everything is a 50/50 debate when only one of the sides has actual arguments.

Take the problem of climate change, for instance: instead of debating measures for preventing climate change (because it's already consensus that it's real), they keep bringing on people who deny climate change. Even though maybe 1 in 1,000 scientists will do that, on the news it's a 1-vs-1 debate, so to the audience it looks like the issue is not yet settled.

Any algorithm that seeks to find commonalities between all news sources will end up treating invalid points of view as if they were valid, because the news sources themselves are like that.

3

AndromedaAnimated t1_j1rymns wrote

On the subject of climate change, have you already read about our dear ChatGPT not "wanting" to discuss the advantages of fossil fuels anymore?

By the way, the current debate on this topic is only 50/50 because there is financial funding (lobbies) behind fossil fuels. This means that even if every single REAL scientific source says human-driven climate change exists, someone will pay someone to present an opposing view, and once that one is discredited, there will be another paid someone to present it… and so on.

During my time at university we joked about "sexy data" at our institute: data that will be interesting to big corporations and bring funding for further research, as well as sensationalist results that are more likely to get into print and raise the author's status in academia, again leading to more funding…

2

4e_65_6f t1_j1s2b8m wrote

Yes, besides bias there are genuinely greedy people with bad intentions throwing money behind the scenes where it shouldn't go. They also fund scientific research.

It's naive to think AI can sort through that mess with any reliability.

2

AndromedaAnimated t1_j1s39ck wrote

Politics plays an important role too (as do greedy politicians). The best example of this was the criminalisation of marijuana use, which was based on a fake assessment.

2

TenshiS t1_j1tid96 wrote

So you're advocating for biased news on topics which have reached consensus in the scientific community?

I think it's healthier to say "this is what the consensus in the scientific community is, and this is what someone else says" and let the reader decide.

Don't make me use Galileo as an example of expert consensus.

1

4e_65_6f t1_j1u4io0 wrote

I'm not advocating for it, I just think it is impossible to have non-biased news. It's even more impossible, IMO, to filter the news and find a non-biased perspective by finding commonalities and statistical averages between all the biased news.

Whenever you think "Oh, this news source isn't biased," it's because they have the same bias as you do, so you don't see it.

The example you gave about Galileo would require the "news sorting bot" to understand the science so thoroughly that it would be able to realize when a scientist is speaking the truth and being ridiculed for it. But at that point it would be AGI already, and humans probably wouldn't be the ones doing the research anymore.

1

TenshiS t1_j1udg2v wrote

That's why I said it's healthier to avoid selling something as non-biased but still show multiple perspectives and their context. It's the best we've got

1

Oddtapio t1_j1rlk2s wrote

Swedish public news has to be objective by law, but it seems hard in practice. If there are two sides of a story with a ratio of 100:1, they still feel they have to interview the 1 opposing view, resulting in 1:1 time in the limelight (sometimes, that is).

9

redditguyjustinp OP t1_j1rs207 wrote

I wish we at least had a law for this in the USA, even if it's not perfect! In your experience, does the law at least make a positive impact?

3

Oddtapio t1_j1sdmnd wrote

Definitely yes. It is financed by tax money. We also have commercial channels' news, which hasn't gone completely mad like in the US, and I suppose the public service media may be the reason. People leaning towards the right tend to claim that the public service media is very leftist.

2

redditguyjustinp OP t1_j1selfu wrote

I think we actually used to have a similar law in the USA and then it was changed...

1

AndromedaAnimated t1_j1rgxy2 wrote

And it analyses… news from the FUTURE? 😱

I am curious - how does your model decide which news to include? Does it basically use Fox and CNN and then counterbalance them? How does it distinguish real news stories from fake ones? How will it get its news information if it were to win out over the other news sources - is it supposed to work with human journalists?

Tell me a bit more about your idea please.

8

redditguyjustinp OP t1_j1rrys1 wrote

Great questions, thanks for asking!

Which news - I have a list of about 15 different sources that I am currently pulling from. It's not a silver bullet for bias, but I'm hoping that at least by pulling from multiple sources I can provide the missing context to each story that is often left out on the more biased outlets.

Real vs. fake - Basically my bet is that as AI gets smarter, it will be able to identify which facts/arguments are consistent with the evidence and slowly develop a more accurate picture of reality as information comes in.

Yes I am currently building functionality such that it can reach out to primary sources directly via the internet and conduct interviews, investigations, etc. I also think I can continue to take advantage of the more neutral "newswire" type services like Reuters and AP even if this ends up becoming a dominant source of news.
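To give a sense of the multi-source idea, here's a toy sketch (illustrative only - the outlet names and the 0.4 similarity threshold are made up, and the real pipeline uses AI rather than simple word overlap): group headlines from different outlets that appear to cover the same story, then treat stories covered by two or more outlets as candidates for a merged, cross-checked report.

```python
def tokenize(headline: str) -> set[str]:
    """Lowercase a headline and split it into a set of words."""
    return set(headline.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity between two token sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_headlines(by_source: dict[str, list[str]], threshold: float = 0.4):
    """Greedily group (source, headline) pairs whose overlap clears the threshold."""
    clusters: list[list[tuple[str, str]]] = []
    for source, headlines in by_source.items():
        for headline in headlines:
            for cluster in clusters:
                if any(jaccard(tokenize(headline), tokenize(other)) >= threshold
                       for _, other in cluster):
                    cluster.append((source, headline))
                    break
            else:  # no existing cluster matched; start a new story
                clusters.append([(source, headline)])
    return clusters

# Hypothetical feeds from two outlets:
feeds = {
    "outlet_a": ["OpenAI releases new language model",
                 "Markets rally on tech earnings"],
    "outlet_b": ["OpenAI releases powerful new language model"],
}
clusters = cluster_headlines(feeds)
# Stories covered by more than one outlet become cross-checked merged reports.
multi_source = [c for c in clusters if len({src for src, _ in c}) > 1]
```

The single-source stories would still need handling (they're where an outlet's individual slant shows up most), but clustering like this is one plausible first step before synthesis.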

4

AndromedaAnimated t1_j1ryxyd wrote

I think you are creating something really good there.

I suggest including international news sources too if you haven’t already.

Thank you for answering my questions!

4

Ohigetjokes t1_j1xpm0d wrote

This looks really cool but I can't stomach Twitter. Any chance you'll look at Mastodon? Sigmoid.Social would be right up your alley

3

Vim_Dynamo t1_j1rxwnw wrote

The reason journalistic endeavours post to drive outrage and clicks is economic. Clicks are how they make money, and under capitalism they have to balance making money with good journalism.

2

redditguyjustinp OP t1_j1s7u63 wrote

You are totally right, but if the cost to produce the news drops by 99%, then my bet is new business models can emerge that do not rely on big-time corporate interests.

1

Vim_Dynamo t1_j1sdi7z wrote

Make a technology that can summarize local city council meetings. And another one that can find and call out corruption

1

redditguyjustinp OP t1_j1seouh wrote

Oh wow, the city council idea is REALLY good. And yes, I think anti-corruption is a good idea too and is already sorta part of my current roadmap.

3

brain_overclocked t1_j1tptxo wrote

I suppose it wasn't going to be long before somebody attempted to use AI in this manner, and most likely you're not the only one already trying. It's an interesting challenge to tackle, for sure; there is much to consider regarding an AI news agency, not just on the technical side of things but especially in areas of bias, ethics, and perhaps a few other things we may not have considered yet!

Bias will certainly be an interesting challenge -- as some of the other commenters have already brought up -- it's definitely a hard problem, and there exists the possibility that it may never be entirely eliminated. But understanding bias in AI and training data, and how to identify and reduce it, is still a very active area of research, just as it is in journalism.

Although, with transparency of training data, public evaluation of technique, adherence to a journalistic code of ethics, and a framework for accountability, it may be an attainable goal to produce an AI model capable of providing news in a trustworthy manner.

If you're serious about the endeavor, then perhaps you may want to ruminate on some of these questions:

  • Can you formally explain how you define and identify political bias, and how your AI model is able to minimize it?
  • Can you do the same for loaded language?
  • How do you prep your model's training data?
  • In journalism there exists a bias termed 'false balance' where viewpoints, often opposing in nature, are presented as being more balanced than the evidence supports (i.e. climate change consensus v denialism). How does your model handle or present opposing viewpoints with regards to the evidence? Is your model susceptible to false balance?
  • How do you define what a 'well-researched' story looks like? How would your model present that to the user?
  • A key problem in science communication is balancing the details of a scientific concept or discovery and the comprehension of the general audience: if a topic is presented in a too detailed or formal manner then you risk losing either the interest of the audience or their ability to follow the topic, or both. If too informally presented then you risk miscommunicating the topic and possibly perpetuating misunderstanding (how much context is too much context? At what point does it confuse rather than provide clarity?), and this balancing problem is true for just about every topic. How does your model present complicated ideas? How does it balance context?
  • Why should people trust your AI news model?
  • One way for the reader to minimize bias is by reading multiple articles or sources on the same topic, preferably ones with a strong history of factual reporting, and comparing common elements between them. To help facilitate this there exist sites like AllSides, which present several articles on the same topic from a variety of biased and least-biased news agencies, or Media Bias/Fact Check, which maintains a list of news sites with a strong history of highly factual, least-biased reporting. Given that you intend to build your model as 'the single most reliable source of news', how do you plan to guarantee that reliability?
  • How do you plan to financially support your model?
  • Given that clickbait, infotainment, rage, and fear are easier to sell, how can people trust you won't tweak your model for profitability?

Having taken a peek at your FutureNewsAI, it seems it's still a fair way from your stated goal. I would hazard it's more for entertainment than anything serious yet.

But I wish you best of luck with the endeavor.

2

triton100 t1_j1rqmlk wrote

How does it decipher fake over real news?

1

redditguyjustinp OP t1_j1rsb1b wrote

It's a work in progress, but IMO the biggest issue in most news stories is that they selectively leave out facts and context to give a narrow view of the story. I'm counteracting that by synthesizing many different sources into a single story. Very little news is straight up "fake"; it's usually a bit more subtle than that.

3

triton100 t1_j1rv1g3 wrote

It's an intriguing proposition and I commend you for trying it. However, I am wary there may be a danger that software like this could inadvertently spread misinformation even more if it uses sources derived from places like Twitter. We all know false stories spread like wildfire there. People dying who haven't really, etc.

3

AndromedaAnimated t1_j1rytas wrote

The example I remember best was the meme doge being declared dead when it was alive.

1

reconditedreams t1_j1somom wrote

I would tread very carefully about how you think of "bias" and objectivity.

If you synthesize points of view from a variety of different news sources (left-leaning, mainstream, right-leaning) and leave out stories which are only being reported by one or two sources, that could still lead to a strong centrist/establishment bias. Being biased towards the middle is still a bias.

For example, after 9/11 the majority of news sources in the US were (knowingly or not) pushing US government propaganda about WMDs. A news AI trained to provide a synthesis of several different sources might've done the exact same thing, if that's what most of the data points included in the synthesis were doing.

It's arguably better for a news source to declare biases openly than it is for a news source to pretend to be "bias free".

Rather than focus on being neutral, I think it's better to focus on factual accuracy and complex, in-depth analysis. The problem with CNN and Fox News isn't so much that they're biased as that they're often very sloppy with fact-checking and push very oversimplified clickbait narratives.

There are "biased" sources which still consistently produce complex, factually accurate coverage and analysis of events, like Jacobin on the left or the Economist on the right.

1

bob73925 t1_j1t3oqn wrote

Unfortunately, the vast majority of people want to hear politically biased, emotional "news". Good luck!

1

glaster t1_j1t9xyg wrote

Cool project. What AI system are you using?

1

TenshiS t1_j1ti1sq wrote

What are the original sources of the news you analyse?

1

ArtRamonPaintings t1_j1ueyj2 wrote

Can you get rid of the avatars that have race and gender and instead use some neutral avatar, if any at all? I really hate seeing robots and avatars forced to wear virtual human meat suits that bring all the stupid human baggage we have yet to solve on our own.

1

redditguyjustinp OP t1_j1vbd75 wrote

yeah, this is actually a somewhat reasonable request. I haven't quite figured out how I'm going to handle the avatars long term. The one argument for using human forms is that our brains are designed to interpret facial cues from a human face better than from other faces, so some information will be lost if I switch to a non-humanoid.

2

ArtRamonPaintings t1_j1ve3eb wrote

I've studied sign language and I'm quite familiar with facial cues, but I don't see it as necessary when interfacing with non-human intelligence. I find it silly and distracting. I want information unencumbered by faux personality. Perhaps we can ask ChatGPT for suggestions.

1

acvilleimport t1_j1umvmr wrote

This is awesome and I wish you luck!

1

redditguyjustinp OP t1_j1vb2io wrote

thanks so much for the encouragement! I will definitely report back with updates later on

1

TheDavidMichaels t1_j1s69ox wrote

the issue is the AIs are made by liberal commie college students and are all highly biased and compulsive liars, just like the people who make them

−2