
[deleted] t1_j028rzq wrote

[deleted]


jakderrida t1_j02apso wrote

Well, for one, flipping the script already happens. When I was an electrician, a manager overheard me claim that a device measures resistance in a circuit. He proclaimed that it measures the continuity of the charge going through it. I repeatedly told him that those amount to the same measurement, with no success.

If the model outputs the probability that a paper has many citations, the complement of that probability (1 − p) is the probability that it has few citations.
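To make that concrete, since it's a binary outcome, one probability gives you the other for free (a trivial sketch; the number is made up):

```python
# If the classifier outputs P(many citations), the probability of
# few citations is just the complement; no second model is needed.
p_many = 0.83          # hypothetical model output for one paper
p_few = 1.0 - p_many
print(f"P(many) = {p_many:.2f}, P(few) = {p_few:.2f}")
```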

Now, if what you're looking for is something like short stories, the hurdle to cross is finding pretagged data that you'd consider a reliable measure of "interesting/engaging", which can then be converted into mutually exclusive dummy variables for the NLP model to train on. The reason I mentioned published research and citations is only that the corpus is massive, well-defined, and it's feasible to collect the metrics along with the associated texts.
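The dummy-variable step itself is mechanical once you have the tags. Something like this sketch (the tags and the pandas usage are just illustrative, not a real dataset):

```python
import pandas as pd

# Hypothetical pretagged corpus: each text carries one mutually exclusive tag.
df = pd.DataFrame({
    "text": ["Once upon a midnight dreary...", "The committee reviewed the filing..."],
    "tag":  ["engaging", "dull"],
})

# One-hot encode the tags into mutually exclusive dummy variables.
df = pd.concat([df, pd.get_dummies(df["tag"])], axis=1)
print(df[["tag", "engaging", "dull"]])
```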

Just so you don't waste your time dreaming of building the database without outside sources, understand that deep learning/neural network approaches tend to produce terrible results unless the training data is pretty massive. Even the 50,000 tagged articles I used from Seeking Alpha would be considered somewhat frivolous by most in the ML community. Not because they're jerks or anything, but because that's just how NNs work.


[deleted] t1_j02b3bj wrote

[deleted]


jakderrida t1_j02bzq2 wrote

>It must be a pretty hard problem.

Not particularly. The only real hurdle is the database. I collected all the Seeking Alpha articles and tags quite easily, then organized the data and built the model on Colab, with astonishing success.
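A stripped-down version of that kind of pipeline looks like this (not my exact code; TF-IDF plus logistic regression is a stand-in baseline, and the texts and labels are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder corpus: article texts paired with binary tags.
texts = ["first article body ...", "second article body ...",
         "third article body ...", "fourth article body ..."]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0)

# TF-IDF features into a linear classifier: the usual baseline
# to try before reaching for a neural network.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```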

An alternative would be to take literature from great writers (James Joyce, Emily Brontë, etc.), split it into paragraphs, remove the paragraphs that are too short, and tag the rest as 1. Then take awful writing (Twilight, Ann Coulter, Mein Kampf, etc.), do the same with those paragraphs tagged as 0, and train the model to separate the two, as in the sketch below.
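A rough sketch of that labeling step (file names and the length cutoff are placeholders; I haven't run this exact code):

```python
# Rough sketch of the 1/0 paragraph-labeling step described above.
# File paths and the minimum paragraph length are placeholders.

def labeled_paragraphs(path, label, min_words=30):
    """Split a plain-text file into paragraphs, drop short ones, attach a label."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for para in text.split("\n\n"):
        para = " ".join(para.split())        # normalize whitespace
        if len(para.split()) >= min_words:   # remove paragraphs that are too small
            yield para, label

dataset = []
for path in ["ulysses.txt", "wuthering_heights.txt"]:   # great writing -> 1
    dataset.extend(labeled_paragraphs(path, label=1))
for path in ["twilight.txt", "mein_kampf.txt"]:         # awful writing -> 0
    dataset.extend(labeled_paragraphs(path, label=0))

texts = [t for t, _ in dataset]
labels = [l for _, l in dataset]
```

From there, the texts and labels feed into the same kind of classifier as in the earlier sketch.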
