eng050599

eng050599 t1_iy65o8i wrote

For the same reason he required all the reporters who attended the press conference prior to the publication of the study to agree not to discuss any of the results with other scientists.

He wanted to create a splash, yet almost certainly knew that the study would be eviscerated once it was published.

...seriously, there's no way aside from utter incompetence that anyone could submit a paper that bad and not know what the fallout would be.

Unfortunately, it worked, and those lumpy rats get brought up over, and over, and over again.

The scientific community isn't overly affected by this, but that's also why we see so many of the anti-biotech types targeting the public directly, as opposed to trying to convince their peers of the validity of their work.

Just look at Seneff and Samsel.

They've done nothing but present hypotheses as fact for over a decade now, and are viewed as unhinged by even the Seralini crew, yet we still see Seneff speaking about her papers as if they were experimentally validated, when there's next to no chance her audience will be able to tell the difference.

...it's kinda sad when the evil industrial ag complex has better ethics when it comes to accurately representing research than supposedly independent scientists.

2

eng050599 t1_iy63n5j wrote

>Funny all these independent research groups from various facilities around the world are all thumbs (according to you)
>
>Have you ever proposed a project, had it approved by ethics department etc then gained funding? It's a pretty involved process usually involving a team of people. I'm terrible at statistics but we have specific experts that tell us in advance how how many mice/fish/frogs/flies are needed for each level of results. If you think all those various teams were unaware then really the onus is on you to prove this incompetence.
>
>The OECD guidelines you keep harping on about are for regulatory application approval and consideration for reviews. They dont usually apply to primary research. Have you ever actualy applied for a grant?

Doubling down on idiocy I see, and now pretending that you actually plan and conduct toxicity testing...how cute.

You keep on forgetting that there are different types of studies, and they all have differing abilities based on their design and statistical power.

The studies that you keep harping on about can only show correlation. For observational studies that's the norm, as only the largest of them are ever capable of concluding that a causal link exists.

Consider the landmark cancer study of Hammond and Horn. It recruited over 100,000 subjects, and even that wasn't enough to be certain that the link was causal. It was only after the follow-up study by Hammond and the American Cancer Society, which followed over 1,000,000 subjects, that the causal link was firmly established.

Such numbers are needed because of the variation within the study population: the greater the variance, the larger the required sample size. On top of that, epidemiological studies do not take place in controlled environments, so the number of confounding and lurking variables makes anything beyond correlative associations next to impossible.
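To put rough numbers on that last point, here's a minimal sketch (my own illustration, not figures from Hammond and Horn or any cited study) using the standard normal-approximation formula for a two-sample comparison; the effect size `delta` and the standard deviations are arbitrary placeholders, chosen only to show how required sample size scales with variance:

```python
# Illustration only: per-group sample size needed to detect a fixed mean
# difference as population variability grows (two-sample, two-sided test,
# normal approximation). Values of delta/sigma are arbitrary placeholders.
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a mean difference `delta`
    given a population standard deviation `sigma`."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # ~0.84 for 80% power
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Same effect size, increasingly variable (noisier) populations:
for sigma in (1, 2, 5, 10):
    print(f"sigma = {sigma:>2}: n per group ~ {n_per_group(delta=1, sigma=sigma):,.0f}")
# Required n scales with sigma squared, which is part of why uncontrolled,
# highly variable human populations push epidemiological studies into the
# hundreds of thousands before causal claims become tenable.
```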

The OECD studies do not have such a limitation, as they are designed specifically to have the power of analysis to determine if the effects of a given chemical are causal in nature.

Now, first off, I can see why so many of your posts cannot be seen: they've been removed by the moderators.

https://ibb.co/S7fqV9q

Note that it's not just your replies to me that are getting removed, so this might be an instance where you should take the hint, and realize that you're fundamentally wrong in your understanding of toxicology.

Case in point, in an earlier, and now deleted, post (I still have the link though: https://libguides.winona.edu/ebptoolkit/Levels-Evidence), you provided a link to the hierarchy of evidence...but you missed which types of studies it relates to, as well as where the OECD methods would fall within this hierarchy.

The page you selected relates to clinical studies of treatment protocols, not to assessing the toxicity of a given chemical.

While toxicity testing is conducted on all pharmaceutical candidates, it's not done in the clinical phase; it's all pre-clinical.

Quite literally, you're not even looking at the right place in the research timeline.

Also, and even more amusingly, we can extrapolate out this hierarchy to encapsulate toxicity assessments by looking at the design of things like the OECD methods.

More specifically, almost ALL of the OECD study designs are double-blinded randomized controlled trials, with the test and control populations all randomly selected.

Guess what that makes Greim et al. (2015)?

Top of the bloody heap, as it is a systematic meta-review of all the relevant DB-RCT studies on glyphosate.

Finally, the age of a study isn't relevant unless you can show that there's an issue with the data collected and/or the methods used. Simply pointing to more recent studies that lack comparable statistical power to the older ones isn't in any way, shape, or form capable of countering the previous studies.

This is why I keep pointing out that you have NOTHING that can counter the compliant studies: literally every single study you choose to cite is orders of magnitude weaker in terms of what it can differentiate.

Hell, even the authors of the study here don't try to claim that they can show causation, and even their correlative associations are underpowered.

You don't understand this topic, and you're unwilling to take the time to learn. Unfortunately, this means that your only real use in this discussion is as an object lesson in the dangers of the Dunning-Kruger effect and cognitive dissonance.

Edit: Oh, and as for your comment about the number of publications supporting you; again, it's so cute that you think that, but you are very wrong, as you have NO publications that can show causal effects. This is the whole reason why we continually see the regulatory and scientific agencies reject the banal fear-mongering from the anti-biotech side of things.

Your supporting data isn't even close to equivalent, let alone capable of superseding properly conducted chronic toxicity studies.

1

eng050599 t1_iy1fwc4 wrote

No, we assign the studies weight based on their design, not on their conclusions, not on their source, and not on whatever is floating around on social media at any given time.

It's good to know that you don't even have a clue how the regulatory agencies determine exposure limits: we bake a safety factor right into the derivation of the ADI and similar metrics.

We start with the experimentally derived aggregate NOAEL, then apply a safety factor to account for the use of an animal model, along with the variability present in the human population.
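As a minimal sketch of that derivation (the NOAEL value here is hypothetical, not the actual glyphosate figure; the 10x10 default uncertainty factors are the conventional starting point), it looks something like this:

```python
# Simplified, illustrative ADI-style calculation; the NOAEL below is a
# hypothetical placeholder, not the value from the glyphosate assessments.
def acceptable_daily_intake(noael_mg_per_kg, interspecies_factor=10, intraspecies_factor=10):
    """ADI = NOAEL / (safety factors).

    The conventional default is 10x for extrapolating from the animal model
    to humans and another 10x for variability within the human population,
    i.e. a combined 100-fold margin below the no-observed-adverse-effect level."""
    return noael_mg_per_kg / (interspecies_factor * intraspecies_factor)

# Hypothetical aggregate NOAEL of 100 mg/kg bw/day from chronic animal studies:
print(acceptable_daily_intake(100.0))  # -> 1.0 mg/kg bw/day
```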

I'd normally write that you should take a step back and actually learn some toxicology, but your ramblings are far more amusing to poke holes in.

1

eng050599 t1_iy1f7d2 wrote

You just don't get it do you?

It's not the use of animal models that's the issue. It's the overall experimental design of the studies you elect to cite.

It's entirely possible to make use of the same animal model in multiple studies; it's how that model is used, in terms of the overall power of analysis, that determines the strength of a study, and whether it can be used to show causal effects or is limited to correlative associations.

Quite simply, power of analysis reflects the ability of a given method to accurately differentiate between treatment effects and natural background noise at a specific threshold for significance.

The key elements that factor into this are the sample size and the variability within the population.
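Here's a rough sketch of how those two elements interact, using made-up numbers and a simple normal approximation rather than the full calculations specified in the guidance documents:

```python
# Rough illustration with made-up numbers: approximate power of a two-group
# comparison as a function of group size and population variability
# (two-sided test, normal approximation).
from scipy.stats import norm

def approx_power(n_per_group, delta, sigma, alpha=0.05):
    """Probability of detecting a true mean difference `delta` at a
    two-sided significance level `alpha`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    signal_to_noise = delta / (sigma * (2 / n_per_group) ** 0.5)
    return norm.cdf(signal_to_noise - z_alpha)

# Same hypothetical effect: a small, noisy study vs. a larger, tightly
# controlled one.
print(round(approx_power(n_per_group=10, delta=1.0, sigma=2.0), 2))  # ~0.20
print(round(approx_power(n_per_group=50, delta=1.0, sigma=1.0), 2))  # ~1.0
```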

The problem with the studies you cite is that they universally lack the strength to accomplish this for causal effects. There's simply too much noise in the background for them to accurately manage this.

This isn't the case for the OECD studies, as they were specifically developed to ensure that researchers would have the statistical power to test for causal effects, and they've been updated MANY times over the years to take into account new methods and overall knowledge relating to toxicology.

Just take a look at the review of Greim et al. (2015, DOI: 10.3109/10408444.2014.1003423).

It goes through a range of carcinogenicity and chronic toxicity studies, detailing not only the studies that were fully compliant, but also those that fall short.

Just in that review we see successful replication of the OECD methods, with comparable results obtained from different labs, different researchers, and different countries over a period of two decades.

Now look at the studies that you've bet the farm on.

None of them even comes close to the statistical power of a single compliant study, let alone is capable of rebutting the full collection of them.

1

eng050599 t1_iy11w2i wrote

There's nothing capable of showing causal effects.

All of that data supports the current toxicity metrics.

What we tend to see are underpowered studies with little replication (if any), non-standard techniques, unsubstantiated deviations from established protocols, and of course molecular fishing expeditions passed off as being able to accurately determine treatment effects.

Consider that the OP posted a study where the authors state that their results shouldn't be extrapolated to represent normal pregnancies.

Having the entire study population comprised of high risk pregnancies is a major issue.

1

eng050599 t1_iy0udvj wrote

...you do know that power of analysis isn't a subjective metric, right?

It's quite literally something that we calculate during the design stage of an experiment.

It's also why methods like the OECD designs include multiple guidance documents specifically to ensure that researchers will have data of sufficient strength to test for the causal effects for which the methods were designed.

There is a very real hierarchy in terms of statistical power, and methods like those from the OECD Guidelines, along with their regional equivalents, are only superseded by designs like DB-RCTs.

All but the largest prospective cohort studies rank below this, and in the case of glyphosate, it's actually hilarious that the AHS, a prospective cohort study, doesn't have the statistical power to counter the OECD-compliant studies, yet it does have the power to counter the other, lesser observational studies.

Guess what?

The AHS shows no significant link between glyphosate exposure at the current limits and harm.

Until data from studies of comparable power to the OECD methods materializes, there's no justification to change the toxicity metrics of glyphosate.

3

eng050599 t1_iy0svdv wrote

The thing to realize is that a depressing percentage of the general public aren't actually looking for information when they contact someone like me; they're looking for validation of their beliefs.

When that doesn't happen, it can spiral into a Machiavellian conspiracy in which all scientists who disagree with them are paid shills.

That's usually the point where I just shrug and move on.

This is also why I provide quite a bit of detail in my replies to threads like this.

In many cases, I know that nothing I write will change a zealot's mind, but my answers aren't for them. They're for someone who comes across this down the road and has an actual interest in learning.

For them, the information is there to do exactly that.

2

eng050599 t1_iy0ryhg wrote

No, you're missing a key component here, and not taking the complete dataset into account.

We have multiple OECD-compliant studies showing that adverse effects are not significantly associated with exposure below the current limits.

This study claims that this isn't the case, but it does not have the statistical power to counter the ones that have the ability to test for causal effects.

Additionally, even the authors of this study concede that their results shouldn't be applied to normal pregnancies, and that their overall PoA is insufficient to account for a range of confounding and lurking variables.

As for your link, go through the studies and check whether they can test for causation, or are just correlative assessments.

Your 15min at the U of Google doesn't quite equate to decades at the bench, and it's pretty obvious that you haven't actually taken a comprehensive look at the studies you elect to cite.

The number of studies doesn't really matter when it comes to topics like this, and the key metric is the design and strength of the studies involved.

Multiple weak studies do not trump studies with a far greater PoA.

Until you can wrap your mind around this fact, you will be doomed to see the scientific and regulatory communities reject your position.

Want to change things?

Commission a study with comparable statistical power to the OECD designs, and generate data that actually would be capable of countering the earlier studies conducted over the past 40 years.

Just make sure to adhere to the standards of this field in regards to experimental design and GLP in general.

2

eng050599 t1_iy0pmho wrote

And the virtual mountain of data derived from studies capable of showing causal effects, along with the overwhelming consensus among my peers, and the regulatory agencies.

I also have over a decade of primary molecular research, and evaluating the merit of such studies is literally part of my job.

You're the one who decided to comment on a topic you know next to nothing about on a subreddit dedicated to science, and you got called out on it.

Rather than take the information provided in the earlier replies, you doubled down, relying on studies that you didn't bother to place in context with their power of analysis, and yet again, you were called out on it.

That's willful ignorance on your part, and not something I have any desire to coddle.

The truth is that scientists are very easy to convince about something. You just need to show us the data. We examine the methods used, how the data was collected, how it was analyzed, and then how it supports the conclusions reached.

To date, the data from the strongest studies (as determined by the statistical power of their design) does not support your position, and that is why the overall consensus among us will not change until the data indicates otherwise.

2

eng050599 t1_iy0o09r wrote

No, that's not how it works.

At the present time, all of the data regarding causal effects from glyphosate exposure indicate that there is no increased risk of any harm at the current exposure limits.

None of the studies claiming to show harm has an equivalent power of analysis, and they are all weighted lower than the compliant studies.

The data gets worse for the anti-glyphosate types when we also consider that, among the observational studies, the one with the largest power of analysis, the AHS, doesn't even show a significant correlative association to harm.

This is the reason why the scientific and regulatory communities overwhelmingly reject claims of harm.

What you are advocating is for scientists to weight studies based on how they align with your ideology, not on the strength of their design.

The key point is that we have data for causal effects from glyphosate exposure.

We have it for chronic exposure

We have it for acute exposure

We have it for carcinogenicity

We have it for cytotoxicity

Even though they've had decades to perform studies showing that those studies are flawed, either methodologically or analytically, we see nothing that even comes close to the minimum standards in toxicology.

Back to the original study for this thread: its design was so weak that even the authors state that their results are not representative of normal pregnancies.

That's a far cry from what we can determine from the compliant studies.

3

eng050599 t1_ixy8r6o wrote

...even the IARC rejected the Seralini paper, and for the exact same reasons why it was retracted in the first place.

From the IARC Monograph:

The Working Group concluded that this study conducted on a glyphosate-based formulation was inadequate for evaluation because the number of animals per group was small, the histopathological description of tumours was poor, and incidences of tumours for individual animals were not provided.

As for Mesnage and Antoniou, the main issue with them is that:

a) They haven't conducted any OECD-compliant study, and instead make use of weaker, correlative studies.

b) They explicitly go against the recommendations relating to large scale 'omics analyses in toxicology.

One of the results of the whole Seralini lumpy rat study was the EU commissioning three different studies (GRACE, G-TwYST, and GMO90+) to determine if there was any validity to the conclusions of Seralini et al. (2012).

The GRACE project specifically examined the effectiveness of molecular fishing studies conducted on transcriptome and proteome-level analyses.

The results were not surprising: they found that the rate of Type I errors was too high for these analyses to be used directly to conclude even correlative effects. The reason is that such studies typically involve far more pairwise comparisons than even our best multiple-comparison corrections can handle...a problem that we in the research community deal with frequently.
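To give a sense of the scale of that problem, here's a small simulated sketch (arbitrary numbers, not data from GRACE or any of the studies discussed):

```python
# Illustration of the multiple-comparison problem at 'omics scale: even with
# NO true treatment effect, thousands of tests at p < 0.05 still return a
# pile of "hits". Numbers here are arbitrary, not taken from GRACE.
import numpy as np

rng = np.random.default_rng(0)
n_transcripts = 20_000                            # transcriptome-scale screen
p_values = rng.uniform(0.0, 1.0, n_transcripts)   # pure noise, no real effects

alpha = 0.05
uncorrected_hits = np.sum(p_values < alpha)                 # expect ~1,000 false positives
bonferroni_hits = np.sum(p_values < alpha / n_transcripts)  # expect ~0

print(uncorrected_hits, bonferroni_hits)
# Corrections (Bonferroni, FDR methods) rein the false positives back in, but
# at this scale the screen can realistically only flag candidates for
# targeted, properly powered follow-up work.
```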

Studies of this type should only be used to identify prospective targets for further examination using specific testable hypotheses.

To the surprise of none, neither Antoniou nor Mesnage has performed such follow-up work in relation to their transcriptome screenings.

Yet again, you seem to be a bit lacking when it comes to knowledge relating to these topics.

Fortunately, I do not suffer from such a handicap.

3

eng050599 t1_ixy7qcr wrote

If I had blocked you, I couldn't reply to you now could I?

I rarely block anyone, as that means that I can't keep tabs on what they might be posting.

https://ibb.co/HPkBrCn

That's all I see currently from your link. It might be a simple technical issue, but for now, all I can go on is the fact that I haven't blocked you, that I see notifications relating to your replies, and that I cannot reply to them because the comments are missing.

None of this changes the fact that you really have no clue about even basic experimental design, let alone the specifics relating to toxicology.

At least seeing another example of the Dunning-Kruger effect provides amusement enough for me.

5

eng050599 t1_ixy7a97 wrote

...look up what head filling is.

Glyphosate requires active transport for it to be systemic, and active transport to the wheat head stops after head filling is finished.

At this stage the plant is literally dying, but in a controlled manner. Nutrients aren't being transported to the head any longer, and as such, neither is the glyphosate.

Again, you're showing that you have no understanding of any of these topics.

>Doesn't really seem that way. Are you claiming all these studies bypassed peer review?

...seriously?

Peer review isn't the issue.

The issue is that there are literally no studies that can counter the compliant studies conducted to date.

Perhaps you're a bit more dense than even I thought, so I'll take this a bit slower.

Not all studies are created equal.

A basic component of experimental design is that a study needs to provide enough power of analysis to accurately distinguish treatment effects from background noise.

Power of analysis isn't some subjective metric. It's a literal calculation that takes elements of the study such as sample size, population variance, and cutoff for significance into account.

The OECD Guidelines for the Testing of Chemicals were developed to ensure that studies used to assess the toxicity of chemicals had sufficient power to test for direct causal effects. Additionally, the standardized methods make falsification of the data much more difficult, as replicating such results is almost impossible without the same manipulation conducted each time.

The studies that you are relying on do not have the statistical power to test for causation. This isn't an arbitrary thing, and it's directly based on the ability of a study to accurately tease apart results from noise.

To date, not a single study capable of showing causal relationships has indicated anything other than there being no increased risk at the current exposure limits.

It really is that simple.

The standards in toxicology apply across the board here, and it's quite telling that you would exempt statistically weaker studies from these requirements just because they align with your pre-existing beliefs.

Again, this is good evidence that you are not a scientist.

3

eng050599 t1_ixy5xdm wrote

I keep up to date on her idiocy simply because I've become the chair's go-to person when a "concerned citizen" contacts my department regarding almost anything related to the dimwitted duo.

Now, while I will admit that it was hilarious to see her get debunked by the likes of Antoniou and Mesnage in the case of her glyphosate-substitutes-for-glycine hypothesis, she's simply gone too far off the rails in my opinion.

As I wrote to fasthpst, the simple fact that she hasn't bothered to experimentally validate any of her molecular spitballing should be a good sign that her research is useless, but it persists.

2

eng050599 t1_ixy5ivd wrote

Three days for swathing, seven for harvest in wheat...and you don't know much about endosperm development in wheat, do you?

When the crop reaches the point where harvest begins, how much head filling is still happening?

Almost nothing, and the plant is already starting to senesce at this point. There are almost no additional nutrients being transported to the harvested tissues, and that's why pre-harvest application isn't an issue.

Want to know what evidence will change my mind?

The exact same evidence that my peers in the scientific community expect.

Empirical evidence from a study design that meets or exceeds the statistical power of the OECD compliant studies, or their regional equivalents.

You really don't seem to get that you have literally NOTHING like this.

That's actually one way that I know you're not a scientist, and most certainly are not up to speed on even the rudimentary aspects of toxicology.

As for the age of the studies...since there's been nothing to indicate that those ones are in error, you really don't have anything to justify excluding them.

I on the other hand can use the fact that the studies you laud lack the capability of testing for causal effects to assign them a lower weight in the Weight of Evidence Narrative section of the regulatory assessments.

2

eng050599 t1_ixy2l50 wrote

You seem to be removing your comments, but fortunately, they remain viewable. Let's start with this one:

https://ibb.co/h2qMRLF

You'll note that none of the studies indicating harm from glyphosate are compliant with the international standards, but even then, they can't test for causal effects.

Secondly, regarding the studies using the formulated herbicide mix, have you actually looked at any of them?

There are several reasons why the formulated mix isn't used for the standard toxicity metrics, but the most frequent ones are put on display quite nicely in your citation.

  1. Many of the studies using cell cultures aren't an appropriate model for real-world conditions. Quite simply, you would see the same results if you subbed out the herbicide for dish soap. The reason is that formulated herbicides contain surfactants to aid in penetrating the waxy cuticle present on most plant species. Surfactants of this type are soaps...and disrupting lipids, like those in the cytoplasmic membrane of mammalian cells, is exactly why we've been using them for millennia.
  2. Using the formulated mix isn't representative of what consumers will be exposed to, as there is a mandatory interval, normally 2-3 weeks, during which a farmer cannot harvest their crop after an application of pretty well any GBH. Using the formulated mix without accounting for the differential absorption of the active ingredient (glyphosate, which is systemically transported) and the adjuvants (local exposure only, with little to no systemic spread) doesn't reflect that exposure.
  3. At no point do the studies listed counter any of the compliant studies conducted, as they lack the statistical power to even come close. Add on the fact that many of the OECD-compliant studies have been successfully replicated.
  4. Attacking the source of a study, without evidence derived from experimental data of equal or greater power of analysis, doesn't work in science, and you'll note that I have provided multiple critiques of the methods and analyses used in the studies you've elected to cite.

One fun part about actually being a scientist is that it is extremely easy to determine when someone has no real understanding of a given topic, and is just parroting what they've seen online.

You definitely fall into this group.

3

eng050599 t1_ixxvlwo wrote

A comment so vapid, you decided to post it twice?

Well, I'll just paste in the previous reply:

And you missed the fact that I specifically stated that they have been updated during that time.

You also missed that the anti-biotech researchers haven't even attempted to make use of the built-in review protocols, because you need to back up allegations with data, and yet again, literally none of the studies capable of showing causation supports your allegations.

Oh, and FYI, the entire history of the study designs is openly available, with each modification recorded.

The only one here ignoring anything is you, given that you can't seem to understand just how vast the gap is between the power of the studies you're relying on and those that my peers and I assign the most weight to.

We actually put the studies into context with their power of analysis, not by how the results align with our existing beliefs.

Want to keep going?

5

eng050599 t1_ixxvfbq wrote

And you missed the fact that I specifically stated that they have been updated during that time.

You also missed that the anti-biotech researchers haven't even attempted to make use of the built-in review protocols, because you need to back up allegations with data, and yet again, literally none of the studies capable of showing causation supports your allegations.

Oh, and FYI, the entire history of the study designs is openly available, with each modification recorded.

The only one here ignoring anything is you, given that you can't seem to understand just how vast the gap is between the power of the studies you're relying on and those that my peers and I assign the most weight to.

We actually put the studies into context with their power of analysis, not by how the results align with our existing beliefs.

Want to keep going?

2

eng050599 t1_ixxq4mj wrote

So does Seneff, and she has yet to actually test any of her hypotheses experimentally.

Not joking about that in the slightest.

Since her first paper in Entropy, all of her publications have involved data-mining other studies, using the bits she likes to develop hypothetical mechanisms on how glyphosate is responsible for every ill mankind suffers from...and that's it.

She stops at the very first stage of the scientific method, developing a testable hypothesis.

There's a reason why she's considered to be unhinged even by the Seralini crew.

2

eng050599 t1_ixxped9 wrote

Actually, it's literally every study capable of showing causation that indicates there is no increased risk of harm at the current exposure limits.

The only studies that try to claim this can only test for correlative effects, and even then they are riddled with design issues: insufficient sample size, non-standard treatments, inappropriate animal models, deviations from standard histopathological assays without stated justification, and of course incorrect statistical methods and insufficient power of analysis.

Over and over again, these studies all follow the same pattern. A correlation between glyphosate exposure and harm is claimed...and that's it. There's no attempt to validate the results by designing a study, or using the currently existing baseline, the OECD Guidelines for the Testing of Chemicals, to test for causal effects.

Glyphosate has been through Tier I endocrine disruption screens in both the US and EU, and there is no indication that it has such effects.

Again, the only studies claiming this do not adhere to even the minimum standards in toxicology.

What's even worse for the anti-glyphosate narrative is that, even among the correlative observational studies, the strongest of these (in terms of statistical power), the Agricultural Health Study, a prospective cohort study (best you can get without moving to a DB-RCT), shows no increased risk from glyphosate exposure.

You need to remember that all studies are not equal, and the risk assessment for these chemicals involves weighting studies based on their power of analysis. Methods that have the power to test for causal effects are given more weight than those that can only conclude correlative ones, and one-off studies that don't adhere to the international standards get weighted far, far less.

As things stand, there are no OECD compliant studies indicating that there's an increased risk from glyphosate exposure until the exposure level is orders of magnitude above the current limits.

What you should really be asking is why the anti-biotech researchers seem to be incapable of following the same standards that all scientists, myself included, are expected to uphold.

The OECD methods have been the standard since 1981. Since that time they have been revised, added to, and removed when there is evidence to support this, and there's even a built in mechanism for scientists to instigate such a review.

Those same anti-biotech researchers haven't even tried to indicate that the current standards need revising.

Instead, they just continue to generate weak correlative studies that, unfortunately, individuals like yourself see on various blogs, but almost never in context with their statistical power.

5

eng050599 t1_ixwt46l wrote

You do realize that the overwhelming majority of the scientific community, and regulatory agencies agree that it is not a significant risk at the current regulatory limit, right?

Let me guess, you read about the IARC's classification, and the studies used to spread fear by the various anti-biotech blogs, but haven't actually looked into the full docket?

In fact, I'm going to bet that you can't tell me the difference between how the IARC assesses chemicals, and how literally every regulatory agency does.

4

eng050599 t1_ixw6itw wrote

It's been fascinating and depressing to see how easily charlatans like Seralini, and even worse, the dimwitted duo of Seneff and Samsel, can manipulate the general public, and non-scientists as a whole.

In the case of Seneff, pretty well everyone I've ever interacted with in relation to her papers doesn't realize that all she's been doing for over a decade is data-mining to dream up hypotheses...that she never bothers to test experimentally.

Quite literally, she stops at the first step in the scientific method.

There has been one hilarious thing involving team Seralini and Seneff: while Seneff hasn't bothered to test any of her hypotheses, a group of researchers, including both Antoniou and Mesnage, did test her glyphosate-can-substitute-for-glycine hypothesis (Antoniou et al., 2019, DOI: 10.1186/s13104-019-4534-3).

To the surprise of no one, the hypothesis was debunked.

2

eng050599 t1_ixs88xf wrote

This study has more significant issues than that, with the biggest one being that its entire population was heavily biased.

From the study:

"...our study participants were not selected other than prospectively attending a Maternal-Fetal Medicine Specialty Obstetrics Clinic for high-risk pregnancies."

They then go on to indicate that high risk pregnancies only account for 6-8% of the total in the US and that, as a result, "...our findings cannot be easily generalized to low-risk pregnancies."

Funny how so many of the anti-biotech groups seem to neglect bringing up this point...let alone the issues with the authors being unable to account for a wide range of variables.

6