djbuttplay t1_j7x0rsy wrote

They didn't mention the part where when they ended the experiment and the dolphin stopped getting jerked off he drowned himself at the bottom of his tank.


80taylor t1_j7y8wm8 wrote

Is this true?


washington_jefferson t1_j7yak9u wrote

Unfortunately. Here is a good Radiolab podcast:

I asked ChatGPT and it said:

> You might be referring to the story of Margaret Howe Lovatt and the experiments she conducted with a bottlenose dolphin named Peter in the 1960s.

> Margaret Howe Lovatt was a researcher who worked with Peter, a captive bottlenose dolphin, as part of a NASA-funded project to explore the possibility of communication between humans and dolphins. The experiments took place on a small island in the Virgin Islands, where Lovatt lived and worked with Peter for several months.

> During the experiments, Peter became sexually aggressive towards Margaret Howe Lovatt, and she claims to have engaged in sexual contact with the dolphin as a way to appease him and maintain a calm and productive living environment for both of them. However, this behavior is considered highly controversial and unethical by the scientific community, and Lovatt has faced criticism for her actions.

> In the end, the experiments were unsuccessful and the project was eventually abandoned. Peter was eventually transferred to another facility, where he died several years later. The story of Margaret Howe Lovatt and Peter the dolphin continues to be a topic of interest and debate in the scientific community, and serves as a cautionary tale about the dangers of crossing ethical boundaries in animal research.

So, basically an OG nottheonion story.


Kazahaki t1_j80cs8y wrote

Looking forward to seeing/hearing people say "I asked ChatGPT and it said:" a lot more in the future lol


washington_jefferson t1_j80y637 wrote

Yeah, it might get banned in some subs. The sub I frequent most involves a lot of international law and domestic policy. ChatGPT makes it wayyyyy easier to help people. Before, you had to pull bullet points together from references dug up through Google searches; now you can just tell ChatGPT to give its answers in bullet point form. The thing is, you have to ask it specific questions and tweak things, and you kind of already have to know the answer you're asking about. It just saves you time explaining and citing things. If you need actual facts with more certainty, you should use Google, or ask ChatGPT where it's getting its sources.