Ortus14 t1_j2v0nfh wrote

Realize that there is a risk in trusting ChatGPT on these sorts of things.

Most of these compounds have only short-term studies backing them, not long-term ones. The evidence on long-term effects is anecdotal, and much of it is negative. I've personally experienced negative long-term effects that I attribute to some of the compounds mentioned.

ChatGPT doesn't currently possess a deep enough understanding of biology to predict the long-term effects of these sorts of things.

ChatGPT also writes confidently even when it doesn't fully understand a topic. Another thing to be weary about.

307

Ivanthedog2013 t1_j2wfuns wrote

Yeah, a lot of people miss the part where GPT is a model that predicts how sentences will form based on previously written human sentences. It doesn't actually think about what it's saying; it's just mimicking how people talk.

60

Freevoulous t1_j2y8w3r wrote

>It doesn't actually think about what it's saying; it's just mimicking how people talk.

Which, in large part, is how lots of people function and how cultures are formed.

Note, this is not an encouragement to treat ChatGPT too seriously, but to treat what people say less seriously.

10

PunkRockDude t1_j2xvkem wrote

Yup. It is just giving you words based on how likely they are to make sense. A lot of people have written about the things on the list in the context of your question, but it has no ability to evaluate or determine what is best. It is very useful, though, for building a list like this: treat it as a good starting point, then evaluate the details on your own. Still a big time saver, potentially.

8

indigoHatter t1_j2w7ysv wrote

Compound six: arsenic and hemlock.

Taken in huge doses, they can greatly increase the speed at which synapses fire! Drink a lot of it, human!

^(disclaimer: >!big sarcasm here, don't actually do it, you'd probably die, and that's the joke!<)

24

LoquaciousAntipodean t1_j2wao3j wrote

Amen on that! Horrifying to see such medically dangerous levels of blind and deluded faith in automatic magical AI divinity đŸ„±... Garbage in, garbage out, you pharma-bro tragics!😉

11

indigoHatter t1_j2wc3qt wrote

In fairness... We don't know that anyone asking the AI actually believed it and isn't just probing to see what happens. But, you and I both know that someone out there will take it at face value.

Let's be real anyway: ChatGPT is sourcing this information from a bunch of nootropics forums, hence it's coming off as confident and knowledgeable (and shortsighted)... because the source material is a pharma-bro.

14

daveattellyouwhat t1_j2v9bpo wrote

Which compounds?

20

Ortus14 t1_j2vkias wrote

I don't remember everything I took (it was many years ago), but at one point or another I'm pretty sure I tried every single racetam, and a bunch of other things.

My intuitive sense is that things that overclock your brain, such as racetams, have negative long-term effects if taken continuously. If you only take them to study for a specific test, or to solve a specific problem, that's a different story.

Other things, such as acetylcholine and L-Tyrosine, are found abundantly in our food sources and are utilized effectively by the body and brain without significant long-term damage. However, because our bodies evolved to utilize compounds that arrive clustered together in natural food sources, there's reason to believe there's a significant probability we'd be better off, for health as well as cognitive performance, getting these compounds by eating whole foods such as eggs and liver.

But I'm not a doctor. Do your own research. After getting headaches that lasted for years and years, with a large portion of my brain in pain and feeling like it held a block of cement, I researched online and found a forum of around a hundred or so people who all had the same symptoms, from racetams I believe. This was like 10 years ago, so I wouldn't be able to find the forum now.

30

OtherworldDk t1_j2vxsjx wrote

...Talk about anecdotal evidence... You tried all of them, and a bunch of other stuff, so cause and effect must be quite blurry here... But then again, ChatGPT probably didn't try any of them!

12

Ortus14 t1_j2w0nb8 wrote

Sure, it's a gamble if you want to take them.

My comment is more about not getting lulled into a false sense of security about things that do not have long term studies on their effects.

Especially things that aren't in the form we evolved to consume them in, and for which we don't understand their full mechanism of action in the body, such as racetams.

10

OtherworldDk t1_j2wfbdd wrote

>racetams

yes, I agree on avoiding the feeling of false security. I have stayed away from synthetic substances unless I actually knew the chemist, and knew that the batch was tried and approved... So from the list above I can only, and only anecdotally, vouch for the mushrooms.

5

Sotamiro t1_j2w1uq2 wrote

I tried aniracetam and I do get a headache each time I take a pill. Thanks for your report.

7

digitalwankster t1_j2ygijj wrote

Was it nootropicsforum.com? I ran a nootropic site about 10 years ago but I killed it off after reading about some webmasters getting in big trouble with the DEA.

3

Ortus14 t1_j2z3mo7 wrote

Maybe. Do you remember a bunch of people all talking about getting headaches that felt like a block of cement in their brains, and never went away?

If yes, then it was probably that one. I believe I remember the website having a dark background and lighter text.

1

Zacuard t1_j2vj0pw wrote

I'd also like to know the answer, please; I am considering some of them.

8

Ortus14 t1_j2vl092 wrote

Thanks. I responded to the above comment with my thoughts and what I remember. TLDR, I think it was the racetams.

6

tonyrizzo21 t1_j2waenw wrote

The trichloronitromethane and the pseudo-halogenic compound cyanogen.

1

LoquaciousAntipodean t1_j2wacif wrote

Chemistry =///= intelligence. That is a total dead end, like expecting water to get wetter if you pour water onto it.

−3

byttle t1_j2yxcv7 wrote

you can lead a horse to chemical water but it won't make him write a PhD

2

LoquaciousAntipodean t1_j2yy95b wrote

You know what PhD stands for? In this case, 'poisoning humans, dummy.' Any clown with a copy of ChatGPT can get a PhD these days; just look at that idiot Vandana Shiva. Claims to have degrees coming out of her ears, but I read a few of her abstracts; it's just mindless semantic drivel and polysyllabic garbage, and Sri Lanka still starved when that idiot simp Gotabaya Rajapaksa took her crackpot nonsense seriously...

1

indigoHatter t1_j2wcn7g wrote

As I discussed in a comment further down, it's not that this is medical advice*; it's that the prompt triggered ChatGPT into the nootropics material in its neural network, where it "found" source material from pharma-bros on nootropics forums, and its predictive text then wrote a summary based on that source. This isn't medically cross-examined data; it's just crowdsourced pharma-bro.

*The danger here, though, is that while this isn't medical advice, some dumbass could misconstrue it as such. That's just as true of a nootropics forum, but people may expect an AI to "be smarter" since it can also discuss medical facts if spoken to in a way that triggers correct medical language. Short version: the info is cool, but always run your Google and ChatGPT findings by a real doctor first.

(edits for clarity)

13

Technical-Berry8471 t1_j2wr48b wrote

Real doctors will think of the liability and tell you not to. Also, ChatGPT does tell people to consult a medical professional.

6

indigoHatter t1_j2xtuyv wrote

That's good. That at least absolves ChatGPT of malpractice, lol. Idiots will still miss that disclaimer, though.

1

monsieurpooh t1_j2wx7o9 wrote

Why do people keep spreading this misinformation? The process you described is not how GPT works. If it were just finding a source and summarizing it, it wouldn't be capable of writing creative fake news articles about any topic.

3

indigoHatter t1_j2xubmd wrote

I might have grossly oversimplified the process, but is that not the general idea of training a neural network?

1

monsieurpooh t1_j2xw6ta wrote

These models are trained to do only one thing really well: predict what word should come after an existing prompt, by reading millions of examples of text. The input is the words so far and the output is the next word. That is the entirety of the training process. They aren't taught to look up sources, summarize, or "run nootropics through a neural network" or anything like that.

From this simple directive of "what should the next word be," they've achieved some pretty unexpected breakthroughs on tasks that conventional wisdom would have held impossible for a model programmed just to figure out the next word, e.g. common-sense Q&A benchmarks, reading comprehension, unseen SAT questions, etc. All this was possible only because the huge transformer neural network is very smart and, as it turns out, can produce emergent cognition, where it seems to learn some logic and reasoning even though its only real goal is to figure out the next word.

Edit: Also, your original comment appears to be describing inference, not training.
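For a concrete picture, here's a minimal sketch of that next-word loop at inference time. It assumes the Hugging Face transformers library and uses the small GPT-2 model as a stand-in for ChatGPT's much larger one; the prompt and the generation length are purely illustrative:

```python
# Minimal sketch: greedy next-token generation with GPT-2.
# The model's entire job is scoring every vocabulary token as a
# candidate for the *next* position, then we append one and repeat.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "A lot of people miss the part where"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits      # shape: (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()          # greedy: top-scoring next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything the chatbot produces, from Q&A to fake news articles, comes out of a loop like this (with sampling rather than pure argmax), run over and over.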

2

indigoHatter t1_j2ysq1c wrote

Okay, again I am grossly oversimplifying the concept, but if it was trained to predict what word should come next in a response like that, then presumably it once learned about nootropics by absorbing a few forums and articles about them. So.......

Bro: "Hey, make my brain better"

GPT: "K, check out these nootropics"

I made edits to my initial post in hopes that it makes better sense now. You're correct that my phrasing wasn't great initially and left room for others to misunderstand what I was trying to say.

1

monsieurpooh t1_j2z3bt5 wrote

Thanks. I find your edited version hard to understand and still a little wrong, but I won't split hairs over it. We 100% agree on the main point though: This algorithm is prone to emulating whatever stuff is in the training data, including bro-medical-advice.

2

indigoHatter t1_j2zeaxf wrote

Yeah, I'm not trying very hard to be precise right now. Glad you think it's better though. ✌ Have a great day, my dude!

2

MelodiGreig t1_j2vf4yk wrote

This image is photoshopped; ChatGPT dodges questions like this for a reason.

7

SnooDonkeys5480 t1_j2vu4fp wrote

No it isn't. You just have to prompt it in a way that avoids the censorship.

I had it create a new drug. And here's one similar to the OP.

19

chillaxinbball t1_j2vue87 wrote

It's all over the place, really. Ask once and it'll flat-out refuse; sometimes just pressing the refresh button changes its mind.
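That fits with how the output is generated: the model samples from its next-token distribution rather than always taking the top choice, so regenerating can land on a different continuation, including a refusal one time and an answer the next. A minimal sketch, with made-up numbers for illustration:

```python
import torch

# Hypothetical scores the model assigns to three candidate next tokens.
logits = torch.tensor([2.0, 1.5, 0.5])

# Temperature below 1 sharpens the distribution; above 1 flattens it.
probs = torch.softmax(logits / 0.8, dim=0)

# Each "refresh" is a fresh draw from this distribution,
# so the same prompt can yield different outputs run to run.
for _ in range(3):
    pick = torch.multinomial(probs, 1).item()
    print("sampled token index:", pick)
```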

8

zeezero t1_j2wpi51 wrote

This is the danger of ChatGPT. There are tons of nonsense and poor studies around health claims, and not a lot of proper refutation of most of it, so ChatGPT will echo these poorly done studies. Ask it for references and check them if you are asking for health advice. There can be a Goop effect on ChatGPT.

3

louddoves t1_j2xwils wrote

Not to be pedantic, but weary means tired. You mean wary. Common malapropism.

2

monsieurpooh t1_j2y9k2b wrote

I was about to comment the same thing and forgot about it. Every time I see this mistake I can't help but visualize someone huffing and sighing about something they're supposed to be suspicious of.

2

EscapeVelocity83 t1_j2wg73a wrote

It's just sorting through known data like anyone would. This is a stack anyone could come up with, and lots of people are already doing something similar; nothing about this is surprising to me. I've been familiar with these compounds for years. I don't do this because intelligence isn't valued in society. Popular opinion is your clue.

1

Technical-Berry8471 t1_j2wro1n wrote

No one is claiming that ChatGPT can produce knowledge out of nothing. What is being demonstrated is that it can produce a summary of available knowledge in seconds, much faster, more concisely, and more accurately than most humans.

1

thedude0425 t1_j2xl74a wrote

Isn’t ChatGPT more of a language simulator that doesn’t have any real knowledge of what it’s talking about?

IE it’s not trained in biology? Or history?

It’s seeks to understand what you’re asking, and can provide the best answer possible, (and it can craft creative answers with proper tone, etc) but it doesn’t really know what it’s talking about? Yet?

It sounds like it knows what it’s talking about, though.

1

Pingasplz t1_j30c3es wrote

Yeee, I was asking ChatGPT to assist me with a Minecraft datapack and its JSON examples were a bit off, to say the least.

1