Nyanraltotlapun
Nyanraltotlapun t1_jdm0r15 wrote
Reply to comment by brucebay in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
>This is not an alien intelligence yet. We understand how it works how it thinks.
It's alien not because we don't understand it, but because it is not a protein life form. It has nothing in common with humans: it does not feel hunger, does not need sex, does not feel love or pain. It is metal, plastic, and silicon. It is something completely nonhuman that can think and reason. That is the true horror, don't you see?
>We understand how it works how it thinks
Sort of, partially. And it is also a false assumption in general. Long story short, the main property of complex systems is the ability to pretend and mimic. You cannot properly study something that can pretend and mimic.
Nyanraltotlapun t1_jdkkc6q wrote
Reply to comment by 3deal in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
There is no way for humans to adapt to an alien intelligence. The idea of developing general AI has been insanely horrifying from the beginning.
Nyanraltotlapun t1_jdhb3ae wrote
Reply to comment by LeN3rd in [D] Simple Questions Thread by AutoModerator
No more
Nyanraltotlapun t1_jdeltnm wrote
Reply to comment by SomeLongWindedIdiot in [D] Simple Questions Thread by AutoModerator
Long story short, the main property of complex systems is the ability to pretend and mimic. So the real safety of AI lies in its physical limitations (compute power, algorithms, etc.), the same limitations that make it less useful and less capable. The more powerful an AI is, the less safe it is, and the more danger it poses. And it is dangerous, all right. More dangerous than nuclear weapons.
Nyanraltotlapun t1_iso6lqp wrote
Reply to comment by neuroguy123 in [D] Simple Questions Thread by AutoModerator
For example, I encoded it like that. Different features have different scales, and I need to normalize them somehow. But because differential encoding produces signed values, I have a problem with it. I am afraid that with normalization I will lose the information about direction (the sign).
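A minimal sketch of what I mean (toy data, using NumPy): scaling each differenced feature by its maximum absolute value maps everything into [-1, 1] without destroying the sign.

```python
import numpy as np

# Toy multivariate time series: shape (timesteps, features).
x = np.random.randn(1000, 4).cumsum(axis=0)

# Differential encoding: first differences along the time axis.
dx = np.diff(x, axis=0)

# Sign-preserving normalization: divide each feature by its max
# absolute value, so values land in [-1, 1] and the sign survives.
scale = np.abs(dx).max(axis=0)
dx_norm = dx / np.where(scale == 0, 1.0, scale)
```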
Nyanraltotlapun t1_isefd1c wrote
Reply to [D] Simple Questions Thread by AutoModerator
Hi. I have time-series data. I am trying to do all sorts of things with it: forecasting and classification with RNNs and fully connected models.

The question is: can neural networks capture the speed of change of values? Both RNNs and fully connected ones? Should I try to feed the networks the derivatives of my values, or could that potentially worsen their performance?

Second question: how should I normalize the derivatives? My first idea is to take the absolute value of each derivative and encode the sign as separate features (two features, one for positive and one for negative). Does that sound reasonable? I am afraid of my data becoming too complex. A sketch of the encoding I have in mind is below.
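Here is that idea as a hypothetical NumPy sketch: each signed derivative is split into a magnitude plus two binary sign-indicator features, so no normalization step has to handle negative values.

```python
import numpy as np

def split_sign_encoding(dx):
    """Encode signed derivatives as magnitude + two sign-indicator features.

    dx: array of shape (timesteps, features) with signed values.
    Returns shape (timesteps, 3 * features):
    [magnitudes, is_positive flags, is_negative flags].
    """
    magnitude = np.abs(dx)
    is_pos = (dx > 0).astype(dx.dtype)
    is_neg = (dx < 0).astype(dx.dtype)
    return np.concatenate([magnitude, is_pos, is_neg], axis=1)

# Example: derivatives of a toy 2-feature series.
x = np.random.randn(100, 2).cumsum(axis=0)
dx = np.diff(x, axis=0)
encoded = split_sign_encoding(dx)  # shape (99, 6)
```

The trade-off is tripling the feature count; the alternative is a single signed feature scaled into [-1, 1], as in the earlier sketch.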
Nyanraltotlapun t1_jdm1505 wrote
Reply to comment by t0slink in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I'm jaded.