Search

50 results for www.lesswrong.com:

throwawaydthrowawayd t1_j3vbipb wrote

www.youtube.com/watch?v=GsFWDFz5tE0#t=08m50s)
* **Jacob Cannell** (Vast.ai, lesswrong-author) ----> AGI: ~[2026-32](https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long)
* **Richard Sutton** (Deepmind Alberta) ----> AGI: ~[2027-32?](https://www.youtube.com/watch?v=PvJ14d0r3CM)
* **Jim Keller** (Tenstorrent) ----> AGI: ~[2027-32?](https://www.youtube.com/watch?v=0ll5c50MrPs#t=31m25s)
...
* **Nathan Helm-Burger** (AI alignment researcher; lesswrong-author) ----> AGI: ~[2027-37](https://www.lesswrong.com/posts/wgcFStYwacRB8y3Yp/timelines-are-relevant-to-alignment-research-timelines-2-of)
* **Geordie Rose** (D-Wave, Sanctuary AI) ----> AGI: ~[2028](https://www.youtube.com/watch?v=1JnTKkoPd1U#t=23m27s)
* **Cathie Wood** (ARKInvest ... Sustensis) ----> AGI: ~[2030](https://www.youtube.com/watch?v=2wQ_XLwF6k4#t=22m55s)
* **Ross Nordby** (AI researcher; Lesswrong-author) ----> AGI: ~[2030](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon)
* **Ilya Sutskever** (OpenAI) ----> AGI: ~[2030-35?](https://old.reddit.com/r/singularity/comments/kxgg1b/openais_chief_scientist_ilya_sutskever_comments/)
* **Hans Moravec** (Carnegie Mellon University

4

adt t1_j9eh3zp wrote

gonna love Gwern's comment then... [https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K](https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned?commentId=AAC8jKeDp6xqsZK2K) Original post is interesting for context: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned

10

_dekappatated OP t1_j5qsma3 wrote

follow a lot of AI researchers on twitter, occasionally check out [https://www.lesswrong.com](https://www.lesswrong.com), try to read some of the research papers, learn about LLMs, transformers, watch some youtube videos to get high level

17

Submitted by Singularian2501 t3_yrw80z in singularity

model into the future leads to short AI timelines: ~75% chance of AGI by 2032. Lesswrong: [https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long](https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long) Why I think strong general AI is coming soon Lesswrong: [https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon) We are VERY

95

GreenWeasel11 t1_iw3zyht wrote

said. One also sees things like ["Why I think strong general AI is coming soon"](https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon) popping up from time to time (specifically, "I think there is little time left before someone builds

1

sheerun t1_iwigcnv wrote

Maybe smartheads from [https://www.lesswrong.com/](https://www.lesswrong.com/) and corporate/academia AI/machine learning researchers. Not that worrying is not justified; it is very, very justified. Controlling GAI directly is not possible indefinitely; we need another GAI, so it is a recursive problem

−1

Singularian2501 OP t1_iwq1iph wrote

www.lesswrong.com/posts/mRwJce3npmzbKfxws/efficientzero-how-it-works A LessWrong article I found that explains how EfficientZero works. In my opinion, the author wants to say that systems like EfficientZero are more efficient in their data usage

3

iiioiia t1_j1j787j wrote

flawed this thinking is (replace GroupX with "The Jews", "The Blacks", etc and observe how [cognition](https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong) immediately changes, *if it does not terminate in response*), but they typically do not work

2

Baturinsky OP t1_j3hmxy6 wrote

ChatGPT may not be at the level of AGI yet (even though some think it is - [https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that](https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that)) But the progress of AI training does not show signs of slowing down, and there

0

icedrift t1_j535agx wrote

most up to date that I could find. EDIT: Found this from last year [https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022](https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022) Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really

10

blueSGL OP t1_j53btzc wrote

find this section of an interview with Ajeya Cotra (of [biological anchors for forecasting AI timelines](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) fame) Starts at 29:14 https://youtu.be/pJSFuFRc4eU?t=1754 Where she talks about how several benchmarks were passed

16

dancingnightly t1_j7s355b wrote

semantic text embeddings and LM models through this method (would operate differently to multimodal embeddings): [https://www.lesswrong.com/posts/mkbGjzxD8d8XqKHzA/the-singular-value-decompositions-of-transformer-weight](https://www.lesswrong.com/posts/mkbGjzxD8d8XqKHzA/the-singular-value-decompositions-of-transformer-weight) This method, which is only practical for toy problems really right now, would allow

2
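The result above mentions the SVD-of-transformer-weights method only in passing. Below is a minimal sketch of the idea, assuming GPT-2 loaded through Hugging Face `transformers` and an arbitrary choice of layer and weight matrix (both are assumptions for illustration, not details from the excerpt): take the SVD of one weight matrix and project its top singular directions into vocabulary space to see which tokens they correspond to.

```python
# Hedged sketch, not the linked post's code: SVD of a single transformer
# weight matrix, with top singular directions read off in vocabulary space.
# Model (gpt2) and the layer/matrix picked are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# MLP output projection of block 0; HF stores Conv1D weights as (in, out),
# so rows of Vh live in the 768-dim residual-stream space.
W = model.transformer.h[0].mlp.c_proj.weight.detach()   # shape (3072, 768)
U, S, Vh = torch.linalg.svd(W, full_matrices=False)

W_U = model.lm_head.weight.detach()                      # unembedding, (vocab, 768)

for i in range(3):                                       # a few top singular directions
    logits = W_U @ Vh[i]                                 # project direction onto the vocabulary
    top = torch.topk(logits, 10).indices
    print(f"singular value {S[i].item():.1f}:",
          [tokenizer.decode([int(t)]) for t in top])
```

If the directions are as interpretable as the post argues, each printed row clusters around a recognizable theme; larger models mostly make this slower, which is consistent with the "only practical for toy problems right now" caveat in the excerpt.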

leventov t1_j7ubimw wrote

thinking about cognitive science. [Theories of cognitive science and ML/DL form an "abstraction-grounding" stack](https://www.lesswrong.com/posts/opE6L8jBTTNAyaDbB/a-multi-disciplinary-view-on-ai-safety-research#3_4__Weaving_together_theories_of_cognition_and_cognitive_development__ML__deep_learning__and_interpretability_through_the_abstraction_grounding_stack): general theories of cognition (intelligence, agency) -> general theories of DNN working in runtime -> interpretability theories

1

gaudiocomplex t1_j9czm28 wrote

lesswrong.com. I recommend going there instead of listening to idiots here. Here's a [fun one](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) Favorite part: >"The concrete example I usually use here is nanotech, because there's been

−3

Baturinsky t1_j9pe2tg wrote

with Alignment issue could be a good early step on the way to ASI research. This [https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast](https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast) could be a good introduction, imho

1

gwern t1_j9qwz8z wrote

Reply to comment by Hodoss in And Yet It Understands by calbhollo

some of the unacceptable predictions happened to survive by fooling the imperfect censor model': https://www.lesswrong.com/posts/hGnqS8DKQnRe43Xdg/?commentId=7tLRQ8DJwe2fa5SuR#7tLRQ8DJwe2fa5SuR

6

blueSGL t1_j9rf2n4 wrote

find this section of an interview with Ajeya Cotra (of [biological anchors for forecasting AI timelines](https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines) fame) Starts at 29:14 https://youtu.be/pJSFuFRc4eU?t=1754 Where she talks about how several benchmarks were passed

17

VirtualHat t1_jaa4jwx wrote

increasing](https://www.taylorfrancis.com/chapters/edit/10.1201/9781351251389-4/ethics-artificial-intelligence-nick-bostrom-eliezer-yudkowsky) [number](https://www.amazon.com.au/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2) [of](https://www.zdnet.com/article/the-next-decade-in-ai-gary-marcus-four-steps-towards-robust-artificial-intelligence/) [academics](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop

3

danysdragons OP t1_jdd2hcv wrote

author, I'm just maintaining the original article title for the post title. Also published at [https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky](https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky)

13

Zermelane t1_iracd7y wrote

lets us get around the current problem of [paucity of training data for code models](https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications#Code).

2