Site /
࿔
LINKS |
---|
Links are posted monthly and include any articles, papers, or essays that I find pertinent to my current research and writing |
I also subscribe to many newsletters, which I collect here: Newsletters |
2025
April
- How To Build an American DeepSeek, Jeffries 2025
- AI 2027, AI Futures 2025
- An Optimistic 2027 Timeline, Yitz 2025
- AI 2027: What Superintelligence Looks Like, Kokotajlo et al., 2025
- An Approach to Technical AGI Safety and Security, 2025
- Scott Alexander was right: doubling down, Teslo 2025
- Tiny Agents: an MCP-powered agent in 50 lines of code, Chaumond 2025
- Optimal Brain Damage, LeCun et al., 1989
- The Cacophony, Kriss 2025
- There's A Time For Everyone, ACX 2022
- Sensemaking in 2025: Trump Tariffs Edition, Hall 2025
March
- re-read: The Extended Internet Universe, Venkatesh Rao 2019
- Ribbonfarm is Retiring, Rao 2024
- R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning 2025
- Vocabulary Study with Mnemosyne
- Levels of Friction, Zvi 2025
- On Writing #1, Zvi 2025
- The Value of Open Source Software, Hoffman et al 2024
- On the Tolkienic Hero 2019
- Writing for LLMs So They Listen, Gwern 2024
- An open letter to graduate students and other procrastinators: it's time to write, Hazelett 2025
February
Much of my time this month was spent researching tools that support the development of this site and implementing visual and basic quality-of-life features to establish a solid foundation for the future.
- How to Install PmWiki on Debian 10 / Nginx / PHP-FPM
- Pmwiki - ImagePopup
- Introducing Perplexity Deep Research
- DeepSeek? Schmidhuber did it first (via @hardmaru)
- On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models
- One Big Net For Everything
- Learning complex, extended sequences using the principle of history compression
- 1990: Planning & Reinforcement Learning with Recurrent World Models and Artificial Curiosity
- 1991: First very deep learning with unsupervised pre-training
- DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- Why To Not Write A Book, Gwern 2024
- Design of This Website, Gwern 2023
- Tufte CSS, David Liepmann
- How To Make Superbabies, LessWrong Feb 2025
𖤓 |