DIPPING THE STACKS

20 most recent links from my Raindrop bookmarks!
Grab the full RSS!

  • Risk-maxxing isn't just about taking big risks. It's about weaponizing uncertainty as a competitive tool. It involves conscious rejection of cautious, incrementalist strategies, deliberate institutional stress-testing to discover breaking points, and the creation of hyperaggressive positions that force others to adapt to your reality. And risk-maxxing's defiance of norms means cleaning up the mess becomes someone else's problem after you've moved on, or destroyed markets, systems or countries in the process.
  • most of the platforms we use today weren’t built for resilience, or reflection, or ecological intimacy. they were built for speed, scale, metrics, and for dopamine loops. and in all our solar forgetting, we lose something: rhythm. this is a call to remember that rhythm. something we can call solarsocial.
  • Ilya Sutskever is renowned for his vision when it comes to deep learning. A lot of his now popular quotes come from his 2023 appearance on the Dwarkesh Podcast. I was recently sent this clip of him discussing deep learning in 2015 and was taken aback by how correct he was so long ago, and particularly by how little has changed. With this, I wanted to share an annotated transcript with my thoughts.
  • Fans can sustain careers if they start from the right place and if the fandom infrastructure is strong enough, but they can’t add exposure. They can’t do the work of the still-important middlemen.
  • My switch from favoring permissive to favoring copyleft is motivated by two world events and one philosophical shift.
  • To my knowledge, this is the first case of a company developing a feature because ChatGPT is incorrectly telling people it exists. (Yay?) I’m sharing the story because I think it’s somewhat interesting. My feelings on this are conflicted. I’m happy to add a tool that helps people. But I feel like our hand was forced in a weird way. Should we really be developing features in response to misinformation?
  • Let me just start by saying that Claude Code (with the latest Sonnet 4 and Opus 4 models) is genuinely good at writing code. It's certainly not a top 1% programmer, but I would say that Claude's outputs are significantly better than those of the average developer.
  • Writing about what the Internet is doing to us is like writing about climate change. It’s not that it’s not important—in fact, it obviously is very important—but we’ve been circling the drain for thirty years.
  • “Yes, sludge is often intentional,” he said. “Of course. The goal is to put as much friction between you and whatever the expensive thing is. So the frontline person is given as limited information and authority as possible. And it’s punitive if they connect you to someone who could actually help.”
  • In March, the group released a paper called Measuring AI Ability to Complete Long Tasks, which reached a startling conclusion: According to a metric it devised, the capabilities of key LLMs are doubling every seven months.
  • But if we want to talk about 'here to stay', I think we need to be more specific about what we mean by it, and what aspect of it is significant to us. If you tell me that we need to incorporate AI into our university policy because it is 'here to stay', does that mean we should uncritically invite it in to every aspect of our education and operation? Does 'here to stay' mean that a new technology gets a free pass and full capitulation?
  • I also assume it comes from fan culture—especially as amplified by the internet—where nerds get together to overanalyze everything they love. Again, I grew up in nerdy circles. I get it. One of the most popular pastimes in SFF circles is going off about worldbuilding problems, whether amusingly pointing out that “meat’s back on the menu, boys!” implies orcs have a dining culture or spending time coming up with “fan theories” for this or that alleged worldbuilding issue. I cannot think of a single popular SFF franchise or series that hasn’t spawned long-winded and not-necessarily-wrong worldbuilding critiques.
  • Here’s an idea: operationally speaking, AI-generated images are disorienting because they disrupt the linear progression of refinement we’re accustomed to when it comes to traditional image-making production.
  • “But the goal is to get as many potentially disease-resistant trees growing as possible and people can do something directly about that,” Smith added. “When you talk to folks, they really do care about biodiversity loss, they just don’t think they can do anything about it. Here, they can.”
  • “There wasn't a training,” he said. “There wasn't a mandate. It was just me watching tutorials. I mean, I've watched tutorials that are like ‘how to make maps like Vox’ while I'm working at Vox to learn how to make maps, which is a very funny thing to happen.”
  • One industry expert said: “What [the Treasury] are completely focused on — I’d say obsessed with — is the cash side, because there’s nearly half a trillion in cash that could be deployed into the economy.”
  • Rock My Religion is a provocative thesis on the relation between religion and rock music in contemporary culture. Graham formulates a history that begins with the Shakers, an early religious community who practiced self-denial and ecstatic trance dances.
  • Hannah Cairo has solved the so-called Mizohata-Takeuchi conjecture, a problem in harmonic analysis closely linked to other central results in the field. This fall, she will begin her doctoral studies at the University of Maryland.
  • A look at the background of Rolf Gardiner as he got involved in the youth culture movement through the Scout Association as a child and in morris dancing at Cambridge, arguably influencing the Hitler Youth.
  • And that’s what real taste is: a deep internal coherence. A way of filtering the world through intuition that’s been sharpened by attention.