I am broadly interested in theoretical computer science and neuroscience.
Recently I've been thinking more about gradual disempowerment risks due to AI and potential mitigation strategies.
Hi, we've been working on a single-player TTX (tabletop exercise) for the past couple of months, do check it out - https://www.lesswrong.com/posts/epn73xEkeu5T4sZa5/rehearsing-the-future-tabletop-exercises-for-risks-and
Thanks! This was our first big event (>10 participants), so it was kind of a trial by fire. Glad that we could pull it off (obviously with the help of the community). Lots of lessons to digest and incorporate for the next iteration.
Arguments made in https://epoch.ai/gradient-updates/what-will-the-imo-tell-us-about-ai-math-capabilities were so prescient!
If the 2025 IMO happens to contain 0 or 1 hard combinatorics problems, it’s entirely possible that AlphaProof will get a gold medal just by grinding out 5 or 6 of the problems—especially if there happens to be a tilt toward hard geometry problems. This would grab headlines, but wouldn’t be much of an update over current capabilities. Still, it seems pretty likely: I give a 70% chance to AlphaProof winning a gold medal overall, with almost all of that chance coming from just such a scenario.
But in fact, the model that got gold turned out to be a more general reasoning model than AlphaProof.
We're literally using the economic proceeds from attention extraction to build artificial attention mechanisms that might make human attention obsolete.
Hi Sanjay, yes, we are planning to organise future meetups in Bangalore. Do fill out the form so that we can keep you updated.
Since the drones are centrally produced, they could easily implement digital watermarks for provenance.
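For illustration, here's a minimal sketch of what such a watermark could look like, assuming a hypothetical scheme where the manufacturer stamps each drone with an HMAC of its serial number under a secret production key (the serial format, key name, and key handling below are all made up for the example):

```python
import hmac
import hashlib

# Hypothetical manufacturer-side watermarking: the production line holds a
# secret key and stamps each drone with a tag derived from its serial number.
PRODUCTION_KEY = b"example-secret-key"  # placeholder; real key management is out of scope

def make_watermark(serial: str) -> str:
    """Derive a provenance tag to embed in the drone's firmware or ID broadcast."""
    return hmac.new(PRODUCTION_KEY, serial.encode(), hashlib.sha256).hexdigest()

def verify_watermark(serial: str, tag: str) -> bool:
    """Check a recovered (serial, tag) pair against the production key."""
    expected = make_watermark(serial)
    return hmac.compare_digest(expected, tag)

# Example: stamp at production time, verify later from a recovered drone.
serial = "DRN-2025-000123"  # made-up serial format
tag = make_watermark(serial)
assert verify_watermark(serial, tag)
```

An asymmetric scheme (e.g. Ed25519 signatures) would let third parties verify provenance without sharing the secret key; the HMAC version above just keeps the sketch dependency-free.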
Probably do a screen recording, to scale later with AI.
Super interested in this! I would even be up for a Discord or Slack.