Anthropic recently announced higher Claude usage limits for off-peak hours, defined as everything except weekdays from 8 AM to 2 PM ET.
I was surprised by the specific times here. This is nighttime in Asia, the tail end of the workday in Europe, and the start of the workday in North America. The beginning of the window is too early for much West Coast activity, and it winds down at 11 AM in California. That seems inconsistent with the idea that coding is driving peak usage?
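For concreteness, here's a quick conversion of that peak window into other timezones (a minimal Python sketch using the standard library's zoneinfo; the date is arbitrary, chosen as a Monday before the US daylight saving switch, so offsets near DST transitions will differ slightly):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Peak window: weekdays 8 AM - 2 PM ET. The date is arbitrary (a Monday
# before US daylight saving begins, so ET here means EST, UTC-5).
ET = ZoneInfo("America/New_York")
start = datetime(2025, 3, 3, 8, 0, tzinfo=ET)
end = datetime(2025, 3, 3, 14, 0, tzinfo=ET)

for tz in ("America/Los_Angeles", "Europe/London", "Europe/Berlin", "Asia/Tokyo"):
    zone = ZoneInfo(tz)
    print(f"{tz:20s} {start.astimezone(zone):%a %H:%M} - {end.astimezone(zone):%a %H:%M}")
```

That prints 05:00-11:00 in Los Angeles, 13:00-19:00 in London, 14:00-20:00 in Berlin, and 22:00-04:00 (overnight) in Tokyo.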
The timing seems most consistent with the period of overlapping demand from the US East Coast and Western Europe -- neither of which is an especially tech-heavy region. Maybe this timing implies that a lot of the power users are financial firms working on New York/London time?
A brief look at the Anthropic Economic Index suggests that the US accounts for about 22% of global absolute Claude usage, and CA for about 20% of US usage. But the US is #2 in per-capita usage (Israel is #1), at roughly 1-2x the rate of many Western European countries, and CA is #4 in per-capita usage (DC, NY, and MA being #1-3). I think the peak hours (which are determined by absolute usage) broadly make sense given this?
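To put a rough number on the absolute share (back-of-envelope only, multiplying the two approximate figures cited above; I haven't re-checked the Index):

```python
# Back-of-envelope: California's slice of global Claude usage,
# multiplying the approximate shares cited above (not re-checked).
us_share_of_global = 0.22  # US ~ 22% of global absolute usage
ca_share_of_us = 0.20      # CA ~ 20% of US usage

ca_share_of_global = us_share_of_global * ca_share_of_us
print(f"CA ~ {ca_share_of_global:.1%} of global usage")  # -> ~4.4%
```

So California alone is only ~4-5% of global absolute usage, which squares with the peak being set mostly by demand further east.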
Hypothesis I am not confident about: maybe Silicon Valley usage is a big slice of the pie, but it's stable because efficient power users keep agents running at night or work in shifts to manage them? And so the variation in demand is driven by non-power-users like employees at non-tech companies.
Could be true, but the API price doesn't vary by time of day, so power users have no particular incentive to do that, and I'd be kinda surprised if they did anything active to smooth their usage.
"The start of the work day in North America" is not usually the time when software developers are the most productive.
As a non-coder, I found AI pretty useless before Opus 4.6. It was definitely having a net negative effect on my productivity because of the time I'd waste trying to get it to do things that didn't work out or required massive corrections. It was much worse for my projects than an intern with an hour of training.
Now, all of a sudden, it actually works. And this was a step change from "no amount of scaffolding I do can get this to happen short of my manual intervention every time" to "I just describe what I want and it happens." I'm hearing the same from colleagues.
Up until now, it seemed like the only thing the models could actually accomplish beyond being a chatbot was writing code. At least for me, I had no idea what to make of that, or whether vibe coding ought really to be considered impressive. It was also very hard to tell whether that would translate to anything else in the medium term -- whether the ability to do stuff in the "clinical" world of coding, math, and test-taking would actually turn into the ability to do messier real-world stuff.
Obviously other people have a different task distribution, but I think Opus 4.6 is the inflection point for getting the word out to non-coders that AI can actually do stuff, and will be able to do even more stuff soon, and so on. For people not extremely on board with the "this is the worst it will ever be" school of thinking, interacting with previous models often left a "this is obviously unimpressive crap" impression, and it was really hard to tell whether coders saying otherwise were describing something real or just hype.
This also leads me to concur (as a former staffer) that "get people in DC to play with Claude Code for a while" is now a high-impact intervention, whereas I think that kind of thing was previously likely to backfire.
(Could be totally wrong; maybe we just happen to have hit my threshold now and not anyone else's. But hitting it has definitely been a worldview shift for me.)
I work in policy analysis, so a lot of what I want is messy data work. Some examples:
There's a fairly large (1,800-forecaster) market on AGI timelines at Metaculus (albeit with a contestable definition).
The market had held a fairly stable forecast of mid-to-late 2033 for the last six months or so. In the last couple of days, though, the median has shifted from May 2033 (as of Tuesday) to October 2032 (as of this morning).
I wonder what that's about. The date of the move doesn't really align with any new release or breakthrough that I'm aware of (unless it was partly an anticipatory move on GPT-5.4 rumors). Maybe it's just noise?
Note that it was forecast for May 2030 exactly one year ago. It's been fluctuating between 2030 and 2034 ever since GPT-4 dropped almost exactly three years ago, with a few extended periods closer to both the high and low ends. I think it's mostly noise.