Warning: this is not in typical LessWrong "style", but nevertheless I think it is of interest to people here.
Most people approach productivity from the bottom up: they notice a process that feels inefficient, so they set out to fix that specific problem. They use a website blocker and a habit tracker, but none of these tools address the root problem. Personally, I even went as far as making my own tools, but they yielded only marginally more productive time. I craved more, and I was willing to go as far as it took. I wanted to solve productivity top-down, with a system that would enforce non-stop productivity with zero effort on my part.
I had tried less intense “watch you work” solutions before. Sharing a...
This is a good comment, but I'm already sort of at my limit; going to try to focus just on DirectedEvolution.
+1 to be able to check notifications, messages, and specific posts here without seeing newsfeeds:
https://github.com/jordwest/news-feed-eradicator/issues/253
https://github.com/ForumMagnum/ForumMagnum/issues/6640
I was looking at the specs for the Kia EV6 after someone brought it up in a discussion:
DC Fast Charge Time (10-80% @ 350 kW via Electric Vehicle Supply Equipment) Level 3 Charger: Approx. 18 min.
If you're not familiar with EVs or similar equipment, you might think this means the car draws a constant 350 kW, but charging a 77.4 kWh battery from 10% to 80% at 350 kW would take only about 9 minutes, so it can't be that. Instead, EVs are smart: they communicate with the charger to draw varying amounts of current depending on how quickly the battery can accept charge.
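The back-of-envelope arithmetic above can be sketched in a few lines. This is just the naive constant-power reading being debunked, using the numbers from the quoted Kia EV6 spec:

```python
# Naive reading: charge a 77.4 kWh pack from 10% to 80% at a constant 350 kW.
battery_kwh = 77.4
energy_needed_kwh = battery_kwh * (0.80 - 0.10)  # 54.18 kWh to add
minutes_at_constant_350kw = energy_needed_kwh / 350 * 60  # ~9.3 minutes

print(f"{minutes_at_constant_350kw:.1f} min")
```

Since the spec quotes roughly 18 minutes, the implied *average* power over the session is only about half of 350 kW, which is consistent with the charge curve tapering off rather than the car drawing the charger's full rating throughout.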
So then you might think that 350 kW reflects the peak power the car draws. But no: when P3 Group measured it, they found it peaks at 235 kW before throttling back once the battery reaches 50%.
This isn't unique to Kia:...
On the Kia EV6 page you link first, I think it's pretty clear that the 350 kW value you quoted is part of the initial conditions rather than an expected draw. The interpretation I'm pointing at is “if connected to a charger with a capacity of 350 kW, the expected time is approximately 18 minutes”—the 350 kW is on the LHS of the conditional, as signaled by its position in the text. By comparison with nearby text, the entry immediately above the one you quoted states 73 minutes under the condition of being connected to a Lev...

Changelog:
My current overall assessment is probably common sense:
Yes! I changed my display name, but it's the same ol' me.
There is an insightful literature that documents and tries to explain why large incumbent tech firms fail to invest appropriately in disruptive technologies, even when they played an important role in their invention. I speculate that this sheds some light on why we see new firms such as OpenAI, rather than incumbents such as Google and Meta, leading the deployment of recent innovations in AI, notably LLMs.
Disruptive technologies—technologies that initially fail to satisfy existing demands but later surpass the dominant technology—are often underinvested ...
You've done it. You've built the machine.
You've read the AI safety arguments and you aren't stupid, so you've made sure you've mitigated all the reasons people are worried your system could be dangerous, but it wasn't so hard to do. AI safety seems a tractable concern. You've built a useful and intelligent system that operates along limited lines, with specifically placed deficiencies in its mental faculties that cleanly prevent it from being able to do unboundedly harmful things. You think.
After all, your system is just a GPT, a pre-trained predictive text model. The model is intuitively smart—its intuition is probably a good standard deviation or two better than that of any human who has ever lived—and it's fairly cheap to run, but it is just a cleverly tweaked GPT,...
Sure, I agree GPT-3 isn't that kind of risk, so this is maybe 50% a joke. The other 50% is me saying: "If something like this exists, someone is going to run that code. Someone could very well build a tool that runs that code at the press of a button."
Related work: Hero Licensing, Modest Epistemology, The Alignment Community is Culturally Broken, Status Regulation and Anxious Underconfidence, Touch reality as soon as possible, and many more.
TL;DR: Evaluating whether or not someone will do well at a job is hard, and evaluating whether or not someone has the potential to be a great AI safety researcher is even harder. This applies to evaluations from other people (e.g. job interviews, first impressions at conferences) but especially to self-evaluations. Performance is also often idiosyncratic: people who do poorly in one role may do well in others, even superficially similar ones. As a result, I think people should not take rejections or low self-confidence too seriously, and should instead try more things and be more ambitious in general.
Epistemic status: This is another experiment in writing fast as opposed to carefully....
Writing down something I’ve found myself repeating in different conversations:
If you're looking for ways to help with the whole “the world looks pretty doomed” business, here's my advice: look around for places where we're all being total idiots.
Look for places where everyone's fretting about a problem that some part of you thinks it could obviously just solve.
Look around for places where something seems incompetently run, or hopelessly inept, and where some part of you thinks you can do better.
Then do it better.
For a concrete example, consider Devansh. Devansh came to me last year and said something to the effect of, “Hey, wait, it sounds like you think Eliezer does a sort of alignment-idea-generation that nobody else does, and he's limited here by his unusually low stamina, but I...
I've kept updating in the direction of: do a bunch of little things that don't seem blocked or tangled on anything, even if they seem trivial in the grand scheme of things. In the process of doing them, you will free up memory and learn a lot about the nature of the bigger things that are blocked, while simultaneously revving up your own success spiral and action bias.
feels unlikely