Recent Discussion

Warning: this is not in typical LessWrong "style", but nevertheless I think it is of interest to people here.

Most people approach productivity from the bottom up. They notice something about a process that feels inefficient, so they set out to fix that specific problem. They use a website blocker and a habit tracker, but none of these tools address the root problem. Personally, I even went as far as making my own tools, but they yielded only marginally more productive time. I craved more, and I was willing to go as far as it took. I wanted to solve productivity top-down: with a system that would enforce nonstop productivity with zero effort on my part.

I had tried less intense “watch you work” solutions before. Sharing a...

lc (4 points, 3h)
This might have been her incredibly awkward way of engineering a scenario for you to flirt with her.

feels unlikely

Simon Mendelsohn (7 points, 4h)
This is hilarious and beautiful and exactly what I expect from LessWrong.  Also, hello fellow Simon. 
  1. Don't say false shit omg this one's so basic what are you even doing. And to be perfectly fucking clear "false shit" includes exaggeration for dramatic effect. Exaggeration is just another way for shit to be false.
  2. You do NOT (necessarily) know what you fucking saw. What you saw and what you thought about it are two different things. Keep them the fuck straight.
  3. Performative overconfidence can go suck a bag of dicks. Tell us how sure you are, and don't pretend to know shit you don't.
  4. If you're going to talk unfalsifiable twaddle out of your ass, at least fucking warn us first.
  5. Try to find the actual factual goddamn truth together with whatever assholes you're talking to. Be a Chad scout, not a Virgin soldier.
  6. One hypothesis is not e-fucking-nough.
...

This is a good comment, but I'm already sort of at my limit; going to try to focus just on DirectedEvolution.

DirectedEvolution (2 points, 42m)
I agree that is a potential takeaway from my comment. I also agree that it's not fair to overly criticize authors when that reaction toward defensiveness may be because they're correctly anticipating a harsh PONDS [https://www.lesswrong.com/posts/k5TTsuHovbeTWgszD/for-better-commenting-avoid-ponds] response from their readership. I do have empathy for the problem. When I read the blog posts I really enjoy, it seems to me those authors manage to write in ways that come across as non-defensive, with exaggerations and humor and "you know what I mean" implications. They rely on me to fill in some of the blanks, and that's part of the fun of their posts and part of what keeps my attention.

When I write defensively, I feel like way too much of my mental energy is going into combatting phantom future commenters and not enough into the object level of the post. And when that gets overwhelming I just delete it or leave it in drafts. I have a large graveyard of dead posts. I used to have a lot more fun writing, enjoying the vividness of language, and while I thank LessWrong for improving many aspects of my thinking, it has also stripped away almost all my verve for language. I think that's coming from the defensiveness-nuance complex I'm describing, and since the internet is what it is, I guess I'd like to start by changing myself. But my own self-advice may not be right for others.
DirectedEvolution (2 points, 1h)
I disagree. My issue with your comment is not that it lacks nuance. It's that it read as a personal attack against me, an ad hominem. Below, I write an edited version of your original comment that has little to no more nuance than the original (perhaps less) but also was not an ad hominem. I struck out the parts that felt like ad hominems to me, and replaced them with wording that I think roughly captures the (as you say, un-nuanced) meaning of your sentences without coming across as an attack. Of course, I won't do as good a job of reflecting your intended meaning as you would, which is why I hope that in the future, you'll remove the ad hominems from your writing on your own. If you had made a comment roughly along these lines, we could have had a productive debate, using the amount of nuance that seemed appropriate to us both.
Duncan_Sabien (2 points, 33m)
I'll offer up the edit "It feels like CYA when you don't care about [the particular delineations of] truth [involved]." (Sort of an "everyone faster than you on the highway is reckless, everyone slower is holding up traffic" claim.)

Look. Inasmuch as you can claim that my CYA line was a direct attack on your character (it wasn't intended as such and I think you're stretching to make it so, especially since the comment went on to elaborate), you had already launched a similarly direct attack on mine, taking the discourse that I was arguing is crucial and important and calling it, variously, CYA loophole-closing, over-explaining, and in-service-of-self-protection rather than aiding the reader.

And this is sort of generally the point: you think I was the first one to break norms, whereas I was genuinely just trying to mirror you back to yourself. Your comment had a lot of implications about why people want nuance that were uncharitable and not universal; the discussion felt unfriendly to me from the moment of your comment being dismissive. (You were also explicitly agreeing with someone who, a comment earlier, had said that nuance is poison.)

You would like, I think, for me to care about how you felt unfriendlied-toward. Do you care about how I did? (I note that I was slightly more cavalier than I would ordinarily be, because we're under a post titled "Fucking goddamn basics;" I think this is not an unreasonable call to have made. I think that calling the previous comment an ad hominem attack is a bit of a motte-and-bailey; it is certainly nowhere near a median-bad instance of the class, even if we label it a member.)

I was looking at the specs for the Kia EV6 after someone brought it up in a discussion:

DC Fast Charge Time (10-80% @ 350 kW via Electric Vehicle Supply Equipment) Level 3 Charger: Approx. 18 min.

If you're not familiar with EVs or other similar equipment you might think that this draws a constant 350 kW, but charging a 77.4 kWh battery from 10-80% at 350 kW would take only about 9 minutes, so it can't be that. Instead, EVs are smart: they communicate with the charger to draw varying amounts of current depending on how quickly the battery can accept charge.
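The arithmetic here is quick to check (a back-of-the-envelope sketch in Python; the 77.4 kWh capacity and 10-80% window are from the spec quoted above):

```python
# If the car really drew a constant 350 kW, how long would 10-80% take?
battery_kwh = 77.4                                  # EV6 pack capacity
soc_start, soc_end = 0.10, 0.80                     # state-of-charge window
energy_kwh = battery_kwh * (soc_end - soc_start)    # 54.18 kWh to add
minutes = energy_kwh / 350 * 60                     # hours at 350 kW, in minutes
print(f"{minutes:.1f} min")                         # prints "9.3 min", not 18
```

Since the spec says 18 minutes, not 9, the 350 kW figure can't describe a constant draw.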

So then you might think that 350 kW reflects the peak power the car draws. But no: when P3 Group measured it, they found it peaks at 235 kW before throttling back once the battery reaches 50%.
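You can also work backward from the quoted 18 minutes to the average power over the session (again just arithmetic on the spec numbers, not measured data):

```python
# Average power implied by adding 10-80% (54.18 kWh) in 18 minutes.
energy_kwh = 77.4 * (0.80 - 0.10)   # energy added over the window
avg_kw = energy_kwh / (18 / 60)     # kWh divided by hours elapsed
print(f"{avg_kw:.0f} kW")           # prints "181 kW"
```

An average around 181 kW sits well below the 235 kW peak P3 Group measured, which is exactly what a taper curve looks like.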

This isn't unique to Kia:...

On the Kia EV6 page you link first, I think it's pretty clear that the 350 kW value you quoted is part of the stated conditions rather than an expected draw. The interpretation I'm pointing at is "if connected to a charger with a capacity of 350 kW, the expected time is approximately 18 minutes"; the 350 kW is on the LHS of the conditional, as signaled by its position in the text. For comparison, the entry immediately above the one you quoted states 73 minutes under the condition of being connected to a Lev... (read more)

Mink in cages
H5N1 likely has spread among minks, the first major spread from mammal to mammal

Changelog:

  • 4 Feb 2022: Checklist started, score is 5/10. Updates will be sporadic unless new developments put the virus in the newspaper more often. As always, this data is based on what the virus and society are doing now, though with an eye to future developments.

My current overall assessment is probably common sense:

  • Transmissibility: H5N1 can transmit from birds to a wide range of mammalian species. Spread from birds to mammals happens but is rare. However, likely spread between minks at a fur farm is a new and concerning development. If H5N1 became transmissible among humans, while retaining anywhere close to its current deadliness, it could become a civilization-threatening global catastrophe.
  • Danger: H5N1 is extremely deadly. It
...
luidic (3 points, 1h)
I was going to comment "I wonder what AllAmericanBreakfast's thoughts are", but I guess that's already covered!

Yes! I changed my display name, but it's the same ol' me.

There is an insightful literature that documents and tries to explain why large incumbent tech firms fail to invest appropriately in disruptive technologies, even when they played an important role in their invention. I speculatively think this sheds some light on why we see new firms such as OpenAI, rather than incumbents such as Google and Meta, leading the deployment of recent innovations in AI, notably LLMs.

Disruptive technologies—technologies that initially fail to satisfy existing demands but later surpass the dominant technology—are often underinvested ... (read more)


You've done it. You've built the machine.

You've read the AI safety arguments and you aren't stupid, so you've made sure you've mitigated all the reasons people are worried your system could be dangerous, but it wasn't so hard to do. AI safety seems a tractable concern. You've built a useful and intelligent system that operates along limited lines, with specifically placed deficiencies in its mental faculties that cleanly prevent it from being able to do unboundedly harmful things. You think.

After all, your system is just a GPT, a pre-trained predictive text model. The model is intuitively smart—it probably has a good standard deviation or two better intuition than any human that has ever lived—and it's fairly cheap to run, but it is just a cleverly tweaked GPT,...

Sure, I agree GPT-3 isn't that kind of risk, so this is maybe 50% a joke. The other 50% is me saying: "If something like this exists, someone is going to run that code. Someone could very well build a tool that runs that code at the press of a button."

Related work: Hero Licensing, Modest Epistemology, The Alignment Community is Culturally Broken, Status Regulation and Anxious Underconfidence, Touch reality as soon as possible, and many more.

TL;DR: Evaluating whether or not someone will do well at a job is hard, and evaluating whether or not someone has the potential to be a great AI safety researcher is even harder. This applies to evaluations from other people (e.g. job interviews, first impressions at conferences) but especially to self-evaluations. Performance is also often idiosyncratic: people who do poorly in one role may do well in others, even superficially similar ones. As a result, I think people should not take rejections or low self-confidence too seriously, and should instead try more things and be more ambitious in general.

Epistemic status: This is another experiment in writing fast as opposed to carefully....

Writing down something I’ve found myself repeating in different conversations:

If you're looking for ways to help with the whole “the world looks pretty doomed” business, here's my advice: look around for places where we're all being total idiots.

Look for places where everyone's fretting about a problem that some part of you thinks it could obviously just solve.

Look around for places where something seems incompetently run, or hopelessly inept, and where some part of you thinks you can do better.

Then do it better.

For a concrete example, consider Devansh. Devansh came to me last year and said something to the effect of,  “Hey, wait, it sounds like you think Eliezer does a sort of alignment-idea-generation that nobody else does, and he's limited here by his unusually low stamina, but I...

I've kept updating in the direction of: do a bunch of little things that don't seem blocked or tangled on anything, even if they seem trivial in the grand scheme of things. In the process of doing those, you will free up memory and learn a bunch about the nature of the bigger things that are blocked, while simultaneously revving your own success spiral and action bias.

rchplg (3 points, 12h)
Relatedly, on "obviously dropping the ball": has Eliezer tried harder prescription stimulants [https://astralcodexten.substack.com/p/know-your-amphetamines]? With his P(doom) and timelines, there's relatively little downside to this in reasonable quantities, I think. They can be prescribed, and they seem extremely likely to help with fatigue. From what I've read, the main precaution would be to get harder blocks on whatever sidetracks Eliezer (e.g. use friends to limit access, have a child lock given to a trusted person, etc.). It seems like this hasn't been tried much beyond a basic level, and I'm really curious why not, given the high Eliezer/Nate P(doom)s. There are several famously productive researchers who did this [https://en.wikipedia.org/wiki/Paul_Erd%C5%91s#Personality].