Shortform Content [Beta]

Willa's Shortform

Shortform #29 Almost Back on the Wagon!

Today was an excellent day :) I did not stick to the schedule I put together, but writing it last night was helpful since I had it for reference today. I allowed myself to sleep in and that seems to have helped considerably in many ways.

  1. I logged my calories today, totaled ~2230kcal
  2. I was virtually social for >4 hours via phone calls
  3. I was up and active for the majority of the time I was on the phone, I spent >5 hours packing and organizing things today. (including but not limited to, boxing up all 375+ of my b
... (read more)
AllAmericanBreakfast's Shortform

Eliezer's post on motivated stopping contains this line:

"Who can argue against gathering more evidence? I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later."

This is often not true, though, for example with regard to whether or not it's ethical to have kids. So how to make these sorts of decisions?

I don't have a good answer for this. I sort of think that there are certain superhuman forces or drives that "win out." The drive ... (read more)

If you will get more evidence, whether you want it or not, is there a way you can do something with that?

Ba zbgvingrq fgbccvat vgfrys - jul fgngvp cebprffrf, engure guna qlanzvp barf?

Raemon's Shortform

I vaguely recall there being some reasons you might prefer Ranked Choice Voting over Approval voting, but can't easily find them. Anyone remember things off the top of their head?

As a voter, I don't have to decide where to draw the approval line. The lower I draw it, the less I approve of the people I'm including. (A one-dimensional model.)

Something that isn't usually talked about - maybe the coalition incentives. ("We'll approve your candidate if you approve ours.") Whether that leads to compromise which is good or collusion which is bad... (Consequences of adoption.)
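One concrete way the two methods can diverge is the "center squeeze": a broadly acceptable compromise candidate can lose under instant-runoff but win under approval. A toy sketch (the electorate numbers and the assumption that every voter approves their top two are hypothetical, chosen only to illustrate the divergence):

```python
from collections import Counter

# Toy electorate illustrating "center squeeze": B is everyone's compromise
# choice but has the fewest first-place votes. (Hypothetical numbers.)
ballots = (
    [("A", "B", "C")] * 35 +   # A-first voters
    [("C", "B", "A")] * 33 +   # C-first voters
    [("B", "A", "C")] * 32     # B-first voters
)

def irv_winner(ballots):
    """Instant-runoff (ranked choice): repeatedly eliminate the candidate
    with the fewest first-place votes among those still remaining."""
    remaining = {c for b in ballots for c in b}
    while True:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = firsts.most_common(1)[0]
        if votes * 2 > len(ballots):   # strict majority of all ballots
            return top
        remaining.remove(min(firsts, key=firsts.get))

def approval_winner(ballots, k=2):
    """Approval voting where every voter approves their top k candidates."""
    approvals = Counter(c for b in ballots for c in b[:k])
    return approvals.most_common(1)[0][0]

print(irv_winner(ballots))       # A: B is eliminated first, votes transfer to A
print(approval_winner(ballots))  # B: the broad compromise candidate
```

Under these ballots, IRV eliminates the compromise candidate B in the first round (fewest first-place votes), while approval voting elects B, since nearly everyone approves of them. Which outcome is better is exactly the kind of consequence-of-adoption question raised above.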

adamzerner's Shortform

Everyone hates spam calls. What if a politician campaigned to address little annoyances like this? Seems like it could be a low hanging fruit.

habryka (2d): Depends on what you mean by "low-hanging fruit". I think there are lots of problems like this that seem net-negative, but addressing them doesn't seem anywhere close to the most important thing I would recommend politicians do.
adamzerner (2d): By low-hanging fruit I mean 1) non-trivial boost in electability and 2) good effort-to-reward ratio relative to other things a politician can focus on. I agree that there are other things that would be more impactful, but perhaps there is room to do those more impactful things along with smaller, less impactful things.

I don't think there IS much low-hanging fruit.  Seemingly-easy things are almost always more complicated, and the credit for deceptively-hard things skews the wrong way: promising and failing hurts a lot (didn't even do this little thing), promising and succeeding only helps a little (thanks, but what important things have you done?).

Much better, in politics, to fail at important topics and get credit for trying.

AllAmericanBreakfast's Shortform

Reading and re-reading

The first time you read a textbook on a new subject, you're taking in new knowledge. Re-read the same passage a day later, a week later, or a year later, and it will qualitatively feel different.

You'll recognize the sentences. In some parts, you'll skim, because you know it already. Or because it looks familiar -- are you sure which?

And in that skimming mode, you might zoom into and through a patch that you didn't know so well.

When you're reading a textbook for the first time, in short, there are more inherent safeguards to keep you f... (read more)

just_browsing's Shortform

(this is just a rant, not insightful) Everybody knows how important it is to choose the right time to write something. The optimal time is when you're really invested in the topic and learning rapidly, but know enough to start the writing process. Then, ideally, everything will crystallize during the writing process. If you wait much longer than this, the topic will no longer be exciting and you will not want to write about it.

Everybody gives this advice, both within and outside of academia. I've heard it from professors, LW-y blog posts (maybe even on LW?), and everywhere in between. 


Raemon (17h): Protagonist: "Everybody knows!" Narrator: "Everybody didn't know." (edit: I think this came out meaner than I meant it to; mostly I thought it was a fun in-joke about the "everybody knows" post)
just_browsing (17h): This might be helpful advice. Some of the more required writing I've been putting off is probably too niche for the "Being Wrong On The Internet" aspect, but I could probably more proactively find people willing to let me explain things to them. Come to think of it, this has often been a good way to motivate me to learn / write things...

Yeah, it seems that the desire to write is often tied to a desire to explain things; it's just that our past self is usually the first person we want to explain things to. ;-) We could think of it as being like a pressure differential of knowledge, where you need a lower-pressure area for your knowledge to overflow into. Having a mental model of a person who needs to know, but doesn't, then feels like an opportunity to relieve that pressure differential. ;-)

In principle, I suppose imagining that person might also work if you can model such a person well enough in your mind.

mike_hawke's Shortform

For Winter Solstice, I recommend listening to the album "Soon It Will Be Cold Enough to Build Fires" by Emancipator.

Particularly, "Father King" and "Anthem". For me personally, "Father King" is the solstice song.

After having listened to "Soon it Will be Cold Enough" about 7 times, I must say I agree. I like "Good Knight", "Anthem", and "When I Go" most.

"When I Go" is very solsticy because of the repeating words "When I go, I will be long gone" which I think is about death. There is "The Darkest Evening of the Year" which, I guess, is exactly about the winter solstice.

P.s. "Father King" is not a part of "Soon it Will be Cold Enough", so I've never listened to it. Will try now.

ofer's Shortform

[COVID-19 related]

It was nice to see this headline:

My own personal experience with respirators is that one with headbands (rather than ear loops) and a nose clip + nose foam is more likely to seal well.

MikkW's Shortform

"From AI to Zombies" is a terrible title... when I recommend The Sequences to people, I always feel uncomfortable telling them the name, since the name makes it sound like kooky bull**** - in a way that doesn't really indicate what it's about.

I agree. 

I'm also bothered by the fact that it is leading up to AI alignment and the discussion of Zombies is in the middle!

Please change?

Yoav Ravid (3d): I usually just call it "from A to Z"
Willa (3d): I think "From AI to Zombies" is supposed to imply "From A to Z", "Everything Under the Sun", etc., but I don't entirely disagree with what you said. Explaining either "Rationality: From AI to Zombies" or "The Sequences" to someone always takes more effort than feels necessary. The title also reminds me of quantum zombies or p-zombies every time I read it... are my eyes glazed over yet?

Counterpoint: "The Sequences" sounds a lot more cult-y or religious-text-y. "whispers: I say, you over there, yes you, are you familiar with The Sequences, the ones handed down from the rightful caliph, Yudkowsky himself? We Rationalists and LessWrongians spend most of our time checking whether we have all actually read them, you should read them, have you read them, have you read them twice, have you read them thrice and committed all their lessons to heart?" (dear internet, this is satire. thank you, mumbles in the distance)

Suggestion: if there were a very short eli5 post or about page that a genuine 5 year old or 8th grader could read, understand, and get the sense of why The Sequences would actually be valuable to read, this would be a handy resource to share.
Bucky's Shortform

10 months ago there was the coronavirus justified practical advice thread.

This resulted in myself (and many many others) buying a pulse oximeter.

Interested to hear now that these are being provided to people in the UK who have COVID but are not in hospital and who are in a high-risk category.

I note that there was some discussion on LW about how useful they were likely to be as people would probably notice difficulty in breathing which usually comes with low oxygen levels. It turns out that with COVID oxygen levels can get low without people noti... (read more)

Willa's Shortform

Shortform #28 What's going on?

I woke up feeling fried, extra crispy, with no motivation, and everything was grey, even the outdoors (the weather was literally a hazy drizzly grey all day, and not the good kind; it was more of the bad swampy kind). What I noticed feeling yesterday and then throughout today correlates reasonably well with a depression episode trying to take root. I'd prefer for that not to happen, because those aren't fun, and I have things to do. Time for interventions! (to be detailed later in the post)

  1. I practised Swift for about 1 hour and 3
... (read more)
MakoYass's Shortform

My opinion is that the St Petersburg game isn't paradoxical: it is very valuable and you should play it. It's counterintuitive to you because you can't actually imagine a quantity that comes in linear proportion to utility; you have never encountered one, and none seems to exist.

Money, for instance, is definitely not linearly proportional to utility: the more you get, the less it's worth to you. At its extremes, it can command no more resources than what the market offers, and if you get enough of it, the market will notice and it will all become valueless.

Every resource that exists has sub-linear utility returns in the extremes. 

(Hmm. What about land? Seems linear, to an extent)
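A quick way to see the sublinear-utility point numerically: with a logarithmic utility for money (one common sublinear choice, used here only as an illustration, not something the comment commits to), the St Petersburg game's expected utility is finite even though its expected dollar payoff diverges.

```python
import math

# St Petersburg game: with probability 2**-n you win 2**n dollars (n = 1, 2, ...).
# Expected dollar payoff diverges: sum over n of (2**-n) * (2**n) = 1 + 1 + 1 + ...
# With logarithmic (sublinear) utility, the same sum converges.

def expected_log_utility(max_rounds: int = 200) -> float:
    """Partial sum of E[log(payoff)] over the first max_rounds outcomes."""
    return sum((2.0 ** -n) * math.log(2.0 ** n) for n in range(1, max_rounds + 1))

print(expected_log_utility())  # converges to 2*ln(2) ≈ 1.3863
```

The series is ln(2) · Σ n/2ⁿ = 2 ln(2), so a log-utility agent values the game like a modest fixed prize — which is one way to cash out "money has sub-linear utility returns in the extremes."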

Matt Goldenberg's Short Form Feed

Random question for traders?

What percent of "gains" from trading do you think currently come from algorithms and AI vs. human traders?

If any trader answers it, I would also be very interested in their error bars. How much uncertainty is there?

purrtrandrussell's Shortform

Trying to think of ways to disentangle antifa's (in the sense of the Torch Network, Popular Mobilization, and One People's Project) impact on authoritarian-right organizing from the impact of law enforcement and of non-antifa anti-authoritarian-right organizing, such as the Southern Poverty Law Center and various faith groups.

As a general policy, data should go first, conclusions second. I do not have much data on this topic, so I can't say much specific about it.

I have a feeling there is some kind of "motte and bailey" about antifa, like on one hand it refers to some nebulous idea, on the other hand it refers to some specific people and organisations. So the "data" part should start with explaining who those people and organisations are, what is their role, whether they are respected by others who use the label and why (and how is this respect enforced in real life). Without t... (read more)

Willa's Shortform

Shortform #27 A Day of Meandering

I don't think today was a bad day, and I definitely enjoyed many parts of it, but I wasn't really a focused human being today. I didn't begin coding practice until 14:50, and I suspect that's part of why I was so much less focused today. Instead of practising coding first thing after waking up like I had been doing, I instead read Hacker News, LessWrong, and elsewhere, finished a task that required some concentration, and had several interruptions. I was also considerably grumpier than usual as time passed, which was odd.

I thin... (read more)

Roko's Shortform

The US FDA's (U.S. Food and Drug Administration) current advice on what to do about covid-19 is still pretty bad.

The advice on hand-washing and food safety just seems wrong: as far as we can tell, covid-19 is almost entirely transmitted through the air, not on hands or food. Hand-washing is a good thing to do, but it won't help against covid-19, and talking about it displaces talk about things that actually do help.

6 feet of distance is completely irrelevant inside, but superfluous outside. Inside, distance doesn't matter - time does. Outside is so much safer than inside that... (read more)

6 feet of distance is completely irrelevant inside, but superfluous outside. 

That seems to be a bold claim. Do you have a link to a page that goes into more detail on the evidence for it?

In fact the CDC is still saying not to use N95 masks, in order to prevent supply shortages. This is incredibly stupid - we are a whole year into covid-19, there is no excuse for supply shortages, and if people are told not to wear them then there will never be an incentive to make more of them.

Here in Germany Bavaria decided as a first step to make N95 masks required ... (read more)

Rob Bensinger (4d): I like the idea of this existing as a top-level post somewhere.
Yoav Ravid's Shortform

What is the class of which ask/guess/tell/reveal cultures are instances? It doesn't currently have a name (at least not something less general than "communication culture"), which makes this awkward to talk about or reference. So I thought about it for a bit, and came up with Expectation Culture.

Ask/guess/tell/reveal cultures are types of expectation culture: they're all cultures in which one thing that is said maps to a different expectation. This is also the case with different kinds of asks.

This seems like a useful phrase with which to bundle the... (read more)

unparadoxed's Shortform

Shortform on "Hedonic Collapse"

Assumptions :

  1. One is subject to hedonic adaptation.
  2. In the absence of external hedonic input, "entropy" renders one's hedonic stablepoint to be negative.

Desires :

  1. It is desired that one's response to events is temporally invariant. (All else being equal, my reaction to an event should not depend on whether I experience it today or tomorrow)
  2. It is desired to be able to forecast the (probability and impact of the) occurrence of future events as well as possible.


Given the above desires and assumptions, an all-knowing, time-in... (read more)

Willa's Shortform

Shortform #26 Oh wow, a marathon of posts! Also, mild ranting.

Today was good, but felt very split into three pieces. The morning, the trip into town, and the rest of the day: I did productive things for almost three hours in the morning, then drove into the nearby big city for my doctor's appointment, and several hours passed before I got home. The doctor's appointment was great! All good things, but when you have to drive for so long there and back, that does take quite a chunk out of the day. The rest of the day started around 15:30 and went nicely too :... (read more)

Raemon's Shortform

Seems like different AI alignment perspectives sometimes are about "which thing seems least impossible."

Straw MIRI researchers: "building AGI out of modern machine learning is automatically too messy and doomed. Much less impossible to try to build a robust theory of agency first."

Straw Paul Christiano: "trying to get a robust theory of agency that matters in time is doomed, timelines are too short. Much less impossible to try to build AGI that listens reasonably to me out of current-gen stuff."

(Not sure if either of these are fair, or if other camps fit this)


(I got nerd-sniped by trying to develop a short description of what I do. The following is my stream of thought)

+1 to replacing "build a robust theory" with "get deconfused," and with replacing "agency" with "intelligence/optimization," although I think it is even better with all three. I don't think "powerful" or "general-purpose" do very much for the tagline.

When I say what I do to someone (e.g. at a reunion) I say something like "I work in AI safety, by doing math/philosophy to try to become less confused about agency/intelligence/optimization." (I dont... (read more)

Rob Bensinger (4d): Caveat: I didn't run the above comments by MIRI researchers, and MIRI researchers aren't a monolith in any case. E.g., I could imagine people's probabilities in "scaled-up deep nets are a complete dead end in terms of alignability" looking like "Eliezer ≈ Benya ≈ Nate >> Scott >> Abram > Evan >> Paul", or something?
Raemon (4d): Okay, that is compatible with the rest of my Paul model. Does still seem to fit into the 'what's least impossible' frame.