Quick version of conversations I keep having, might be worth a top level effortpost.
Whistleblower protections at large firms, dating, project management, and internal company politics--- all userbases with underserved opinions about transparency. Manifold could pivot to this, but they have a lot of other stuff they could do instead.
Think about how Slack admins are confused about how to prevent some usergroups from using @channel, while Discord admins aren't.
(Sorry for pontificating when you asked for an actual envelope or napkin.) The upside is an externality: Ziani incidentally benefits, but the signal to other young grad students that maybe career suicide is a slightly more viable risk seems like the real source of impact. Agreed that this subfield isn't super important, but we should look for related opportunities in subfields we care more about.
I don't know if designing a whistleblower prize is a good nerdsnipe / econ puzzle, in that it may be a really bad goosechase (since generating false positives through incentives imposes name-clearing costs on innocent people, and either you can design your way out of this problem or you can't).
I think the lesswrong/forummagnum takes on recsys are carrying the torch of RSS, "you own your information diet", and so on. I'm wondering if we can have something like "use lightcone/CEA software to ingest substack comments, translate activity or likes into karma, and arrange/prioritize them according to the user's moderation philosophy".
This does not cash out to more CCing/ingesting of substack RSS into lesswrong overall; the set of substack posts I would want to view in this way would be private from others, and I'm not necessarily interested in conflating the cross-platform "karma" translations with more votes or trying to make it go both ways.
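A rough sketch of the shape I have in mind (all names and the comment format here are hypothetical; I'm not assuming any official Substack comments API, just some scraped or RSS-ish feed):

```python
from dataclasses import dataclass

# Hypothetical ingested comment; in practice this would come from an RSS/Atom
# feed or a scraper, not from any official Substack comments API.
@dataclass
class IngestedComment:
    author: str
    text: str
    likes: int
    source_url: str

def to_local_karma(comment: IngestedComment, weights: dict[str, float]) -> float:
    """Translate cross-platform likes into a private, local karma score.
    `weights` encodes the user's moderation philosophy (e.g. trusted authors
    count for more); none of this feeds back into public votes anywhere."""
    return weights.get(comment.author, 1.0) * comment.likes

def arrange(comments: list[IngestedComment], weights: dict[str, float]) -> list[IngestedComment]:
    """Prioritize the private reading view by local karma, highest first."""
    return sorted(comments, key=lambda c: to_local_karma(c, weights), reverse=True)
```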
In terms of the parts where the books overlap, I didn't notice anything substantially different. If anything the sequel covers less, cuz there wasn't enough detail to get into tricks like the equivalent bet test.
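For anyone who hasn't seen it, here's a toy illustration of the equivalent bet test, the calibration trick referenced above: compare betting on your stated confidence against a wheel with a known payout probability. The numbers and code below are mine, not the book's.

```python
import random

# You claim 90% confidence in some statement. Option A: win a prize iff the
# statement is true. Option B: spin a wheel that pays the same prize with
# probability 0.90. Preferring the wheel reveals your real confidence is <90%.
TRUE_HIT_RATE = 0.80   # how often your "90% confident" claims actually pan out
WHEEL_PAYOUT = 0.90

trials = 100_000
claim_wins = sum(random.random() < TRUE_HIT_RATE for _ in range(trials))
wheel_wins = sum(random.random() < WHEEL_PAYOUT for _ in range(trials))
print(claim_wins / trials, wheel_wins / trials)   # ~0.80 vs ~0.90: take the wheel
```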
I'm halfway through How to Measure Anything: Cybersecurity, which doesn't have a lot of cybersecurity specifics and mostly reviews the first book. I never finished the first one, and it was about four years ago that I read the parts that I did.
I think for top of the funnel EA recruiting it remains the best and most underrated book. Basically anyone worried about any kind of problem will do better if they read it, and most people in memetically adaptive / commonsensical activist or philanthropic mindsets probably aren't measuring enough.
However, the mate...
One may be net better than the other; I just think the expected error washes out all of one's reasoning, so individuals shouldn't be confident they're right.
There's a somewhat niche CS subtopic that a friend wants to learn, and I'm really well positioned to teach her. More discussion on the manifold bounty:
A trans woman told me:
I get to have all these talkative blowhard traits and no one will punish me for it cuz I'm a girl. This is one major reason detrans would make my life worse. Society is so cruel to men, it sucks so much for them
And another trans woman had told me almost the exact same thing a couple months ago.
My take is that roles have upsides and downsides, and that you'll do a bad job if you try to say one role is better or worse than another on net, or that a role is more downside than upside. Also, there are versions of the "women talk too much" stereotype in many subcultures, but I don't have a good inside view about it.
oh haha, two years later it's funny that quinn thought LF was a "very thorough foundation".
I've only ever been in IP / anticompetitive NDA situations in my career. The idea that some NDAs want to oblige me to glomarize is rather horrifying. When you have IP reasons for an NDA, of course you say "I'm not getting in the weeds here for NDA reasons"; it is a literal type error in my social compiler to even imagine being clever or savvy about this.
https://www.lesswrong.com/posts/BGLu3iCGjjcSaeeBG/related-discussion-from-thomas-kwa-s-miri-research?commentId=fPz6jxjybp4Zmn2CK This brief subthread can be read as "giving nate points for trying" and is too credulous about whether "introspection" actually works--- my wild background guess is that roughly 60% of the time "introspection" is more "elaborate self-delusion" than working as intended, and there are times when someone saying "no but I'm trying really hard to be good at it" drives that probability up instead of down. I didn't think this was one of thos...
Semanticists have been pretty productive (https://www.cambridge.org/core/books/foundations-of-probabilistic-programming/819623B1B5B33836476618AC0621F0EE) and may help you approach what matters to you; there are certainly adjacent questions and concerns.
A bunch of PL papers building on things like the Giry monad have floated around in stale tabs on my machine for a couple of years. Open agency architecture actually provides breadcrumbs to this sort of thing in a footnote about infrabayesianism https://www.lesswrong.com/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architectu...
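For a flavor of what those semantics papers build on, here's a minimal finite/discrete probability monad, a toy stand-in for the Giry monad; the names Dist/unit/bind are mine, not taken from any of the linked papers.

```python
from collections import defaultdict

Dist = dict  # outcome -> probability, probabilities sum to 1

def unit(x):
    """Dirac distribution: all mass on a single outcome."""
    return {x: 1.0}

def bind(dist, f):
    """Monadic bind: push each outcome through a kernel f: outcome -> Dist."""
    out = defaultdict(float)
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] += p * q
    return dict(out)

# Example: one fair coin flip, then flip again only if the first came up heads.
coin = {"H": 0.5, "T": 0.5}
second = bind(coin, lambda s: coin if s == "H" else unit("T"))
print(second)  # {'H': 0.25, 'T': 0.75}
```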
Yes. Great post. See here https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=gFvzzBwdsoeRjGicA for related discussion.
My summary translation:
You have a classical string P. We like to cooperate across worldviews or whatever, so we provide sprinkleNegations to sprinkle double negations on classical strings, and decorateWithBox to decorate provability on intuitionistic strings. The definition of inverse would be cool, but you see that decorateWithBox(sprinkleNegations(P)) = P isn't really there, because now you have all these annoying symbols (which cancel out if we implemented the translations correctly, but how would we know that?). Luckily, we know we can automate cancelation w...
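Not the post's actual translations, just a toy sketch of the shape of the worry (hypothetical helper names mirroring the ones above): sprinkling symbols breaks literal string equality, and a cancellation pass is what restores the round trip, so the correctness burden moves onto the canceller.

```python
# Toy model of the round-trip problem. sprinkle_negations stands in for a real
# classical-to-intuitionistic translation; cancel_double_negations is the
# automated cleanup whose correctness we'd have to trust.

def sprinkle_negations(formula: str) -> str:
    """Wrap the whole formula in a double negation (a crude stand-in)."""
    return f"~~({formula})"

def cancel_double_negations(formula: str) -> str:
    """Strip outermost ~~(...) wrappers; naive, only correct when the double
    negation wraps the entire formula."""
    while formula.startswith("~~(") and formula.endswith(")"):
        formula = formula[3:-1]
    return formula

P = "p -> q"
assert sprinkle_negations(P) != P                           # the annoying-symbols problem
assert cancel_double_negations(sprinkle_negations(P)) == P  # round trip only after cancellation
```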
I don't want to make claims about the particular case, but I'm worried that if you infer a heuristic from it and apply it elsewhere, it could fail.
I think Nonlinear should be assigning enough credence to the possibility that they severely harmed their employees to react with a degree of horror and remorse, not laughter or dismissal.
Sometimes scrupulous/doormat people bend over backwards to twist the facts into some form where the accuser is reasonable or has a point. If you've done this, maybe eventually you talked to outside observers or noticed a set of three lies that you h...
Y'all, this is a rant. It will be sloppy.
I'm really tired of high-functioning super-smart "autism". Like, ok, we all have made-up diagnoses--- anyone with an IQ slightly above 90 knows that they can learn the slogans to manipulate gatekeepers to get performance enhancement, and they decide not to if they think they're performing well enough already. That doesn't mean "ADHD" describes something in the world. Similarly, there's this drift of "autism" getting more and more popular. It's obnoxious because labels and identities are obnoxious, but I only find it repuls...
This comment's updates for me personally:
Ahhhhh, kick ass! Stephen Mell is getting into LLMs lately (https://arxiv.org/abs/2303.15784); you guys gotta talk. I just sent him this post.
Ah yeah. I'm a bit of a believer in "introspection preys upon those smart enough to think they can do it well but not smart enough to know they'll be bad at it"[1], at least to a partial degree. So it wouldn't shock me if a long document didn't capture what matters.
epistemic status: in that sweet spot myself ↩︎
I second this--- I skimmed part of nate's comms doc, but it's unclear to me what turntrout is talking about unless he's talking about "being blunt"--- it sounds like overall there's something other than bluntness going on, cuz I feel like we already know about bluntness / we've thought a lot about the upsides and downsides of blunt people before.
Microsoft has no control, but gets OpenAI's IP until AGI
This could be cashed out a few ways--- does it mean "we can't make decisions about how to utilize core tech in downstream product offerings or verticals, but we take a hefty cut of any such things that OpenAI then ships"? If so, there are still multiple ways of interpreting it (e.g. Microsoft could accuse OpenAI of very costly negligence for neglecting a particular vertical, product idea, or application, or they couldn't).
I heard a pretty haunting take about how long it took to catch steroid use in bike races. Apparently, there was a while where a "few bad apples" narrative remained popular even when an ostensibly "one of the good ones" guy was outperforming guys discovered to be using steroids.
I'm not sure how dire or cynical we should be about academic knowledge or incentives. I think it's more or less defensible to assume that no one with a successful career is doing anything real until proven otherwise, but it's still a very extreme view that I'd personally bet against. Of course, things also vary so much field by field.
For the record, to mods: I waited till after Petrov Day to answer the poll because my first guess upon receiving a message on Petrov Day asking me to click something was that I was being socially engineered. Clicking the next day felt pretty safe.
This seems kinda fair; I'd like to clarify--- I largely trust the first few dozen people. I just expect that, depending on how growth/acquisition is done, if there are more than a couple instances of protests we'll have to deal with all the values diversity underlying the different reasons for joining in. This subject seems unusually fraught in its potential to generate conflationary-alliance https://www.lesswrong.com/s/6YHHWqmQ7x6vf4s5C sorta things.
Overall I didn't mean to other you-- in fact, I never said this publicly, but a couple months ago there was a related post ...
In my sordid past I did plenty of "finding the three people for nuanced logical mind-changing discussions amidst dozens of 'hey hey ho ho, outgroup has got to go'", so I'll do the same here (if I'm in town), but selection effects seem deeply worrying (for example, you could go down to the soup kitchen or punk music venue and recruit all the young volunteers who are constantly sneering about how gentrifying techbros are evil and can't coordinate on whether their "unabomber is actually based" argument is ironic or unironic, but you oughtn't. The fact that t...
Winning or losing a war is kinda binary.
"Will a pandemic get to my country" is a matter of degree, since in principle a pandemic that killed 90% of counterfactual economic activity in one country can break containment but destroy only 10% in your country.
"Alignment" or "transition to TAI" of any kind is way further from "coinflip" than either of these, so if you think doomcoin is salvageable or want to defend its virtues you need way different reference classes.
Think about the ways in which winning or losing a war isn't binary-- lots of ways for implem...
height filter: I don't see anything about how many women use the height filter at all vs. don't [1]. People being really into 6'5" seems alarming until you realize that if you're trait-xyz enough to use height filters at all, you might as well go all in and use silly height filters.
as a man, filters on bumble are a premium feature for me; likely it's price discrimination that gives many premium features to women for free, though. ↩︎
I've certainly wondered this! In spite of the ACX commenter I mentioned suggesting that we ought to reward people for being transparent about learning epistemics the hard way, I find myself not 100% sure it's wise or savvy to trust that people won't just mark me down as "oh, so quinn is probably prone to being gullible or sloppy" if I talk openly about what my life was like before math coursework and the sequences.
I think that (for this thing and many others too), some people are going to mark you down for it and some people are going to mark you up for it. So the relevant question is not "will some people mark me down" but "what kinds of people will mark me down and what kinds of people will mark me up, and which one of those is the group that I care more about".
Yes. So much love for this post, you're a better writer than me and you're an actual public defender but otherwise I feel super connected with you haha.
It's incredibly bizarre being at least a little "early adopter" about massive 2020 memes -- '09 tumblr account activation and Brown/Garner/Grey-era BLM gave me a healthy dose of "before it was cool" hipster sneering at the people who only got into it once it was popular. This matters on lesswrong, because Musk's fox news interview referenced the "isn't it speciesist to a priori assume humans are better th...
I'm wondering if we want "private notes about users within the app", like Discord has.
Use case: I don't always remember the loose "weight assignments" I've made over time for different people. If someone seems to prefer making mistake class A over B in one comment, that'll be relevant a few months later if they're advocating a position on A vs B tradeoffs (I'll know whether they practice what they preach). Or maybe they repeated a sloppy or naive view about something that I think should be easy enough to dodge, so I want to just take them less seriously in ...
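A minimal sketch of the data model I'm imagining (hypothetical names, not ForumMagnum's or Discord's actual schema): notes are keyed by author and subject, and only ever readable by their author.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PrivateUserNote:
    author_id: str    # whoever wrote the note; the only account that can read it
    subject_id: str   # the user the note is about
    text: str
    created_at: datetime = field(default_factory=datetime.utcnow)

class PrivateNoteStore:
    def __init__(self) -> None:
        self._notes: list[PrivateUserNote] = []

    def add(self, note: PrivateUserNote) -> None:
        self._notes.append(note)

    def notes_about(self, viewer_id: str, subject_id: str) -> list[PrivateUserNote]:
        """Only the author of a note ever sees it; everyone else gets nothing back."""
        return [n for n in self._notes
                if n.author_id == viewer_id and n.subject_id == subject_id]
```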
yeah, IQ-ish things or athletics are the most well-known examples, but I only generalized in the shortform cuz I was looking around at my friends and thinking about more Big-Five-oriented examples.
Certainly "conscientiousness seems good but I'm exposed to the mistake class of unhelpful navelgazing, so maybe I should be less conscientious" is so much harder to take seriously if you're in a pond that tends to struggle with low conscientiousness. Or being so low on neuroticism that your redteam/pentest muscles atrophy.
I think this is a crucial part of a lot of psychological maladaption and social dysfunction, very salient to EAs. If you're way more trait-xyz than anyone you know for most of your life, your behavior and mindset will be massively affected, and depending on when in life / how much inertia you've accumulated by the time you end up in a different room where suddenly you're average on xyz, you might lose out on a ton of opportunities for growth.
In other words, the concept of "big fish small pond" is deeply insightful a...
(I was the one who asked Charles to write up his inside view, as reading the article is the only serious information I've ever gathered about debate culture https://www.slowboring.com/p/how-critical-theory-is-radicalizing )
hm maybe you have a private note version of a post, but each inline comment can optionally be sent to a kind of granular permissions version of shortform, to gradually open it up to your inner circle before putting it on regular shortform.
I don't bother with linux steam, I just boot steam within lutris. Lutris just automates config / wraps wine plus a gui, so lutris will make steam think it's within windows and then everything that steam launches will also think it's within windows. Tho admittedly I don't use steam a lot (lutris takes excellent care of me for non-steam things)
Here's the full text from sl4.org/crocker.html, in case sl4.org goes down (as I suspected it had when it hung trying to load for a few minutes just now).
...Declaring yourself to be operating by "Crocker's Rules" means that other people are allowed to optimize their messages for information, not for being nice to you. Crocker's Rules means that you have accepted full responsibility for the operation of your own mind - if you're offended, it's your fault. Anyone is allowed to call you a moron and claim to be doing you a favor. (Which, in poi
I'm curious and I've been thinking about some opportunities for cryptanalysis to contribute to QA for ML products, particularly in the interp area. But I've never looked at spectral methods or thought about them at all before! At a glance it seems promising. I'd love to see more from you on this.
While I partially share your confusion about "implies an identification of agency with unprovoked assault", I thought Sinclair was talking mostly about "your risk of being seduced, being into it at the time, then regretting it later" and it would only relate to harassment or assault as a kind of tail case.
I think some high-libido / high-sexual-agency people learn to consider "seducing someone very effectively, in ways that seem to go well but that the person would not endorse at CEV" a morally relevant failure mode--- say 1% bad, setting 100% at some rape outcome. Ot...
"some person in the rationalist community who you have seen at like 3 meetups."
I think what's being gestured at is that Sinclair may or may not have been referring to
An example of the variety of ways of thinking about this: Many women (often cis) I've talked to, among those who have standing distrust or bad priors on cis men, are very liberal about extending woman-level trust to trans women. That doesn't mean they're maximally trusting, just that they're...
There was a bunker workshop last year; an attendee told me that within the first few hours of the first day everyone reached a consensus that it wasn't going to be a super valuable or accurate strategy, and then goofed off for the rest of the weekend.
It'd be great if yall could add a regrantor from the Cooperative AI Foundation / FOCAL / CLR / Encultured region of the research/threatmodel space. (epistemic status: conflict of interest since if you do this I could make a more obvious argument for a project)
is the inclusion map.
what makes a coproduct an "inclusion mapping"? I haven't seen this convention/synonym anywhere before.
ok, great! I'm down.
Incidentally, you caused me to google for voting theory under trees of alternatives (rather than lists), and there are a few prior directions (none very old, at a glance).
Seems like a particularly bitter-lesson-y take (in that it kicks a lot to the magical all-powerful black box), while also being over-reliant on the perceptions of viewpoint diversity that have already been induced from the Common Crawl. I'd much prefer asking more of the user: a more continuous input stream at each deliberative stage.
the author re-reading one year+ out: