All of Quinn's Comments + Replies

the author re-reading one year+ out:

  • My views on the value of infographics that actually look nice have changed; perhaps I should have had nicer-looking figures.
  • "Unlike spreadsheets, users can describe distributions in notebooks, capturing uncertainty in their beliefs." seems overconfident. Monte Carlo has been an Excel feature for years. My implicit dismissal of this makes sense as a thing to say, because "which use cases are easy?" is a more important question than "which use cases can you get away with if you squint?", but I could've done a way better job
... (read more)

Quick version of conversations I keep having; might be worth a top-level effortpost.

A prediction market platform offering granular permission systems would open up many use cases for many people.

Whistleblower protections at large firms, dating, project management and internal company politics--- all userbases whose preferences about transparency are underserved. Manifold could pivot to this, but they have a lot of other stuff they could do instead.

Think about how Slack admins are confused about how to prevent some usergroups from using @channel, while Discord admins aren't.

(Sorry for pontificating when you asked for an actual envelope or napkin.) The upside is an externality: Ziani incidentally benefits, but the signal to other young grad students that maybe career suicide is a slightly more viable risk seems like the source of impact. Agree that this subfield isn't super important, but we should look for related opportunities in subfields we care more about.

I don't know if designing a whistleblower prize is a good nerdsnipe / econ puzzle, in that it may be a really bad goose chase (since generating false positives through incentives imposes name-clearing costs on innocent people, and either you can design your way out of this problem or you can't).

Linch (1mo):
My guess is still that this is below the LTFF bar (which imo is quite high) but I've forwarded some thoughts to some metascience funders I know. I might spend some more free time trying to push this through later. Thanks for the suggestion! 

Is this a retroactive grant situation?

Linch (1mo):
This is the type of thing that speaks to me aesthetically but my guess is that it wouldn't pencil, though I haven't done the math myself (nor do I have a good sense of how to model it well). Improving business psychology is just not a very leveraged way to improve the long-term future compared to the $X00M/year devoted to x-risk, so the flow-through effects have to be massive in order to be better than marginal longtermist grants. (If I was making grants in metascience this is definitely the type of thing I'd consider). I'm very open to being wrong though; if other people have good/well-justified Fermi estimates for a pretty large effect (better than the marginal AI interpretability grant for example) I'd be very happy to reconsider.

I think the lesswrong/forummagnum takes on recsys are carrying the torch of RSS ("you own your information diet" and so on)--- I'm wondering if we can have something like "use lightcone/CEA software to ingest substack comments, translate activity or likes into karma, and arrange/prioritize them according to the user's moderation philosophy".

This does not cash out to more CCing/ingesting of substack RSS into lesswrong overall; the set of substack posts I would want to view in this way would be private from others, and I'm not necessarily interested in conflating the cross-platform "karma" translations with more votes or trying to make it go both ways.
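A hypothetical sketch of the shape I have in mind (every type and function name here is invented; none of this is a real ForumMagnum or Substack API):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing, Down (..))

-- Invented types: an ingested substack comment and a local karma score.
data ExternalComment = ExternalComment
  { source :: String -- e.g. a substack URL
  , body   :: String
  , likes  :: Int
  }

newtype Karma = Karma Int deriving (Eq, Ord, Show)

-- Hypothetical translation of likes into karma; the scaling is made up.
karmaFromLikes :: ExternalComment -> Karma
karmaFromLikes = Karma . likes

-- A user's "moderation philosophy" as a scoring function that
-- arranges/prioritizes the ingested comments. This stays read-only
-- and private to the user: no votes flow back to the source platform.
prioritize :: (ExternalComment -> Karma) -> [ExternalComment] -> [ExternalComment]
prioritize score = sortBy (comparing (Down . score))
```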

In terms of the parts where the books overlap, I didn't notice anything substantial. If anything the sequel has less, cuz there wasn't enough detail to get into tricks like the equivalent bet test.

I'm halfway through How to Measure Anything in Cybersecurity Risk, which doesn't have a lot of cybersecurity-specific material and mostly reviews the first book. I never finished the first one, and it was about four years ago that I read the parts that I did.

I think for top of the funnel EA recruiting it remains the best and most underrated book. Basically anyone worried about any kind of problem will do better if they read it, and most people in memetically adaptive / commonsensical activist or philanthropic mindsets probably aren't measuring enough.

However, the mate... (read more)

matto (1mo):
What's different there compared to the first book? I read the first one and found it to resonate strongly, but also found my mental models to not fit well with the general thrust. Since then I've been studying stats and thinking more about measurement with the intent to reread the first book. Curious if the cybersecurity one adds something more though

one may be net better than the other; I just think the expected error washes out all of one's reasoning, so individuals shouldn't be confident they're right.

what are your obnoxious price systems for tutoring?

There's a somewhat niche CS subtopic that a friend wants to learn, and I'm really well positioned to teach her. More discussion on the manifold bounty:

A trans woman told me

I get to have all these talkative blowhard traits and no one will punish me for it cuz I'm a girl. This is one major reason detrans would make my life worse. Society is so cruel to men, it sucks so much for them

And another trans woman had told me almost the exact same thing a couple months ago.

My take is that roles have upsides and downsides, and that you'll do a bad job if you try to say one role is better or worse than another on net or say that a role is more downside than upside. Also, there are versions of "women talk too much" as a stereotype in many subcultures, but I don't have a good inside view about it.

Elizabeth (1mo):
This may be true, but it might be that she's incurring a bunch of social penalties she isn't aware of. Women are less likely to overtly punish, so if she's spending more time with women that could already explain it. No one yells at you to STFU, but you miss out on a party invite you would have gotten if you shared the conversation better. I suspect men are also more willing to tell other men to STFU than they are to say it to women, but I'll let someone else speak to that question.
Viliam (1mo):
The fact that both roles have advantages and disadvantages doesn't necessarily prove that neither is better on net. Then again, "better" by what preferences? Lucky are the people whose preferences match the role they were assigned. To me it seems that women have a greater freedom of self-expression, as long as they are not competitive. Men are treated instrumentally: they are socially allowed to work and to compete against each other; anything else is a waste of energy. For example, it is okay for a man to talk a lot, if he is a politician, manager, salesman, professor, priest... simply, if it is a part of his job. And when he is seducing a woman. Otherwise, he should be silent. Women are expected to chit-chat all the time, but they should never contradict men, or say anything controversial.

oh haha, two years later it's funny that quinn thought LF was a "very thorough foundation".

At every point in my career, the only NDA situations I've been in have been IP / anticompetitive ones. The idea that some NDAs want to oblige me to glomarize is rather horrifying. When you have IP reasons for an NDA, of course you say "I'm not getting in the weeds here for NDA reasons"; it is a literal type error in my social compiler to even imagine being clever or savvy about this.

habryka (2mo):
Yep, also seems horrifying to me, which is why I had such a strong reaction to Wave's severance agreements. Luckily these things are no longer enforceable.

https://www.lesswrong.com/posts/BGLu3iCGjjcSaeeBG/related-discussion-from-thomas-kwa-s-miri-research?commentId=fPz6jxjybp4Zmn2CK This brief subthread can be read as "giving nate points for trying" and is too credulous about whether "introspection" actually works--- my wild background guess is that roughly 60% of the time "introspection" is more "elaborate self-delusion" than working as intended, and there are times when someone saying "no but I'm trying really hard to be good at it" drives that probability up instead of down. I didn't think this was one of thos... (read more)

Maybe he meant that at South by Southwest the chance is higher than 28%?

Answer by Quinn (Oct 07, 2023):

Semanticists have been pretty productive (https://www.cambridge.org/core/books/foundations-of-probabilistic-programming/819623B1B5B33836476618AC0621F0EE) and may help you approach what matters to you; there are certainly adjacent questions and concerns.

A bunch of PL papers building on the Giry monad have floated around in stale tabs on my machine for a couple years. The open agency architecture actually provides breadcrumbs to this sort of thing in a footnote about infrabayesianism https://www.lesswrong.com/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architectu... (read more)
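To gesture at what those papers build on: the finitely-supported case of the Giry monad is just weighted lists. A toy sketch of my own, not taken from any of the linked papers:

```haskell
-- A minimal discrete cousin of the Giry monad: finitely-supported
-- distributions as weighted lists.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [(f x, p) | (x, p) <- xs]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

instance Monad Dist where
  -- bind multiplies path probabilities: this composition law is what
  -- the Giry monad generalizes from finite supports to measurable spaces.
  Dist xs >>= k = Dist [(y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x)]

uniform :: [a] -> Dist a
uniform xs = Dist [(x, 1 / fromIntegral (length xs)) | x <- xs]

-- The sum of two fair dice, written as a monadic program.
twoDice :: Dist Int
twoDice = do
  a <- uniform [1 .. 6]
  b <- uniform [1 .. 6]
  pure (a + b)
```

The measure-theoretic Giry monad replaces the weighted list with a probability measure and the comprehension with an integral.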

My summary translation:

You have a classical string P. We like to cooperate across worldviews or whatever, so we provide sprinkleNegations to sprinkle double negations over classical strings and decorateWithBox to decorate intuitionistic strings with provability.

The definition of an inverse would be cool, but you can see that decorateWithBox(sprinkleNegations(P)) = P isn't really there, because now you have all these annoying symbols (which cancel out if we implemented the translations correctly, but how would we know that?). Luckily, we know we can automate cancelation w... (read more)
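To make the shape concrete, here's a minimal sketch of the sprinkleNegations direction, assuming propositional formulas and the textbook Gödel-Gentzen double-negation translation (the post's actual definitions may differ):

```haskell
-- Propositional formulas; infix constructors for readability.
data Prop
  = Atom String
  | Prop :/\ Prop
  | Prop :\/ Prop
  | Prop :=> Prop
  | Neg Prop
  deriving (Eq, Show)

-- Godel-Gentzen translation: if P is a classical tautology, then
-- sprinkleNegations P is an intuitionistic tautology.
sprinkleNegations :: Prop -> Prop
sprinkleNegations (Atom s)  = Neg (Neg (Atom s))
sprinkleNegations (a :/\ b) = sprinkleNegations a :/\ sprinkleNegations b
sprinkleNegations (a :\/ b) =
  Neg (Neg (sprinkleNegations a) :/\ Neg (sprinkleNegations b))
sprinkleNegations (a :=> b) = sprinkleNegations a :=> sprinkleNegations b
sprinkleNegations (Neg a)   = Neg (sprinkleNegations a)
```

The round trip decorateWithBox(sprinkleNegations(P)) then carries exactly the kind of ¬¬/□ litter described above, which a normalizer would have to cancel.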

Kenoubi (1mo):
I've seen a lot of attempts to provide "translations" from one domain-specific computer language to another, and they almost always have at least one of these properties:

  1. They aren't invertible, nor "almost invertible" via normalization
  2. They rely on an extension mechanism intentionally allowing the embedding of arbitrary data into the target language
  3. They use hacks (structured comments, or even uglier encodings if there aren't any comments) to embed arbitrary data
  4. They require the source of the translation to be normalized before (and sometimes also after, but always before) translation

(2) and (3) I don't think are super great here. If there are blobs of data in the translated version that I can't understand, but that are necessary for the original sender to interpret the statement, it isn't clear how I can manipulate the translated version while keeping all the blobs correct. Plus, as the recipient, I don't really want to be responsible for safely maintaining and manipulating these blobs. (1) is clearly unworkable (if there's no way to translate back into the original language, there can't be a conversation). That leaves 4.

4 requires stripping anything that can't be represented in an invertible way before translating. E.g., if I have lists but you can only understand sets, and assuming no nesting, I may need to sort my list and remove duplicates from it as part of normalization. This deletes real information! It's information that the other language isn't prepared to handle, so it needs to be removed before sending. This is better than sending the information in a way that the other party won't preserve even when performing only operations they consider valid.

I think this applies to the example from the post, too--- how would I know whether certain instances of double negation or provability were artifacts that normalization is supposed to strip, or just places where someone wanted to make a statement about double negation or provability?
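Kenoubi's lists-vs-sets instance of (4), as a sketch (assuming the no-nesting case):

```haskell
import qualified Data.Set as Set

-- I speak in lists; you only understand sets.
translate :: Ord a => [a] -> Set.Set a
translate = Set.fromList

-- To make translation invertible I must normalize first: sort and
-- deduplicate. Order and multiplicity are deleted for good, so
-- round-tripping recovers the normal form, not the original list:
-- normalize [3,1,3,2] == [1,2,3].
normalize :: Ord a => [a] -> [a]
normalize = Set.toAscList . Set.fromList
```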

I don't want to make claims about the particular case, but I'm worried that if you infer a heuristic and apply it elsewhere it could fail.

I think Nonlinear should be assigning enough credence to the possibility that they severely harmed their employees to react with a degree of horror and remorse. Not laughter or dismissal.

Sometimes scrupulous/doormat people bend over backwards to twist the facts into some form where an accuser is reasonable or has a point. If you've done this, maybe eventually you talked to outside observers or noticed a set of three lies that you h... (read more)

Y'all, this is a rant. It will be sloppy.

I'm really tired of high-functioning super smart "autism". Like, ok, we all have made-up diagnoses--- anyone with an IQ slightly above 90 knows that they can learn the slogans to manipulate gatekeepers into performance enhancement, and they decide not to if they think they're performing well enough already. That doesn't mean "ADHD" describes something in the world. Similarly, there's this drift of "autism" getting more and more popular. It's obnoxious because labels and identities are obnoxious, but I only find it repuls... (read more)

This comment's updates for me personally:

  • The overall "EA is scary / criticizing leaders is scary" meme is very frequently something I roll my eyes at, I find it alien and sometimes laughable when people say they're worried about being bold and brave cuz all I ever see are people being rewarded for constructive criticism. But man, I feel like if I didn't know about some of this stuff then I'm missing a huge piece of the puzzle. Unclear yet what I'll think about, say, the anon meta on forums after this comment sinks in / propagates, but my guess is it'll b
... (read more)
Oliver Sourbut (2mo):
This is a generally constructive comment. One bit left me confused, and I wonder if you can unpack what it means? What was the misfire? (I mean literally what does 'it' stand for in this sentence?) Also, what kind of points and karma are we talking about, presumably metaphorical?

Ahhhhh, kick ass! Stephen Mell is getting into LLMs lately (https://arxiv.org/abs/2303.15784)--- you guys gotta talk. I just sent him this post.

Ah yeah. I'm a bit of a believer in "introspection preys upon those smart enough to think they can do it well but not smart enough to know they'll be bad at it"[1], at least to a partial degree. So it wouldn't shock me if a long document wouldn't capture what matters.


  1. epistemic status: in that sweet spot myself ↩︎

I second this--- I skimmed part of nate's comms doc, but it's unclear to me what turntrout is talking about unless he's talking about "being blunt"--- it sounds like overall there's something other than bluntness going on, cuz I feel like we already know about bluntness / we've thought a lot about the upsides and downsides of blunt people before.

TurnTrout (2mo):
I might reply later, but I want to note that Nate's comms doc doesn't really track my (limited) experience of what it feels like to talk with Nate, and so (IMO) doesn't make great sense as a baseline of "what happened?".
Raemon (2mo):
So, I don't know what actually happened here. But I at least want to convey support for: "There are ways of communicating other than being blunt that can... unsettlingly affect you [or, at least, some people], which are hard to explain, and their being hard to explain makes it psychologically harder to deal with, because when you try to explain it or complain about it people are kinda dismissive."

(I'm not expressing a strong opinion here about whether Nate should have done something different in this case, or about the best way for Turntrout, Vivek's team, or others to relate to it. I'm just trying to hold space for "I think there's a real thing people should be taking seriously as a possibility and not just rounding off to 'Turntrout should have thicker skin' or something.")

I have some guesses about the details, but they're mostly informed by my interactions with people other than Nate, which give me sort of an existence proof, and I'm wary of speculating myself here without having actually had this sort of conversation with Nate.

Microsoft has no control, but gets OpenAI's IP until AGI

This could be cashed out a few ways--- does it mean "we can't make decisions about how to utilize core tech in downstream product offerings or verticals, but we take a hefty cut of any such things that openai then ships"? If so, there may be multiple ways of interpreting it (like microsoft could accuse openai of very costly negligence for neglecting a particular vertical or product idea or application, or they couldn't).

Zach Stein-Perlman (2mo):
I meant Microsoft doesn't get to tell OpenAI what models to make, but gets copies of whatever models it does make. See WSJ.

I heard a pretty haunting take about how long it took to discover steroids in bike races. Apparently, there was a while where a "few bad apples" narrative remained popular even when an ostensibly "one of the good ones" guy was outperforming guys discovered to be using steroids.

I'm not sure how dire or cynical we should be about academic knowledge or incentives. I think it's more or less defensible to assume that no one with a successful career is doing anything real until proven otherwise, but it's still a very extreme view that I'd personally bet against. Of course also things vary so much field by field.

ChristianKl (2mo):
Given that the replication rate of findings isn't zero, it seems that some researchers aren't completely fraudulent and are at least partly doing "real work". An interesting question is how many failed replications are due to fraud. Are 20%, 50%, or 80% of the studies that don't replicate fraudulent?
Linch (2mo):
My current (maybe boring) view is that any academic field where the primary mode of inquiry is applied statistics (much of the social sciences and medicine) is suss. The fields where the primary tool is mathematics (pure mathematics, theoretical CS, game theory, theoretical physics) still seem safe, and the fields where the primary tool is computers (distributed systems, computational modeling in various fields) are reasonably safe. ML is somewhere in between computers and statistics. Fields where the primary tool is just looking around and counting (demography, taxonomy, astronomy(?)) are probably safe too? I'm confused about how to orient towards the humanities.

For the record, to mods: I waited till after Petrov Day to answer the poll, because my first guess upon receiving a message on Petrov Day asking me to click something was that I was being socially engineered. Clicking the next day felt pretty safe.

This seems kinda fair. I'd like to clarify--- I largely trust the first few dozen people; I just expect, depending on how growth/acquisition is done, that if there are more than a couple instances of protests you'll have to deal with all the values diversity underlying the different reasons for joining in. This subject seems unusually fraught in its potential to generate conflationary-alliance (https://www.lesswrong.com/s/6YHHWqmQ7x6vf4s5C) sorta things.

Overall I didn't mean to other you--- in fact, I never said this publicly, but a couple months ago there was a related post ... (read more)

Holly_Elmore (2mo):
Yeah, I’ve been weighing a lot whether big tent approaches are something I can pull off at this stage or whether I should stick to “Pause AI”. The Meta protest is kind of an experiment in that regard and it has already been harder than I expected to get the message about irreversible proliferation across well. Pause is sort of automatically a big tent because it would address all AI harms. People can be very aligned on Pause as a policy without having the same motivations. Not releasing model weights is more of a one-off issue and requires a lot of inferential distance crossing even with knowledgeable people. So I’ll probably keep the next several events focused on Pause, a message much better suited to advocacy.

In my sordid past I did plenty of "finding the three people for nuanced logical mind-changing discussions amidst dozens of 'hey hey ho ho outgroup has got to go'", so I'll do the same here (if I'm in town), but the selection effects seem deeply worrying (for example, you could go down to the soup kitchen or punk music venue and recruit all the young volunteers who are constantly sneering about how gentrifying techbros are evil and can't coordinate on whether their "unabomber is actually based" argument is ironic or unironic, but you oughtn't. The fact that t... (read more)

Holly_Elmore (2mo):
This strikes me as the kind of political thinking I think you’re trying to avoid. Contempt is not good for thought. Advocacy is not the only way to be tempted to lower your epistemic standards. I think you’re doing it right now when you other me or this type of intervention.

Winning or losing a war is kinda binary.

Whether a pandemic gets to my country is a matter of degree, since in principle a pandemic that killed 90% of counterfactual economic activity in one country could break containment but only destroy 10% in your country.

"Alignment" or "transition to TAI" of any kind is way further from "coinflip" than either of these, so if you think doomcoin is salvageable or want to defend its virtues you need way different reference classes.

Think about the ways in which winning or losing a war isn't binary-- lots of ways for implem... (read more)

Daniel Kokotajlo (2mo):
Interesting, thanks. Yeah, I currently think the range of possible outcomes in warfare seems to be more smeared out, across a variety of different results, than the range of possible outcomes for humanity with respect to AGI. The bulk of the probability mass in the AGI case, IMO, is concentrated in "Total victory of unaligned, not-near-miss AGIs", and then there are smaller chunks concentrated in "Total victory of unaligned, near-miss AGIs" (near-miss means what they care about is similar enough to what we care about that it is either noticeably better, or noticeably worse, than human extinction) and of course "human victory," which can itself be subdivided depending on the details of how that goes.

Whereas with warfare, there's almost a continuous range of outcomes ranging from "total annihilation and/or enslavement of our people" to "total victory", with pretty much everything in between a live possibility, and indeed some sort of negotiated settlement more likely than not. I do agree that there are a variety of different outcomes with AGI, but I think if people think seriously about the spread of outcomes (instead of being daunted and deciding not to think about it because it's so speculative) they'll conclude that they fall into the buckets I described.

Separately, I think that even if it was less binary than warfare, it would still be good to talk about p(doom). I think it's pretty helpful for orienting people & also I think a lot of harm comes from people having insufficiently high p(doom). Like, a lot of people are basically feeling/thinking "yeah it looks like things could go wrong but probably things will be fine probably we'll figure it out, so I'm going to keep working on capabilities at the AGI lab and/or keep building status and prestige and influence and not rock the boat too much because who knows what the future might bring but anyhow we don't want to do anything drastic that would get us ridiculed and excluded now." If they are actually correct th

Height filter: I don't see anywhere how many women use the height filter at all vs don't [1]. People being really into 6'5" seems alarming until you realize that if you're trait-xyz enough to use height filters at all, you might as well go all in and use silly height filters.


  1. As a man, filters on Bumble are a premium feature; likely price discrimination to give many premium features to women for free, though. ↩︎

I've certainly wondered this! In spite of the ACX commenter I mentioned suggesting that we ought to reward people for being transparent about learning epistemics the hard way, I find myself not 100% sure if it's wise or savvy to trust that people won't just mark me down as, like, "oh, so quinn is probably prone to being gullible or sloppy" if I talk openly about what my life was like before math coursework and the sequences.

I think that (for this thing and many others too), some people are going to mark you down for it and some people are going to mark you up for it. So the relevant question is not "will some people mark me down" but "what kinds of people will mark me down and what kinds of people will mark me up, and which one of those is the group that I care more about".

Yes. So much love for this post, you're a better writer than me and you're an actual public defender but otherwise I feel super connected with you haha.

It's incredibly bizarre being at least a little "early adopter" about massive 2020 memes--- '09 tumblr account activation and Brown/Garner/Grey-era BLM gave me a healthy dose of "before it was cool" hipster sneering at the people who only got into it once it was popular. This matters on lesswrong, because Musk's fox news interview referenced the "isn't it speciesist to a priori assume humans are better th... (read more)

Morpheus (3mo):
Why does it have to be niche? I haven't met many nonrationalists whose mind doesn't go haywire once you start on Politics or Religion. Where did these EAs/Rats grow up if they weren't exposed to that?

I'm wondering if we want "private notes about users within the app", like discord.

Use case: I don't always remember the loose "weight assignments" over time for different people. If someone seems like they're preferring to make mistake class A over B in one comment, then that'll be relevant a few months later if they're advocating a position on A-vs-B tradeoffs (I'll know whether they practice what they preach). Or maybe they repeated a sloppy or naive view about something that I think should be easy enough to dodge, so I want to just take them less seriously in ... (read more)

Yoav Ravid (3mo):
I would like to have that feature (though I don't see it as high priority)

Yeah, IQ-ish things or athletics are the most well-known examples, but I only generalized in the shortform cuz I was looking around at my friends and thinking about more Big Five-oriented examples.

Certainly "conscientiousness seems good but I'm exposed to the mistake class of unhelpful navelgazing, so maybe I should be less conscientious" is so much harder to take seriously if you're in a pond that tends to struggle with low conscientiousness. Or being so low on neuroticism that your redteam/pentest muscles atrophy.

Viliam (3mo):
That sounds intriguing. I would like to read an article with many specific (even if fictional) examples.

How are people mistreated by bellcurves?

I think this is a crucial part of a lot of psychological maladaptation and social dysfunction, very salient to EAs. If you're way more trait-xyz than anyone you know for most of your life, your behavior and mindset will be massively affected, and depending on when in life / how much inertia you've accumulated by the time you end up in a different room where suddenly you're average on xyz, you might lose out on a ton of opportunities for growth.

In other words, the concept of "big fish small pond" is deeply insightful a... (read more)

Viliam (3mo):
So, being a "big fish in a small pond" teaches you habits that become harmful when you later move to a larger pond. But if you don't move, you can't grow further. I think the specific examples are more known than the generalization. For example:

Many people in Mensa are damaged this way. They learned to be the smartest ones, which they signal by solving pointless puzzles, or by talking about "smart topics" (relativity, quantum, etc.) despite the fact that they know almost nothing about these topics. Why did they learn these bad habits? Because this is how you most efficiently signal intelligence to people who are not themselves intelligent. But it fails to impress the intelligent people used to meeting other intelligent people, because they see the puzzles as pointless, they see the smart talk as bullshit if they ever read an introductory textbook on the topic, and will ask you about your work and achievements instead. The useful thing would instead be to learn how to cooperate with other intelligent people on reaching worthy goals.

People who are too smart or too popular at elementary school (or high school) may be quite shocked when they move to a high school (or university) and suddenly their relative superpowers are gone. If they learned to rely on them too much, they may have a problem adapting to normal hard work or normal friendships.

Staying at the same job for too long might have a similar effect. You feel like an expert because you are familiar with all systems in the company. Then at some moment fate makes you change jobs, and suddenly you realize that you know nothing, that the processes and technologies used in your former company were maybe obsolete. But the more you delay changing jobs, the harder it becomes.

I remember reading in a book by László Polgár, father of the famous female chess players, how he wanted his girls to play in the "men's" chess league since the beginning, because that's what he wanted them to win. He was afraid that playing

(I was the one who asked Charles to write up his inside view, as reading the article is the only serious information I've ever gathered about debate culture https://www.slowboring.com/p/how-critical-theory-is-radicalizing )

Hm, maybe you have a private-note version of a post, but each inline comment can optionally be sent to a kind of granular-permissions version of shortform, to gradually open it up to your inner circle before putting it on regular shortform.

I don't bother with Linux Steam; I just boot Steam within Lutris. Lutris just automates config / wraps Wine plus a GUI, so Lutris will make Steam think it's within Windows, and then everything that Steam launches will also think it's within Windows. Tho admittedly I don't use Steam a lot (Lutris takes excellent care of me for non-Steam things).

Here's the full text from sl4.org/crocker.html, in case sl4.org goes down (as I suspected it had when it hung trying to load for a few minutes just now).

Declaring yourself to be operating by "Crocker's Rules" means that other people are allowed to optimize their messages for information, not for being nice to you.  Crocker's Rules means that you have accepted full responsibility for the operation of your own mind - if you're offended, it's your fault.  Anyone is allowed to call you a moron and claim to be doing you a favor.  (Which, in poi

... (read more)

I'm curious and I've been thinking about some opportunities for cryptanalysis to contribute to QA for ML products, particularly in the interp area. But I've never looked at spectral methods or thought about them at all before! At a glance it seems promising. I'd love to see more from you on this.

Joseph Van Name (4mo):
I will certainly make future posts about spectral methods in AI since spectral methods are already really important in ML, and it appears that new and innovative spectral methods will help improve AI and especially AI interpretability (but I do not see them replacing deep neural networks though). I am sure that LSRDRs can be used for QA for ML products and for interpretability (but it is too early to say much about the best practices for LSRDRs for interpreting ML). I don't think I have too much to say at the moment about other cryptanalytic tools being applied to ML though (except for general statistical tests, but other mathematicians have just as much to say about these other tests). Added 8/4/2023: On a second thought, people on this site really seem to dislike mathematics (they seem to love discrediting themselves). I should probably go elsewhere where I can have higher quality discussions with better people.

While I partially share your confusion about "implies an identification of agency with unprovoked assault", I thought Sinclair was talking mostly about "your risk of being seduced, being into it at the time, then regretting it later" and it would only relate to harassment or assault as a kind of tail case.

I think some high-libido / high-sexual-agency people learn to consider "seducing someone very effectively, in ways that seem to go well but that the person would not endorse at CEV" a morally relevant failure mode--- say 1% bad, setting 100% at some rape outcome. Ot... (read more)

"some person in the rationalist community who you have seen at like 3 meetups."

I think what's being gestured at is that Sinclair may or may not have been referring to

  1. the base rate of this being a bad idea
  2. the base rate of this bad idea conditioned on genders xyz

An example of the variety of ways of thinking about this: Many women (often cis) I've talked to, among those who have standing distrust or bad priors on cis men, are very liberal about extending woman-level trust to trans women. That doesn't mean they're maximally trusting, just that they're... (read more)

Sinclair Chen (5mo):
I don't think that cis women are harmless either. On one hand, women who are abusers tend to be more manipulative and isolating, whereas men who are abusers tend to be more physical. And mayyybe that's a neurotype thing that correlates with bio sex rather than hormonal gender, or a cultural thing that is a product of gendered upbringing rather than gendered adult life. And mayyybe that meaningfully affects in what scenarios one ought to be wary of cis women vs trans women. Feels a bit like an irresponsible speculation though. Not putting forth a strong argument here, just clarifying my position.

There was a bunker workshop last year; an attendee told me that in the first few hours of the first day everyone reached a consensus that it wasn't going to be a super valuable or accurate strategy, and then they goofed off the rest of the weekend.

It'd be great if y'all could add a regrantor from the Cooperative AI Foundation / FOCAL / CLR / Encultured region of the research/threat-model space. (epistemic status: conflict of interest, since if you do this I could make a more obvious argument for a project)

Austin Chen (5mo):
I'm generally interested in having a diverse range of regrantors; if you'd like to suggest names/make intros (either here, or privately) please let me know!

i_X : X → X ∐ Y is the inclusion map.

What makes a coproduct an "inclusion mapping"? I haven't seen this convention/synonym anywhere before.

harfe (5mo):
"Inclusion map" refers to the map i_X, not the coproduct X ∐ Y. The map i_X is a coprojection (these are sometimes called "inclusions"; see https://ncatlab.org/nlab/show/coproduct). A simple example in sets: we have two sets X and Y, and their disjoint union X ⊔ Y. Then the inclusion map i_X is the map that maps x (as an element of X) to x (as an element of X ⊔ Y).

ok, great! I'm down.

Incidentally, you caused me to google for voting theory under trees of alternatives (rather than lists), and there are a few prior directions (none very old, at a glance).

Seems like a particularly bitter-lesson-y take (in that it kicks a lot to the magical all-powerful black box), while also being over-reliant on the perceptions of viewpoint diversity that have already been induced from the common crawl. I'd much prefer asking more of the user: a more continuous input stream at each deliberative stage.

Zac Hatfield-Dodds (5mo):
Opportunities and Risks of LLMs for Scalable Deliberation with Polis, a paper out this week, investigates the application of LLMs to assist human democratic deliberation: In particular, §2.2.4 concurs with your (and my) concerns about this post:
  1. I've been very distressed thinking that the instrumental and epistemic parts are not cleanly separable, and that the entire is-ought gap or Humean facts-values distinction is a grade-school story or pedagogically noble lie https://www.lesswrong.com/posts/kq8CZzcPKQtCzbGxg/quinn-s-shortform?commentId=fdCTjtJgucYP9Xza4
  2. I got severely burnt out from exhaustion not long after writing this, and one of the reasons was the open games literature lol. But good news! I was cleaning out old tabs on my browser and I landed on one of those papers, and it all made perfect sense instantly!
... (read more)
Alexander Gietelink Oldenziel (5mo):
That's nice to hear. Could you say more about your update towards open games?