Frame Control

It seems to me that a lot of people, among "rationalists" and so on, do things and behave in ways that make themselves much more vulnerable to abuse and abusers, for no really good reason at all

The recent string of posts in which women point out weird, abusive, and cultish behavior by some rationalist community leaders really cemented this understanding for me. I'd bet that surface-level rationalist culture doesn't provide any protection against potential abusers. Of course, actually behaving rationally provides some of the best protection, but writing long blog posts, living in California, being promiscuous, and being open to weird ideas doesn't make one rational. That sort of behavior certainly doesn't protect against abusers; it's probably helping them thrive.

Someone whose life was half ruined because they fell in with an abusive cult leader in the Berkeley community is less rational than the average person, regardless of whatever signifier they use to refer to themselves.

I should say that, by my understanding, Aella doesn't fit the rational-in-culture-only box. She seems to have a pretty set goal and works toward that goal in a rational way.

Apparently winning by the bias of your opponents

My understanding of why autogynephiles end up in relationships with other males is that they have high levels of eroticism and difficulty finding suitable female partners. Transsexuals end up with other transsexuals of the same biological sex at a much higher rate than regular males or females do. This fits the theory that they desire to be with a woman but, finding that difficult, end up with another transsexual as a near approximation. Additionally, the proportion of men who would accept a relationship with a transsexual is higher than the corresponding proportion of women. The end result, transsexuals in gay relationships, isn't entirely a product of their own behavior.

So observing autogynephiles getting into gay relationships doesn't need to imply an inherent attraction to men (which would cut against the idea of autogynephilia). It may just be that the autogynephile's situation in life makes gay relationships much more likely to occur than relationships with women.

What is the most evil AI that we could build, today?

If you were an evil genius with, say, $1B of computing power, what is the most harm you could possibly do to society?

AI risk is existential and currently theoretical. Learning what a malicious actor could do with $1B of compute today will not help focus your thinking on the risks posed by AGI. It's like trying to focus your thinking on the risks of global nuclear war by asking, "What's the worst a terrorist could do with a few tons of TNT?" It's not that the scale is wrong, it's that the risks are completely different. That doesn't mean that a terrorist with 100,000 tons of TNT isn't an important problem, but it's not the problem that Thomas Schelling and the nuclear deterrence experts were working on during the Cold War.

What is the most evil AI that could be built, today?

This is an entirely different question, and the answer is that there isn't any public evidence that anybody has the ability to create an evil AI today. I don't want to belabor the point, but nobody knew how to split the atom in 1937, yet in 1945 the US dropped two atomic bombs on hundreds of thousands of Japanese civilians.

Vax passports - theory and practice

when a clerk suspected that a $20 bill was fake

Which was in fact true.

frontier64's Shortform

The future may have a use for frozen people from the current era. In the future, historical humans may be useful as an accurate basis for interpreting the legal documents of our era.

Original public meaning is a fairly modern mode of legal interpretation of the US Constitution. Its basis is that the language of the Constitution should be interpreted according to the original meaning the text had when it was drafted and amended into the Constitution. A similar mode of interpretation is used, less commonly, for statutes. It's likely that this mode of interpretation will become more common in the future as a way to prevent value drift.

One of the struggles of applying the original public meaning test in modern times is that, for the older amendments, no one currently alive lived in the culture in which they were drafted. It would be very helpful to have just a single ordinary man who lived in 1790 who could explain his understanding of the Constitution and the language therein.

It's possible the future will have similar issues interpreting constitutional language from our time and will appreciate being able to question a portion of the population from that era.

This theory is a subset of the idea that humans from past eras will be useful in the future to prevent value drift generally. My first brush with this idea was Three Worlds Collide.

I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead

It seems the state of the art for generating GPT-3 speech is to generate multiple responses until you have a good one and cherry-pick it. I'm not sure whether including a disclaimer explaining that process is still helpful. Yes, there's a sizable number of people who don't know about that process or who don't automatically assume it's being used, but I'm not sure how big that number is anymore. I don't think Isusr should explain GPT-3 or link to an OpenAI blog post every time he uses it, as that's clearly a waste of time even though many people still don't know. So where do we draw the line? For me, every time I see someone say they've generated text with GPT-3, I automatically assume it's a cherry-picked response unless they say something to the contrary. I know from experience that the only way to get consistently good responses out of GPT-3 is to cherry-pick. I estimate that a lot of people on LW are in the same boat.
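The cherry-picking workflow described above, sampling several completions and keeping only the best one, can be sketched as a best-of-n loop. Everything here is hypothetical: `generate_fn` stands in for whatever model call the curator actually uses, and `score_fn` stands in for their judgment of what counts as a "good" response (here, crudely, length).

```python
def generate_candidates(prompt, n, generate_fn):
    """Sample n completions of the same prompt from a caller-supplied
    generation function (e.g. a wrapper around a language-model API)."""
    return [generate_fn(prompt) for _ in range(n)]

def cherry_pick(candidates, score_fn):
    """Keep only the highest-scoring completion; score_fn encodes
    whatever 'good response' means to the person doing the curating."""
    return max(candidates, key=score_fn)

# Toy stand-in for a model: deterministic canned outputs instead of an API call.
_canned = iter(["ok", "meh", "a long, detailed answer"])
def fake_model(prompt):
    return next(_canned)

best = cherry_pick(
    generate_candidates("What is rationality?", 3, fake_model),
    score_fn=len,  # crude proxy: prefer the longest completion
)
print(best)
```

The point of the sketch is that the reader only ever sees `best`; the discarded candidates are invisible, which is exactly why a disclaimer about the process might matter.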

I read “White Fragility” so you don’t have to (but maybe you should)

I agree that it isn't strong evidence. I should have made my point more explicit. My point is that ozziegooen mentions the vitriol as if it were evidence that DiAngelo's argument has value and should be discussed. If anything, it's evidence against that notion (however weak it may be).

I read “White Fragility” so you don’t have to (but maybe you should)

My response is fine in tell culture too, no? I'm stating what I believe to be true of his comment. Why is it okay for ozziegooen to speak truthfully in his comment, but not okay for me to reply truthfully with my impression of it?

I read “White Fragility” so you don’t have to (but maybe you should)

I agreed that an "I am tapping out of this" comment is helpful until I experienced it and realized that the experience is quite unpleasant. There's something particularly stinging about being told that a discussion with you can't be productive. I think I wouldn't be affected at all if the non-response were simply "I am tapping out of this," without any particular reason being given.

I think it has to do with Jordan Peterson's 9th rule for life: "Assume the person you're listening to might know something that you don't." That just makes sense to me. I don't quite understand why some people care about vitriolic comments on the internet. To me, vitriolic comments are par for the course, and bringing them up is an obvious attempt to play the victim card for sympathy. But hey, ozziegooen seems like a well-read dude, so maybe he has a good explanation for why I should care whether people have written scathing online reviews of DiAngelo's book. Or maybe he has another insight into the topic that I couldn't predict. His last response to me certainly gave me a lot of information I didn't already have, so for me the interaction was a net positive.

Saying "we can't have a productive discussion" in response to a two-sentence reply goes completely against that 9th rule. It's an acknowledgement that the responder is listening to me, because he responded to my comment. But he's also stating that he thinks I have literally nothing to offer him by way of new information, and vice versa. That's pretty low!

I am certainly more sensitive on this issue than most people here. If ozziegooen's comment wouldn't seem insulting to others, then the issue really lies entirely with me, and I'll adapt to the style of decorum that fits most people. I don't want to jump on conduct that the LW community thinks is fine.

On a different note, I agree with you that people should feel free to tap out of discussions. I don't mind if someone doesn't wish to discuss further. I've tapped out of many conversations myself for a variety of reasons and sometimes the reason is I don't think the conversation will be productive.

I'm not going to respond any further after this comment because I don't think this back-and-forth will be productive. [1]

  1. I'm just saying this to give you the experience. I don't mean it at all. But even then I feel bad saying it because it sounds so rude to me! ↩︎
