Comments

Lorenzo · 18d · 40

There's a typo that breaks "Half An Hour Before Dawn In San Francisco" (one of my favorites): https://github.com/ForumMagnum/ForumMagnum/pull/9045

(You can listen to it here if you miss it: https://res.cloudinary.com/lesswrong-2-0/video/upload/v1712004590/San_Francisco_gujlc3.mp3 )

[This comment is no longer endorsed by its author]
Lorenzo · 2mo · 13

Not saving so you can donate more - I think it's confused, but I can't really judge what's important to you.


Why do you think it's confused? If some others can benefit >100x more from the money compared to 60-year-old Lorenzo, and their interests are just as important, wouldn't it be rational to reallocate money from him to them?

Lorenzo · 6mo · 30

I think it's reasonably common that people who post under an alt think that they're keeping these identities pretty separate, and do not think someone could connect them with a few hours of playing with open source tools. And so it's important to make it public knowledge that this approach is not very private, so people can take more thorough steps if better privacy is something they want.


I agree with this. I think sometimes people are pretty clueless. E.g. people post under their first name and use the same IP. (There is at least one very similar recent example, but I can’t link to it.)

I think a PSA about accounts on LW/EAF/the internet often not being as anonymous as people think could be good. It should mention stylometry, internet archives, timezones, IP addresses, user agents, and browser storage; and it should suggest using Tor, making a new account for every post/comment, scheduling messages at random times, running comments through LLMs, not using your name or leaking information in other ways, and considering deliberate disinformation (e.g. pretending to be of the opposite gender, scheduling messages to appear to be in a different timezone, …).
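
To illustrate just one of these mitigations, here is a minimal sketch (my own illustration, not something from the original discussion) of submitting a comment after a uniformly random delay so the posting time carries no timezone signal; `submit_comment` is a hypothetical callback standing in for whatever posting mechanism is actually used:

```python
import random
import time
from datetime import datetime, timedelta, timezone


def schedule_at_random_time(submit_comment, text, max_delay_hours=24):
    """Post `text` after a uniformly random delay within `max_delay_hours`,
    so the submission time leaks no timezone or daily-routine signal.

    `submit_comment` is a hypothetical callback that does the actual posting.
    """
    delay_seconds = random.uniform(0, max_delay_hours * 3600)
    eta = datetime.now(timezone.utc) + timedelta(seconds=delay_seconds)
    print(f"Waiting until {eta.isoformat()} (UTC) before posting")
    time.sleep(delay_seconds)
    submit_comment(text)
```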

Specifically on the "feel less safe", if people feel safer than they actually are then they're not in a position to make well-considered decisions around their safety.

I think this is a very good point.

I could have posted "here's a thing I saw in this other community", but my guess is people would take it less seriously, partly because they think it's harder than it actually is.

I’m not sure about this. I think you could have written that there are very easy ways to deanonymize users, so people who really care about their anonymity should do the things I mentioned above?

I think maybe we're thinking about the risks differently?

Possibly. I think I might be less optimistic that people can or will, in practice, change their posting habits. And I think it's probably more likely that this post lowers the barrier for an adversarial actor to actively deanonymize people. It reminds me a bit of the tradeoffs you mentioned in your previous post on security culture.

I think it was a good call not to post reproducible code for this, for example, although it might have made it clearer how easy it is and strengthened the value of the demonstration.
 

I'm not planning to do more with this. I made my scraping code open source, since scraping is something LW and the EA Forum seem fine with, but my stylometry code is not even pushed to a private repo.

Thank you for this, and I do trust you. On some level, anonymous users already had to trust you before this project, since it’s clearly something anyone with some basic coding experience would be able to do if they wanted, but I think now they need to trust you a tiny bit more, since you now just need to press a button instead of spending a few minutes/hours actively working on it.

 

In any case, I don’t feel strongly about this, and I don’t think it’s important, but I still think that, compared to an informative post without a demonstration, this post increases the probability that an adversarial actor deanonymizes people slightly more than it increases the probability that anonymous users are protected from similar attacks (which are often even less sophisticated).

Lorenzo · 6mo · 6-4

Personally, I'm a bit unsure about the ethics of this. I understand that you’re not planning to publicly deanonymize the accounts, and I assume you don’t plan to do so privately either.

But I can imagine more barriers to posting things “anonymously” (or people feeling less safe when trying to do so) heavily discouraging some of the potentially most useful cases of anonymous accounts.[1] I also expect, as you mention, that some who posted anonymously in the past would not appreciate being privately de-anonymized by someone.

It seems meant to be a demonstration, but I don’t see why people wouldn’t expect this to work on LW/EAF, given that it worked on HN? I also think that people might be worried about you in particular deanonymizing them, given how central you are in the EA community and how some people seem to be really worried about their reputation/potential repercussions for what they post.

  1. ^

    Interestingly, I was just reading this comment from a user updating on the value of anonymous accounts.

Lorenzo · 7mo · 30

Though I could see including a LaTeX-to-MathML or a MathML-verbosifier step at build time.

 

This seems like something GPT excels at, if your editor supports GPT plugins: https://chat.openai.com/share/7e19d5e1-1a17-484c-a2ea-e0d7d2cfd56b
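
For the build-time conversion the parent comment mentions, here is a minimal sketch assuming the third-party latex2mathml Python package; the package choice and the wrapper around it are my own illustration, not something from the original thread:

```python
# Minimal sketch of a build-time LaTeX-to-MathML step.
# Assumes the third-party `latex2mathml` package; any other converter
# (or an LLM call, as suggested above) could be swapped in instead.
from latex2mathml.converter import convert


def latex_to_mathml(latex_source: str) -> str:
    """Convert a LaTeX math snippet into a MathML string."""
    return convert(latex_source)


if __name__ == "__main__":
    print(latex_to_mathml(r"\frac{a}{b} + \sqrt{c}"))
```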

 

Lorenzo · 7mo · 41

I've placed some limit orders on both. It's cheaper than subsidies and should work the same way (if there is no nuclear war).

Does LLaMA have any weird/unspeakable tokens? I've played around with it a bit and I haven't found any (I played with it for a very short time though).

Here are the LLaMA tokens, if anyone's curious. Sadly I couldn't find anything as interesting as " SolidGoldMagikarp"
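
For anyone curious how to reproduce this, a minimal sketch: it assumes the Hugging Face transformers package and access to some LLaMA-style checkpoint, and the model id below is only a placeholder, not a real path:

```python
# Minimal sketch: dump a LLaMA tokenizer's vocabulary to a file so it can be
# scanned for oddball tokens. Assumes Hugging Face `transformers` and access
# to a LLaMA-style checkpoint; the model id below is a placeholder.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path-or-id-of-a-llama-checkpoint")

# Map of token string -> id, sorted by id for easier scanning.
vocab = sorted(tokenizer.get_vocab().items(), key=lambda kv: kv[1])

with open("llama_tokens.txt", "w", encoding="utf-8") as f:
    for token, token_id in vocab:
        f.write(f"{token_id}\t{token!r}\n")

print(f"Wrote {len(vocab)} tokens to llama_tokens.txt")
```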

A lot of this depends on how I approach it: is this something I should work on after the kids go to bed, when I typically write blog posts? Or should I consider trying to go part-time at work, take leave, or quit? I haven't yet talked to people at work about this, but I would lean towards taking leave or going part time: if this is worth doing it's probably worth focusing on. That I think what I'm currently doing is valuable, though, means that there's a higher bar than just "does this seem like a good book to exist."

 

I wonder if it would make sense to write it as a series of blog posts (like the sequences, HPMOR, and "Wait But Why Year One: We finally figured out how to put a blog onto an e-reader"), at least for the first few chapters.

It seems like it could provide the usual advantages of agile development, and get you some quick (self-)feedback in a lower-stakes environment (even though, as you mention, this has many downsides).

As someone who's trying a bit of community building, I would love more books, and the topics you are suggesting are super interesting in terms of helping many people do more good.

Thank you for sharing this, I was wondering about your perspective on these topics.

I am really curious about the intended counterfactual of this move. My understanding is that the organizations that were using the office raised funds for a new office in a few weeks (from the same funding pool that funds Lightcone), so their work will continue in a similar way.

Is the main goal to have Lightcone focus more on the Rose Garden Inn? What are your plans there, do you have projects in mind for "slowing down AI progress, pivotal acts, intelligence enhancement, etc."? Anything people can help with?
