Algon

Comments

Algon · 20

Hello! How long have you been lurking, and what made you stop?

Algon · 42

Donated $10. If I start earning substantially more, I think I'd be willing to donate $100. As it stands, I don't have that slack.

Algon · 178

Reminds me of "Self-Integrity and the Drowning Child", which talks about another way that people in EA/rat communities are liable to hammer down parts of themselves.

Algon · 30
  1. RE: "something ChatGPT might right", sorry for the error. I wrote the comment quickly, as otherwise I wouldn't have written it at all.
  2. Using ChatGPT to improve your writing is fine. I just want you to be aware that there's an aversion to its style here.
  3. Kennaway was quoting what I said, probably so he could make his reply more precise.
  4. I didn't downvote your post, for what it's worth.
  5. There's a LW norm, which seems to hold less force in recent years, for people to explain why they downvote something. I thought it would've been dispiriting to get negative feedback with no explanation, so I figured I'd explain in place of the people who downvoted you.
  6. I don't understand why businesses would co-finance UBI, rather than it being funded by a government tax. Nor do I get why that would be desirable, or even feasible, given the coordination issues.
  7. If companies get to make UBI conditional on people learning certain things, then it's not a UBI. Instead, it's a peculiar sort of training program.
  8. What does economic recovery have to do with UBI? 
Algon · 510

My guess as to why this got downvoted:
1) This reads like a manifesto, and not an argument. It reads like an aspirational poster, and not a plan. It feels like marketing, and not communication. 
2) The style vaguely feels like something ChatGPT might right. Brightly polished, safe and stale.
3) This post doesn't have any clear connection to making people less-wrong or reducing x-risks. 

3) wouldn't have been much of an issue if not for 1) and 2). 1) is an issue because, for the most part, LW has an aversion to "PR". 2) is an issue because, now that ChatGPT exists, writing styles that resemble ChatGPT's are taken as a sign that a text was likely written by ChatGPT. That matters because texts written by ChatGPT often have little thought put into them, are unlikely to contain much that's novel, and frequently contain errors.


What kind of post could you have written which would have been better received? I'll give some examples. 

1) A concrete proposal for UBI that you thought was undervalued.
2) An argument addressing some problems people have with UBI (e.g. who pays for all of it? After UBI is implemented and society reaches an equilibrium, won't rent-seeking systems just suck up all the UBI money, leaving people no better off than before?).
3) A post that was explicit about wanting to get people interested in UBI, and which asked for feedback on potential draft messages.

In general, if you had informed people of something you genuinely believe, or told them about something you have tried and found useful, or asked sincere questions, then I think you'd have got a better reception. 

Algon · 60

That makes sense. If you had to re-do the whole process from scratch, what would you do differently this time?

Algon · 30

"Then I cold emailed supervisors for around two years until a research group at a university was willing to spare me some time to teach me about a field and have me help out."

Did you email supervisors in the areas you were publishing in? How often did you email them? Why'd it take so long for them to accept free high-skilled labour?

Algon · 20

The track you're on is pretty illegible to me. I'm not saying your assertion is true or false. But I am saying that I don't understand what you're talking about, and that I don't think you've provided much evidence to change my views. I'm also a bit confused as to the purpose of your post.

Algon · 20

"conditional on me being on the right track, any research that I tell basically anyone about will immediately be used to get ready to do the thing"

Why? I don't understand.

Algon · 40

If I squint, I can see where they're coming from. People often say that wars are foolish, and that both sides would be better off if they didn't fight. And this is standardly called "naive" by those engaging in realpolitik. Sadly, for any particular war, there's a significant chance the realists are right. Even aside from human stupidity, game theory is not so kind as to allow for peace unending. But the China-America AI race is not like that. The Chinese don't want to race. They've shown no interest in being part of a race. It's just American hawks on a loud, quixotic quest masking the silence.

If I were to continue the story, it'd show Simplicio asking Galactico not to play Chicken, and Galactico replying "Race? What race?". Then Sophistico crashes into Galactico and Simplicio. Everyone dies. The End.
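Since the story leans on the game theory of Chicken, here's a minimal sketch of that game in Python, with payoff numbers I've picked purely for illustration (they aren't from the story); it finds the pure-strategy Nash equilibria by brute force.

```python
from itertools import product

STRATEGIES = ["Swerve", "Straight"]

# payoffs[(row, col)] = (row player's payoff, column player's payoff).
# Illustrative numbers: mutual Straight (the head-on crash) is by far
# the worst outcome for both players.
payoffs = {
    ("Swerve", "Swerve"): (0, 0),
    ("Swerve", "Straight"): (-1, 1),
    ("Straight", "Swerve"): (1, -1),
    ("Straight", "Straight"): (-10, -10),
}

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating."""
    row_pay, col_pay = payoffs[(row, col)]
    row_best = all(payoffs[(r, col)][0] <= row_pay for r in STRATEGIES)
    col_best = all(payoffs[(row, c)][1] <= col_pay for c in STRATEGIES)
    return row_best and col_best

for row, col in product(STRATEGIES, STRATEGIES):
    if is_nash(row, col):
        print(f"Nash equilibrium: ({row}, {col}) -> payoffs {payoffs[(row, col)]}")
```

This prints only (Swerve, Straight) and (Straight, Swerve): in every pure equilibrium of Chicken, exactly one side backs down, which is the hawks' argument that unilateral restraint is naive. But that argument only applies if the other driver is actually on the road, and the story's point is that Galactico isn't playing.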
