I was going for a walk last night, and when I looked up at the sky, I saw something I had never seen before: a bright orange dot, like a star, but brighter and more orange than any star I had ever seen. "No... that can't be" - but it was: I was looking at Mars, that other world I had heard so much about, thought so much about.
I never realized until last night that I had never before seen Mars with my own two eyes - one of the closest worlds that humans could, with minimal difficulty, one day make into a new home.
It struck me then, in a way I had never felt before, just how far away Mars is. I knew it in an abstract sense, but seeing this little dot in the distance - a dot I knew to be an object larger even than the Moon, yet seeming so small in comparison - made me realize, in my gut, just how far away this other world is. It's like when I stand on top of a mountain and see small buildings on the ground far below me, and realize that those small buildings are actually skyscrapers far away.
And yet, as far away as Mars was that night, it was so bright, so apparent, precisely because it was closer to us than it normally is - most of the time, this world is even farther from us than it was then.
In short, I am selling my attention by selling the right to put cards in my Anki deck, starting at the low price of $1 per card.
I will create and add a card (any card you desire, with the caveat that I can veto any card that seems problematic, and capped at roughly the same amount of information per card as my usual cards contain) to my Anki deck for $1. After the first ten cards (across all people), the price will rise to $2 per card, and will double every 5 cards from then on. I commit to studying the added card(s) like I would any other card in my decks (I will give each a starting interval of 10 days, sooner than the 20-day starting interval I usually use, unless I judge that a shorter interval makes sense; I study Anki every day, and have been clearing my deck at least once every 10 days for the past 5 months, and intend to continue doing so). Since I will be creating the cards myself (unless you know of a high-quality deck that contains cards with the information you desire), an idea for a card is enough, even if you don't know how to execute it.
Both question-and-answer and straight text are acceptable forms for cards. Acceptable forms of payment include cash, Venmo, BTC, E...
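For concreteness, the pricing schedule can be sketched in a few lines of code. This is just one reading of "doubles every 5 cards from then on" (the function name and 1-indexing are my own):

```python
def card_price(n: int) -> int:
    """Price in dollars of the n-th card purchased (1-indexed),
    under the schedule described above: cards 1-10 cost $1 each,
    then the price starts at $2 and doubles every 5 cards."""
    if n <= 10:
        return 1
    # After the first ten cards, each completed block of 5 doubles the price.
    blocks_past_ten = (n - 11) // 5
    return 2 * 2 ** blocks_past_ten

# e.g. card_price(10) -> 1, card_price(11) -> 2, card_price(16) -> 4
```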
Religion isn't about believing false things. Religion is about building bonds between humans, by means including (but not limited to) costly signalling. It happens that a ubiquitous form of costly signalling used by many prominent modern religions is belief taxes (insisting that the ingroup profess a particular, easily disproven belief as a reliable signal of loyalty), but this is not necessary for a religion to successfully build trust and loyalty between members. In particular, costly signalling must be negative-value for an individual (before the second-order benefits from the group dynamic), but need not be negative-value for the group, or for humanity. Indeed, the best costly sacrifices can be positive-value for the group or humanity, while negative-value for the performing individual. (There are some who may argue that positive-value sacrifices have less signalling value than negative-value sacrifices, but I find their logic dubious, and my own observations of religion seem to suggest positive-value sacrifice is abundant in organized religion, albeit intermixed with neutral- and negative-value sacrifice.)
The rationalist community is averse to religion because it so often goe...
If you know someone is rational, honest, and well-read, then you can learn a good bit from the simple fact that they disagree with you.
If you aren't sure someone is rational and honest, their disagreement tells you little.
If you know someone considers you to be rational and honest, the fact that they still disagree with you after hearing what you have to say, tells you something.
But if you don't know that they consider you to be rational and honest, their disagreement tells you nothing.
It's valuable to strive for common knowledge of your and your partners' rationality and honesty, to make the most of your disagreements.
Does Newspeak actually decrease intellectual capacity? (No)
In his novel 1984, George Orwell describes a totalitarian society that, among other initiatives to suppress the population, implements "Newspeak", a heavily simplified version of the English language, designed with the stated intent of limiting citizens' capacity to think for themselves (thereby ensuring stability for the reigning regime).
In short, the ethos of Newspeak can be summarized as: "Minimize vocabulary to minimize range of thought and expression". There are two different, closely related ideas, both of which the book implies, that are worth separating here.
The first (which I think is to some extent reasonable) is that by removing from the language certain words which serve as effective handles for pro-democracy, pro-free-speech, and pro-market concepts, the regime makes it harder to communicate and verbally think about such ideas. (I think in the absence of the other techniques Orwell's Oceania uses to suppress independent thought, such subjects could still be meaningfully communicated and pondered, just less easily than a rich vocabulary would allow.)
The second idea, which I worry is an incorrect takeaway people m...
"From AI to Zombies" is a terrible title... when I recommend The Sequences to people, I always feel uncomfortable telling them the name, since the name makes it sound like kooky bull**** - in a way that doesn't really indicate what it's about.
Asking people to "taboo [X word]" is bad form, unless you already know that the other person is sufficiently (i.e. very) steeped in LW culture to know what our specific corner of internet culture means by "taboo".
Without context, such a request to taboo a word sounds like you are asking the other person to never use that word, to cleanse it from their vocabulary, to go through the rest of their life with that word permanently off-limits. That's a very high, and quite rude, ask to make of someone. While that's of course not what we mean by "taboo", I have seen requests to taboo made where it's not clear that the other person knows what we mean by taboo, which means it's quite likely the receiving party interpreted the request as being much ruder than was meant.
Instead of saying "Taboo [X word]", say "could you please say what you just said without using [X word]?" - it conveys the same request, without creating the potential to be misunderstood as making a rude and overreaching request.
I'm quite scared by some of the responses I'm seeing to this year's Petrov Day. Yes, it is symbolic. Yes, it is a fun thing we do. But it's not "purely symbolic", and it's not "just a game". Taking seriously things that are meant to be serious is important, even if you can't see why they're serious.
As I've said elsewhere, the truly valuable thing a rogue agent destroys by failing to live up to expectations on Petrov Day isn't just whatever has been put at stake for the day's celebrations, but the very valuable chance to build a type of trust that can only be built by playing games with non-trivial outcomes at stake.
Maybe a better job could be done in the future of communicating the essence of what this celebration is intended to achieve, but to my eyes it was fairly obvious what was going on, and I'm seeing a lot of comments by people (whose other contributions to LW I respect) who seem to have completely missed what I thought was obviously the spirit of this exercise.
I'm quite baffled by the lack of response to my recent question asking which AI-researching companies are good to invest in (as in, would have good impact, not necessarily be most profitable). It indicates either A) that most LW'ers aren't investing in stocks (which is a stupid thing not to be doing), or B) that they are investing in stocks, but aren't trying to think carefully about what impact their actions have on the world, and on their own future happiness (which indicates a massive failure of rationality).
Even putting this aside, the fact that nobody jumped at the chance to potentially shift a non-trivial (for certain definitions of trivial) amount of funding away from bad organizations and towards good ones (investing which I'm doing primarily as a personal financial strategy) seems very worrying to me. While it is (as ChristianKI pointed out) debatable whether the amount of funding I can provide as a single person will make a big difference to a big company, it's bad decision theory to model my actions as being correlated only with myself; and besides, if the funding were redirected, it would probably have gone somewhere without the enormous supply of funds Alphabet has, and could very well have made an important difference, pushing the margins away from failure and towards success.
There's a good chance I may change my mind about this in the future, but currently my response to this information is a substantial shift away from believing that the LW crowd is any good at using rationality instrumentally.
(For what it's worth, the post made it not at all clear to me that we were talking about a nontrivial amount of funding. I read it as just you thinking a bit through your personal finance allocation. The topic of divesting and impact investing has been analyzed a bunch on LessWrong and the EA Forum, and my current position is mostly that these kinds of differences in investment don't really make much of a difference in total funding allocation, so it doesn't seem worth optimizing much, besides just optimizing for returns and then taking those returns and optimizing those fully for philanthropic impact.)
During today's LW event, I chatted with Ruby and Raemon (separately) about the comparison between human-made photovoltaic systems (i.e. solar panels) and plant-produced chlorophyll. I mentioned that in many ways chlorophyll is inferior to solar panels: consumer-grade solar panels operate in the 10% to 20% efficiency range (i.e. for every 100 joules of light energy, 10 - 20 joules are converted into usable energy), while chlorophyll is around 9% efficient, and modern cutting-edge solar panels can reach nearly 50% efficiency. Furthermore,...
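A back-of-the-envelope restatement of those efficiency figures, in terms of usable joules extracted per 100 J of incident light (the labels are mine; the percentages come from the paragraph above):

```python
# Usable energy extracted from 100 J of incident light,
# at the efficiency figures quoted above.
INCIDENT_J = 100.0

efficiencies = {
    "chlorophyll": 0.09,
    "consumer solar panel (low end)": 0.10,
    "consumer solar panel (high end)": 0.20,
    "cutting-edge solar panel": 0.50,  # "nearly 50%"
}

usable = {name: INCIDENT_J * eff for name, eff in efficiencies.items()}
# chlorophyll yields ~9 J per 100 J, while a cutting-edge panel yields ~50 J
```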
Update on my tinkering with using high doses of chocolate as a psychoactive drug:
(Nb: at times I say "caffeine" in this post in contrast to chocolate, even though chocolate contains caffeine; by this I mean coffee, energy drinks, caffeinated soda, and caffeine pills collectively, all of which I used frequently until recently; lately I haven't been using any sources of caffeine other than chocolate, and even then I try to avoid using it on a daily basis.)
I still find that consuming high doses of chocolate (usually 3-6 tablespoons of dark cocoa ...
I may have discovered an interesting tool against lethargy and depression [1]: This morning, in place of my usual caffeine pill, I made myself a cup of hot chocolate (using pure cacao powder / baking chocolate from the supermarket), which made me much more energetic than usual, in sharp contrast to the past 4 days, which have been marked by lethargy and intense sadness. Let me explain:
Last night, I was reflecting on the fact that one of the main components of chocolate is theobromine, which is very similar in structure to caffe...
In Zvi's most recent Covid-19 post, he puts the probability of a variant escaping mRNA vaccines and causing trouble in the US at no more than 10%. I'm not sure I'm so optimistic.
One thing that gives reason for optimism is that we have yet to see any variant with substantial resistance to the vaccines, which might lead one to think that resistance just isn't likely to come up. On the other hand, the virus has had more than a year for more virulent strains to crop up while people were actively sheltering in place, and variants ...
One thing frustrating me right now is that I don't have a good way of outputting ideas while walking. One thing I've tried is talking into voice memos, but it feels awkward to talk out loud to myself in public, and it's a hassle to transcribe what I said when I'm done. One idea I don't think I've ever seen is a hand-held keyboard that I can use as I'm walking, operated mostly by touch without looking at it; maybe it could provide audio feedback through my headphones.
Epistemic: Intended as a (half-baked) serious proposal
I’ve been thinking about ways to signal truth value in speech- in our modern society, we have no way to readily tell when a person is being 100% honest- we have to trust that a communicator is being honest, or otherwise verify for ourselves if what they are saying is true, and if I want to tell a joke, speak ironically, or communicate things which aren’t-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-l...
Current discourse around AI safety (at least among people who haven't missed what's happening) has a pretty dark, pessimistic tone - for good reason, because we're getting closer to technology that could accidentally do massive harm to humanity.
But when people or groups feel pessimistic, it's hard to get good work done - even when that pessimism is grounded in real-world facts.
I think we need to develop an optimistic but realistic point of view: acknowledging the difficulty of where we are, but nonetheless being hopeful and full of energy towards finding the solution. AI alignment can be solved; we just actually have to put in the effort to solve it, and maybe a lot faster than we are currently prepared to.
Somehow I stumbled across this quote from Deuteronomy (from the Torah / Old Testament, which forms the law of religious Jews):
You shall not have in your bag two kinds of weights, large and small. You shall not have in your house two kinds of measures, large and small. You shall have only a full and honest weight; you shall have only a full and honest measure, so that your days may be long in the land that the Lord your God is giving you. For all who do such things, all who act dishonestly, are abhorrent to the Lord your God.
There's of course the bit about...
This Generative Ink post talks about curating GPT-3, producing much better output than it normally gives, turning it from quite often terrible to usually profound and good. I'm testing out doing the same with this post, choosing one of many branches every few dozen words.
For a 4x reduction in speed, I'm getting very nice returns on coherence and brevity. I can actually pretend I'm not a terrible writer! Selection is a powerful force, but more importantly, continuing a thought in multiple ways forces you to actually make sure you're saying thin...
URLs (Uniform Resource Locators) are universal over space, but they are not universal over time, and this is a problem.