Recent Discussion

Alternate title: learning from fictional evidence. I've seen echoes of this idea elsewhere but couldn't find a description that suits me.

My main idea is: you can update from your observed reaction to fiction and/or counterfactuals.

The fallacy of generalizing from fictional evidence happens when you treat events that happened in fiction, which follow the rules of good writing rather than of verisimilitude, as observations of reality. The facts may be wrong, but if you suspend your disbelief for a while and get immersed in the story, your emotional reaction will be real.

Compare this to counterfactua... (Read more)

3 · Vladimir_Nesov · 11h: I don't see why fictional evidence shouldn't be treated exactly the same as real evidence, as long as you don't mix up the referents. There is no fundamental use in singling out reality (there might be some practical use in specializing human minds to reality rather than fiction). Generalization from real evidence to fiction is as much a fallacy as generalization from fictional evidence to reality.

A fiction text is a model of fictional territory that can be used to get some idea of what that territory is like (found with the prior of fictional worlds), and to formulate better models, or models of similar worlds (fanfiction). Statements made in a fiction text can be false about the fictional territory, in which case they are misleading and interfere with learning about the fictional territory. Other statements are good evidence about it. One should be confused by false statements about a fictional territory, but shouldn't be confused by true statements about it. And so on and so forth.

I think not mixing up the referents is the hard part. One can properly learn from fictional territory only when one can clearly see in which ways it's a good representation of reality and in which ways it's not.

I may learn from an action movie the value of grit and what it feels like to have principles, but I wouldn't trust action movies on gun safety or CPR.

It's uncommon for fiction to be self-consistent and still preserve drama. Acceptable breaks from reality will happen, and sure, sometimes you may have a hard SF universe where the alternate reality is very lawful and th... (read more)

A couple days ago I surveyed readers for deviant beliefs. The results were funny, hateful, boring and bonkers. One of the submissions might even be useful.

If you care a lot about your mind, it is not unreasonable to avoid advertisements like plague rats, up to and including muting your speakers and averting your gaze.

This extremist position caught my eye because humans have a tendency to underestimate the effect advertising[1] has on us. I never realized how much advertising affected me until I got rid of it.

For nearly a year I have been avoiding junk media. I thought this would make me ha... (Read more)

Most people have weeks where they don't read books. The fact that someone needs to make a conscious decision to go a week without reading books is a sign of a person who reads a lot of books.

A Quora answer from 2016 (a decade after The 4-Hour Workweek) suggests he read 1-4 books per week at the time.

1 · TAG · 10h: Logical positivism/verificationism is the obvious example.
2 · lsusr · 11h: Thank you for this! I've started writing my own fiction too but have so far been too cowardly to post it on Less Wrong until now [https://www.lesswrong.com/posts/zb3hWt99i9Fm93KPq/luna-lovegood-and-the-chamber-of-secrets-part-1-1].
3 · lsusr · 11h: Yeah, that's better. I have changed it back from "I have read enough books".

Convolutions smooth out hard, sharp things into nice smooth things. (See the previous post for why convolutions are important.) Here are some convolutions between various distributions:

Separate convolutions of uniform, gamma, beta, and two-peaked distributions. Each column represents one convolution. The top row is $f$, the middle row $g$, and the bottom row $f * g$.

(For all five examples, all the action in the functions $f$ and $g$ is in positive regions - i.e. $f(x) > 0$ only when $x > 0$, and likewise for $g$ - though this isn't necessary.)
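For readers who want to reproduce this kind of picture, here is a minimal sketch of one such convolution computed numerically. This is not the author's code; the grid and the particular uniform/gamma pair are my own choices for illustration:

```python
import numpy as np
from scipy import stats, signal

# Shared grid; both densities live on x > 0, as in the examples above.
x = np.linspace(0, 10, 1000)
dx = x[1] - x[0]

f = stats.uniform(loc=1, scale=2).pdf(x)  # uniform on [1, 3]
g = stats.gamma(a=2).pdf(x)               # gamma with shape 2

# Discrete approximation of (f * g)(x); the dx factor keeps it a density.
fg = signal.fftconvolve(f, g)[:len(x)] * dx

print(fg.sum() * dx)  # ≈ 1, since nearly all the mass lies below x = 10
```

Plotting `f`, `g`, and `fg` as three rows gives one column of the figure above.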

Things to ... (Read more)

2 · Gurkenglas · 4h: That's counterintuitive. Surely for every $f_1, f_2, \dots, f_{n-1}$ there's an $f_n$ that'll get you anywhere? If $\mathcal{F}\{f * g\} = \mathcal{F}\{f\}\mathcal{F}\{g\}$, then $f_n := \mathcal{F}^{-1}\{\mathcal{F}\{\text{target}\} / \mathcal{F}\{f_1 * f_2 * \dots * f_{n-1}\}\}$.
1 · Maxwell Peterson · 4h: Yes, that sounds right - such an $f_n$ exists, and expressing it in Fourier space makes it clear. So the "not much" in "doesn't much matter" is doing a lot of work. I took his meaning as something like "reasonably small changes to the distributions $d_i$ in $D = d_1 * \dots * d_n$ don't change the qualitative properties of $D$". I liked that he pointed it out, because a common version of the CLT stipulates that the random variables must be identically distributed, and I really want readers here to know: No! That isn't necessary! The distributions can be different (as long as they're not too different)! But it sounds like you're taking it more literally. Hm. Maybe I should edit that part a bit.
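For what it's worth, the construction in the parent comment is easy to check numerically. A rough sketch (the distributions and the `target` here are my own choices, not anything from the post):

```python
import numpy as np
from scipy import stats

# Grid; the dx factors make the discrete FFT approximate the continuous transform.
x = np.linspace(0, 30, 4096)
dx = x[1] - x[0]

f1 = stats.gamma(a=2).pdf(x)
f2 = stats.gamma(a=3).pdf(x)
# f1 convolved with f2 is gamma(5), since the scales match.
conv_so_far = np.fft.ifft(np.fft.fft(f1) * np.fft.fft(f2)).real * dx

target = stats.gamma(a=8).pdf(x)  # whatever we want the full convolution to hit

# f_n := F^{-1}{ F{target} / F{f1 * ... * f_{n-1}} }
fn = np.fft.ifft(np.fft.fft(target) / (dx * np.fft.fft(conv_so_far))).real

# Convolving back recovers the target; here f_n comes out ≈ gamma(3),
# because gamma(5) convolved with gamma(3) is gamma(8).
check = np.fft.ifft(np.fft.fft(conv_so_far) * np.fft.fft(fn)).real * dx
print(np.max(np.abs(check - target)))  # ≈ 0
```

With less well-behaved distributions the division can blow up wherever the denominator's Fourier transform is near zero, which is one way of seeing why such an $f_n$ need not look anything like a probability density.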

The Fourier transform, as a map between function spaces, is continuous and maps gaussians to gaussians, so we can translate "convolving nice distribution sequences tends towards gaussians" into "multiplying nice function sequences tends towards gaussians". The pointwise logarithm, as a map between function spaces, is continuous and maps gaussians to parabolas, so we can translate further to "nice function series tend towards parabolas", which sounds more "almost always false" than "usually true".
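Both halves of that chain are easy to see numerically. A quick sketch (mine, just to illustrate the two maps):

```python
import numpy as np

x = np.linspace(-10, 10, 2048)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density

# Map 1: the Fourier transform of a gaussian is a gaussian.
# (Taking the magnitude drops the phase factor from the grid offset.)
G = np.abs(np.fft.fft(gauss)) * dx
freqs = np.fft.fftfreq(len(x), d=dx)

# Map 2: the pointwise log of a gaussian is a downward parabola.
# Fit a quadratic over the region above the numerical noise floor.
mask = G > 1e-12
a, b, c = np.polyfit(freqs[mask], np.log(G[mask]), deg=2)
print(a)  # ≈ -2π², i.e. log|F{gauss}| = -2π²f², a parabola
```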

Introduction: The Dead Sea Salt Experiment

In this 2014 paper by Mike Robinson and Kent Berridge at the University of Michigan (see also this more theoretical follow-up discussion by Berridge and Peter Dayan), rats were raised in an environment where they were well-nourished, and in particular, where they were never salt-deprived—not once in their lives. The rats were sometimes put into a test cage with a lever which, if pressed, would trigger a device to spray ridiculously salty water directly into their mouths. The rats pressed this lever once or twice, were disgusted and repulsed by the extreme sa... (Read more)

Thanks for the clarification! I agree if the planner does not have access to the reward function then it will not be able to solve it. Though, as you say, it could explore more given the uncertainty.

Most model-based RL algorithms I've seen assume they can evaluate the reward functions in arbitrary states. Moreover, it seems to me like this is the key thing that lets rats solve the problem. I don't see how you solve this problem in general in a sample-efficient manner otherwise.

One class of model-based RL approaches is based on [model-predictive control](ht... (read more)
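To make "evaluate the reward function in arbitrary (imagined) states" concrete, here is a minimal random-shooting MPC sketch. The `dynamics` and `reward` callables are placeholders I'm assuming, not anything from the paper or the comment above:

```python
import numpy as np

def plan(state, dynamics, reward, horizon=10, n_candidates=1000, n_actions=4):
    """Random-shooting model-predictive control: sample action sequences,
    roll each through the learned dynamics model, score the imagined
    trajectory with the reward function, and execute the best first action."""
    best_return, best_action = -np.inf, None
    for _ in range(n_candidates):
        actions = np.random.randint(n_actions, size=horizon)
        s, total = state, 0.0
        for a in actions:
            s = dynamics(s, a)   # imagined next state from the model
            total += reward(s)   # reward queried at a possibly never-visited state
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action
```

The crucial line is `reward(s)`: the planner scores states it has never actually experienced, which is the analogue of the rats valuing the salt lever the moment deprivation changes their reward function.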

2 · steve2152 · 9h: Update: Both the podcast and the article were interesting, enjoyable, and helpful :-)

The room was buried six kilometers under Mount Olympus. It hovered in a vacuum, suspended above superconducting electromagnets. The whole containment machine was wrapped in a Matryoshka Faraday cage. Officer Scarlet Wei wore a cleansuit. She entered the room through a steel door two meters thick, an EMP, an X-ray, an airlock and then another EMP.

The white rectangular room contained one door, a chair, a table, a computer terminal, a mechanical clock and two large buttons. The word "PANIC" appeared in large friendly white letters on a red button. The black button had a white skull drawn on it.

If Scarlet press... (Read more)

TLDR
Think "A prediction market, where most questions are evaluated shortly after an AGI is developed." We could probably answer hard questions more easily post-AGI, so delaying them would have significant benefits.

Motivation

Imagine that select pre-AGI legal contracts stay valid post-AGI. Then a lot of things are possible.

There are definitely a few different scenarios out there for economic and political continuity post-AGI, but I believe there is at least a legitimate chance (>20%) that legal contracts will remain valid for what seems like a significant time (>2 human-experiential years).

If

... (Read more)
1 · rossry · 4h: Clever, but it hasn't been tried for a good reason. If, say, the next five years of markets are all untethered from reality (but consistent with each other), there's no way to get paid for bringing them into line with expected reality except by putting on the trades and holding them for five years. (The natural one-year trade will just resolve to the unfair market price of the next-year market, and there's nothing to do about it except wait longer.) The chained markets end up being no more fair than if they all settled to the final expiry directly.

Yes, I can imagine cases where this setup wouldn't be enough.

Though note that you could still buy shares in the final year. Also, if the market corrects by 10 percentage points each year (i.e., the value of a YES share increases from 10% to 20% to 30% to 40%, etc.), holding might still be worth it (note that each year's market would resolve to the value of a share, not to 0 or 100).

Also note that the current way in which prediction markets are structured is, as you point out, dumb: you bet 5 depreciating dollars which then go into escrow, rather than $5 worth of, say, S&P 500 shares, which increase in value. But this could change.
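To spell out the arithmetic in the 10%-per-year scenario above (my numbers, purely illustrative):

```python
# Year-end values of a YES share as the market re-marks it upward each year.
prices = [10, 20, 30, 40, 50]  # in % of the final payout

for buy, sell in zip(prices, prices[1:]):
    print(f"{buy} -> {sell}: {100 * (sell - buy) / buy:.0f}% return that year")
# 10 -> 20: 100%, 20 -> 30: 50%, 30 -> 40: 33%, 40 -> 50: 25%
```

If the market really does correct each year, and each year's market resolves to the next market's price rather than to 0 or 100, a trader can realize each year's move without waiting for the final expiry.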

Yesterday I asked readers the Thiel Question. 40 people responded. I have combined the responses into a single political platform. You can view the raw responses here.


This country is tired of the Democrats, Republicans, Libertarians and human politics in general. I, a human-aligned AGI dictator, promise to solve this problem once and for all. If I cannot remove humans from government entirely then I will attempt to abolish democracy. If I cannot abolish democracy then I will repeal the 19th amendment and leave the rest to OpenAI.

Morality is not real. Words have no meaning. I tell lies of eve... (Read more)

Eh, if you read the raw results most are pretty innocuous.

Epistemic Status: Cautiously optimistic. Much of this work is in crafting and advancing terminology in ways that will hopefully be intuitive and useful. I’m not too attached to the specifics but hope this could be useful for future work in the area.

Introduction

Strong epistemics or “good judgment” clearly seems valuable, so it’s interesting that it gets rather little Effective Altruist attention as a serious contender for funding and talent. I think this might be a mistake.

This isn’t to say that epistemics haven’t been discussed. Leaders and community members on LessWrong and the EA Forum have ... (Read more)

This sounds like an amazing project and I find it very motivating, especially the questions around what we'd like future epistemics to look like and how to prioritize different tools and training.

As I'm sure you are aware, there is a wide academic literature around many related aspects including the formalization of rationality, descriptive analysis of personal and group epistemics, and building training programs. If I understand you correctly, a GPI analog here would be something like an interdisciplinary research center that attempts to find general frameworks with which... (read more)

It is time. The final challenge in the 7-week babble challenge series. 

Let’s become stronger. Let’s go out with a bang. 

On the table in front of you is a candle.

This candle will burn as a metaphor for the light of Science, a little beacon of rationality. It will represent the will to keep practicing and honing our Art. 

Your task is simple. 

Light it. 

You have 1 hour to come up with 100 ways. 

Looking back

Here are the rankings before the final round. (You gain a star for completing a challenge, and lose one for missing a week. I’m not including myself since I’m the g... (Read more)

Okay, that's a fun challenge.

It took me 150 minutes; I guess I'm pretty slow at being creative. It seems I was trying to be too realistic and not listening to your main tip.

It gets more creative (or absurd) as the numbers approach 100.

Note: I assumed that we need to light this exact candle on the table (though the candle itself can be moved).

 

  1. snap your fingers until the candle lights itself, starting the rational fire of the future art of rationality
  2. light the candle using another candle
  3. in fact the candle is already burning. always was.
  4. summon a fire dragon using th
... (read more)

Luna Lovegood walked through the barrier between Platforms Nine and Ten to Platform Nine and Three-Quarters. Luna wondered what happened to Platform Nine and a Half. Numbers like "three quarters" only appear when you divide an integer in half twice in a row.

Luna looked around for someone who might know the answer and spied a unicorn. She wore clothes, walked on two feet and had curly brown hair. None of that fooled Luna. The unicorn radiated peace and her fingernails were made out of alicorn.

"What happened to Platform Nine and a Half?" Luna asked the unicorn.

"There is no Platform Nine and a Ha... (Read more)

3 · Measure · 6h: This is delightful. I noticed that you switched from 9 1/2 to 9 1/4 a couple of times at the beginning. Also "...an Wrackspurt...". Looking forward to seeing the rest.

Thanks! I have fixed "an Wrackspurt" to "a Wrackspurt".