I just got home from a six day meditation retreat and began writing.
The catch is that I arrived at the retreat yesterday.
I knew going in that it was a high variance operation. All who had experience with such things warned us we would hate the first few days, even if things were going well. I was determined to push through that.
Alas or otherwise, I was not sufficiently determined to make that determination stick. I didn’t have a regular practice at all going in, was entirely unfamiliar with the details of how this group operated, and found the Buddhist philosophy involved highly off-putting, i…
CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).
Topics that may be interesting include (but are not limited to):
Epistemic status: Tentative. I’ve been practicing this on-and-off for a year and it’s seemed valuable, but it’s the sort of thing I might look back on and say “hmm, that wasn’t really the right frame to approach it from.”
In doublecrux, the focus is on “what observations would change my mind?”
In some cases this is (relatively) straightforward. If you believe minimum wage helps workers, or harms them, there are some fairly obvious experiments you might run. “Which places have instituted minimum wage laws? What happened to wages? What happened to unemployment? What happened to worker…
In addition to modifying the perceived beauty or distastefulness of a given concept, there are knobs you can turn related to the concepts themselves: nudging, splitting, merging, or even destroying (and assigning all remaining aesthetic value to other, related concepts).
I need help. Pretty much the entire scientific community and everyone I trust as an intellectual role model has said that vaccines are an almost entirely good thing, yet a close member of my family has made a somewhat convincing argument that they are dangerous, and I’m terribly confused. I’ve been trying to figure this issue out for months now, and I just can’t. I’ve seen some (a lot of) dark side epistemology used by the more… out-there antivaxers (e.g. the homeopathy and essential oil people), but although I have a creeping sense some of what my family member is sa…
I'd recommend, for each argument, finding someone who makes that argument online, and posting it to Skeptics Stack Exchange. I used to do that years ago and found people were very helpful in doing research and finding good sources on a wide variety of topics.
It feels like community discussion has largely abandoned the topic of AGI having the self-modifying property, which makes sense because there are a lot of more fundamental things to figure out.
But I think we should revisit the question at least in the context of narrow AI, because the tools are now available to accomplish exactly this on several levels. This thought was driven by reading a blog post, Writing BPF Code in Rust.
BPF stands for Berkeley Packet Filter, which was originally for network traffic analysis but has since been used for tracing the Linux kernel. The pitch is that this can n…
I'm confused. I read you as suggesting that self-modifying code has recently become possible, but I think that self-modifying code has been possible for about as long as we have had digital computers?
What specific things are possible to do now that weren't possible before, and what kind of AGI-relevant questions does that make testable?
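For what it's worth, here is a minimal sketch of the commenter's point that self-modifying code has been possible for about as long as we have had digital computers. This is a hypothetical illustration of my own, not from the post: a Python program that builds new source text for one of its own functions, compiles it at runtime, and rebinds the name.

```python
# Minimal self-modification sketch (illustrative, not from the post):
# the program rewrites one of its own functions while running.

def greet():
    return "hello"

# Build replacement source, compile it, and swap it in.
new_source = 'def greet():\n    return "hello, world"\n'
namespace = {}
exec(new_source, namespace)    # compile and run the replacement definition
greet = namespace["greet"]     # rebind the name to the new code object

print(greet())  # hello, world
```

In a von Neumann architecture, code is just data, so this kind of trick has always been available; the open question is what qualitatively new capabilities (if any) modern tooling adds.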
I find myself somewhat confused as to why I should find Part I of “What failure looks like” (hereafter "WFLL1") likely enough to be worth worrying about. I have 3 basic objections, although I don't claim that any are decisive. First, let me summarize WFLL1 as I understand it:
In general, it's easier to optimize easy-to-measure goals than hard-to-measure ones, but this disparity is much larger with ML models than with humans and human-made institutions. As special-purpose AI becomes more powerful, this will lead to a form of differential progress where easy-to-m... (Read more)
In the past 3-4 years, I went through a prolonged and painful life crisis in which I systematically deconstructed my existing worldview and slowly moved away from Evangelical Christianity into something Rationalist or Rationalist-adjacent. In the past 4 months, I've started hanging around the Berkeley Rationality community and am now dating someone embedded therein. At this point my partner is still my main connection to the specific values and practices of the community, and given that my worldview is currently being fleshed-out, she has an outsized influence on what my future beliefs and val…
Welcome!
Not sure how relevant my advice can be, because I was never in your position. I was never religious. I grew up in a communist country, which is kinda similar to growing up in a cult, but I wasn't a true believer in that either.
My prediction is that in the process of your change, you will fail to update on some points, and overcompensate on others. Which is okay, because growing up happens in multiple iterations. What you do wrong in the first step, you can fix in the second one. As long as you keep some basic humility and admit that you …
(A traditional folk tale of the rashunuhlist people, as told by Jessica Taylor, and literarily and mathematically adapted by the present author.)
In the days of auld lang syne on Earth-that-was, there was a population of agents playing the Nash demand game under a replicator dynamic with uniform random encounters. Whenever two agents met, each of them would name a number between 0 and 10. If the two numbers added up to 10 or less, both agents would receive a payoff of the number they named. But if the two numbers added up to more than 10, both agents would receive nothing. Agents that received
Based on the quote from Jessica Taylor, it seems like the FDT agents are trying to maximize their long-term share of the population, rather than their absolute payoffs in a single generation? If I understand the model correctly, that means the FDT agents should try to maximize the ratio of FDT payoff : 9-bot payoff (to maximize the ratio of FDT:9-bot in the next generation). The algebra then shows that they should refuse to submit to 9-bots once the population of 9-bots gets low enough (Wolfram|Alpha link), without needing to drop the random encounters ass…
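The game in the tale can be sketched as a small replicator-dynamics simulation. The strategy set {1, 5, 9} and the equal starting shares below are my own illustrative assumptions, not the story's actual population; the payoff rule is the one stated above (you get what you demand if the two demands sum to 10 or less, else nothing).

```python
# Replicator dynamics for the Nash demand game, under uniform random
# encounters. Strategy set and initial shares are illustrative assumptions.

strategies = [1, 5, 9]                  # each strategy always demands this number
shares = {s: 1 / 3 for s in strategies}

def payoff(a, b):
    """Payoff to an agent demanding a against an agent demanding b."""
    return a if a + b <= 10 else 0

for generation in range(200):
    # Expected payoff of each strategy against the current population mix.
    fitness = {s: sum(shares[t] * payoff(s, t) for t in strategies)
               for s in strategies}
    mean_fitness = sum(shares[s] * fitness[s] for s in strategies)
    # Replicator update: shares grow in proportion to relative fitness.
    shares = {s: shares[s] * fitness[s] / mean_fitness for s in strategies}

print({s: round(shares[s], 3) for s in strategies})
```

Under these particular assumptions the fair 5-demanders take over: 9-bots only ever score against 1-bots, so as the submissive 1-bots die out, the 9-bots' fitness collapses with them. Different starting mixes or strategy sets can give different outcomes, which is part of what the tale is about.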
A dominant framework in rationality is internal alignment [citation needed]: sort out conflicts between parts of yourself, stop working at cross-purposes to yourself, stop doing internal violence, aim to take coherent action based on coherent beliefs towards coherent goals, etc. I think the alternate/complementary orientation of aiming for internal empowerment is often neglected / underemphasized. By internal empowerment I mean prioritizing giving each "part" (subsystem, motive, drive, goal, desire, subagent, whatever) the resources it needs to increase its capability to understand the world,
I think this is great advice. I find in myself and others a common source of psychological shadow is the blocking out of parts of the self in a failed attempt to achieve an end that is ultimately counterproductive, even if it occasionally works in limited circumstances.
The recent adversarial collaboration on spiritual experiences on Slate Star Codex includes this paragraph:
It was also discovered that people in the United States, Australia, the United Kingdom, and Scandinavia do not tend to share their spiritual experiences with others. Hood et al. wonder if this is why such spiritual experiences are thought to be uncommon (as fewer people in these societies might have heard reports of others’ spiritual experiences).
This naturally led me to wonder: what spiritual experiences have LessWrong readers had that they are willing to share, since the readers…
Note that I would not usually describe this as a spiritual experience.
An antimeme is a meme with the following three characteristics:
I call these "antimemes" because they exhibit behavior opposite that of regular memes. The typical
Words can't be defined arbitrarily, so I am going to examine your definition first.
First, I am not sure what exactly counts as "mainstream", and why it is even important. What you describe seems like a relationship between a meme and a culture, whether large or small. So you could have "anti-memes of antimemes" as Isnasene describes. Or you could have a polarized society with two approximately equally large cultures, each of them having their own "anti-memes". Or a small minority, such as a cult, that strongly ignores the s…
Maybe this is a well-known kind of problem, but I am a novice and it looks puzzling to me.
Here is a lottery: I have these two choices:
My utility function is .
What should I choose?
Let's compute the expected utilities:
The intuitive result you would expect only holds for utility functions which are linear in x (I believe), since we could then apply the utility function at each step and it would yield the same value as if applied to the whole amount.
Another case would be if you were to receive your utility immediately after playing each game (like in a reinforcement learning algorithm). In those cases the utility function is also applied to each outcome separately and would yield the result you would expect.
Also: (b) has a better EV in terms of raw $ and, due to the law of large numbers, we wou…
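To make the nonlinearity point concrete, here is a toy computation. The dollar amounts and the utility function u(x) = √x are illustrative assumptions of mine, since the post's actual numbers are not shown above:

```python
import math

def u(x):
    """An illustrative concave utility function (assumed, not from the post)."""
    return math.sqrt(x)

# A sure $4 versus a 50/50 lottery over $0 and $8: equal expected value
# in dollars, but not in utility, because u is nonlinear.
sure_thing = u(4)                     # utility of $4 for certain
lottery_eu = 0.5 * u(0) + 0.5 * u(8)  # expected utility of the gamble

print(sure_thing)   # 2.0
print(lottery_eu)   # ~1.414
```

Only for a linear u do the two calculations coincide, which is the point above. And per the last comment, with many repeated independent plays the law of large numbers pushes the average dollar outcome toward the EV, so repeated small gambles behave more like the linear case.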
Trying to find Katja Grace's account which she mentions she has here for a PM conversation. If someone (or herself) would PM me that would be awesome.
In light of reading Hazard's Shortform Feed -- which I really enjoy -- based on Raemon's Shortform feed, I'm making my own. There be thoughts here. Hopefully, this will also get me posting more.
Hmm. It may actually be possible to regenerate the motor neurons (or repurpose the already existing ones somehow). I'm not sure of the exact differences between them.
Somehow the action I would expect to help is for the person's limbs to be moved by others/machines as if they are acting themselves, because I think the body can adapt somehow?
Difficult to be specific without reading a lot of biology here though.
My post and Twitter thread about the controversy over the 1954 polio vaccine trials generated many replies on Twitter, so here is a followup.
First, I’m very sympathetic to the dilemma that Salk faced. I think it’s a tough problem, and it’s worth thinking about different ways to approach it. I didn’t mean to cast aspersions on Salk.
One way in general to improve this situation is to make sure that all the controls get the treatment immediately after the trial, if it is proved safe and effective. But in this case, that wouldn’t have changed anything. Polio was a... (Read more)
Gamma Andromeda, where philosophical stoicism went too far. Its inhabitants, tired of the roller coaster ride of daily existence, decided to learn equanimity in the face of gain or misfortune, neither dreading disaster nor taking joy in success.
But that turned out to be really hard, so instead they just hacked it. Whenever something good happens, the Gammandromedans give themselves an electric shock proportional in strength to its goodness. Whenever something bad happens, the Gammandromedans take an opiate-like drug that directly stimulates the pleasure centers of their brain, in a dose propor…
"So another research program was started, and the results were fully immersive, fully life-supporting virtual reality capsules. Stacked in huge warehouses by the millions, the elderly sit in their virtual worlds, vague sunny fields and old gabled houses where it is always the Good Old Days and their grandchildren are always visiting."
Is this a reference to the Futurama episode with the Death Star-type thing with all the old people in it?
Reply to: Meta-Honesty: Firming Up Honesty Around Its Edge-Cases
Eliezer Yudkowsky, listing advantages of a "wizard's oath" ethical code of "Don't say things that are literally false", writes—
Repeatedly asking yourself of every sentence you say aloud to another person, "Is this statement actually and literally true?", helps you build a skill for navigating out of your internal smog of not-quite-truths.
I mean, that's one hypothesis about the psychological effects of adopting the wizard's code.
A potential problem with this is that human natural language contains a lot of ambiguity. Words can…
It seems to me like 'intent to inform' is worth thinking about in the context of its siblings, 'intent to misinform' and 'intent to conceal.' Cousins, like 'intent to aggrandize' or 'intent to seduce' and so on, I'll leave to another time, though you're right to point out they're almost always present, if just by being replaced by their reaction (like self-deprecation, to be sure of avoiding self-aggrandizement).
Quakers were long renowned for following four virtues: peace, equality, simplicity, and truth. Unlike wizards, they have the benefit of being real, and so …
Cross-posted to the EA forum here.
As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments.
My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations' average cost-effectivenes…
See also My current thoughts on MIRI's "highly reliable agent design" work by Daniel Dewey (Open Phil lead on technical AI grant-making).
From the "What do I think of HRAD?" section:
... This reduces my credence in HRAD being very helpful to around 10%. I think this is the decision-relevant credence.
In my short-form, I write:
[...] This is way more obvious and way more clear in Inadequate Equilibria. Take a problem, a question and deconstruct it completely. It was concise and to the point, I think it's one of the best things Eliezer has written; I cannot recommend it enough.
Just finished Inadequate Equilibria. Now, I'm reading:
What is your verdict?
I'm currently reading through his blog Metamoderna and feel like there are some similarities to rationalist thoughts on there (e.g. this post on what he calls "game change" and this post on what he calls proto-synthesis).
What it says on the tin.
I made a Foretold notebook for predicting which posts will end up in the Best of 2018 book, following the LessWrong review.
You can submit your own predictions as well.
At some point I might write a longer post explaining why I think having something like "futures markets" on these things can create a more "efficient market" for content.
I don’t think this works.
A carpenter might say that his knowledge is trade knowledge and not scientific knowledge, and when challenged to provide some evidence that this supposed “trade knowledge” is real, and is worth something, may point to the chairs, tables, cabinets, etc., which he has made. The quality of these items may be easily examined, by someone with no knowledge of carpentry at all. “I am a trained and skilled carpenter, who can make various useful things for you out of wood” is a claim which is very, very easy to verify.
But as I understand it …