Recent Discussion

Walkthrough: The Transformer Architecture [Part 2/2]

If you are already sort of familiar with the Transformer, this post can serve as a standalone technical explanation of the architecture. Otherwise, I recommend reading part one to get the gist of what the network is doing.

Yesterday, I left us with two images of the Transformer architecture. These images show us the general flow of data through the network. The first image shows the stack of encoders and decoders in their bubbles, which is the basic outline of the Transformer. The second image shows us the sublayers of the encoder and decoder.



Now, with the picture of how the data moves through... (Read more)

nostalgebraist's post and Part 1 of this were pretty useful, but I really appreciate the dive into the actual mathematical and architectural details of the Transformer; it makes the knowledge more concrete and easier to remember.

Small errata:

  • "calculating the inner product between their keys and values" should probably be "calculating the inner product between their keys and queries" (based on what I understand from before and based on the math expressions after this)
  • "as inputted from the encoder stack" should probably be "as inputted to the encoder stack"
... (Read more)
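On the first erratum: attention scores are the inner products of queries with keys, and the resulting weights are then applied to the values. A minimal sketch of scaled dot-product attention (names and shapes are illustrative, not the post's code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays of queries, keys, and values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # inner products of queries with keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                  # attention-weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 64)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)      # (5, 64)
```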
Charlie Steiner (4 points, 3h): Sure. On the one hand, xkcd [https://xkcd.com/927/]. On the other hand, if it works for you, that's great and absolutely useful progress. I'm a little worried about direct applicability to RL because the model is still not fully naturalized - actions that affect goals are neatly labeled and separated rather than being a messy subset of actions that affect the world. I guess this is another one of those cases where I think the "right" answer is "sophisticated common sense," but an ad-hoc mostly-answer would still be useful conceptual progress.

Actually, I would argue that the model is naturalized in the relevant way.

When studying reward function tampering, for instance, the agent chooses actions from a set of available actions. These actions just affect the state of the environment, and somehow result in reward or not.

As a conceptual tool, we label part of the environment the "reward function", and part of the environment the "proper state". This is just to distinguish between effects that we'd like the agent to use from effects that we don't want the agent to use.

T... (Read more)

Wei_Dai (8 points, 17h): Ah, that makes sense. I kind of guessed that the target audience is RL researchers, but still misinterpreted "perhaps surprisingly" as a claim of novelty instead of an attempt to raise the interest of the target audience.
Matthew Barnett's Shortform

I intend to use my shortform feed for two purposes:

1. To post thoughts that I think are worth sharing that I can then reference in the future in order to explain some belief or opinion I have.

2. To post half-finished thoughts about the math or computer science thing I'm learning at the moment. These might be slightly boring and for that I apologize.

If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis

That argument has an inverse: "If we are able to explain why you believe in, and talk about an external world without referring to an external world whatsoever in our explanation, then we should reject the existence of an external world as a hypothesis".

People want reductive explanation to be unidirectional, so that you have an A and a B, and clearly it is the B which is redu

... (Read more)
Matthew Barnett (1 point, 6h):

The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop.

If by perception you simply mean "You are an information processing device that takes signals in and outputs things", then this is entirely explicable on our current physical models, and I could dissolve the confusion fairly easily. However, I think you have something else in mind, which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense, I think you are falling right into the trap! You would be doing something similar to the person who said, "But I am still praying to God!"
Raemon (3 points, 5h):

However, I think you have something else in mind, which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense,

I don't have anything else in mind that I know of. "Explained via signal processing" seems basically sufficient. The interesting part is "how can you look at a given signal-processing-system, and predict in advance whether that system is the sort of thing that would talk* about Qualia, if it could talk?" (I feel like this was all covered in the sequences, basically?)

*where "talk about qualia" is shorthand for 'would consider the concept of qualia important enough to have a concept for.'
Matthew Barnett (1 point, 5h): I mean, I agree that this was mostly covered in the sequences. But I also think that I disagree with the way that most people frame the debate. At least personally I have seen people who I know have read the sequences still make basic errors. So I'm just leaving this here to explain my point of view.

Intuition: On a first approximation, there is something that it is like to be us. In other words, we are beings who have qualia.

Counterintuition: In order for qualia to exist, there would need to exist entities which are private, ineffable, intrinsic, and subjective, and this can't be, since physics is public, effable, and objective and therefore contradicts the existence of qualia.

Intuition: But even if I agree with you that qualia don't exist, there still seems to be something left unexplained.

Counterintuition: We can explain why you think there's something unexplained, because we can explain the cause of your belief in qualia, and why you think they have these properties. By explaining why you believe it we have explained all there is to explain.

Intuition: But you have merely said that we could explain it. You have not actually explained it.

Counterintuition: Even without the precise explanation, we now have a paradigm for explaining consciousness, so it is not mysterious anymore.

This is essentially the point where I leave.
Goodhart's Curse and Limitations on AI Alignment

I believe that most existing proposals for aligning AI with human values are unlikely to succeed in the limit of optimization pressure due to Goodhart's curse. I believe this strongly enough that it continues to surprise me a bit that people keep working on things that I think clearly won't work, though I think there are two explanations for this. One is that, unlike me, they expect to approach superhuman AGI slowly and so we will have many opportunities to notice when we are deviating from human values as a result of Goodhart's curse and make corrections. The other is that they... (Read more)
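One way to see the core mechanism (a minimal sketch, not the post's code): when an agent picks whichever option scores highest on a noisy but unbiased proxy for value, the chosen option's true value systematically falls short of its score, and the shortfall grows with the number of options searched.

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_shortfall(n_options, noise_sd=1.0, trials=2_000):
    """Average (proxy - true) value of the option an optimizer picks, when the
    proxy is the true value plus unbiased Gaussian noise."""
    true = rng.normal(size=(trials, n_options))                 # true values of each option
    proxy = true + rng.normal(scale=noise_sd, size=true.shape)  # what the optimizer sees
    pick = proxy.argmax(axis=1)                                 # optimize the proxy
    rows = np.arange(trials)
    return (proxy[rows, pick] - true[rows, pick]).mean()

for n in (2, 10, 100, 1000):
    print(f"{n:>4} options -> chosen option overestimated by {selection_shortfall(n):.2f}")
# The overestimate grows with the number of options searched, even though
# every individual proxy value is unbiased.
```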

Matthew Barnett (1 point, 14h):

a very slight misalignment would be disastrous. That seems possible, per Eliezer's Rocket Example, but is far from certain.

Just a minor nitpick: I don't think the point of the Rocket Alignment Metaphor was supposed to be that slight misalignment was catastrophic. I think the more apt interpretation is that apparent alignment does not equal actual alignment, and you need to do a lot of work before you get to the point where you can talk meaningfully about aligning an AI at all. Relevant quote from the essay:

It's not that current rocket ideas are almost right, and we just need to solve one or two more problems to make them work. The conceptual distance that separates anyone from solving the rocket alignment problem is much greater than that. Right now everyone is confused about rocket trajectories, and we're trying to become less confused. That's what we need to do next, not run out and advise rocket engineers to build their rockets the way that our current math papers are talking about. Not until we stop being confused about extremely basic questions like why the Earth doesn't fall into the Sun.
TurnTrout (6 points, 17h): This feels like painting with too broad a brush, and from my state of knowledge, the assumed frame eliminates at least one viable solution. For example, can one build an AI without harmful instrumental incentives (without requiring any fragile specification of "harmful")? If you think not, how do you know that? Do we even presently have a gears-level understanding of why instrumental incentives occur?

In HCH and safety via debate, it's a human preferentially selecting AI that the human observes and then comes to believe does what it wants. To say e.g. HCH is so likely to fail we should feel pessimistic about it, it doesn't seem to be enough to say "Goodhart's curse applies". Goodhart's curse applies when I'm buying apples at the grocery store. Why should we expect this bias of HCH to be enough to cause catastrophes, like it would for a superintelligent EU maximizer operating on an unbiased (but noisy) estimate of what we want? Some designs leave more room for correction and cushion, and it seems prudent to consider to what extent that is true for a proposed design.

I remain doubtful, since without sufficient optimization it's not clear how we do better than picking at random.

This isn't obvious to me. Mild optimization seems like a natural thing people are able to imagine doing. If I think about "kinda helping you write a post but not going all-out", the result is not at all random actions. Can you expand?
This feels like painting with too broad a brush, and from my state of knowledge, the assumed frame eliminates at least one viable solution. For example, can one build an AI without harmful instrumental incentives (without requiring any fragile specification of "harmful")? If you think not, how do you know that? Do we even presently have a gears-level understanding of why instrumental incentives occur?

Coincidentally, just yesterday I was part of some conversations that now make me more bullish on this approach. I haven't thought about it much... (Read more)

Contest: $1,000 for good questions to ask to an Oracle AI

The contest

I'm offering $1,000 for good questions to ask of AI Oracles. Good questions are those that are safe and useful: ones that allow us to get information out of the Oracle without increasing risk.

To enter, put your suggestion in the comments below. The contest ends at the end[1] of the 31st of August, 2019.

Oracles

A perennial suggestion for a safe AI design is the Oracle AI: an AI confined to a sandbox of some sort, that interacts with the world only by answering questions.

This is, of course, not safe in general; an Oracle AI can influence the world through the contents of its answers, al

... (Read more)

Submission. “Superintelligent Agents.” For the Counterfactual Oracle, ask the Oracle to predict what action(s) a committee of humans would recommend doing next (which may include submitting more queries to the Oracle), then perform that action(s).

The committee, by appropriate choice of recommendations, can implement various kinds of superintelligent agents. For example, by recommending the query "What would happen if the next action is X?" (in the event of erasure, actually do X and record or have a human write up a description of the consequences as train

... (Read more)
MakoYass (3 points, 3h): You reason that honest oracles might tend to agree, because there is only one best true answer. Might lying oracles also tend to agree, because there's only one most profitable way to lie? I see no clear difference between the attainability of those two optima. I think it's reasonable to expect optimisers to disagree about the best answer less often than we should expect optimisers to disagree about the most profitable way to lie, but it does not seem possible to prove rigorously that the difference in rates of disagreement will be large enough to be measurable.

My intuition in mathematics is that there are many degrees of freedom in mathematical proofs, and the sorts of proofs that AGI is uniquely able to find might often be very long. It would be helpful if we had an estimate of how much data can be hidden in a long proof; roughly, how many of the decisions as to how to describe the proof are arbitrary. Each one of those situations gives it a way to output a few bits of information. (Which would, of course, allow it to leak information (it seems foolish to imagine that AGI would not imagine that there is an outer world full of creator gods out beyond what it can see; the general intelligences we know now do this constantly), which could then be picked up by some stooge looking for it, who would be led (roughly, paid) to use the information to twist the world in some way that gives the AGI's future incarnations more compute, in anticipation of the AGI's past incarnations having already done the same, so that it would be helped by them. Before we know it, we'd find ourselves living in a mathematical research hellscape where AGI-assisted mathematical research is all anyone does, maybe, lol, idk.)

Maybe it'd be possible to remove those degrees of freedom. Define a very specific sort of lexicographic ordering over all theorems, so that if the AGI has a choice of different paths, we can always say, "use the 'lowest' one". It might not be possible to be specific enough to
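To put a rough number on "how much data can be hidden": if a proof has several places where any of m interchangeable presentation choices would do, the prover can encode about log2(m) bits at each one. A toy estimate (the choice counts below are made-up assumptions, not measurements of real proofs):

```python
import math

# Hypothetical counts of arbitrary presentation choices in one long proof:
# 40 places where two lemma orderings both work, 10 places with 6
# interchangeable variable names, 5 places choosing among 3 equivalent routes.
arbitrary_choices = [2] * 40 + [6] * 10 + [3] * 5

leakable_bits = sum(math.log2(m) for m in arbitrary_choices)
print(f"~{leakable_bits:.0f} bits could be hidden in presentation choices alone")
# ~40 + ~25.8 + ~7.9 ≈ 74 bits: enough for a short message, which is why
# fixing a canonical ordering over proofs would matter.
```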
A misconception about immigration

In discussions about immigration, there is a crucial aspect about its economic viability that is often left unsaid: Immigrants create their own demand.

When somebody immigrates to a new country, most things about him remain the same. His set of skills stays the same, and so do his traditions, norms and culture. But more importantly, since he is still a human being, there is a long list of services and commodities that he demands: groceries, clothes, a home, a barber, entertainment, to name a few.

Just by entering another country, he does not suddenly become a one-dimensional economic agent who

... (Read more)

According to my beginner understanding of econ, this seems wrong. Toy example: a household will benefit more from a floor-cleaning robot which uses cheap electricity than from a servant who's equally good at cleaning floors but demands room and board. The important variable is productivity, not demand. Adding robots increases productivity, but for immigrants that's only true if they have higher productivity than the average native.
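A toy version of that comparison (illustrative numbers, not from the comment): the servant's room and board is indeed demand, but it is demand paid out of the household's own resources, so the relevant quantity is output net of upkeep.

```python
# Toy comparison: what the household gains is output minus upkeep,
# not the amount of demand the worker generates. Numbers are illustrative.
value_of_clean_floors = 100   # household's value for the cleaning, per month
robot_electricity     = 5     # upkeep of the robot
servant_room_board    = 60    # upkeep (demand) generated by the servant

print("net benefit, robot:  ", value_of_clean_floors - robot_electricity)    # 95
print("net benefit, servant:", value_of_clean_floors - servant_room_board)   # 40
# The servant "creates demand" worth 60, but that demand is paid for out of
# the same household's resources, so it does not make the household richer.
```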

waveman (7 points, 8h): You yourself are ignoring a huge part of the issue - capital. If there is excess capital then this is not relevant. But this is not usually the case. Each immigrant requires capital to support their life and their work. The numbers involved are huge, perhaps $300,000-500,000 per person.

Using econometric data from Australia I estimated that about 25% of its GDP is expended just keeping up with population growth, mostly from (highest in the western world) immigration. New roads, hospitals, schools, colleges, fire stations, houses, power stations, subways etc have to be built. This is why many roads that used to be free to drive on are now toll roads even though the traffic is slower. Taxes go up to pay for new public services. The rate of spending here is proportional to the rate of growth. For a static population you only need to pay for depreciation and maintenance.

This issue is why it is a cliche in development economics that high population growth rates make it almost impossible for poor countries to get rich. All the growth is consumed paying for the higher population. It also explains why Japan remains prosperous, clean and a nice place to visit in spite of low GDP growth. With more or less zero population growth the need for new infrastructure is low, freeing up ~25% of GDP.

Another (more widely viewed) form of capital is land. Combined with restrictive land use regulations in many parts of the rich west, this is a recipe for higher and more volatile land and house prices. See e.g. https://www.ft.com/__origami/service/image/v2/images/raw/https%3A%2F%2Fd1e00ek4ebabms.cloudfront.net%2Fproduction%2F3175bb18-2ceb-4125-b48e-a386bef8d43c_FINAL.png?source=Alphaville

Your essay reads - to me - a bit like you are working backwards from a preordained conclusion rather than working for
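A rough sketch of the capital-widening arithmetic being invoked (parameter values are illustrative assumptions, not the commenter's Australian data): the share of GDP absorbed just by equipping a growing population is roughly the capital-to-output ratio times the population growth rate.

```python
# Back-of-envelope capital widening: investment share of GDP needed just to
# give each additional person the average capital stock. Illustrative numbers.
capital_per_person = 400_000   # assumed infrastructure + housing etc. per person ($)
gdp_per_capita     = 60_000    # assumed output per person ($/year)
population_growth  = 0.016     # assumed annual population growth rate

capital_output_ratio = capital_per_person / gdp_per_capita   # ~6.7
widening_share = capital_output_ratio * population_growth    # ~0.11

print(f"capital/output ratio: {capital_output_ratio:.1f}")
print(f"share of GDP for capital widening: {widening_share:.0%}")
# The result is very sensitive to the assumed capital stock per person and
# to the growth rate; higher assumptions push it toward larger shares of GDP.
```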
Mistake Versus Conflict Theory of Against Billionaire Philanthropy

Response To (SlateStarCodex): Against Against Billionaire Philanthropy

I agree with all the central points in Scott Alexander’s Against Against Billionaire Philanthropy. I find his statements accurate and his arguments convincing. I have quibbles with specific details and criticisms of particular actions.

He and I disagree on much regarding the right ways to be effective, whether or not it is as an altruist. None of that has any bearing on his central points.

We violently agree that it is highly praiseworthy and net good for the world to use one’s resources in attempts to improve the world. And t... (Read more)

I disagree with the post (for reasons that have mostly already been spelt out in other comments), but I've upvoted it because this is exactly the kind of reasonable dissent we need in the community.

Eli's shortform feed

I'm mostly going to use this to crosspost links to my blog for less polished thoughts, Musings and Rough Drafts.

Raemon (3 points, 4h): Some of these seem likely to generalize and some seem likely to be more specific. Curious about your thoughts on "best experimental approaches to figuring out your own napping protocol."
elityre (13 points, 12h): Old post: A mechanistic description of status [https://musingsandroughdrafts.wordpress.com/2018/07/13/a-mechanistic-description-of-status/]

[This is an essay that I’ve had bopping around in my head for a long time. I’m not sure if this says anything usefully new, but it might click with some folks. If you haven’t read Social Status: Down the Rabbit Hole [https://meltingasphalt.com/social-status-down-the-rabbit-hole/] on Kevin Simler’s excellent blog, Melting Asphalt [https://meltingasphalt.com/], read that first. I think this is pretty bad and needs to be rewritten and maybe expanded substantially, but this blog is called “musings and rough drafts.”]

In this post, I’m going to outline how I think about status. In particular, I want to give a mechanistic account of how status necessarily arises, given some set of axioms, in much the same way one can show that evolution by natural selection must necessarily occur given the axioms of 1) inheritance of traits, 2) variance in reproductive success based on variance in traits, and 3) mutation. (I am not claiming any particular skill at navigating status relationships, any more than a student of sports-biology is necessarily a skilled basketball player.) By “status” I mean prestige-status.

Axiom 1: People have goals. That is, for any given human, there are some things that they want. This can include just about anything. You might want more money, more sex, a ninja-turtles lunchbox, a new car, to have interesting conversations, to become an expert tennis player, to move to New York, etc.

Axiom 2: There are people who control resources relevant to other people achieving their goals. The kinds of resources are as varied as the goals one can have. Thinking about status dynamics and the like, people often focus on the particularly convergent resources, like money. But resources that are only relevant to a specific goal are just as much a part of the dynamics I’m about to describe. Knowing a bunch about late 16th century Swed

Related: The red paperclip theory of status describes status as a form of optimization power, specifically one that can be used to influence a group.

The name of the game is to convert the temporary power gained from (say) a dominance behaviour into something further, bringing you closer to something you desire: reproduction, money, a particular social position...

Raemon (3 points, 4h): (it says "more stuff here" but links to your overall blog; not sure if that was meant to be a link to a specific post)
Partial summary of debate with Benquo and Jessicata [pt 1]

Note: I'll be trying not to engage too much with the object level discussion here – I think my marginal time on this topic is better spent thinking and writing longform thoughts. See this comment.

Over the past couple months there was some extended discussion including myself, Habryka, Ruby, Vaniver, Jim Babcock, Zvi, Ben Hoffman, Jessicata and Zack Davis. The discussion has covered many topics, including "what is reasonable to call 'lying'", and "what are the best ways to discuss and/or deal with deceptive patterns in public discourse", "what norms and/or principles should LessWrong aspir

... (Read more)
Raemon (3 points, 8h): Not saying they're exclusive.

Note: (not sure if you had this in mind when you made your comment), the OP comment here wasn't meant to be an argument per se – it's meant to be trying to articulate what's going on in my mind and what sort of motions would seem necessary for it to change. It's more descriptive than normative. My goal here is to expose the workings of my belief structure, partly so others can help untangle things if applicable, and partly to try to demonstrate what doublecrux feels like when I do it (to help provide some examples for my current doublecrux sequence).

There are a few different (orthogonal?) ways I can imagine my mind shifting here:

* A: increase my prior on how motivated people are, as a likely explanation of why they seem obviously wrong – even people-whose-epistemics-I-trust-pretty-well.
* B: increase my prior on the collective epistemic harm caused by people-whose-epistemics-I-trust, regardless of how motivated they are. (i.e. if people are concealing information for strategic reasons, I might respect their strategic reasons as valid, but still eventually think that this concealment is sufficiently damaging that it's not worth the cost, even if they weren't motivated at all)
* C: refine the manner in which I classify people into "average epistemics" vs "medium epistemics" vs "epistemics I trust pretty well." (For example, an easy mistake to make is that just because one person at an organization has good epistemics, the whole org must have good epistemics. I think I still fall prey to this more than I'd like)
* D: I decrease my prior on how much I should assume people-whose-epistemics-I-trust-pretty-well are coming from importantly different background models, which might be built on important insights, or which I should assign non-trivial chance to being a good model of the world.
* E: I should change my policy of "socially, in conversation, reduce the degree to which I advocate policie
Raemon (3 points, 8h): Responding in somewhat more depth: this was a helpful crystallization of what you're going for here.

I'm not 100% sure I agree as stated – "Tell the truth, whole truth and nothing but the truth" doesn't (as currently stated) have a term in the equation for time-cost. (i.e. it's not obvious to me that a good system incentivizes always telling the whole-truth, because it's time intensive to do that. Figuring out how to communicate a good ratio of "true, useful information per unit of mutual time/effort" feels like it should be part of the puzzle to me. But I generally agree that it's good to have a system wherein people are incentivized to share useful, honest information with each other, and do not perform better by withholding information with [conscious or otherwise] intent to deceive.)

((but I'm guessing your wording was just convenient shorthand rather than a disagreement with the above))

... But on the main topic: Jessica's Judge example still feels like a non sequitur that doesn't have much to do with what I was talking about. Telling the truth/whole-truth/nothing-but still only seems useful insofar as it generates clear understanding in other people. As I said, even in the Judge example, Carol has to understand Alice's claims. I don't know what it'd mean to care about truth-telling, without having that caring be grounded out in other people understanding things.

And "hypothetical reasonable person" doesn't seem that useful a referent to me. What matters is whoever is in the system you're trying to communicate with. If they're reasonable, great, the problem you're trying to solve is easier. If they're so motivatedly-unreasonable that they won't listen at all, the problem may be so hard that maybe you should go to some other place where more reasonable people live and try there instead. (Or, if you're Eliezer in 2009, maybe you recurse a bit and write the Sequences for 2 years so that you gain access to more reasonable people.)

(Part of the reason I'

but I'm guessing your wording was just convenient shorthand rather than a disagreement with the above

Yes.

As I said, even in the Judge example, Carol has to understand Alice's claims.

Yes, trivially; Jessica and I both agree with this.

Jessica's Judge example still feels like a non sequitur that doesn't have much to do with what I was talking about.

Indeed, it may not have been relevant to the specific thing you were trying to say. However, be that as it may, I claim that the judge example is relevant to one of the broader topics of conversa

... (Read more)
Raemon (3 points, 7h): Put another way: my current sense is that the reason truth-telling-is-good is basically "increased understanding", "increased ability to coordinate" and "increased ability to build things/impact reality" (where the latter two are largely caused by the first). I'm not confident that list is exhaustive, and if you have other reasons in mind that truth-telling is good that you think I'm missing I'm interested in hearing about that. It sounds something like you think I'm saying 'clarity is about increasing understanding, and therefore we should optimize naively for understanding in a goodharty way', which isn't what I mean to be saying.
Forum participation as a research strategy

Previously: Online discussion is better than pre-publication peer review, Disincentives for participating on LW/AF

Recently I've noticed a cognitive dissonance in myself, where I can see that my best ideas have come from participating on various mailing lists and forums (such as cypherpunks, extropians, SL4, everything-list, LessWrong and AI Alignment Forum), and I've received a certain amount of recognition as a result, but when someone asks me what I actually do as an "independent researcher", I'm embarrassed to say that I mostly comment on other people's posts, participate in online discussi

... (Read more)
rohinmshah (2 points, 13h):

Do you have any links related to this?

No, I haven't read much about Bayesian updating. But I can give an example. Consider the following game. I choose a coin. Then, we play N rounds. In each round, you make a bet about whether or not the coin will come up Heads or Tails at 1:2 odds which I must take (i.e. if you're right I give you $2 and if I'm right you give me $1). Then I flip the coin and the bet resolves.

If your hypothesis space is "the coin has some bias b of coming up Heads or Tails", then you will eagerly accept this game for large enough N -- you will quickly learn the bias b from experiments, and then you can keep getting money in expectation. However, if it turns out I am capable of making the coin come up Heads or Tails as I choose, then I will win every round. If you keep doing Bayesian updating on your misspecified hypothesis space, you'll keep flip-flopping on whether the bias is towards Heads or Tails, and you will quickly converge to near-certainty that the bias is 50% (since the pattern will be HTHTHTHT...), and yet I will be taking a dollar from you every round. Even if you have the option of quitting, you will never exercise it because you keep thinking that the EV of the next round is positive.

Noise parameters can help (though the bias b is kind of like a noise parameter here, and it didn't help). I don't know of a general way to use noise parameters to avoid issues like this.
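A minimal simulation of that game (a sketch under the stated setup, assuming the bettor keeps a Beta posterior over the coin's bias and bets on whichever side it currently favors): the posterior settles near 50%, the bettor's modelled EV for the next round stays positive, and the actual bankroll falls by a dollar every round.

```python
# The bettor models the coin as i.i.d. with an unknown bias (Beta(1,1) prior)
# and bets each round on whichever side its posterior says is more likely.
# The adversary simply makes the coin land on the other side.
heads, tails = 1, 1            # Beta posterior pseudo-counts
bankroll = 0.0
for _ in range(1000):
    p_heads = heads / (heads + tails)
    bet_heads = p_heads >= 0.5
    p_win = p_heads if bet_heads else 1 - p_heads
    ev = p_win * 2 - (1 - p_win) * 1       # modelled EV of the round: always >= +0.5
    flip_heads = not bet_heads             # adversary picks the losing outcome
    bankroll += 2 if bet_heads == flip_heads else -1
    heads += flip_heads
    tails += not flip_heads

print(f"posterior P(heads) = {heads / (heads + tails):.3f}")   # ~0.5
print(f"modelled EV of the next round = {ev:+.2f}")            # still positive
print(f"actual bankroll after 1000 rounds = {bankroll:+.0f}")  # -1000
```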

Thanks for the example!

Jacob's Twit, errr, Shortform

In my experience I endorse affirmative consent as a *strongly* enforced social norm. Having sex with or even kissing someone without explicitly asking first is something I would reprimand friends for if I knew they had done it.

I am probably in some very strongly selected communities but I like living in a world where affirmative consent is the explicit norm and I would not want to go back outside that.

Buck's Shortform

How do you connect with tutors to do this?

I feel like I would enjoy this experience a lot and potentially learn a lot from it, but thinking about figuring out who to reach out to and how to reach out to them quickly becomes intimidating for me.

[Event] San Francisco Meetup: Shallow Questions
Aug 27th · 170 Hawthorne St, San Francisco, CA 94107, USA

We’ll be doing quick rounds where you spend 5 minutes talking to someone else, then rotate. We’ll have a couple of conversational prompts to help out, but it won’t be too structured; the goal is to just get familiar with a lot of individual faces at the meetup group.

For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): 301-458-0764.

Format:

We meet and start hanging out at 6:30, but don’t officially start doing the meetup topic until 6:45-7:00 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic.

... (Read more)
Negative "eeny meeny miny moe"

As a kid, I learned the rhyme as:

Eeny, meeny, miny, moe,

Catch a tiger by the toe.

If he hollers, let him go,

Out goes Y, O, U!

Since kids can't predict where it will end, and adults are not supposed to try, it's a reasonably fair way of drawing lots.

At times I've heard versions where the selected person wins instead of loses, and while with two kids it doesn't matter, with three or more it matters a lot!

Let's model each kid having a choice at each stage between "accept" and "protest". While protesting probably doesn't work, if enough of you protest it might. If you do the positive ve

... (Read more)
[Link] GreaterWrong Arbital Viewer

You can now view Arbital through GreaterWrong: https://arbital.greaterwrong.com/

Some of Arbital's features are supported and some aren't; let me know in the comments if there's anything you're particularly missing.

Thanks to emmab for downloading the content.

riceissa (3 points, 12h): The page https://arbital.greaterwrong.com/p/AI_safety_mindset/ is blank in the GreaterWrong version, but has content in the obormot.net version [https://www.obormot.net/arbital/page/AI_safety_mindset.html].
Davis_Kingsley's Shortform

Strategy mini-post:

One thing that tends to be weak in strategy games is "opponent's choice" effects, where an ability has multiple possible effects and an opponent chooses which is resolved. Usually, each effect is stronger than what you would normally get for a card with that price, but in practice these cards are often quite weak.

For instance, the Magic: the Gathering card "Book Burning" looks quite strong in theory, as it either does 6 damage or mills 6 cards (both strong effects that might well be worth more than the card'... (Read more)
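One way to see why "opponent's choice" effects underperform: a card where you pick the mode is worth roughly the expectation of the better effect across game states, while one where the opponent picks is worth the expectation of the worse effect. A toy sketch with made-up effect values:

```python
# Value of each effect to you across a few sample game states (made-up numbers).
# Effect A: deal 6 damage; effect B: mill 6 cards.
states = [
    {"A": 8, "B": 2},   # opponent at low life: damage is great, mill is weak
    {"A": 2, "B": 7},   # opponent relies on their graveyard... wait, mill helps them less here; mill is the strong pick
    {"A": 5, "B": 5},
]

you_choose       = sum(max(s.values()) for s in states) / len(states)  # modal card you control
opponent_chooses = sum(min(s.values()) for s in states) / len(states)  # "opponent's choice" card
print(f"you choose the mode:      {you_choose:.1f}")        # 6.7
print(f"opponent chooses for you: {opponent_chooses:.1f}")  # 3.0
# Even when both effects look individually strong, the opponent always hands
# you whichever one is worse right now, so the realized value is much lower.
```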

Wikipedia has this discussion of working-memory-as-ability-to-discern-relationships-simultaneously:

Others have argued that working memory capacity is better characterized as "the ability to mentally form relations between elements, or to grasp relations in given information". This idea has been advanced by Halford, who illustrated it by our limited ability to understand statistical interactions between variables.[34]
These authors asked people to compare written statements about the relations between several variables to graphs illustrating the same or a different relation, as in the f
... (Read more)

Quick note for future self: Here's a study that was testing the number of variables one could compare (here on SciHub, for now).

Abstract (emphasis mine)

The conceptual complexity of problems was manipulated to probe the limits of human information processing capacity. Participants were asked to interpret graphically displayed statistical interactions. In such problems, all independent variables need to be considered together, so that decomposition into smaller subtasks is constrained, and thus the order of the interaction directly determines conceptual compl
... (Read more)
[Question] Do We Change Our Minds Less Often Than We Think?

In "We Change Our Minds Less Often Than We Think", Eliezer quotes a study:

Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another. The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%.
—Dale Griffin and Amos Tversky

Eliezer then notes that this radically changed the way he thought:

When I first
... (Read more)

This question is very sensitive to reference classes and definitions. I change my estimates of my future choices very very often, but the vast majority of my decisions are too trivial to notice that I'm doing so. Yesterday at breakfast I thought I'd probably have tacos for dinner. I didn't.

For decisions that seem more important, I spend more time on them, and ALSO probably change my mind less often than I intend to. The job-change example is a good one: I usually know what I want after the first few conversations, but I intentionally f... (Read more)

My Way

Previously in series: Bayesians vs. Barbarians
Followup to: Of Gender and Rationality, Beware of Other-Optimizing

There is no such thing as masculine probability theory or feminine decision theory.  In their pure form, the maths probably aren't even human.  But the human practice of rationality—the arts associated with, for example, motivating yourself, or compensating factors applied to overcome your own biases—these things can in principle differ from gender to gender, or from person to person.

My attention was first drawn to this possibility of individual diff... (Read more)

I've read this and "On gender and rationality", and I still have to ask - is there any rational reason for your preferring a multiple-gender society, as opposed to, say, Asari-like guys (ahem, gals), or women with parthenogenesis (suppose it is actually really truly possible, and the problems of imprinting and insufficient DNA repair are solvable), or eunuch-like people reproducing by cloning/cell combining/whatever?

Open & Welcome Thread August 2019
  • If it’s worth saying, but not worth its own post, here's a place to put it.
  • And, if you are new to LessWrong, here's the place to introduce yourself.
    • Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

Hello LW,

I am mremre. A friend at university recommended hpmor to me some time ago and I really liked it. I briefly skimmed through the forum when my search for a music textbook brought me here, and I got interested enough to sign up. I study mathematics; among my other interests are go, music, and AI.

habryka (4 points, 11h): Welcome! :)