Recent Discussion

Mediums Overpower Messages
36 · 3d · 1 min read

I've observed that consuming certain kinds of media makes me smarter and other kinds make me dumber.

Makes me dumber:

  • Videogames
  • YouTube
  • News

Makes me smarter:

  • Books
  • Audiobooks
  • Direct messaging apps

By "smarter" I mean it holistically causes me to behave in a way that increases my overall rate of learning and quality of life. By "dumber" I mean the opposite.

For a long time I rejected this conclusion. Surely playing Kerbal Space Program must be more educational than reading Yu-gi-oh! manga. Nope. Yu-gi-oh! beats it by a long shot. I ran a long series of subjective personal experimen

... (Read more)

In my experience, the content can make a huge difference. For example, videogames like Factorio or Minecraft almost feel like writing software to me, because they require creativity and problem-solving skills, just like coding.

There are also huge differences in YouTube videos. Some are very informative and thought-provoking. Watching them certainly does not make me "dumber".

I also don't understand why you distinguish between Books (fiction) and Books (nonfiction). Isn't that the same medium but with different content?

2 · lifelonglearner · 9h: One thing here that seems important to note is what each medium does to your attention and what sort of cognitive work it facilitates. To borrow a few items from your list:

  • Videogames: literally a Skinner box that gives you reinforcement to keep doing the thing.
  • Web surfing / news feeds / blogs / movies: make you a passive consumer of the content.
  • Direct messaging: requires you to spend time thinking about your response.
  • Writing software / making videos / drawing comics: puts you in a position to think about the message you want to convey; teaching others requires you to bridge inferential gaps and look at your models.
  • Spaced repetition: literally designed to make you remember stuff.
An1lam's Short Form Feed
14 · 1y · 1 min read

In light of reading Hazard's Shortform Feed -- which I really enjoy -- based on Raemon's Shortform feed, I'm making my own. There be thoughts here. Hopefully, this will also get me posting more.

9 · An1lam · 13h

Anki's Not About Looking Stuff Up

Attention conservation notice: if you've read Michael Nielsen's stuff about Anki, this probably won't be new for you. Also, this is all very personal and YMMV.

In a number of discussions of Anki here and elsewhere, I've seen Anki's value measured in terms of time saved by not having to look stuff up. For example, Gwern's spaced repetition post includes a calculation of the threshold at which it's worth it to Anki-ize something, although I would be surprised if Gwern hasn't already thought about the claim I'm going to make. While I occasionally use Anki to remember things that I would otherwise have to Google, e.g. statistics, I almost never Anki-ize things so that I can avoid Googling them in the future. And I don't think in terms of time saved when deciding what to Anki-ize. Instead (as Michael Nielsen discusses in his posts), I almost always Anki-ize with the goal of building a connected graph of knowledge atoms about an area in which I'm interested. As a result, I tend to evaluate what to Anki-ize based on two criteria:

  1. Will this help me think better about this domain when I'm away from paper or a computer?
  2. In the Platonic graph of this domain's knowledge ontology, how central is this node? (Pedantic note: it's easier to visualize distance to the root of the tree, but this requires removing cycles from the graph.)

To make this more concrete, let's look at an example of a topic I've been Anki-izing recently: causal inference. I just started Anki-izing this topic a week ago, so it'll be easier for me to avoid idealizing the process. Looking at my cards so far, I have questions about and definitions of things like "d-separation", "sufficient/admissible sets", and "backdoor paths". Notably, for each of these, I don't just have a cloze card to recall the definition; I also have image cards that quiz me on examples, and conceptual questions that clarify things I found confusing upon first encountering these concepts. I've found that making these
1 · riceissa · 10h

"I would be surprised if Gwern hasn't already thought about the claim I'm going to make"

I briefly looked at gwern's public database [https://www.gwern.net/Spaced-repetition#see-also] several months ago, and got the impression that he isn't using Anki in the incremental reading/learning way that you (and Michael Nielsen) describe. Instead, he seems to just add a bunch of random facts. This isn't to say gwern hasn't thought about this, but just that if he has, he doesn't seem to be making use of this insight.

"In the Platonic graph of this domain's knowledge ontology, how central is this node?"

I feel like the center often shifts as I learn more about a topic (because I develop new interests within it). The questions I ask myself are more like "How embarrassed would I be if someone asked me this and I didn't know the answer?" and "How much does knowing this help me learn more about the topic or related topics?" (These aren't ideal phrasings of the questions my gut is asking.)

"knowing that I'll remember at least the stuff I've Anki-ized has a surprisingly strong motivational impact on me on a gut level"

In my experience, I often still forget things I've entered into Anki, either because the card was poorly made or because I didn't add enough "surrounding cards" to cement the knowledge. So I've shifted away from this to thinking something more like "at least Anki will make it very obvious if I didn't internalize something well, and will give me an opportunity in the future to come back to this topic to understand it better instead of just having it fade without detection".

"there's O(5) actual blog posts about it"

I'm confused about what you mean by this. (One guess I have is big-O notation, but big-O notation is not sensitive to constants, so I'm not sure what the 5 is doing, and big-O notation is also about the asymptotic behavior of a function and I'm not sure what input you're considering.) I think there are few well-researched and comprehensive blog posts, but I've found
2 · TurnTrout · 11h: Although I haven't used Anki for math, it seems to me like I want to build up concepts and competencies, not remember definitions. Like, I couldn't write down the definition of absolute continuity, but if I got back in the zone and refreshed myself, I'd have all of my analysis skills intact. I suppose definitions might be a useful scaffolding?

You're right on both counts. Maybe I should've discussed this in my original post... At least for me, Anki serves different purposes at different stages of learning.

Key definitions tend to be useful in the early stages, especially if I'm learning something on and off, as a way to prevent myself from having to constantly refer back, and to make it easier to think about what they actually mean when I'm away from the source. E.g., I've been exploring alternate interpretations of d-separation in my head during my commute and it helps tha... (Read more)
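To make the time-saved framing from this thread concrete, here is a minimal sketch of the break-even rule from Gwern's spaced repetition post, assuming his estimate of roughly five minutes of lifetime review per card; the function name and example numbers are invented for illustration.

    # Break-even rule for the time-saved framing: a card costs roughly five
    # minutes of lifetime review (Gwern's estimate), so memorizing pays off
    # when expected lookup time exceeds that. Example numbers are invented.
    LIFETIME_REVIEW_MINUTES = 5.0  # approximate lifetime cost of one card

    def worth_ankiizing(lookup_minutes: float, expected_lookups: float) -> bool:
        return lookup_minutes * expected_lookups > LIFETIME_REVIEW_MINUTES

    # A statistic you'd otherwise Google for 30 seconds, maybe 20 times:
    print(worth_ankiizing(0.5, 20))  # True: 10 minutes of lookups > 5 minute cost
    # A fact you'd probably look up only once:
    print(worth_ankiizing(0.5, 1))   # False: 30 seconds of lookups < 5 minute cost

Of course, the comment above argues that this lookup-avoidance framing misses the graph-building value of cards; the sketch only makes concrete the framing being argued against.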

I've said in one of my posts:

I'm OK saying:
'The body has an almost infinite number of potential positions'

And I am OK with it, but not completely. Something's niggling at me and I don't know what.

Am I missing something? Or is the statement valid?

(No link to the post containing my reasoning because I don't want to contaminate anyone else's thoughts...)

I contaminated myself, because I thought it was a weird question by itself and it had a whiff of the answer being embedded in the context. In general, "Does this analysis miss anything? I have not included the analysis, to avoid bias" isn't really an answerable question.

One of the tricks to pondering meanings is to wonder what the meaning stands in opposition to. To me it seems the statement plays the role of "enumerating the positions is not a fruitful way of approaching the handling of positions", and if it is not enumerable then by theorycraft... (Read more)

2 · ChristianKl · 3h: I don't think it changes much. It still leaves open the question of what counts as one position. Defining what the term means is likely to be very beneficial for later making exact statements about positions. Any good science operationalizes the terms it cares about.
2 · clone of saturn · 17h: What would it mean for the statement to be false? What would a body with a finite number of potential positions be like?
2 · Richard_Kennaway · 18h: I don't understand this one! Ah, multiple negatives. Cancelling them out: there will always be more to discover or create in the arts of posture and movement.
Why "Referer"?
7 · 26m · 2 min read

When you click a link, by default your browser sends a request like:

   GET /your-page HTTP/1.1
   Host: your-site
   Referer: https://other-site/with-url
   [other headers]
It's telling the server what page it wants (https://your-site/your-page), and it includes a Referer saying that you came from https://other-site/with-url.

But why Referer and not Referrer? Let's look back. The original version of HTTP had a much simpler request format:

   GET /your-page
No Host:, no Referer:, no headers at all. This initial version, implemented in the WWW browser, became know... (Read more)
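For readers who want to poke at this themselves, here is a minimal sketch that hand-writes the request shown above, misspelled header and all; example.com stands in for the placeholder host so the sketch actually runs.

    import socket

    # Hand-write the HTTP/1.1 request from the post. example.com stands in
    # for "your-site"; the Referer URL is the post's placeholder.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Referer: https://other-site/with-url\r\n"  # one "r": the spelling HTTP kept
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        print(sock.recv(4096).decode("ascii", errors="replace"))

HTTP libraries expose the same header under the same misspelled name, so a higher-level client would pass something like {"Referer": "..."} instead of writing raw bytes.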

I am currently conducting research on the persuasiveness of contrarian positions. I'm crowdsourcing a list of unpopular beliefs that people may have for which plausible-enough sounding arguments could be made in support of them.

Existing topics that I use include, "climate change is not primarily caused by human activity", "nuclear proliferation is good for global stability" (an argument made by Kenneth Waltz), and "recycling is bad for the environment." While I don't agree with any of these, compelling-sounding cases can be made for them.

I wish I could delete the thread and start over with "what unpopular opinions do you hold?" as there are more responses critiquing the research methodology than attempting to answer the question.

Your assumptions about the research interest are incorrect (although likely no fault of your own, as I was being vague intentionally). The actual experiment tests different argumentative techniques on certain kinds of positions, depending on the initial level of background support that a position has (contrarian or conventional).

See the comment I mad... (Read more)

3 · bbleeker · 3h: Carnivore diets [https://www.reddit.com/r/slatestarcodex/comments/dlq3by/what_is_up_with_carnivore_diets/]
4 · Charlie Steiner · 7h

Humans are incredibly insecure systems, so compelling arguments can be made for almost any position that anyone takes seriously. Political, identity, and commercialized issues are where you'll find the most pre-existing examples, simply because that's where people have incentives (psychological or tangible) to make arguments whether or not a position is true.

I guess you're asking for examples that we (presumed intellectuals) find most compelling, but note that there's a serious selection effect going on here, because now you're not selecting merely contrarian ideas, you're selecting contrarian ideas and arguments that are pre-filtered for appeal to the sort of person you're interested in researching. You'll get a very different set of ideas and arguments here than if you ask alternative medicine practitioners what arguments they find compelling. And if you use these different sets of arguments in a study, I predict you'll find they convince quite different sets of people.

To give a really on-the-nose example, consider the contrarian position "I have the power to make a rubber band colder than the surrounding room just by pulling on it." There are two different convincing arguments for this, which might convince very different groups. One argument is that this is actually a fact of thermodynamics, because rubber bands actually become more ordered when you stretch them (like straightening out a tangled string) and more disordered when allowed to relax, and this actually causes a change in entropy, which causes a change in temperature, and so they become colder when you pull on them. This is a fairly convincing argument, especially in our society where we might be disposed to believe a sciencey argument just on its tone and vocabulary. Another argument is that I know this because late one night I was playing around with a rubber band, and I noticed that if I focused really hard on the temperature of the rubber band, it became colder when I pulled on it to stretc
1 · leggi · 7h: Free speech: total, or should it be restricted to 'civil speech'?
2 · Viliam · 16h

It is not obvious to me from reading the text whether you are aware of the distinction between "private property" and "personal property" in Marxism. So, just to make sure: "private property" refers to the means of production (e.g. a factory), and "personal property" refers to things that are not means of production (e.g. a house where you live, clothes, food, toys).

The ownership of "private property" should be collectivized (according to Marx/ists), because... simply said, you can use the means of production to generate profit, then use that profit to buy more means of production, yadda yadda, the rich get exponentially richer on average and the poor get poorer. With "personal property" this effect does not happen; if you have one table and I have two tables, there is no way for me to use this advantage to generate further tables, until I become the table-lord of the planet.

(There seem to be problems with this distinction. For example, things can be used either productively or unproductively; I can use my computer to create software or browse social networks. Some things can be used productively in unexpected ways; even the extra table could be used in a workshop to produce stuff. I am not a Marxist, but I suppose the answer would probably be something like "you are allowed to browse the web on your personal computer, but if we catch you privately producing and selling software, you get shot".)

"Marx was able to write that private property is done away with for 9/10s of the population, I don't know how true it was at the time, but it certainly isn't true today."

So, is this the confusion of Marxist terms, or do you mean that today more than 10% of people own means of production? In which sense? (Not sure if Marx would also count indirect ownership, such as having your money in an index fund, which buys shares of companies, which own the means of production.) Did Marx actually argue for abolishing "personal property" (according to his definition, i.e. ownership
2 · Richard_Kennaway · 2h: For many people nowadays, their own brain is their means of production, often assisted by computers and their software, but those are cheap compared with what can be earned by using them. Marx did not know of such things, of course, but how do modern Marxists view this type of private ownership of the means of production? For that matter, how did Marx view a village cobbler who owned his workshop and all his tools? Hated exploiter of his neighbours? How narrow was his motte here?

IANAM[1], but intuitively it seems to me that an exception ought to be made (given the basic idea of Marxist theory) for individuals who own means of production the use of which, however, does not involve any labor but their own.

So in the case of the village cobbler, sure, he owns the means of production, but he’s the only one mixing his labor with the use of those tools. Clearly, he can’t be exploiting anyone. Should the cobbler take on an assistant (continuing my intuitive take on the theory), said assistant would presumably have to now receive some suit

... (Read more)

https://www.warner.senate.gov/public/index.cfm/pressreleases?id=3F2AA8B6-36F8-453B-9B59-FC886871CEB9

Tossing this out as I know many here are deep into computer systems and may well have interest in or knowledge of the underlying social media technologies (using "social media" in a very broad sense, as I think it might go past just FB, Instagram, YouTube, Vimeo, Meetup, or other settings).

Status of the thought is "light bulb just turned on", but not clear it is really shining a light on anything sensible.

Have not studied the law but from the few news stories I've seen the goal is to require a ... (Read more)

Technical AGI safety research outside AI
33 · 5d · 3 min read · Ω 15

I think there are many questions whose answers would be useful for technical AGI safety research, but which will probably require expertise outside AI to answer. In this post I list 30 of them, divided into four categories. Feel free to get in touch if you’d like to discuss these questions and why I think they’re important in more detail. I personally think that making progress on the ones in the first category is particularly vital, and plausibly tractable for researchers from a wide range of academic backgrounds.

Studying and understanding safety problems

  1. How strong are the econo
... (Read more)

[copying from my comment on the EA Forum x-post]

For reference, some other lists of AI safety problems that can be tackled by non-AI people:

Luke Muehlhauser's big (but somewhat old) list: "How to study superintelligence strategy"

AI Impacts has made several lists of research problems

Wei Dai's "Problems in AI Alignment that philosophers could potentially contribute to"

Kaj Sotala's case for the relevance of psychology/cog sci to AI safety (I would add that Ought is currently testing the feasibility of IDA/Debate by doing psy... (Read more)

This is a response to Abram's The Parable of Predict-O-Matic, but you probably don't need to read Abram's post to understand mine. While writing this, I thought of a way in which I think things could go wrong with dualist Predict-O-Matic, which I plan to post in about a week. I'm offering a $100 prize to the first commenter who's able to explain how things might go wrong in a sufficiently crisp way before I make my follow-up post.

Dualism

Currently, machine learning algorithms are essentially "Cartesian dualists" when it comes to themselves and their environment. (Not a philosophy major -- let

... (Read more)
2 · John_Maxwell · 15h: "it seems plausible that someone training the Predict-O-Matic like that would think they're doing supervised learning, while they're actually closer to RL." How's that?

Assuming that people don't think about the fact that Predict-O-Matic's predictions can affect reality (which seems like it might have been true early on in the story, although it's admittedly unlikely to be true for too long in the real world), they might decide to train it by letting it make predictions about the future (defining and backpropagating the loss once the future comes about). They might think that this is just like training on predefined data, but now the Predict-O-Matic can change the data that it's evaluated against, so t... (Read more)(Click to expand thread. ⌘F to Expand All)Cmd/Ctrl F to expand all comments on this post
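A toy sketch of the loop described above, assuming (as the discussion does) that a published prediction nudges the outcome it is later scored against; the one-parameter "model" and the 0.3 influence coefficient are invented for illustration.

    import random

    # "Supervised" training where the prediction shifts the outcome it is
    # later scored against. The 0.3 influence coefficient is invented.
    weight = 0.0          # one-parameter model: it always predicts `weight`
    LEARNING_RATE = 0.1
    INFLUENCE = 0.3       # how strongly a published prediction shifts reality

    for _ in range(2000):
        base_outcome = random.gauss(5.0, 1.0)   # what happens with no prediction
        prediction = weight
        outcome = base_outcome + INFLUENCE * prediction  # reality reacts
        error = prediction - outcome            # gradient, treating outcome as fixed
        weight -= LEARNING_RATE * error         # looks like ordinary supervised learning

    # The fixed point is not the no-prediction truth (5.0) but the
    # self-confirming value 5.0 / (1 - 0.3) ≈ 7.14:
    print(weight)

The update rule is textbook supervised learning, but because the target moves with the output, the system ends up rewarded for steering reality toward its own forecasts.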

Hi there!

I'm a new member and now feeling giddy from the amount of content presented here on the site. I've read the welcome and FAQ posts; they all point to the library's core reading as material for beginners. BUT... I've just finished the Preface post of R:A-Z, and from the author's very words, it looks like this series focuses more on the big, vague things than on hands-on lessons.

So my questions are: among it and the Codex and HPMOR, which is the most newbie-friendly? In your opinion, what are the best 9 sequences rookies like me should read to get to a somewh... (Read more)

3 · Answer by gilch · 7h

If you're looking for shortcuts, perhaps start with:

  • Facing the Singularity [https://intelligenceexplosion.com/] for the hope and peril that kicked off this whole movement. Smarter Than Us [https://smarterthan.us/] might go a little deeper.
  • Highly Advanced Epistemology 101 for Beginners [https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs] for the epistemic rationality, and maybe
  • Hammertime [https://www.lesswrong.com/s/qRxTKm7DAftSuTGvj] for the instrumental rationality.

I am not sure how approachable the above will be to a "rookie", because I can only speak from my own experience. I read Drexler's Engines of Creation and Kurzweil's The Singularity Is Near, and was fascinated by these topics. I kept finding LessWrong on Google searches about them, and I thought the writing was good (especially Three Worlds Collide [https://www.lesswrong.com/s/qWoFR4ytMpQ5vw3FT]). So after jumping around a bit as I discovered LessWrong in the first place, I read the original blog posts that went into R:A-Z in pretty much the order written. I did end up downloading an ebook version of Eliezer's posts to do it (not the slightly reorganized R:A-Z ebook, which was not available at the time), which I read on my phone in my spare time.

In my opinion, you should at least try to read all of the R:A-Z sequences, even though there are more than 9. It's quite long, but you'll get insights from it long before you finish. There's also some redundancy as the concepts build on each other. So if you're struggling to get through an essay, I would say read it aloud and move on even if you don't completely get it. (This might be easier with the audiobook.) Some essays are easier to read than others, and I think that some are more valuable than others. If that's still too much for you, I can try to point out the individual essays I think are especially important:

  • Raising the Sanity Waterline [https://www.greaterwrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline]

I'm currently in Fake Beliefs and, as you correctly said, more than half the time I don't really get them. I think part of this is because EY wrote using such big words and complicated grammar that they confuse non-native speakers.

However, I'm not a fan of jumping ship and will try to wade through R:A-Z before committing to another sequence. You convinced me! :) That said, HPMOR seems to be very appealing to beginners, since it combines something new & strange (rationality) with something most of us are familiar with already (wizardry, lol).

4 · Answer by eigen · 14h

You can really test the waters and see for yourself; it's not as if the content is going anywhere. With that said, I started with the sequences (R:A-Z) and, while reading them, I also read HPMOR (which, being fiction, was a really fast read). Then I mixed some of the CODEX in there. (So that's the order I recommend following.)

HPMOR really ruined a big chunk of fiction for me; there are no characters with the self-awareness that those in HPMOR have. In the CODEX, when Scott Alexander tries to find out whether AA works [https://www.lesswrong.com/s/B384FrQNrxSq4hZoS/p/CfX6pGepdjQYELSpK], he cannot resist digging deeper and looking at the underlying reason why something is the way it is. Just as a physicist investigates natural phenomena, he investigates human nature. The Sequences changed my mind.
3 · Raemon · 15h: There's room for epistemology practice to be hands-on, relevant, and exercise-based without being about "productivity."

Previous posts:

In the previous post, I talked about what I've learned. That post focused on bigger things. But there were a lot of smaller, more miscellaneous things that I've learned too. Those are the things that I want to talk about in this post.

People think of a mobile app when you say you're building an "app"

Even when I clarify and try to explain that it's a web app, most people are still confused. So sometimes I call it a website, which I hate because that sort of implies tha... (Read more)

Thank you for sharing.

I want to add my 2 cents about the things I disagree with.

1) In the last post there was a thought like "I'm rational and proactive, and they're lazy and irrational, and therefore I do not prosper". That can be a very dangerous mindset, and I think it's incorrect by definition. It's not that pro poker players don't want to make +EV decisions. On the contrary, they make a lot of +EV decisions (checking out new software, watching/reading theory, discussing hands on forums, private discussions, group/individual classes, soft ... (Read more)

Thanks for the post!

You definitely highlight that there's a continuum here, from "most deliberation-like" being actual humans sitting around thinking, to "least deliberation-like" being highly-abstracted machine learning schemes that don't look much at all like a human sitting around thinking, and in fact extend this continuum past "doing meta-philosophy" and towards the realm of "doing meta-ethics."

However, I'd argue against the notion (just arguing in general, not sure if this was your implication) t... (Read more)

2 · John_Maxwell · 8h: Speed. In AI takeoff scenarios where a bunch of different AIs are competing with each other, the deliberation process must produce some answer quickly, or produce successive answers as time goes on (in order to figure out which resources are worth acquiring). On the other hand, in takeoff scenarios where the first successful project achieves a decisive strategic advantage, the deliberation can take its time. I suspect a better way to think about this is the quality of the deliberation process as a function of time available for deliberation, but time available for deliberation might itself vary over time (pre- vs post-acquisition of decisive strategic advantage).
Calibrating With Cards
31 · 3mo · 2 min read

In this post, I'll try to bring together two things I enjoy: rationality and magic. Like Hazard, I've also practiced close-up magic for a good amount of time now. After recently seeing Tyler Alterman make a Facebook post about estimations and System 1, it occurred to me that there are a few calibration exercises you can do with a deck of playing cards. The three exercises below are all variants of cutting/manipulating a deck of cards, and then trying to intuit something about the deck.

This serves three purposes:

  1. Get a feel for your System 1:
    1. The goal of the following three exercises is
... (Read more)
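(Tangentially: if you want to keep score while doing the cut-and-estimate exercises with a real deck, here is a small ad hoc helper, not from the post itself, for tracking guesses against actual counts.)

    # Enter your guess and the actual count each round; the helper tracks
    # exact hits and mean absolute error. All names here are ad hoc.
    guesses_and_actuals = []
    while True:
        line = input("guess,actual (blank to stop): ").strip()
        if not line:
            break
        guess, actual = (int(x) for x in line.split(","))
        guesses_and_actuals.append((guess, actual))
        errors = [abs(g - a) for g, a in guesses_and_actuals]
        exact = sum(e == 0 for e in errors)
        print(f"rounds: {len(errors)}, exact: {exact}, "
              f"mean |error|: {sum(errors) / len(errors):.2f}")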
4 · Ben Pace · 17h

My thoughts:

  • I tried these exercises for about 15-20 mins. In that time, I tried cutting-and-counting like 10 times, and I was off by one every time *curses*. I managed to repeat the cut a few times - I am unsure whether you're supposed to move the cards around a bit after cutting, or whether you're allowed to hold them identically when repeating; they're often a little bit uneven at the place I just cut, which feels like an unfair advantage. I did not manage to get the riffle-peek-then-cut right any times. I took a few goes at riffling really slowly, and often failed to see the card (because two cards would flip together), which I felt unhappy about until I saw you explain that you don't always see it on a single pass through, and then I felt like I wasn't failing too hard.
  • I think this will be really useful if I ever seriously try to learn close-up magic, and much better than other intros I've seen. I really appreciated being guided in what to notice. It reminds me of The Inner Game of Tennis, which talks a lot about using S2 to guide where your attention is as you do the task, and then letting S1 take in that data and use it better - as opposed to getting S2 to tell S1 what to do (or not do) directly. I feel like the OP splits close-up card magic not into tricks but into atomic moves, and helps guide my attention to learning the key skills that make up those moves.
  • I feel some motivation to try to write a similar guide for playing the classical guitar, if only for my own benefit of making the moves explicit, and running metacognition on what to notice.

Thanks for trying these out, Ben!

If you're ever interested in learning more close-up magic, I have lots more thoughts on what the good learning resources are, and strong opinions on what makes a good magic effect. I haven't written about them for the LW audience, but maybe more of this hybrid stuff will manifest later on.

Invisible Choices, Made by Default
19 · 3d · 1 min read

There are two popular language learning software platforms: Anki and Duolingo. Anki is hard, free and effective. Duolingo is easy, commercial and ineffective.

The number of Duolingo users far outstrips the number of Anki users. Duolingo has 8 million downloads on the Play Store. Anki has 40 thousand. So there are 200 Duolingo users for every Anki user[1]. If you ask a random language learner what software to use they'll probably suggest Duolingo. If you ask a random successful language learner what software to use they'll probably suggest Anki. Most language learners are unsuccessful.

It should

... (Read more)

Actually making the cards is what stops me.

2 · Elizabeth · 16h: I've found Anki really terrible for learning, even for simple things like vocabulary; what it does is help me remember things I've already at least half-learned.
All I know is Goodhart
11 · 2d · 3 min read · Ω 4
...

[Retracted my other reply due to math errors]

"This is only true for the kind of things humans typically care about; this is not true for utility functions in general. That's the extra info we have."

While I generally agree that there can be utility functions that aren't subject to Goodhart, I don't think that this strictly pertains to humans. I expect that when the vast majority of agents (human or not) use scientific methods to develop a proxy for the thing they want to optimize, they will find that the proxy breaks down upon intense optimizatio... (Read more)

1 · Isnasene · 13h

"This is only true for the kind of things humans typically care about"

There are utility functions for which Goodhart doesn't apply, but I think it's more generally agent-centric than just human-centric. I think that the vast majority of proxies developed by agents for the sake of optimizing a harder-to-measure function will experience Goodhart (and, to me, the terms "utility function" and "proxy" imply that this is what's happening). Moreover, I think that Goodhart fails to apply only in the case where making the proxy function arbitrarily large doesn't also change the behavior of the observed universe by an arbitrary amount. You can define a utility function for which this is true, but the ones that I've thought of so far are associated with weird discontinuities.

The mathematical spitballing I did in making this claim: if we have a utility function U and a proxy utility V that represents U, we expect the plot of coordinates (x = V(world state), y = U(world state)) to be roughly sublinear, since

  • (x = V(world state), y = U(world state)) is upper-bounded by (x = U(world state), y = U(world state)), i.e. the line y = x
  • we can re-scale V however we want, so increasing a unit of V corresponds to increasing a unit of U at some arbitrary location on the (rough) curve

This indicates that, if the noisiness of the relationship between V and U increases as a linear or superlinear function of V, it could wash out any positive effects of increasing V. Since the noise also cannot actually improve the performance of V above the upper bound U, the symmetry of this noise as something that may improve performance is broken, and this leads to an overall downtrend in utility as V is increased more and more.

When would one expect a linear/superlinear increase in the V vs U noise to actually happen? You might expect this if

  1. The proxy V was built to model U based on measurements of their relationship that all occurred in a kind of environment with V in a certain domain.
  2. Increasing V outside that domain chan
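To check the spitballing above numerically, here is a small simulation under the comment's assumptions: the proxy V upper-bounds the true utility U (after rescaling to the y = x line), and the one-sided noise between them grows superlinearly in V. The particular noise scale, 0.1 * v**1.5, is invented for illustration.

    import random

    # Proxy V upper-bounds true utility U (y = x after rescaling); the
    # one-sided noise grows superlinearly in V. The 0.1 * v**1.5 scale
    # is invented for illustration.
    def true_utility(v: float) -> float:
        noise = abs(random.gauss(0.0, 0.1 * v ** 1.5))  # only pushes U below y = x
        return v - noise

    def expected_utility(v: float, samples: int = 50000) -> float:
        return sum(true_utility(v) for _ in range(samples)) / samples

    for v in [1, 10, 70, 200, 500]:
        print(v, round(expected_utility(v), 1))
    # Expected U rises with V at first, peaks (here around V ≈ 70), then
    # falls: optimizing the proxy harder eventually destroys true value.

In this toy setup, a merely linear noise scale only discounts the slope of E[U]; the actual downturn requires the superlinear growth.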
2 · Stuart_Armstrong · 17h: Thanks! Error corrected.

Originally posted at Living Within Reason.


Last week, Jacob Falkovich, of the Putanumonit blog, put up a post trying to figure out why rationalists are disproportionately polyamorous. He notes that about 5% of Americans engage in consensual nonmonogamy, while 17% of respondents to the 2014 Less Wrong survey indicated that they did. My expectation is that the numbers for both are slightly higher today. In service of this goal, Falkovich developed several theories and surveyed a number of his readers. His results ended up inconclusive.

Since this involves the intersection of the two themes of this blo... (Read more)

Oh, I see. Reading through his post again, I think I actually agree with you that Jacob was conflating the two. Thanks for clarifying, the whole conversation seems reasonable now.

5 · Evan Rysdam · 13h: Ahh, I see. Thanks for this analysis; now I see where the posts above mine were coming from.
5 · Ben Pace · 13h: Happy to hear :)
8 · Viliam · 16h: Yet another factor: nerds are usually worse at lying and other social skills. Therefore a nerd would consider polyamory in a situation where a non-nerd would think "I can simply cheat on my spouse, why make it unnecessarily complicated?" In other words, don't just compare rates of "consensual nonmonogamy", but also nonmonogamy in general. If the numbers for nonmonogamy in general are more similar, it can mean that rationalists are less likely to lie about their behavior; that the more frequent thing is not nonmonogamy but consent.

I'm giving out $1,000 of prize money for the best exercises submitted in the next two weeks on a topic of interest to the LW community. I'm planning to distribute the full $1,000, with $500 of that going to first place.

To submit some exercises, leave a comment here linking to your exercise(s) by midnight at the end of Friday 20th September PDT (San Francisco time), and I'll announce the winners by the Friday two weeks later (giving me time to try a bunch out). You're welcome to post them as a LW post, on your shortform feed, or privately link them to me in a PM if you want, thoug... (Read more)

Now announced; see the relevant top-level comment in this thread. Thanks for checking.

34 · Ben Pace · 15h

This comment records the prizewinners of the Rationality Exercises Prize of September 2019 ($1,000), and shares some of my thoughts on the submissions.

Rationality Exercise Prizewinners

A prize of $125 goes to elriggs, for their sequence Arguing Well [https://www.lesswrong.com/posts/FthQatMGBajcFt7Sj/arguing-well-sequence]. I did the first post, which had 11 simple exercises on Scott Alexander's fallacy of 'Proving too much', plus 2 reflection steps. A key thing that elriggs did (and TurnTrout below) was to not divorce the exercises from the understanding and the explanation - they weren't added on at the end, but were part of the learning. Elriggs' sequence reminded me a bit of The Art of Problem Solving, where the act of solving the problems is how you discover the mathematics. Each post combines a wealth of examples with points where you stop and try to generalise the rule / make your algorithm explicit. The latter part especially helped me resolve some confusions I had. I wrote down my experience and more specific thoughts from doing the exercises in a comment here [https://www.lesswrong.com/posts/tAqPJhdWE2tMJnWvg/proving-too-much-w-exercises#dxL2A6egMN5A85e4F].

A prize of $125 goes to whales, for Exercises #1 [http://whaaales.com/exercise-1/] and #4 [http://whaaales.com/exercise-4/]. Exercise #4 was ~75% of the reason I gave whales a prize, and is (roughly) about building an integrated world-model by looking at social science results. I'm having a hard time saying why I liked whales' exercises. If I try to point at what I liked about them, I'll say that I think they were picked to be fairly simple-yet-confusing, and also permitted clear answers at the end - not unlike all the problems in the book Thinking Physics - and they helped me to notice my confusion along the way. Something about them felt very provocative/opinionated in a positive way, which differed from the other prizes. I approached them expecting to get something out of them, and managed to get out value

Follow-up to: Status Regulation and Anxious Underconfidence


 

Somehow, someone is going to horribly misuse all the advice that is contained within this book.

Nothing I know how to say will prevent this, and all I can do is advise you not to shoot your own foot off; have some common sense; pay more attention to observation than to theory in cases where you’re lucky enough to have both and they happen to conflict; put yourself and your skills on trial in every accessible instance where you’re likely to get an answer within the next minute or the next week; and update hard on single pieces... (Read more)

It seems to be hard in practice to draw the line between "only picking certain bets" and "doing things I'm best at" (though the theoretical difference is obvious: maximizing P(win) by picking events versus maximizing the usefulness of one's skills for winning). The latter seems to be a good practice - yet your attack on the former seems to indirectly hamper it.

I think that links to Chapman's texts should contain some disclaimer that "rationality" as defined by Chapman is something completely different from "rationality" as defined by Less Wrong. (Chapman said that once himself; sorry, I don't remember the link.) Just to avoid the confusion of thinking that these texts were written specifically as a reaction to Less Wrong or the rationalist community.

Specifically, a lot of the advice to Chapman!rationalists is already in the Sequences. "A map is not the territory" gets mentioned over an... (Read more)
