Hi everyone, I have recently started reading this site again after having had a break for a couple years.
What I like most about LessWrong (apart from the name), is that it contains so much good quality information about rationality, all in one place. And the posters maintain such a high standard of reasoning. One thing I would like to see more of on the site, though, is analyses of common human behaviour.
Also, it would be great to see some humour on here.
Welcome!
Some of Robin Hanson's material has engaged more with topics like fashion, sports and holidays, so you might also find some value in that.
Hi. I'm new to LessWrong and looking forward to exploring it. I've been slowly wading into the rationalist spheres out there, one sea at a time, over the past couple of years. I'm LA-based, so I'll attend the LA meetups when I'm able.
About me: robotics engineer focused on mobility and surface sampling, among other areas; trying to learn more and eventually do some work in the newly emerging field of philosophy of engineering (actively exploring grad-school options in this area); enjoys volleyball and live music; not a very habitual forum-poster, so I'll probably post infrequently and mostly read/comment from a distance.
Thanks, LW Team, for the very comprehensive FAQ and About pages. That should make the experience of beginning to swim around in LW less confusing than most online spaces.
Hey all. I came to this blog recently after SSC-NYT-gate, and I'm very intrigued by the content posted and the community. I've skimmed a number of articles and felt immediately drawn to every single thing I read.
About me: I'm a robotics engineer currently living in NY. You can follow me on Twitter @wannabegroncho
When thinking about information asymmetry in transactions (e.g. insurance, market for lemons), I can think of several axes for comparison:
1. Timing: does the asymmetry exist before or after the transaction?
2. Party: is it the buyer or the seller who has the information advantage?
3. Subject: is the hidden information about the good being transacted, or about an action one party takes?
Insurance-like transactions pick "after", "buyer", and "action": the person buying the insurance can choose to act more carelessly after purchasing the insurance.
Market for lemons cases pick "before", "seller", and "good": prior to the transaction, the seller of the good knows more about the quality of the good.
So in many typical cases, the three axes "align", and the former is called a moral hazard and the latter is called adverse selection.
But there are examples like an all-you-can-eat buffet that sets a single price (which encourages high-appetite people to eat there). This case picks "before", "buyer", and "action". So in this case, 2/3 of the axes agree with the insurance-like situation, but this case is still classified as adverse selection because the official distinction is about (1).
Wikipedia states "Where adverse selection describes a situation where the type of product is hidden from one party in a transaction, moral hazard describes a situation where there is a hidden action that results from the transaction" (i.e. claims that (3) is the relevant axis) but then on the same page also states "For example, an all-you-can-eat buffet restaurant that sets one price for all customers risks being adversely selected against by high appetite" (i.e. classifies this example as adverse selection, even though classifying according to (3) would result in calling this a moral hazard).
Does anyone know why (1) is the most interesting axis (which I'm inferring based on how only this axis seems to have names for the two ends)?
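The classification above can be made concrete with a small sketch. This is my own illustration (the axis names and dictionary layout are hypothetical, not from any economics reference); it tags each example along the three axes and then classifies by axis (1) alone, which is what the standard terminology does:

```python
# Each example tagged along the three axes discussed above:
#  timing:  does the asymmetry exist "before" or "after" the transaction?
#  party:   does the "buyer" or the "seller" hold the extra information?
#  subject: is the hidden thing a property of the "good" or an "action"?
examples = {
    "insurance":         {"timing": "after",  "party": "buyer",  "subject": "action"},
    "market_for_lemons": {"timing": "before", "party": "seller", "subject": "good"},
    "buffet":            {"timing": "before", "party": "buyer",  "subject": "action"},
}

def classify(example):
    # The standard labels track axis (1) only: asymmetry that exists
    # before the transaction is adverse selection; asymmetry created
    # by post-transaction behavior is moral hazard.
    return "adverse selection" if example["timing"] == "before" else "moral hazard"

for name, ex in examples.items():
    print(name, "->", classify(ex))
```

Note that the buffet agrees with insurance on two of the three axes ("buyer" and "action") yet still lands on the other side of the named distinction, which is exactly the puzzle.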
Sometimes it seems to me that LW is too ready to build for the ages. The best heuristics, the best textbooks, the best meetup practices. Either there should be mandatory time thresholds for re-evaluations, or we should maybe relax about this. (Except for the Boring Advice Repository, perhaps.)
Chris McKinstry was one of two AI researchers who committed suicide in early 2006. On the SL4 list, a kind of precursor to Less Wrong, we spent some time puzzling over McKinstry's final ideas.
I'm mentioning here (because I don't know where else to mention it) that there was a paper on arxiv recently, "Robot Affect: the Amygdala as Bloch Sphere", which has an odd similarity to those final ideas. Aficionados of AI theories that propose radical identities connecting brain structures, math structures, and elements of cognition, may wish to compare the two in more detail.
I've been wondering about design differences between blogs and wikis. For example:
1. Most of the wikis I know use a variable width for the body text, rather than a narrow fixed width that is common on many websites (including blogs)
2. Most of the wikis I know have a separate discussion page, whereas most blogs have a comments section on the same page as the content
3. I think wikis tend to have smaller font size than blogs
4. Wikis make a hard distinction between internal links (wikilinks) and external links, going so far as to discourage the use of external links in the body text in some cases
I find the above differences interesting because they can't be explained (or are not so easy to explain) just by saying something like "a wiki is a collaborative online reference where each page is a distinct topic while a blog is a chronological list of articles where each article tends to have a single author"; this explanation only works for things like emphasis on publication date (wikis are not chronological, so don't need to emphasize publication date), availability of full history (wikis are collaborative, so having a full history helps to see who added what and to revert vandalism), display of authorship (blogs usually have a single author per post so listing this makes sense, but wiki pages have many authors so listing all of them makes less sense), standardized section names (a blog author can just ramble about whatever, but wikis need to build consistency in how topics are covered), and tone/writing style (blogs can just be one author's opinions, whereas wikis need to agree on some consistent tone).
Has anyone thought about these differences, especially what would explain them? Searching variations of "wikis vs blogs" on the internet yields irrelevant results.
I have a bunch of thoughts on this, some quick ones:
The reading experience on wikis is very heavily optimized for skimming. This causes some of the following design choices:
Most of the wikis I know use a variable width for the body text, rather than a narrow fixed width that is common on many websites (including blogs)
This is only because most wiki administrators use the default wiki layout/skin. For the major wiki systems, many layouts exist that use fixed body width. (e.g. Skins for MediaWiki, Skins for PmWiki)
Most of the wikis I know have a separate discussion page, whereas most blogs have a comments section on the same page as the content
In any decent wiki system, it is trivial to put (or mirror/transclude/etc.) the comments onto the main page.
I think wikis tend to have smaller font size than blogs
This is, obviously, trivially customizable.
Wikis make a hard distinction between internal links (wikilinks) and external links, going so far as to discourage the use of external links in the body text in some cases
As mentioned in another response, this seems to just be Wikipedia.
Has anyone thought about these differences, especially what would explain them? Searching variations of “wikis vs blogs” on the internet yields irrelevant results.
What would explain them is just some contingent design choices of the default layouts of some popular systems (e.g. MediaWiki) and some popular wikis (e.g. Wikipedia), and most wiki administrators not really giving a lot of thought to whether to deviate from those defaults.
Here are some examples I found of non-Wikipedia-related wikis discouraging the use of external links:
Links to external sites should be used in moderation. To be candidate for linking, an external site should contain information that serves as a reference for the article, is the subject of the article itself, is official in some capacity (for example, run by id Software), or contains additional reading that is not appropriate in the encyclopedic setting of this wiki. We are not a search engine. Extensive lists of links create clutter and are exceedingly difficult to maintain. They may also degrade the search engine ranking of this site.
Elinks should be constrained to one section titled "External links" at the end of a page. Elinks within the main content of a page are discouraged, and should be avoided where possible.
If you want to link to a site outside of Wookieepedia, it should almost always go under an "External links" heading at the end of an article. Avoid using an external link when it's possible to accomplish the same thing with an internal link to a Wookieepedia article.
Avoid using external links in the body of a page. Pages can include an external links section at the end, pointing to further information outside IMSMA Wiki.
External links should not be used instead of wikilinks unless absolutely necessary.
1. Most of the wikis I know use a variable width for the body text, rather than a narrow fixed width that is common on many websites (including blogs)
2. Most of the wikis I know have a separate discussion page, whereas most blogs have a comments section on the same page as the content
3. I think wikis tend to have smaller font size than blogs
4. Wikis make a hard distinction between internal links (wikilinks) and external links, going so far as to discourage the use of external links in the body text in some cases
1. I haven't seen blogs with a fixed width for body text. (I've seen blogs which have a (front) page of fixed-width views of articles, each of which concludes with a "Keep Reading" link.)
2. Wikis think they're a paper - similar works may be referenced via a number, that references a list of sources. (Perhaps there's an official style guide they're following/imitating that's external.)
3. This seems to boil down to "Wikis are longer than blogs." (Might also be the cause of 1.)
4. I don't think I've seen this outside Wikipedia. It could be caused by wikis imitating encyclopedias/papers, or wikipedia. It could be an attempt to capture/hold attention.
- I haven’t seen blogs with a fixed width for body text. (I’ve seen blogs which have a (front) page of fixed-width views of articles, each of which concludes with a “Keep Reading” link.)
Most blogs have a fixed body text width. Observe:
(All links are to individual post pages, not the blog’s front page.)
That’s ten examples, including a cooking blog, a tabletop RPG blog, a naval history blog, a regular history blog, an economics blog, etc. All have fixed body text widths.
I mixed up width and length, my bad. So variable width is when there's text, and occasionally stuff on the sides like diagrams, and the text goes further out when the stuff isn't there, and is pulled back when there is?
Fixed width vs. variable width simply has to do with the way in which the width of the main text column changes when you change the width of the viewport (i.e., the browser window).
To easily see the difference, go to GreaterWrong.com, click on any post, and then look to the top right; you’ll see three small buttons, like this:
This is the width selector. Click on any of the three icons to select that width. The left-most button (‘normal’) and the middle button (‘wide’) are fixed-width layouts; the right-most button (‘fluid’) is a variable-width layout. Try resizing your browser window (changing its width) after selecting each of the options, and you’ll see what I am talking about.
Variable-width is the web's default, so it's definitely not harder to do. Many very old websites (10+ years old) use variable width, before anyone started thinking about typography on the web, so in terms of web-technologies, that's definitely the default.
80,000 Hours recommends a Machine Learning PhD as a high-impact career path – it can help you understand the most important technology, and it's a powerful entry ticket into many relevant jobs. And ML jobs are often well paid, so they allow earning-to-give as a backup option.
I don't really like the Machine Learning institute at my university, but the professor who leads the institute for information security (whom I do like) has offered to combine the two fields. He said they're primarily interested in "privacy-preserving ML".
How valuable would such a mixed PhD be, compared to a pure ML one? And also, how much does the prestige of the university matter? Should I try to get into the most prestigious one in my country, even if it takes longer? I really struggle to approach these questions. I'm also unsure how much I should care about how I feel about the professor.
I liked Robin Hanson's recent post Unending Winter Is Coming (it matches my understanding of at least one scenario of how physics/computation/war is likely to work in the future), but find his lack of urgency for trying to solve the problem puzzling. To quote my comment there:
Our descendants may not have the same opportunities to solve this problem that we have. At a minimum it seems much easier (even if still very hard in an absolute sense) to solve this problem (for example by coordinating to build a Singleton / strong world government) before human or post-human civilization starts spreading into the stars. It seems like lack of urgency is only justifiable if one was very certain that space colonization is far in the future, and I don't see how that belief is justifiable.
I stumbled on this site randomly one day a few months back at work, and the very thing that caught my attention was the name, LessWrong. I read a few posts, maybe one or two, and bookmarked the site. Today I was in dire need of something informative, rational, and mind-training to read; just as I started scrolling through my bookmarks, the title grabbed my attention again. Having completely forgotten what the site is about, I opened it out of curiosity about "LessWrong" and ended up signing up and deciding to become a member. As much as I am excited to read more here, I am also looking forward to writing something.
Hey everyone. Like others, I found this site while looking for information on Zettelkasten. Pleasantly surprised to find its connections to Slate Star Codex, which I have enjoyed reading in the past.
I'm interested in learning about xrisks and rationality, and hopefully taking ideas across disciplines and contexts. Seattle area.
Hello! I have been exploring LessWrong on a non-regular basis for a few months, and have recently been advised to create my own account.
I am a French aerospace student who will (likely) start a PhD next year. I am particularly interested in mitigating xrisks, and would like to orient my career in that direction. I enjoy ethics, epistemology, and philosophy of mind. A fact which I think is relevant: I am probably a bit less fond of Bayesianism than most people here.
I'm looking for a comment from /u/wei_dai; it was something along the lines of deciding what to work on (or do, or study) week by week, and then updating/changing after the week (maybe in a post about UDT?). Does someone know what I'm talking about? The search function, wei_dai's posts, and Google have turned up nothing. Thanks for anyone's help!
Comments like this one and this one come to mind, but I have no idea if those are what you're thinking of. If you could say more about what you mean by "updating/changing after the week", what the point he was trying to make was, and more of the context (e.g. was it about academia? or an abstract decision in some problem in decision theory?), then I might be able to locate it.
First, thank you so much for helping me. No, those are not the comments I had in mind.
It was more something like the texts he has up on his web-page: http://www.weidai.com/stock-options.txt
It was concise and technical (Like, let X be the set of decisions you could make... and the conclusion was why it does make sense to decide things on a week by week basis) and I think it was just a comment here on this website, but I am not sure. Anyways, don't waste time looking, I just searched a bit more and I could not find it; I will, most likely, message him after the holidays.
There's a post somewhere in the rationalsphere that I can't relocate for the life of me. Can anybody help?
The point was communication. The example given was the difference between a lecture and a sermon. The distinction the author made was something like a professor talking to students in class, each of whom then goes home and does homework by themselves, versus a preacher who gives his sermon to the congregation, with the expectation that they will break off into groups and discuss the sermon among themselves.
I have a vague memory that there were graphics involved.
I have tried local search on LessWrong, site search of LessWrong, and browsing a few post histories that seemed like they might be the author based on a vague sense of aesthetic similarity. I was sure it was here, but now I fear it may have been elsewhere or it is hidden in some other kind of post.
What audiobooks should I listen to? Things I'd like to learn more about:
I listen to econtalk, but "every econtalk episode" doesn't have enough density of what I'm interested in here.
Also, I think there's an important difference between books and podcasts in how carefully constructed and checked they are.
Eh, depends on the book (see e.g. Guzey's takedown of Why We Sleep).
Plus most EconTalk episodes are about books the guests have written, and Russ is a fairly critical host. I'd actually wager that the epistemic quality of the median EconTalk book is higher than that of the median nonfiction English-language book.
The Great Transformation is excellent economic history, though it's old. Lou Keep's essay on it is a good entry point.
Whence I come: 78 years old, Ph. D. social psych, musician, lately reading a lot of philosophy. Just wondering if the community here thought Hume was an idiot and the latest findings about emotions being a necessary part of decision-making horrifying. bill w
Hume was an idiot
Idiot about what?
emotions being a necessary part of decision-making horrifying
Horrifying but true?
Welcome!
wondering if the community here thought Hume was an idiot
Just searched old posts, and apparently at least one person on LW thought Hume was a candidate for the Greatest Philosopher in History. That's an obscure post with only one upvote though, so can't be considered representative of the community's views.
In general I think this community tends to be not too concerned with evaluating long-dead philosophers, and instead prefers to figure out what we can, informed by all the knowledge we currently have available from across scientific disciplines.
Historical philosophers may have been bright and made good arguments in their time. But they were starting from a huge disadvantage to us, if they didn't have access to a modern understanding of evolution, cognitive biases, logic and computability, etc.
For a fairly representative account of how LW-ers view mainstream philosophy, see: Less Wrong Rationality and Mainstream Philosophy and Philosophy: A Diseased Discipline.
wondering if the community here thought... the latest findings about emotions being a necessary part of decision-making horrifying
I'm not sure exactly what you're referring to. But in general I think the community is pretty on-board with thinking that there's a lot that our brains do besides explicit verbal deductive reasoning, and that this is useful.
And also that you'll reason best if you can set up a sort of dialogue between your emotional, intuitive judgments and your explicit verbal reasoning. Each can serve as a check on the other. Neither is to be completely trusted. And you'll do best when you can make use of both. (See Kahneman's work on System 1 and System 2 thinking.)
Is there any explanation of the current karma system? The main thing I can find is this. (You need to scroll to 'The karma system'; for some reason you can click on subsections to go to them, but you can't link to them.)
Also why do I see a massive message that says 'habryka's commenting guidelines' when I am writing this comment, but there are no guidelines or link? Is this just a weird extra ad for your own name?
The commenting guidelines allow users to set their own norms of communication for their own private posts. This lets us experiment with different norms to see which work better, and also allows the LessWrong community to diversify into different subcommunities should there be interest. It says habryka's guidelines because that's who posted this post; if you go back through the other open threads, you will see other people posted many of them, with different commenting guidelines here and there. I think the posts that speak to this the most are:
[Meta] New moderation tools and moderation guidelines (by habryka)
Meta-tations on Moderation: Towards Public Archipelago (by Raemon)
Yeah, my current commenting guidelines are empty. Other users have non-empty commenting guidelines.
The FAQ covers almost all the site-functionality, including karma. Here is the relevant section:
https://www.lesswrong.com/faq#Karma___Voting
You can also link to subsections, if you just right-click on the relevant section in the ToC and select "Copy Link Address".
My mistake. This is the "Hello, world" place. But the default sort of this one should probably not be by "top scoring"? However, like the visible top scored comment, I, too, have been away from LW for some years. (Nice to see that I can change the sort order without losing my draft. Evidence of solid code. However the newly visible top comment is from 5 months ago? Months between comments?)
I was reminded of LW by The AI Does Not Hate You, though I'm pretty sure I've seen other references to it over the years. So far my impressions are mostly favorable, except that it seems to use 1D karma and I'm averse to reducing people (or even my opinions about people) to any single dimension.
I am somewhat interested in the mentioned books related to LW, but I have trouble finishing ebooks. Perhaps the first thing I should seek is advice on how to enjoy ebooks? I still enjoy many dead trees every year. (And I confirmed that none of my local libraries has any of them. I pretty regularly use a half dozen library systems.)
I'm (re-)reading up on absolutely continuous probability spaces right now. The definition of the expected value I find everywhere is this:
(1): $E[X] = \int_{-\infty}^{\infty} x \, f_X(x) \, dx$
The way to interpret this formula is that we're integrating over the target space of $X$ rather than the domain, and $f_X$ is a probability density function over the target space of $X$. But this formula seems highly confusing if that is left unsaid ($\Omega$ doesn't even appear in it – what the heck?). If one begins with a probability density function $f$ over a probability space $\Omega$ and then wants to compute the expected value of a random variable $X \colon \Omega \to \mathbb{R}$, I think the formula is:
(2): $E[X] = \int_{\Omega} X(\omega) \, f(\omega) \, d\omega$
It seems utterly daft to me to present (1) without first presenting (2) if the idea is to teach the material in an easily understandable way, even if one never uses (2) in practice. But this is what seems to be done everywhere – I googled a bunch, checked Wikipedia, and dug out an old script, and I haven't found (2) anywhere (I hope it's even correct). Worse, none of them even explicitly mention that $x$ ranges over $\mathbb{R}$ rather than over $\Omega$ after presenting (1). I get that each random variable does itself define a probability space where the distribution is automatically over $\mathbb{R}$, but I don't think this is a good argument not to present (2). This concept is obviously not going to be trivial to understand.
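A discrete analogue may make the contrast between the two formulas concrete. This is my own hedged sketch (the die-payoff example and all names are illustrative assumptions, not from any textbook): it computes the same expectation once by summing over the sample space $\Omega$, as in (2), and once by summing over the values in the target space, as in (1).

```python
from fractions import Fraction
from collections import defaultdict

# Sample space Omega: the six faces of a fair die.
omega = [1, 2, 3, 4, 5, 6]
p = {w: Fraction(1, 6) for w in omega}  # probability mass f on Omega

# A random variable X: Omega -> R; here, the payoff of a bet on "even".
def X(w):
    return 10 if w % 2 == 0 else 0

# Formula (2): sum over the *domain* Omega, weighting X(w) by f(w).
e_domain = sum(X(w) * p[w] for w in omega)

# Formula (1): first compute the distribution of X on the *target space*
# (the pushforward of p under X), then sum x * P(X = x) over values x.
dist = defaultdict(Fraction)
for w in omega:
    dist[X(w)] += p[w]
e_target = sum(x * px for x, px in dist.items())

print(e_domain, e_target)  # both equal 5
```

The point of the sketch is that formula (1) only ever sees `dist` (the distribution over values), while formula (2) works directly with the map from outcomes to values; Omega genuinely disappears in the first computation.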
Stuff like this makes me feel like almost no-one thinks for themselves unless they have to, even in math. I'm interested in whether or not fellow LW-ians share my intuition here.
There seems to be a similar thing going on in linear algebra, where everyone teaches concepts based on the determinant, even though doing it differently makes them far easier. But there it feels more understandable, since you do need to be quite good to see that. This case here just feels like people aren't even trying to optimize for readability.
When I learned probability, we were basically presented with a random variable X, told that it could occupy a bunch of different values, and asked to calculate what the average/expected value is based on the frequencies of what those different values could be. So you start with a question like "we roll a die. here are all the values it could be and they all happen one-sixth of the time. Add each value multiplied by one-sixth to each other to get the expected value." This framing naturally leads to definition (1) when you expand to continuous random variables.
On one hand, this makes definition (1) really intuitive and easy to learn. After all, if you frame the questions around the target space, you'll frame your understanding around the target space. Frankly, when I read your comment, my immediate reaction was "what on earth is a probability space? we're just summing up the ways the target variable can happen, and claiming that it's a map from some other space to the target variable is just excessive!" When you're taught about target space, you don't think about probability space.
On the other hand, definition (2) is really useful in a lot of (usually more niche) areas. If you don't contextualize X as a map from a space of possible outcomes to a real number, things like integrals using Maxwell–Boltzmann statistics won't make any sense. To someone who does, you're just adding up all the possibilities weighted by a given value.
When I learned probability, we were basically presented with a random variable X, told that it could occupy a bunch of different values, and asked to calculate what the average/expected value is based on the frequencies of what those different values could be. So you start with a question like "we roll a die. here are all the values it could be and they all happen one-sixth of the time. Add each value multiplied by one-sixth to each other to get the expected value." This framing naturally leads to definition (1) when you expand to continuous random variables.
That's a strong steelman of the status quo in cases where random variables are introduced as you describe. I'll concede that (1) is fine in this case. I'm not sure it applies to cases (lectures) where probability spaces are formally introduced – but maybe it does; maybe other people still don't think of RVs as functions, even if that's what they technically are.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.
The Open Thread sequence is here.