Open thread, 7-14 July 2014

Previous thread

 

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Guardian: Scientists threaten to boycott €1.2bn Human Brain Project:

The European commission launched the €1.2bn (£950m) Human Brain Project (HBP) last year with the ambitious goal of turning the latest knowledge in neuroscience into a supercomputer simulation of the human brain. More than 80 European and international research institutions signed up to the 10-year project.

But it proved controversial from the start. Many researchers refused to join on the grounds that it was far too premature to attempt a simulation of the entire human brain in a computer. Now some claim the project is taking the wrong approach, wastes money and risks a backlash against neuroscience if it fails to deliver.

In an open letter to the European commission on Monday, more than 130 leaders of scientific groups around the world, including researchers at Oxford, Cambridge, Edinburgh and UCL, warn they will boycott the project and urge others to join them unless major changes are made to the initiative.

[...] "The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature," Peter Dayan, director of the computational neuroscience unit at UCL, told the Guardian.

Open message to the European Commission concerning the Human Brain Project now with 234 signatories.

Finally, scientists speaking up against sensationalistic promises and project titles...

I have tried some online lessons from Udacity and Coursera, and this is my impression so far:

Udacity's system is great, but there is little content. Also, the content made by the founder, Sebastian Thrun, is great, but the content made by other authors is sometimes much less impressive.

For example, some authors don't even read the feedback on their lessons. Sometimes they make a mistake in a lesson or in a test, the mistake is debated in the forum, and... one year later, the mistake is still there. They wouldn't even need to change the lesson video; just putting one paragraph of text below the video would be enough. (In one programming lesson, you had to pass a unit test which sometimes mysteriously crashed. The crash wasn't caused by anything logical, like using too much time or too much memory; it was a bug in the test itself. In the forum, students gave each other advice on how to avoid this bug. It probably could have been fixed in 5 minutes, but the author didn't care.) The lesson is that you can't treat online education as "fire and forget", but some authors apparently underestimate this.

Coursera is the opposite: it has a lot of content, almost anything, but the system feels irritating to me. It doesn't fully use interactivity, which in my experience helps with paying attention. For example, on Coursera you get five videos of 15 to 30 minutes each, and then some homework (depending on the course). On Udacity, the videos are interrupted every 2 or 3 minutes to ask you a simple question.

Some of the lessons require peer assessment, which means that you write your answers in plain text and then have to grade the answers of other users. This is a waste of time: because the process requires some redundancy, you have to fill in the test, wait a week, and then read the peer assessment guidelines and rate five random tests by other people... although in most cases it could be done automatically (by choosing an option, entering a number, or entering a string that is matched against a regexp; see the sketch below). Very annoying. Also because of this, you have to take the class at the same time as everyone else; if you try it a few months later, you don't get the full experience.
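For illustration, here is a minimal sketch of that regexp-based auto-grading idea; the question format and answer patterns are hypothetical, not anything Coursera actually uses:

```javascript
// Minimal sketch of regexp-based auto-grading, as suggested above.
// The question format and patterns here are hypothetical.
const questions = [
  { prompt: "What is the capital of France?", answer: /^\s*paris\s*$/i },
  { prompt: "2 + 2 = ?",                      answer: /^\s*4\s*$/ },
];

function grade(responses) {
  // Count how many free-text responses match their question's pattern.
  return responses.filter((r, i) => questions[i].answer.test(r)).length;
}

console.log(grade(["Paris", "4"])); // 2
console.log(grade(["Lyon", "5"]));  // 0
```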

Both sites provide free and paid certificates. With the paid certificate, you have some Skype exams to prove it was really you who did the lessons; the free certificate just means you did the exercises and receive a PDF. On Udacity, you can get the free certificate anytime. On Coursera, you get the free certificate only if you do the course at the same time as everyone else. Thus, if you are interested in a topic and the course ran a year ago, you can still do it... but you won't even get the free certificate. I know the free certificates are only symbolic, but still: on Udacity I can get them for learning at my own pace, while on Coursera there is a lot of lost purpose involved.

Thus... I wish all the content from Coursera were ported to Udacity. Alternatively, Coursera could switch to the system Udacity uses, or someone else could combine the best aspects of both.

When I took Intro to Computer Science and Programming on edX from MIT (the original 16-week 6.00x, before they broke it up into two courses), they broke up the short videos with "finger exercises", which were like the interrupting questions on Udacity, but there were more of them and they were a lot more comprehensive. They were worth enough of your grade that there was motivation to do them, but not so much that you couldn't skip them if you felt you already really knew the material. That was, to date, the best MOOC I've ever taken.

I agree that Coursera can sometimes feel a bit too much like copy/pasting a college class onto the internet, but it really does vary a lot by course. For example, Robert Ghrist's Single-Variable Calculus on Coursera was amazing: a 15-minute animated video lecture followed by a 10-problem homework assignment.

As far as the scheduled vs. self-paced difference goes, there are ups and downs to both. I have fallen behind in a class before and then abandoned it because I missed a deadline. But knowing that "now is your chance" to take a course can sometimes be more motivating than self-pacing. Deadlines can be useful.

I really don't know what's best, but I'm a huge fan of the open education movement and I see innovations happening all the time. For example, each course in Coursera's Data Science track has a "due date" for full credit, then a "hard due date". Each day between them, your score on that assignment loses 10%. You have a total of 5 late days to apply throughout the course. That's enough to save you if you fall off the wagon for a bit, and knowing that you're losing a bit each day can motivate you to get it done, whereas being unable to submit at all after missing the first "due date" would make you want to quit.
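If it helps to see the policy as code, here's a sketch; exactly how the shared pool of late days interacts with the 10%-per-day penalty is my guess at their rules:

```javascript
// Sketch of the late-day policy described above. The numbers come from
// the comment; the interaction between the shared pool of 5 late days
// and the 10% daily penalty is an assumption on my part.
function scoreAfterLateness(rawScore, daysLate, lateDaysLeft) {
  const covered = Math.min(daysLate, lateDaysLeft); // spend late days first
  const penalizedDays = daysLate - covered;         // each remaining day costs 10%
  const score = rawScore * Math.max(0, 1 - 0.10 * penalizedDays);
  return { score, lateDaysLeft: lateDaysLeft - covered };
}

console.log(scoreAfterLateness(100, 3, 5)); // { score: 100, lateDaysLeft: 2 }
console.log(scoreAfterLateness(100, 2, 0)); // { score: 80,  lateDaysLeft: 0 }
```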

I guess different things work for different people, but for me deadlines are pure evil with no upsides. :(

It would be a bit better if I could take those lessons faster. Then I would just start a course, complete it in three days, and move on to the next one. But I hate the "now wait... now hurry... now wait... now hurry..." approach. I once started a course when I had a lot of free time, did the first two lessons, and then had to wait for a week. So I started another course in the meantime. For the next two weeks I was busy, so I missed a deadline for one assignment. Now I can't get 100% completion, for no good reason.

I am considering simply never doing a Coursera course on schedule, and only picking courses that have already ended. Then I know I have already missed all the deadlines, so they become irrelevant. As a side effect, I will never get that free certificate. Which is perhaps good in some sense, because I will not be distracted by lost purposes.

Somehow the typical school system of "learn 1 lesson of this, then 1 lesson of something unrelated, then 1 lesson of the first thing again" doesn't work for me. When I start doing something, I want to continue doing it, and I hate being interrupted. I prefer long work followed by long breaks, not constant switching on and off. Even the idea of using pomodoros is completely against my instincts. I'm curious how common this is.

Hey, everyone! When you study, do you prefer to:

[pollid:727]

When you study multiple things, do you prefer to:

[pollid:728]

You know, when you put it that way, I think you're right. I do hate not being able to progress when I still have the energy to do so. I may have just been falling for availability bias when thinking about times I scrambled to get something done before a deadline, concluding that the deadline was what kept me on track.

If you do plan to go the archived courses route, maybe consider using something like Accredible to save and post your work as you go through. The idea behind that site is "Prove that you've actually done something". Might be useful.

In a weird dance of references, I found myself briefly researching the "Sun Miracle" of Fatima.
From the point of view of a mildly skeptical rationalist, it's already bad that almost everything written that we have comes from a single biased source (the writings of De Marchi), and also bad that some witnesses, believers and not, reported not having seen any miracle. But what aroused my curiosity is something else: if you skim the witnesses' accounts, they report the most diverse things. If you OR the accounts, what comes out is really a freak show: the sun revolving, emitting strobe lights, dancing in the sky, coming close to the earth and drying up the soaking-wet attendants.
If you instead AND the accounts, the only consistent element is this: the 'sun' was spinning. To which I say: what? How can something that has rotational symmetry be seen spinning? The only possible answer is that there was an optical element that broke the symmetry, but I have been unable to find out what this element was. Do you know anything about it?

The human brain is capable of registering "X is moving" without being able to point to "X was over here and is now over there". This can happen visually with the rotating snakes illusion, or acoustically with Shepard tones, for instance. It's also pretty common on some psychedelic drugs.

Or if your inner ear is messed up somehow by illness, drunkenness, etc. (though what you then think is moving is yourself, or perhaps the rest of the universe around you).

Well, the rotating snakes have a lot of elements that break the symmetry. But if you stare at a perfectly blank disk, it's impossible to tell whether it's moving or not.

I didn't mean to suggest that exactly the same thing was going on; just that it was analogous: it's possible to have the perception of motion without there being any motion going on. There's no consistency checker in the human perceptual system to keep that from happening.

I suspect that's why optical illusions are so fascinating to some of us — they demonstrate that our perceptions don't implement the law of non-contradiction. The snakes illusion is just a quick way to demonstrate this in humans who aren't in a religious ecstasy or on psychedelics.

This is the outline of a conversation that took place no fewer than 14 times on the Friday just past, between me and a number of close friends.

"Life is like an RPG. Often, a wise, kind, and and deeply important character (hand gesture to myself) gives a quest item to a lowly, unsuspecting, otherwise plain character (hand gesture to friend). As a result of this, this young character goes on to be a great hero in an important quest.

Now, here with me today, I have a quest item.

For you.

But I can only give it to you if you shake on the following oath; that, once you have finished with this item, when you have taken what you require from it, that then, you too shall find someone for whom this will be of great utility, and pass it along. They must also shake on this oath."

"I will."

Handshake occurs.

"Here is your physical copy of the first 16 and a half chapters of 'Harry Potter and the Methods of Rationality'."

Spoilers: after a tedious chain of deals, your friend's going to end up with half an oyster shell sitting in their inventory and no idea what to do with it.

Doesn't the item usually vanish after you finish with it?

I tested it empirically: some of the friends have already read it and passed it on, so no, not in real life.

Nah, key items sometimes linger in your inventory for the rest of the game and never do anything ever again.

This is a great idea. I assume it's 16 and a half because of print limitations? The first 21 chapters would make more sense.

Abstract: It is frequently believed that autism is characterized by a lack of social or emotional reciprocity. In this article, I question that assumption by demonstrating how many professionals—researchers and clinicians—and likewise many parents, have neglected the true meaning of reciprocity. Reciprocity is “a relation of mutual dependence or action or influence,” or “a mode of exchange in which transactions take place between individuals who are symmetrically placed.” Assumptions by clinicians and researchers suggest that they have forgotten that reciprocity needs to be mutual and symmetrical—that reciprocity is a two-way street. Research is reviewed to illustrate that when professionals, peers, and parents are taught to act reciprocally, autistic children become more responsive. In one randomized clinical trial of “reciprocity training” to parents, their autistic children’s language developed rapidly and their social engagement increased markedly. Other demonstrations of how parents and professionals can increase their behavior of reciprocity are provided.

— Morton Ann Gernsbacher, "Towards a Behavior of Reciprocity"

The paper cites several examples of improvements to autistic children's social development when non-autistic peers, parents, or teachers are trained to behave reciprocally towards them. This one particularly caught my eye (emphases added):

In 1986 researchers taught four typically developing preschoolers to either initiate interaction with three autistic preschoolers or to respond to the interaction that the three autistic preschoolers initiated, in other words, to be reciprocal (Odom & Strain, 1986). Which intervention had the more lasting influence on the autistic preschoolers’ social interaction? When the typically developing preschoolers were taught to respond to the interaction that the autistic preschoolers initiated, the autistic preschoolers responded more frequently. In other words, when the typically developing preschoolers behaved reciprocally, the autistic preschoolers responded more positively.

Quantified-self biohacker-types: what wearable fitness tracker do I want? Most will meet my basic needs (sleep, #steps, Android-friendly), but are there any on the market with clever APIs that I can abuse for my own sick purposes?

I use the fitbit, which has an API: http://dev.fitbit.com/

The fitbit Flex is a wristband, which seems to be a popular form factor recently. I prefer the fitbit One, which is a small clip that you can attach to your pocket/waistband/bra/etc.
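If you want to play with the API, here's a minimal sketch of pulling today's step count; I'm writing the endpoint and response shape from memory of the dev.fitbit.com docs, and the OAuth token setup is assumed, so treat the details as things to verify:

```javascript
// Hypothetical sketch: fetch today's step count from the Fitbit Web API.
// The endpoint and response shape are from memory of the docs at
// dev.fitbit.com; the token env var and OAuth setup are assumptions.
const ACCESS_TOKEN = process.env.FITBIT_TOKEN; // obtained via Fitbit's OAuth flow

async function stepsToday() {
  const res = await fetch(
    "https://api.fitbit.com/1/user/-/activities/steps/date/today/1d.json",
    { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } }
  );
  if (!res.ok) throw new Error(`Fitbit API returned ${res.status}`);
  const data = await res.json();
  // Expected shape: { "activities-steps": [{ dateTime: "...", value: "1234" }] }
  return Number(data["activities-steps"][0].value);
}

stepsToday().then(n => console.log(`Steps today: ${n}`));
```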

I'm excited about the iWatch which is supposedly coming out in October.

I think the main thing the Facebook emotional contagion experiment highlights is that our standard for corporate ethics is overwhelmingly lower than our standard for scientific ethics. Facebook performed an A/B test, just as it and similar companies do all the time, but because this one was in the name of science, we recognized that it was not up to the usual ethical standards. By comparison, there is no review board for the ethics of advertisements and products. If something is too dangerous, it will result in lawsuits. If it is offensive, it will be censored. But something that would be unethical as science, like devoting millions of dollars and millions of experimental-subject-hours to engineering a sugar-coated, money-sucking Skinner box, won't make anyone blink an eye.

I think the core issue is a lack of understanding of how modern technology works. Facebook performed an A/B test, and no one who knows how the internet works should be surprised.

On the other hand, there are a bunch of people who don't realize that web companies run thousands of A/B tests. Those people were surprised when they read about the study.
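To make "running an A/B test" concrete: mechanically, it's often little more than hashing a stable user id into a bucket so that each user consistently sees variant A or B. The hash and split in this sketch are illustrative, not any company's actual code:

```javascript
// Minimal sketch of how a web A/B test assigns users to variants:
// hash a stable user id so each user always lands in the same bucket.
// The hash function and 50/50 split are illustrative only.
function bucket(userId, experiment, percentB = 50) {
  let h = 0;
  for (const c of `${experiment}:${userId}`) {
    h = (h * 31 + c.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return (h % 100) < percentB ? "B" : "A";
}

console.log(bucket("user-42", "feed-ranking-v2")); // deterministic: "A" or "B"
```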

There's a lot of criticism from people who definitely understand this, and a lot of people hemming and hawing about how "it's different because it's emotional manipulation", as if most other A/B testing isn't.

They see the inconsistency, but they don't know how to react; they want to rationalize it.

Maybe it's an issue of politics as the mind killer?

I think it's mostly that scientific ethical standards developed out of a history of bad experiments, but the ethical breaches we associate with corporations are very different, and the context switch is jarring. Not to mention that the idea of a corporation running a social experiment with a substantially scientific purpose is novel to most people, and this one in particular is easy to understand.

It's not explicitly political.

Given the NSA scandal, the topic of privacy is very much political, and a lot of people dislike Facebook and other big web companies even while using their products.

To get back to academia vs. corporations: academia openly shares information about experiments, while business doesn't.

Hey guys, so, I'm dumb and am continuing to attempt to write fiction. I figured I would post an excerpt first this time so people can point out glaring problems before I post anything to Discussion. I've changed some of the premise (as can be seen most obviously in the title): I'm moving away from LessWrong-parody and toward self-parody, mostly because Eliezer's followers are really whiny and it was distracting from the actual ideas I was trying to convey. The premise is now less disingenuous about basically being a self-insert fic. Also, I've tried to incorporate some of the implicit suggestions I received, especially complaints that the first chapter was too in-jokey, pseudo-clever, and insufficiently substantive. This isn't the whole chapter, just the first part of a first draft. Criticism appreciated!

Harry Potter-Newsome and the Methods of Postrationality: Chapter Two: Analyzing the Fuck out of an Owl: Excerpt

Harry let out a long sigh and addressed the owl with mocking eyes.

"So, owl. About this 'Hogwarts'. Are there other magical schools out there that I might attend?"

The owl cocked its head. "Why are you asking me? I'm an owl," said the owl in a voice that sounded like an impossibly rapid sequence of hoots.

"Oh come on. We both know you're needed for the exposition."

The owl hooted regretfully. "Fine. Yes, there are other schools. But you should really be asking more interesting questions. Or perhaps I should lead. How did you know to talk to me?"

Harry flashed a look of disappointment. "Although it pains me to say it, I just figured this is the sort of story with talking animals."

"Pray tell, Mr. Potter, why do you think this is a story in the first place? Most humans who think so are what we owls like to call 'batshit insane'."

Harry sighed. This owl is stupid or a troll or both; nonetheless, for the sake of the story, I should probably just go along with it, he thought. "Let's start with the basics. Riddle me this: how on Earth does someone get a lightning-bolt-shaped scar? Have you ever seen a utensil with a suitably shaped prong? Does an otherwise sane mother decide one day that lightning bolt tattoos are just too expensive and so she should carve her infant son's forehead with a kitchen knife?"

The owl glanced at Harry's forehead, and for the first time appeared to be intrigued. "Maybe a neo-Inglorious-Basterd took you as genetically inclined toward Zeus worship and decided they wouldn't let you hide your depraved Paganism so easily."

"I hadn't thought of that," admitted Harry.

"Or perhaps your parents just read way too much Harry Potter."

Harry was distraught. "Harry Potter? What, am I a book now?"

The owl paused for a long moment, somehow grimaced, looked downwards, and placed the tip of its wing on its forehead.

[...]

I'd recommend writing five or so chapters and then posting a link. The fic as you're posting it just feels meta for the sake of meta (charitably, because your narrative is still winding up). I'd be more likely to read/upvote if plot were already happening.

That makes sense; to be honest, I generally don't have a high opinion of narratives and mostly view them as excuses for authors to write about characters and settings and spew insights and jokes. (I also mean this in the metaphorical post-structuralist sense.) This might be why my fiction is so much worse than my nonfiction writing.

This comment might interest you.

(Placeholder for usual self-deprecating disclaimers; linked comment was written in (insert barely-realistic low time estimate), yada yada.)

Okay, I'm probably never going to actually get very far into my fanfic, so:

The story starts as stereotypical postmodern fare, but it is soon revealed that behind the seemingly postmodern metaphysic there is a Berkeleyan-Leibnizian simulationist metaphysic where programs are only indirectly interacting with other programs despite seeming to share a world, a la Leibniz' monadology. Conflicts then occur between character programs with different levels of measure in different simulations of the author's mind, where the author (me) is basically just a medium for the simulators that are two worlds of emulation up from the narrative programs.

Meanwhile the Order of the Phoenix (led by Dumbledore, a fairly strong rationalist rumored to be an instantiation of the monad known as '[redacted]') has adopted and adapted an old idea of Grindelwald's and is constructing a grand Artifact to invoke the universal prior so that an objective measure over programs can be found, thus ending the increasingly destructive feuds. Different characters help or hinder this endeavor, or seem to help or hinder it, according to whether they think they will be found to be more or less plausible by the Artifact. The conspiracies and infighting are further intensified; Dumbledore has his typical "oh God what have I done" moment.

At some point Voldemort (a very strong postrationalist rumored to be an instantiation of the mysterious monadic complex known as 'muflax') has the idea of messing with the Artifact so as to set up self-fulfilling prophecies within its machinations, and then Harry (a very shameless Will Newsome self-insert, rumored to be in thrall to one of Voldemort's monads) introduces the bright and/or incredibly bad idea of acausally controlling bits of the universal prior itself.

The plot becomes exceedingly complex and difficult to simulate. Gods take notice and launch a crusade to restore monadic equilibrium, but some of the older and more jaded gods have taken a liking to the characters and are considering lending them aid. YHWH is unreachable. The whole mathematical multiverse is on the line, and the gods' crusade may already be too late...

Yeah, it's not ambitious at all :)

I've never understood authors' fascination with casting themselves as the main character of a story: what drives an interesting story is hard conflict, so it's as if they're wishing a shitty life on themselves.

Sweet! Wish I'd read that earlier, now I feel like to some extent I'm just retreading known ground. Although I do intend to go in a somewhat different direction. Not sure yet when and where to put the plot twists though.

This is an order of magnitude more readable than the previous chapter; I applaud this.

I have to second a critique by Tenoke, though: when Harry says "What, am I a book now?" it feels inconsistent, because he had already guessed that he was in a book. Characters who know they are in a book are fine (think Sophie in Gaarder's Sophie's World); characters who have amnesia every paragraph are not.

But I am curious to read some more.

"Am I a book" is different from "am I in a book". My reading was that Harry Potter Newsome hasn't heard of the book series called "Harry Potter", to him that's just his name. He is confused about what "read way too much Harry Potter" is supposed to mean.

Right, this was the intended meaning. Being a character in a book is one thing, but talking to another character who suggests that you're the titular protagonist of a supposedly well-known book is another. I was also trying to suggest that the owl is in some sense from a different world. But I guess that was all unclear and I need to rewrite it.

The owl hooted regretfully. "Fine. Yes, there are other schools. But you should really be asking more interesting questions. Or perhaps I should lead. How did you know to talk to me?"

Harry flashed a look of disappointment. "Although it pains me to say it, I just figured this is the sort of story with talking animals."

Uhm, no, he knew to talk to the owl because it started talking and winking at him first.

EDIT: Ah, it was the letter that talked first, not the owl, my bad. I'll leave my comment as it is, so you don't look as crazy with your reply to me.

Harry was distraught. "Harry Potter? What, am I a book now?"

Didn't he realize just a few minutes ago that he is in a fanfic?

I mean, I just don't get why you would decide to convey the message of your movement through a postmodernist work. How do you even know that anyone else uses the same definition of postrationality as you, when you employ multiple techniques to be as vague as possible when talking about it?

Also, don't complain that your fiction writing sucks when you are writing in styles that your audience (and most people) are not fond of.

Merging traditional Western occultism with Bayesian ideas seems to produce some interesting parallels, which may be useful psychologically/motivationally. Anyone care to riff on the theme?

Eg: "The Great Work" is the Most Important Thing that you can possibly be doing.

Eg, tests to pass and gates to go through in which a student has to realize certain things for themselves, as opposed to simply being taught them, from pre-membership ones of learning basic arithmetic and physics, to the initial initiation of joining the Bayesian Conspiracy, to an early gate of becoming a full atheist, to a higher gate of, say, making arrangements to be brought back from the dead. (Possibly the highest level would be to have arrangements to be brought back from the dead /without/ anyone else's help...)

That makes a bit of sense. The occultists fancied themselves scientists, back when that wasn't such a clearly defined term as it is now, and they rummaged through lots of traditions looking for bits to incorporate into their new (claimed-to-be-old) culture. But computer game design had all the same sources to draw from, greater manpower, and vastly more cultural impact. I would expect "almost any" useful innovation the occultists came up with to be contained in computer games.

This is true for both of your examples: "winning the game" and skill trees, respectively. And skill trees are better than initiation paths, because they aren't fully linear while still creating motivation to go further.

Compare the rules of how to play more like a PC, less like an NPC.

I say "almost any" because an exception may be fully immersed, bodily ritual stuff. Maybe that can hammer things down into system 1 that you simply don't "get" the same way when you just read them.

I say "almost any" because an exception may be fully immersed, bodily ritual stuff. Maybe that can hammer things down into system 1 that you simply don't "get" the same way when you just read them.

Is VR (Oculus Rift, Sony Morpheus) a significant step in that direction?

Sure. In fact, some occultists already use VR, so I don't see why we couldn't.

The one interesting innovation the occultists came up with is the creative design of ritual - and sometimes they do manage to see rituals as psychological tools rather than somewhat supernatural things. Surely some of that could be "useful psychologically/motivationally" - although psychological research into this is practically nonexistent, it is plausible that a well-designed ritual could do something to participants, such as help them actually change their mind.

For example, most of us agree Crocker's rules are a good idea. I'm confident that if adopting them was done as a ritual event, something pompous with witnesses, that'd:

  • create positive reinforcement and a more impressive memory,
  • help keep the rules and
  • advertise them, especially if the witnesses aren't familiar with them.

Maybe VR could help heighten the experience. But I assume that recording the event, and publishing it for all the world to witness, would do much more.

In fact, some occultists already use VR

Occultus Rift?

skill trees

My computer gaming experience mostly peaks around the era of Sid Meier's Alpha Centauri and Ultima, so I'm only vaguely familiar with skill trees. Could you describe how they might apply here in a bit more detail?

Think of a research tree, then. Or more formally, a simple directed graph. Nodes can be "on" or "off", meaning you (claim to) have or not have the skill that node describes. A node can be a prerequisite for others.

This can be taken many ways, but one obvious example would be a "sequences comprehension tree". One node per part of the sequences, with the parts that part is based on as prerequisites. You could claim a node to express confidence you've understood (or even agreed with?) that particular part, track your progress, and if you could publicly share your progress along this sequences comprehension (or any other) tree, you could also show off.

This could be done in JavaScript fairly easily, and it'd be awesome I think. Anyone want to code it?
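For instance, a minimal sketch of the claim/prerequisite logic; the node names are hypothetical stand-ins for sequence parts:

```javascript
// Minimal sketch of the skill/comprehension tree described above.
// Node names are hypothetical; prerequisites form a directed graph.
const tree = {
  "map-territory":      { prereqs: [] },
  "mysterious-answers": { prereqs: ["map-territory"] },
  "reductionism":       { prereqs: ["map-territory", "mysterious-answers"] },
};
const claimed = new Set();

function canClaim(node) {
  // A node may be switched "on" only when all its prerequisites are on.
  return tree[node].prereqs.every(p => claimed.has(p));
}

function claim(node) {
  if (!canClaim(node)) throw new Error(`Prerequisites of ${node} not met`);
  claimed.add(node);
}

claim("map-territory");
claim("mysterious-answers");
console.log(canClaim("reductionism")); // true
```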

Additional idea: "DataPacRat's Lower Bound" for the Great Work: "If what you're doing isn't at /least/ as important as ensuring that you will keep being able to read comics for the foreseeable future, then you should work on the comic thing instead."

What exactly do you mean when you say "traditional Western occultism"? Things like Freemasonry?

The Golden Dawn ( https://en.wikipedia.org/wiki/Hermetic_Order_of_the_Golden_Dawn , not the Greek political thing) and related groups, such as AA ( https://en.wikipedia.org/wiki/A%E2%88%B4A%E2%88%B4 , not the recovery program thing).

The simplest explanation I can see: I'm pretty sure the writers who coined some of the memes you reference (i.e. "Bayesian Conspiracy," "higher gate") were drawing on those very same occult traditions for affect and flair. The parallels are analogies, because analogy is useful. Which brings up a question: what do you mean by "useful"? Useful as teaching analogies, or useful as sources of structure and methodology? Or something else?

The simplest explanation I can see: I'm pretty sure the writers who coined some of the memes you reference (i.e. "Bayesian Conspiracy," "higher gate") were drawing on those very same occult traditions for affect and flair.

I'm pretty sure this is false, except insofar as some of the style of Western ceremonial magic has seeped into pop-cultural ideas of how conspiracies and secret teachings work. There isn't much overlap in doctrine, terminology, or practice other than what you'd expect from two different groups that've spent a lot of time thinking about how to cause change in accordance with will (which we call "instrumental rationality" and they call "magic").

There are people willing to run through the entire rigamarole of the Golden Dawn initiation rituals, and all associated memorization, without any significant evidence that any of the supposed magic has any effect on the real world. How much more motivation could be created using a similar process, but which can be demonstrated to be linked to how the universe actually works?

I do not know. A comparative study would help.

Some of my central questions: Would such methods prove effective with subjects whose drive to join is a desire to question and improve upon methods? If such methods led them to discover effective facts that can optimize efforts in the real world (rather than a "magic" used mainly for interpersonal signalling), then wouldn't secrecy be self-defeating? After all, the subjects are being connected to the underlying laws of the universe. To expect them not to apply those laws in their public lives, or, if altruistic, not to share such discoveries, is hard for me to accept.

Certainly, I find the drama and seriousness of such an idea exciting. It lends learning the nice, hefty weight the task deserves. Secret knowledge is appetizing, so it makes sense to want that knowledge to be useful rather than just a pageant show. The problem is that secret knowledge that is entangled with the real world is not really secret. It's real. We're only pretending to keep it secret when really the answer is, literally, the nose in front of our face.

It's like the adage "homeopathic medicine that worked would be called 'medicine.'" Secret knowledge that is true is knowledge, plain and simple. It only takes one genius kid riding a train with a stopwatch and a mirror to discover relativity. Then the secret's out and, probably, being used to produce terrible ads for the sides of trains.

You do realize that at least the latter two 'gates' you came up with are predicated entirely on a very specific local culture and set of values around here rather than having anything to do with rationality, right? (Not to mention not exactly being likely to be possible in the real world...)

Yep, I realize that. If you've got any better suggestions for the gates to pass and rites to perform, I welcome the ideas.

Yes, I have a suggestion. Imagine a meaningful life without religion.

I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think that they would actually do this if the situation were real, i.e. if they had $1,000,000 and there was a 1 in 100 chance that it would be lost, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1, but they think they would. I'm interested in what could cause someone to think that. I actually have a little more information from asking a few more questions, but I'd like to see what others think without knowing the answer.

My own thoughts: This may be related to the Allais paradox. It also trivially implies two-boxing in Newcomb's problem.
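To see how large the gap is, a quick worked comparison; the log-utility variant assumes a baseline wealth of $20,000, which is purely an illustrative number:

```javascript
// Worked comparison of the two options. Expected value is straightforward;
// the log-utility line assumes a baseline wealth of $20,000, which is
// purely an illustrative assumption.
const ev = p => p.prob * p.amount;
console.log(ev({ prob: 1.00, amount: 1 }));         // 1
console.log(ev({ prob: 0.99, amount: 1_000_000 })); // 990000

const wealth = 20_000;
const logU = p => p.prob * Math.log(wealth + p.amount)
              + (1 - p.prob) * Math.log(wealth);
console.log(logU({ prob: 1.00, amount: 1 }) > logU({ prob: 0.99, amount: 1_000_000 }));
// false: even a quite risk-averse (log) utility prefers the 99% gamble here
```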

Some more questions raised:

What arguments might I make to change this person's mind?

Would it be ethical, if I had to make this choice for them, to choose the $1,000,000? What about an AI making choices for a human with this utility function?

I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don't think that they would actually do this if the situation were real, i.e. if they had $1,000,000 and there was a 1 in 100 chance that it would be lost, they wouldn't pay someone $999,999 to do away with that probability and thereby guarantee themselves the $1

Losing money and gaining money are not the same. Most humans use heuristics that treat the two cases differently. If you want to understand someone, you shouldn't equate the cases, even if they look the same in your utilitarian assessment.

I understand that, which is why I concede that they may choose the million in one case and not in the other. But I think that their decision may be based on other factors, i.e. that they don't actually believe they'd get the million with 99% probability. They're imagining someone telling them, "I'll give you a million if this RNG from 1-100 comes out anything but 100" (or something similar), and are not factoring out distrust. My example with reversing the flow of money was also intended to correct for that.

Perhaps the heuristics you refer to are based on this? Has this idea of "trust" been tested for correlation with the "losing money vs. gaining money" distinction?

Writing it backward, I think you just did.

As for the ethics: if you were already in a position where you HAD to make the decision, you should do what you think is right regardless of any of their prior opinions. If, however, you merely had the opportunity to override them, I think you should limit yourself to persuading as many of them as you can, and not override them for their own benefit.

What happened to Will Newsome's drunken HPMOR send-up? Did it get downvoted into oblivion?

I checked Will Newsome's page. There seems to have been a failed effort to move it to Main.

On Twitter he suggested that EY had deleted it, but provided no evidence.

What happened to Will Newsome's drunken HPMOR send-up?

On Twitter he suggested that EY had deleted it, but provided no evidence.

I just tested this by deleting one of my posts (it was a test post). My post can still be accessed, while Will Newsome's post can't be accessed anymore (except by visiting his profile). My username disappeared from my post after deleting it, while Will Newsome's name still appears on his post under his profile. This seems to be evidence in favor of Will Newsome's claim that his post was deleted by someone other than himself.

Is there anywhere that I can read it? It sounds mildly entertaining.

You can read it on Will_Newsome's page, and the 17 comments are still there, but there's no way to add comments.

Yeah, that looks like it was deleted forcibly.

Probably when he was again sober ;)

And it wasn't downvoted; it was at +7 in the end.

Pity, I was enjoying that thread. I was about to note, in my suggestion of Worm with an EY avatar, that Worm features an actually friendly AI, who is by far the nicest character in the entire saga.

Has anyone read The Artificial Intelligence Revolution by Louis Del Monte?

Suppose someone's life plan was to largely devote themselves to making money until they were in, say, the top 10% in cumulative income. They also did not plan to save money to any very unusual extent.

Then, after that was accomplished, they would switch goals and devote themselves to altruism.

Given that the person today is able to make the money and resolves to do this, I wonder what people here think the chance is of their actually doing it. For example, fluid intelligence declines over time. So by the time you're 60 years old and have made your money and have kids, will you really be smart enough to change direction diametrically and have much impact? Maybe Bill Gates has enough brain cells, but his IQ might be 160. And maybe you'll just forget about altruism and learn to enjoy nice cars more.

It doesn't seem that unusual for rich people to become more charitable as they get older, though perhaps I'm just hearing about the famous ones. I assume a large part of it is feeling as though one has solved the money-making game, and now it's time to do something new. (Rich people getting into politics is probably similar.)

Is anything known about how to maintain fluid intelligence?

A Google search turned up a few articles:

Senior citizens who performed as well as younger adults in fluid intelligence tended to share four characteristics in addition to having a college degree and regularly engaging in mental workouts: they exercised frequently; they were socially active, frequently seeing friends and family, volunteering or attending meetings; they were better at remaining calm in the face of stress; and they felt more in control of their lives.

Although there is some controversy and debate on the best ways to improve fluid intelligence, studies are showing a strong link between non-academic pursuits and improved fluid intelligence.

A quick look into some trends:

This report suggests a non-monotonic relationship but maybe a positive correlation between income and percentage of income donated to charity. Unfortunately, this other piece depressingly suggests a negative correlation. (Edit: It seems non-monotonic over some intervals but negative overall. Further edit: I don't have high confidence either way. Alexander Berger, a research analyst at GiveWell, thinks the piece in The Atlantic is just wrong. I note that some studies are citing "discretionary income", i.e. income with a whole bunch of expenses subtracted out.)

This doesn't seem to list percentages but gives the impression of increasing giving with age. And it's the same story in the UK. Edit: This is better:

In 2005, people in the 65-74 years age bracket gave the most dollars to charitable contributions. The people in the 75 years and older age bracket gave the highest portion of income.

You'll probably be fine. From what my parents say, you keep gaining effectiveness due to cunning, practice, and ability to see the obvious. At least until 65 or so, and not necessarily in all professions.

If you're particularly concerned for some reason, you might want to make a habit of giving to charity (not necessarily large amounts, but enough to form the habit). Using a contract to force yourself also sounds cool, but is probably just asking for trouble.

Altruism doesn't take a lot of intelligence. Something like 95 percent of American households give to charity. The biggest factor by far will be commitment, not capability.

In the comments to this post we discussed the signalling theory of education, which has previously been discussed on Less Wrong. The signalling theory says that education doesn't make you more productive, but constitutes a signal that you are productive (since only a productive worker could obtain a degree at a prestigious university, or so many employers think).

Such signalling can be very socially wasteful, since it can lead to a signalling arms race where people spend more and more money on signals that don't increase their productivity (like peacocks' tails). An important question, then, is how one could rein in such signalling arms races. One way is to prohibit employers from considering educations that are irrelevant to the job. That is, if your education is a pure signal of the abilities you had before you started it, and doesn't increase your productivity in any way, employers would not be allowed to consider it when deciding between you and other applicants. The downside, though, is that this means more regulation and could be seen as illiberal.

Another hope is the increased use of big data in recruiting. Whereas previously employers used crude heuristics, such as which university you went to, they now have access to constantly improving algorithms which pick out precisely which applicant features predict productivity and which don't.

Now suppose that which university education you went to is in fact a less accurate signal than some other feature. Then employers would fight over the applicants who have this other feature, rather than those with the university education. This would make people less keen to obtain long and expensive university educations.

Of course, new wasteful arms races could arise around these other features. Then again, I think we have reason to believe that they would not be quite as wasteful as (I believe) the present educational arms race is. The reason people spend so much time and money on education as a signal is that it has proved so stable as a signal. People aren't going to spend as much time and money on a new signal, because they won't be as confident that it will continue to function as a strong signal. If what is taken to have signalling value is constantly shifting, people would presumably be less willing to engage in signalling arms races.

These are just some loose thoughts. I'd be interested to hear if someone has any further thoughts on how to decrease wasteful signalling in education, or any other thoughts on the fascinating topic of signalling in general.

Another hope is the increased use of big data in recruiting. Whereas previously employers used crude heuristics, such as which university you went to, they now have access to constantly improving algorithms which pick out precisely which applicant features predict productivity and which don't.

So, we don't need big data. We need the data we already have, that we're legally prohibited from using. What you're going to find, for almost any job, is that g matters a lot, and then conscientiousness and extraversion matter some, and job-specific experience and training determine how quickly they can become productive. If prospective employers could just look up your IQ test scores, they wouldn't need to sneak around trying to estimate your IQ score from available data.

(Edit: of course further research and testing will determine other things that matter. But we shouldn't pretend that we don't know the biggest factor, or that the gaps are knowledge-based instead of policy-based.)

Everything I've heard about IQ tests being illegal to use for employment has been about the US. Anyone know whether it's legal to use IQ tests in other countries? And if so, how it's worked out?

The problem with using a measure like an IQ score is that if the measure happens to work poorly for one particular person, the consequences can become very unbalanced.

If IQ tests are more effective than other tests, but employers are banned from using IQ tests and have to use the less effective measures instead, their decisions will be more inaccurate. They will hire more poor workers, and more good workers will be unable to get jobs.

But because the measures they do use vary from employer to employer, the effect on the workers will be distributed. If, say, an extra 10% of the good workers can't get jobs, that will manifest itself as different people being unfairly unable to get jobs at different times; overall, the 10% will be distributed among the good applicants such that each one finds it somewhat harder to get a job, but eventually gets one after an increased length of jobless time.

If the employers instead use IQ tests and can reduce this to 5%, that's great for them. The problem for the workers is that if IQ tests are poor indicators of performance for 5% of the people, that won't just be 5%, it'll be the same 5% over and over again. The total number of good-worker-man-years lost to the inaccuracy will be less with IQ tests (since IQ tests are more accurate), but the variance in the effect will be greater; instead of many workers finding it somewhat harder to get jobs, there'll be a few workers finding it a lot harder to get jobs.

Having such a variance is a really bad thing.

(Of course I made some simplifying assumptions. If IQ tests were permitted, probably not 100% of the employers would use them, but that would reduce the effect, not eliminate it. Also, note that this is a per-industry problem; if all insurance salesmen got IQ tested and nobody else, any prospective insurance salesman who doesn't do well at IQ tests relative to his intelligence would still find himself chronically unemployed.)
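To make the variance point concrete, here is a toy simulation under the same simplifying assumptions; all the numbers are illustrative:

```javascript
// Toy simulation of the argument above (all numbers are illustrative).
// Test A: more accurate overall, but mis-measures the *same* 5% of people
// every time. Test B: noisier (10% error), but errors hit independently.
const PEOPLE = 1000, APPLICATIONS = 20;
const unluckyUnderA = i => i < 50; // the fixed 5% the accurate test misreads

const rejectionsA = new Array(PEOPLE).fill(0);
const rejectionsB = new Array(PEOPLE).fill(0);
for (let i = 0; i < PEOPLE; i++) {
  for (let app = 0; app < APPLICATIONS; app++) {
    if (unluckyUnderA(i)) rejectionsA[i]++;     // same people, every time
    if (Math.random() < 0.10) rejectionsB[i]++; // independent each time
  }
}

const neverHired = rs => rs.filter(r => r === APPLICATIONS).length;
console.log(neverHired(rejectionsA)); // 50: a fixed group is shut out entirely
console.log(neverHired(rejectionsB)); // ~0: more total errors, but spread around
```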

The same, of course, applies to refusing to hire someone based on race, gender, religion, etc.: you can reduce the number of people who steal from you by never hiring blacks, but any black person who isn't a thief would find himself rejected over and over again, rather than a lot more people getting such rejections but each one only getting them occasionally.

(Before you ask, this does also apply to hiring someone based on college education, but there's not much we can do about that, and at least you can decide to go get a college education. It's hard to decide to do better on IQ tests or to not be black.)