If you want people to ask you stuff reply to this post with a comment to that effect.

More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.

If you want to talk about this post you can reply to my comment below that says "Discussion of this post goes here.", or not.


I've been getting an increasing number of interview requests from reporters and book writers (stemming from my connection with Bitcoin). In the interest of being lazy, instead of doing more private interviews I figure I'd create an entry here and let them ask questions publicly, so I can avoid having to answer redundant questions. I'm also open to answering any other questions of LW interest here.

In preparation for this AMA, I've updated my script for retrieving and sorting all comments and posts of a given LW user, to also allow filtering by keyword or regex. So you can go to http://www.ibiblio.org/weidai/lesswrong_user.php, enter my username "Wei_Dai", then (when the page finishes loading) enter "bitcoin" in the "filter by" box to see all of my comments/posts that mention Bitcoin.
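The PHP script itself isn't reproduced here, but the filtering step it describes is easy to picture. Below is a minimal Python sketch of that behavior; the record layout (the `title`/`text` fields) is an assumption for illustration, not the script's actual data format:

```python
import re

def filter_items(items, pattern):
    """Keep only the comments/posts whose text matches the given keyword
    or regex, case-insensitively: the role of the "filter by" box."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [item for item in items if rx.search(item["text"])]

# Hypothetical records standing in for scraped LW comments/posts.
comments = [
    {"title": "b-money retrospective", "text": "Satoshi cited b-money in the Bitcoin paper."},
    {"title": "UDT note", "text": "Some thoughts on updateless decision theory."},
]

print([m["title"] for m in filter_items(comments, "bitcoin")])  # → ['b-money retrospective']
```

Since the pattern is compiled as a regex, plain keywords and expressions like `bitcoin|b-money` both work in the same box.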

I was surprised to see, both on your website and the white paper, that you are part of Mercatoria/ICTP (although your level of involvement isn't clear based on public information). My surprise is mainly because you have a couple of comments on LessWrong that discuss why you have declined to join MIRI as a research associate. You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.

My questions are:

  1. What are the reasons you decided to join or sign on as a co-author for Mercatoria/ICTP?
  2. More generally, how do you decide which organizations to associate with? Have you considered joining other organizations, starting your own organization, or recruiting contract workers/volunteers to work on things you consider important?

I seem to have missed this question when it was posted.

You have also (to my knowledge) never joined any other rationality-community or effective altruism-related organization in any capacity.

With the background that I have an independent source of income, and that it's costly to move my family (we're not near any major orgs) so I'd have to join in a remote capacity, here is the list of pros and cons I wrote down when an org (name redacted) tried to recruit me recently:

Pros:

  1. More access to internal discussions at X, private Google Docs, and discussions at other places (due to affiliation with X); people to discuss/collaborate with.
  2. Get my ideas taken more seriously (by some) due to X affiliation.
  3. Possibly become more productive through social pressure/expectation.


Cons:

  1. Feeling of obligation possibly making me less productive.
  2. As a personal cost, social pressure to be productive feeling unpleasant.
  3. Less likely to post/comment on various topics due to worry about damaging X's reputation (a lot of X people don't post much, maybe partly for this reason?).
  4. Get my ideas taken less seriously (by some) due to perception of bias (e.g., having a financial interest in people taking A

I received a PM from someone at a Portuguese newspaper who I think meant to post it publicly, so I'll respond publicly here.

You have contacted Satoshi Nakamoto. Does it seem to you only one person or a group of developers?

I think Satoshi is probably one person.

Does bitcoin seem cyberpunk project to you? In that case, can one expect they ever disclose identity?

Not sure what the first part of the question means. I don't expect Satoshi to voluntarily reveal his identity in the near future, but maybe he will do so eventually?

In that case, the libertarian motivation wouldn't be a risk to anyone who invest in the community? Like one this gets all formal and legal, it blow?

Don't understand this one either.

Is it important to know right now its origins? The author from the blog LikeinMirrorr, who says the most probable name is Nick Szabo, argues there is a concern on risk: if Szabo/ciberpunk is the source no risk, but it maybe this bubble - pump-and-dump scheme to enrich its original miners - or a project from federal goverment to track underground transactions. What is your view on this?

I'm pretty sure it's not a pump-and-dump scheme, or a government project.

I had the article jailbroken recently, and the relevant parts (I hope I got it right, my version has scrambled-up text) are: I actually meant to email you about this earlier, but is there any chance you could post those emails (you've made them half-public as it is, and Dustin Trammell posted his a while back) or elaborate on Nick not knowing C++?

I've been trying to defend Szabo against the accusations of being Satoshi*, but to be honest, his general secrecy has made it very hard for me to rule him out or come up with a solid defense. If, however, he doesn't even know C or C++, then that massively damages the claims he's Satoshi. (Oh, one could work around it by saying he worked with someone else who did know C/C++, but that's pretty strained and not many people seriously think Satoshi was a group.)

* on Reddit, HN, and places like http://blog.sethroberts.net/2014/03/11/nick-szabo-is-satoshi-nakamoto-the-inventor-of-bitcoin/ or https://likeinamirror.wordpress.com/2013/12/01/satoshi-nakamoto-is-probably-nick-szabo/ (my response) / http://likeinamirror.wordpress.com/2014/03/11/occams-razor-who-is-most-likely-to-be-satoshi-nakamoto/
Wei Dai · 10y
Sure, I have no objection to making them public myself, and I don't see anything in them that Satoshi might want to keep private, so I'll forward them to you to post on your website. (I'm too lazy to convert the emails into HTML myself.)

Sorry, you misunderstood when I said "Nick isn't known for being a C++ programmer". I didn't mean that he doesn't know C++. Given that he was a computer science major, he almost certainly does know C++ or can easily learn it. What I meant is that he is not known to have programmed much in C or C++, or known to have done any kind of programming that might have kept one's programming skills sharp enough to have implemented Bitcoin (and to do it securely to boot). If he was Satoshi I would have expected to see some evidence of his past programming efforts.

But the more important reason for me thinking Nick isn't Satoshi is the parts of Satoshi's emails to me that are quoted in the Sunday Times. Nick considers his ideas to be at least an independent invention from b-money, so why would Satoshi say "expands on your ideas into a complete working system" to me, and cite b-money but not Bit Gold in his paper, if Satoshi was Nick? An additional reason that I haven't mentioned previously is that Satoshi's writings just don't read like Nick's to me.
Done: http://www.gwern.net/docs/2008-nakamoto (Sorry for the delay, but a black-market was trying to blackmail me and I didn't want my writeup to go live so I was delaying everything.)
Thanks. I see. Unfortunately, this damages my defense: I can no longer say there's no evidence Szabo doesn't even know C/C++, but I have to confirm that he does. Your point about sharpness is well-taken, but the argument from silence here is very weak, since Szabo hasn't posted any code ever aside from a JavaScript library, so we have no idea whether he has been keeping up with his C or not.

Good question. I wonder if anyone ever asked Satoshi about what he thought of Bit Gold?

I've seen people say the opposite! This is why I put little stock in people claiming Satoshi and $FAVORITE_CANDIDATE sound alike (especially given they're probably in the throes of confirmation bias and would read in the similarity if at all possible). Hopefully someone competent at stylometrics will at some point do an analysis.
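For readers wondering what even a crude stylometric comparison involves: one classic baseline profiles authors by the relative frequencies of common function words, which are frequent and largely topic-independent. The toy Python sketch below only illustrates the idea; the word list and sample texts are arbitrary, and a serious analysis would use far more features, larger samples, and significance testing:

```python
import math
from collections import Counter

# A tiny set of function words; real studies (e.g. Mosteller and Wallace
# on the Federalist Papers) use many more features than this.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "not"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    n = max(len(words), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def cosine_similarity(a, b):
    """Cosine similarity between two frequency profiles (0 to 1 here)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

sample_a = "the root problem with conventional currency is all the trust that is required"
sample_b = "it is not a question of trust but of the incentives that the protocol creates"
similarity = cosine_similarity(profile(sample_a), profile(sample_b))
```

High similarity on two short snippets like these proves nothing, which is exactly the confirmation-bias trap discussed in the thread.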
I've been working hard on this in my book (nearly there, by the way). I posted this on Like In A Mirror but put it here as well in case it doesn't get approved.

Yes, the writing styles of Szabo and Satoshi are the same. Apart from the British spelling. And the different punctuation habits. And the use of British expressions like mobile phone and flat and bloody. And Szabo's much longer sentences. And the fact that Szabo doesn't make the same spelling mistakes that Satoshi does. Ooh, and the fact that Szabo's writing has a lot more humour to it than Satoshi's.

Szabo is one of the few people that has the breadth, depth and specificity of knowledge to achieve what Satoshi has, agreed. He is the right age, has the right background and was in the right place at the right time. He ticks a lot of the right boxes. But confirmation bias is a dangerous thing. It blinkers. And you need to think about the dangers your posts are creating in the life of a reclusive academic.

Satoshi is first and foremost a coder, not a writer. Szabo is a writer first and coder second. To draw any serious conclusions you need to find some examples of Szabo's C++ coding. You also need to find some proof of Szabo's hacking (or anti-hacking) experience; Satoshi has rather a lot of this. And you need to consider the possibility that Satoshi learnt his English on both sides of the Atlantic. And that English was not his first language. I don't think it was.
Szabo has extensively studied British history for his legal and monetary theories (it's hard to miss this if you've read his essays), so I do not regard the Britishisms as a point against Szabo. It's perfectly easy to pick up Britishisms if you watch BBC programs or read The Economist or Financial Times (I do all three and as it happens, I use 'bloody' all the time in colloquial speech - a check of my IRC logs shows me using it 72 times, and at least once in my more formal writings on gwern.net, and 'mobile phone' pops up 3 or 4 times in my chat logs; yet I have spent perhaps 3 days in the UK in my life).

And Satoshi is a very narrow, special-purpose pseudonymic identity which has one and only one purpose: to promote and work on Bitcoin - Bitcoin is not a very humorous subject, nor does it really lend itself to long essays (or long sentences). And I'm not sure how you could make any confident claims about spelling mistakes without having done any stylometrics, given that both Szabo and Satoshi write well and you would expect spelling mistakes to be rare by definition.
Points noted, all well made. Mine was a heated rebuttal to the Like In A Mirror post. I could only find one spelling mistake in all Satoshi's work, and a few punctuation quibbles. It's a word that is commonly spelt wrong, but that Szabo spells right. I don't want to share it here because I'm keeping it for the book.
Thank you so much Wei Dai. My idea with the second question was to understand whether there is an anarchist motivation around bitcoin that may carry some risks in the future; I mean whether, once it reaches Wall Street, the original developers could do anything to affect its credibility. You say you don't think it was Szabo. Have you ever tried to find out who he was? Could you share your solid hunch, and why? Is it relevant to know who Satoshi is? Knowing what you know today, would you have patented b-money? Do you think bitcoin's inventors would have done the same? Kind regards, Marta
Wei Dai · 10y
Ok, I think I see what you're getting at. First of all, crypto-anarchy is very different from plain anarchy. We (or at least I) weren't trying to destroy government, but just create new virtual communities that aren't ruled by the threat of violence. Second, I'm not sure Satoshi would even consider himself a crypto-anarchist. I think he might have been motivated more by a distrust of financial institutions and government monetary authorities, and wanted to create a monetary system that didn't have to depend on such trust. All in all, I don't think there is much risk in this regard.

I haven't personally made any attempts to find out who he is, nor do I have any idea how. My guess is that he's not anyone who was previously active in the academic cryptography or cypherpunks communities, because otherwise he probably would have been identified by now based on his writing and coding styles.

I think at this point it doesn't matter too much, except to satisfy people's curiosity.

No, because along with a number of other reasons not to patent it, the whole point of b-money was to have a money system that governments can't control or shut down by force, so how would I be able to enforce the patent? I don't think Satoshi would have patented his ideas either, because I think he is not motivated mainly to personally make money, but to change the world and to solve an interesting technical problem. Otherwise he would have sold at least some of his mined Bitcoins in order to spend or to diversify into other investments.
Thank you so much Wei Dai for all the answers. You say any other previously active member would have been identified by now based on his writing and coding style. That is exactly what Skye Grey says he/she is doing to match Szabo with Satoshi on the blog LikeInAMirror; he says he's 99.9% sure Szabo is Satoshi. https://likeinamirror.wordpress.com/2014/03/ Does the Dorian Nakamoto theory have any ground? What made you think Satoshi's motivation was distrust rather than crypto-anarchy? Someone who lost money, for instance, in the Lehman Brothers bankruptcy? That was also in 2008. Why is anonymity important to the crypto community? Just to confirm, is Wei Dai a pseudonym? Thank you again
Wei Dai · 10y
I agree with gwern's answers and will add a couple of my own.

No, I doubt it.

1. We think it's cool because the technology falls out of our field of research.
2. Anonymity provides privacy and security against physical violence, and cryptographers tend to care about privacy and security.
Grey's post is worthless. I haven't written a rebuttal to his second, but about his first post, see http://www.reddit.com/r/Bitcoin/comments/1ruluz/satoshi_nakamoto_is_probably_nick_szabo/cdr2vgu

Because he said so. Haven't you done any background reading? (And how many private individuals could have lost money in Lehman Brothers anyway...)

Seriously? No, it's real.
The concerns in this space go beyond personal safety, though that isn't an insignificant one. For safety, it doesn't matter what one can prove, because almost by definition anyone who is going to be dangerous is not behaving in an informed and rational way; consider the crazy person who was threatening Gwern. It's also not possible to actually prove you do not own a large number of Bitcoins: the coins themselves are pseudonymous, and many people cannot imagine that a person would willingly part with a large amount of money (or decline to take it in the first place). No one knows which, if any, Bitcoins are owned by the system's creator. There is a lot of speculation which is known to me to be bogus, e.g. identifying my coins as having belonged to the creator. So even if someone were to provably dispose of all their holdings, there will be people alleging other coins.

The bigger issue is that the Bitcoin system gains much of its unique value by being defined by software, by mechanical rule and not trust. In a sense, Bitcoin matters because its creator doesn't. This is a hard concept for most people, and there is a constant demand by the public to identify "the person in charge". To stand out risks being appointed Bitcoin's central banker for life, and in doing so undermining much of what Bitcoin has accomplished. Being a "thought leader" also produces significant demands on your time which can inhibit making meaningful accomplishments. Finally, it would be an act which couldn't be reversed.
That's a fair point. There is some amount of personal risk intrinsic to being famous. In this specific case there is also certainly a political element involved which could shift the probabilities significantly.

This is also fair. I had assumed that if the most obvious large quantity were destroyed, it would significantly dissuade rational attackers. Why not go kidnap a random early Google employee instead, if you don't have significant reason to believe the inventor's wealth exceeds that scale? But yes, in any case, it's not a perfect solution.

I don't see it as a required logical consequence that Bitcoin matters because the inventor is unknown. It stands on its own merit. You don't have to know or not know anything about the inventor to know if the system works. I guess you're maybe assuming there's a risk the majority would amend the protocol rules to explicitly grant the inventor this power? They could theoretically do that without their True Name being known. Or perhaps there's a more basic risk that people would weigh the inventor's opinion above all, and as such the inventor and protocol would be newly subject to coercion? It doesn't seem to me like this presents a real risk to the system (although perhaps increased risk to the inventor). I think this would assume ignorance controls a majority of the interest in the system and that it's more fragile than it appears. Please correct as necessary; I put a few words in your mouth there for the sake of advancing discussion.

My intuition is that this may be the most significant factor from the inventor's perspective. It is certainly a valid concern.

Obviously true. Do the risks presented outweigh the potential benefits to humanity? I don't know, but I think it's fair to say the identity of the creator does in fact matter, just not necessarily to the continued functioning of Bitcoin.
Why do you think so?
Wei Dai · 8y
This is interesting and something I hadn't thought about. Now I'm more curious who Satoshi is and why he or she or they have decided to remain anonymous. Thanks! You might want to post your idea somewhere else too, like the Bitcoin reddit or forum, since probably not many people will get to read it here.
Bruce Wayne: As a man, I'm flesh and blood, I can be ignored, I can be destroyed; but as a symbol... as a symbol I can be incorruptible, I can be everlasting. --Batman Begins
1. What do you think are the most interesting philosophical problems within our grasp to be solved?
2. Do you think that solving normative ethics won't happen until a FAI? If so, why?
3. You argued previously that metaphilosophy and singularity strategies are fields with low-hanging fruit. Do you have any examples of progress in metaphilosophy?
4. Do you have any role models?

What do you think are the most interesting philosophical problems within our grasp to be solved?

I'm not sure there are any. A big part of it is that metaphilosophy is essentially a complete blank, so we have no way of saying what counts as a correct solution to a philosophical problem, and hence no way of achieving high confidence that any particular philosophical problem has been solved, except maybe simple (and hence not very interesting) problems, where the solution is just intuitively obvious to everyone or nearly everyone. It's also been my experience that any time we seem to make real progress on some interesting philosophical problem, additional complications are revealed that we didn't foresee, which make the problem seem even harder to solve than before the progress was made. I think we have to expect this trend to continue for a while yet.

If you instead ask what are some interesting philosophical problems that we can expect visible progress on in the near future, I'd cite decision theory and logical uncertainty, just based on how much new effort people are putting into them, and results from the recent past.

Do you think that solving normative ethics won't happen until a FAI? If so, why?

FWIW, I have always been impressed by the consistent clarity and conciseness of your LW posts. Your ratio of insights imparted to words used is very high. So, congratulations! And as an LW reader, thanks for your contributions! :)
Thanks. I have some followup questions :)

1. What projects are you currently working on? / What confusing questions are you attempting to answer?
2. Do you think that most people should be very uncertain about their values, e.g. altruism?
3. Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?
4. Where do you hang out online these days? Anywhere other than LW?

Please correct me if I've misrepresented your views.
Wei Dai · 10y
If you go through my posts on LW, you can read most of the questions that I've been thinking about in the last few years. I don't think any of the problems that I raised have been solved, so I'm still attempting to answer them. To give a general idea, these include questions in philosophy of mind, philosophy of math, decision theory, normative ethics, meta-ethics, and meta-philosophy. And to give a specific example I've just been thinking about again recently: what is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example, can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?

As a side note, I don't know if it's good from a productivity perspective to jump around amongst so many different questions. It might be better to focus on just a few with the others in the back of one's mind. But now that I have so many unanswered questions that I'm all very interested in, it's hard to stay on any of them for very long. So reader beware. :)

Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be. I make an exception of this for people who might be in a position to build an FAI, since if they're too confident about altruism then they're likely to be too confident about many other philosophical problems, but even then I don't stress it too much.

I guess there is a spectrum of concern over philosophical problems involved in building an FAI/AGI, and I'm on the far end of that spectrum. I think most people building AGI mainly want short-term benefits like profits or academic fame, and do not care as much about the far reaches of time and space, in which case they'd naturally focus more on the immediate engineering issues. Among people working on FAI, I guess they either have not thought as much about philosophical proble
Pain isn't reliably bad, or at least some people (possibly a fair proportion) seek it out in some contexts. I'm including very spicy food, BDSM, deliberately reading things that make one sad and/or angry without it leading to any useful action, horror fiction, pushing one's limits for its own sake, and staying attached to losing sports teams. I think this leads to the question of what people are trying to maximize.
One issue is that an altruist has a harder time noticing if he's doing something wrong. An altruist with false beliefs is much more dangerous than an egotist with false beliefs.
What is he doing, by the way? Wikipedia says he's still alive but he looks to be either retired or in deep cover...
Is this you? https://mercatoria.io/
Nm, I see that it's listed on your home page in the "companies I'm involved with" section.
Good morning Wei,

Thank you for doing this. It seems like an excellent solution. My name's Dominic Frisby. I'm an author from the UK, currently working on a book on Bitcoin (http://unbound.co.uk/books/bitcoin). Here are some questions I'd like to ask.

1. What steps, if any, did you take to coding up your b-money idea? If none, or very few, why did you go no further with it?
2. You had some early correspondence with Satoshi. What do you think his motivation behind Bitcoin was? Was it, simply, the challenge of making something work that nobody had made work before? Was it the potential riches? Was it altruistic or political, maybe - did he want to change the world?
3. In what ways do you think Bitcoin might change the world?
4. How much of a bubble do you think it is?
5. I sometimes wonder if Bitcoin was invented not so much to become the global reserve digital cash currency, but to prove to others that the technology can work. It was more gateway than final destination - do you have a view here?

That's more than enough to be going on with.

With kind regards

Dominic
Wei Dai · 10y
1. I didn't take any steps to code up b-money. Part of it was because b-money wasn't a complete practical design yet, but I didn't continue to work on the design because I had actually grown somewhat disillusioned with crypto-anarchy by the time I finished writing up b-money, and I didn't foresee that a system like it, once implemented, could attract so much attention and use beyond a small group of hardcore cypherpunks.
2. It's hard for me to tell, but I'd guess that it was probably a mixture of technical challenge and wanting to change the world.
3 & 4. Don't have much to say on these. Others have probably thought much more about these questions over the past months and years and are more qualified than I am to answer.
5. I haven't seen any indication of this. What makes you suspect it?
Thanks Wei. Your efforts here are much appreciated and your place in heaven is assured. In reply to your 5: my suspicion is not based on any significant evidence. It's just a thought that emerged in my head as I've followed the story. It's a psychological thing, almost macho: people like to solve a problem that nobody else has been able to, to prove something to themselves (and others). Also, from his comment 'we can win a major battle in the arms race and gain a new territory of freedom for several years' I infer that he didn't think it would last forever. Anyway, THANK YOU WEI for taking the time to do this. Dominic
Have you read Satoshi's original emails?
about 70 million times. Even more times than I've read the Lord of the Rings
I was asking a serious question.
Do you mean the ones on the cryptography mailing list or the ones to Wei Dai? I've read them both. Not the ones to Adam Back though
Wei Dai · 10y
I received this question via email earlier. Might as well answer it here as well. In b-money the money creation rate is not fixed, but instead there are mechanisms that give people incentives to create the right amount of money to ensure price stability or maximize economic growth. I specified the PoW to have no other value in order to not give people an extra incentive to create money (beyond what the mechanism provides). But with Bitcoin this doesn't apply since the money creation rate is fixed. I haven't thought about this much though, so I can't say that it won't cause some other problem with Bitcoin that I'm not seeing.
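For contrast with b-money's incentive mechanisms, Bitcoin's fixed money creation rate is a halving schedule: the block subsidy starts at 50 BTC and is cut in half every 210,000 blocks, which caps total supply just below 21 million BTC. A minimal sketch of that schedule:

```python
def block_subsidy(height):
    """Block subsidy in satoshis (1 BTC = 100,000,000 satoshis): 50 BTC at
    launch, halved every 210,000 blocks via an integer right-shift."""
    halvings = height // 210_000
    if halvings >= 64:  # guard against shifting past the word size
        return 0
    return 5_000_000_000 >> halvings

# The shift bottoms out at zero after 33 halvings, so summing every era's
# subsidy gives the well-known cap of just under 21 million BTC.
total_satoshis = sum(210_000 * block_subsidy(i * 210_000) for i in range(40))
print(total_satoshis)  # → 2099999997690000, i.e. about 20,999,999.977 BTC
```

Because the integer shift truncates, the cap falls slightly short of a round 21 million rather than hitting it exactly.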
Wei Dai · 10y
I received another question from this same interlocutor: Hmm, I’m not sure. I thought it might have been the optimizations I put into my SHA256 implementation in March 2009 (due to discussions on the NIST mailing list for standardizing SHA-3, about how fast SHA-2 really is), which made it the fastest available at the time, but it looks like Bitcoin 0.1 was already released prior to that (in Jan 2009) and therefore had my old code. Maybe someone could test if the old code was still faster than OpenSSL?
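Anyone wanting to run that comparison today would time both implementations over bulk data. The real test would be done in C++ against the 2009-era Crypto++ and OpenSSL sources; the Python sketch below (hashlib wraps OpenSSL) only shows the shape of such a throughput measurement:

```python
import hashlib
import time

def sha256_throughput_mb_s(total_mb=32):
    """Hash total_mb megabytes of zeros through SHA-256; return (MB/s, digest)."""
    chunk = b"\x00" * (1024 * 1024)
    h = hashlib.sha256()
    start = time.perf_counter()
    for _ in range(total_mb):
        h.update(chunk)
    digest = h.hexdigest()
    elapsed = time.perf_counter() - start
    return total_mb / elapsed, digest

rate, digest = sha256_throughput_mb_s()
print(f"{rate:.0f} MB/s")
```

Hashing in 1 MB chunks keeps the timing dominated by the compression function rather than Python overhead, which is the quantity the old benchmark dispute was about.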
What do you make of the decision to use C++? Do you have any opinions of the original coding beyond the 'inelegant but amazingly resilient' meme? Was there anything that stood out about it?
Wei Dai · 10y
It seems like a pretty standard choice for anyone wanting to build such a piece of software... No I haven't read any of it.
The correct pronunciation of your name. Wei - is it pronounced as in 'way' or 'why'? And Dai - as in 'dye' or 'day'? Thank you.

It's Chinese Pinyin romanization, so pronounced "way dye".

ETA: Since Pinyin is a many-to-one mapping, and as a result most Chinese articles about Bitcoin put the wrong name down for me, I'll take this opportunity to mention that my name is written logographically as 戴维.

Since the birth and early growth of Bitcoin, how has your view on the prospects for crypto-anarchy changed (if at all)? Why?
Wei Dai · 10y
My views haven't changed very much, since the main surprise of Bitcoin to me is that people find such a system useful for reasons other than crypto-anarchy. Crypto-anarchy still depends on the economics of online security favoring the defense over the offense, but as I mentioned in Work on Security Instead of Friendliness? that still seems to be true only in limited domains and false overall.
Assuming the security risk of growing economic monopolization built into the DNA of proof of work (as well as proof of stake) is going to prevail in the coming years: do you think it is possible to create a more secure proof of democratic stake? I know that would require a not-yet-existing proof of unique identity first. So the question also implies: do you think a proof of unique identity is even possible?

P.S.: Ideas floating around the web to solve the latter challenge include, for example:

* non-transferable proofs of signature knowledge in combination with e-passports
* web of trust
* proof of location (simultaneously solved AI-resistant captchas)
Which philosophical views are you most certain of, and why? e.g. why do you think that multiple universes exist (and can you link or give the strongest argument for this?)
Wei Dai · 6y
I talked a bit about why I think multiple universes exist in this post. Aside from what I said there, I was convinced by Tegmark's writings on the Mathematical Universe Hypothesis. I can't really think of other views that are particularly worth mentioning (or haven't been talked about already in my posts), but I can answer more questions if you have them?
Thanks, I'll ask a couple more.

1. Do you think UDT is a solution to anthropics?
2. What is your ethical view (roughly, even given large uncertainty) and what actions do you think this prescribes?
3. How have you changed your decisions based on the knowledge that multiple universes probably exist (AKA, what is the value of that information)?
I'm doing a thesis paper on Bitcoin and was wondering if you, being specifically cited as one of the main influences on Bitcoin by Satoshi Nakamoto in his whitepaper references, could give me your take on how Bitcoin is today versus whatever project you imagined when you wrote "b-money". What is different? What is the same? What should change?
Hi. At http://www.weidai.com/everything.html you say that the future is more random than the past. I don't understand what you mean by this; care to explain?
In some recent comments over at the Effective Altruism Forum you talk about anti-realism about consciousness, saying in particular "the case for accepting anti-realism as the answer to the problem of consciousness seems pretty weak, at least as explained by Brian". I am wondering if you could elaborate more on this. Does the case for anti-realism about consciousness seem weak because of your general uncertainty on questions like this? Or is it more that you find the case for anti-realism specifically weak, and you hold some contrary position? I am especially curious since I was under the impression that many people on LessWrong hold essentially similar views.
Wei Dai · 6y
I do have a lot of uncertainty about many philosophical questions. Many people seem to have intuitions that are too strong or that they trust too much, and don't seem to consider that the kinds of philosophical arguments we currently have are far from watertight, and there are lots of possible philosophical ideas/positions/arguments that have yet to be explored by anyone, which eventually might overturn their current beliefs. In this case, I also have two specific reasons to be skeptical about Brian's position on consciousness. 1. I think for something to count as a solution to the problem of consciousness, it should at minimum have a (perhaps formal) language for describing first-person subjective experiences or qualia, and some algorithm or method of predicting or explaining those experiences from a third-person description of a physical system, or at least some sort of plan for how to eventually get something like that, or an explanation of why that will never be possible. Brian's anti-realism doesn't have this, so it seems unsatisfactory to me. 2. Relatedly, I think a solution to the problem of morality/axiology should include an explanation of why certain kinds of subjective experiences are good or valuable and others are bad or negatively valuable (and a way to generalize this to arbitrary kinds of minds and experiences), or an argument why this is impossible. Brian's moral anti-realism which goes along with his consciousness anti-realism also seems unsatisfactory in this regard.
Hi Wei. Do you have any comments on Ethereum, ICOs (Initial Coin Offerings), and hard forks of Bitcoin? Do you think they will solve the problem of Bitcoin's fixed monetary supply, since they have somehow brought much more "money" (or securities like stock, not sure how to classify them)? Do you have any comments about the scaling fight in Bitcoin between larger blocks and second-layer payment channels such as the Lightning Network?
Hello, we are students in 11th grade from Paris, 17 years old. We're doing a project on Bitcoin and cryptocurrency. This project is part of the high school diploma and we were wondering if we could ask you a few questions about the subject. First, what is "bitcoin" for you and what is its use? Do you think cryptocurrency could totally replace physical money, and would that be better? How long have you been working on the subject and what do you stand for? Thank you.
Wei Dai · 6y · 0 points
I'm not the best person to ask these questions. I spent a few years in the 1990s thinking about how a group of anonymous people on the Internet can pay each other with money without outside help, culminating in the publication of b-money in 1998. I haven't done much work on it since then. I don't currently have strong views on cryptocurrency per se, but these thoughts are somewhat relevant.
Wei Dai · 8y · 1 point
I don't follow Bitcoin development very closely, basically just reading about it if a story shows up on New York Times or Wired. If you're curious as to why, see this post and this thread.
Wei Dai · 8y · 1 point
Yes, that looks likely to be the case. That's part of it. If decentralized cryptocurrency is ultimately good for the world, then Bitcoin may be bad because its flawed monetary policy prevents or delays widespread adoption of cryptocurrency. But another part is that cryptocurrency and other cypherpunk/cryptoanarchist ideas may ultimately be harmful even if they are successful in their goals. For example they tend to make it harder for governments to regulate economic activity, but we may need such regulation to reduce existential risk from AI, nanotech, and other future technologies. If one wants to push the future in a positive direction, it seems to me that there are better things to work on than Bitcoin.
I thought for sure you were SN. In any case, I'd still much rather hang out with you than this Australian guy.
Sorry to be a bother, but I had another related thought. I'm reminded of a reply you made to a post on Robin Hanson's blog: The link to shark fin soup is interesting. Did you mean to imply you were also concerned about the possible environmental impact of Bitcoin mining? I don't recall you mentioning that concern since. Maybe you consider the verdict still out on that issue, or have since found reason to be unconcerned? I also find it a bit amusing and maybe even prescient. Here we are in 2016 (as far as we know) and China is overwhelmingly the largest producer of hashcash. The hunt also shows no immediate signs of slowing down.
Thanks, Wei. That really clarifies your position for me and includes a thought I hadn't previously considered but will certainly spend more time thinking about, re: decentralization risk. Obviously you feel it's very important to tackle the problem of FAI and I think that's a worthy pursuit. If you happen to have a mental list, mind sharing other ideas for useful things a programmer who hopes to make a positive impact could work on? It might be inspirational. Thanks again.

I'm a Research Associate at MIRI. I became a supporter in late 2005, then contributed to research and publication in various ways. Please, AMA.

Opinions I express here and elsewhere are mine alone, not MIRI's.

To be clear, as an Associate, I am an outsider to the MIRI team (who collaborates with them in various ways).

When do you estimate that MIRI will start writing the code for a friendly AI?

Median estimate for when they'll start working on a serious code project (i.e., not just toy code to illustrate theorems) is 2017.

This will not necessarily be development of friendly AI -- maybe a component of friendly AI, maybe something else. (I have no strong estimates for what that other thing would be, but just as an example--a simulated-world sandbox).

Everything I say above (and elsewhere) is my opinion, not MIRI's. Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

This is not a MIRI official estimate and you really should have disclaimed that.

OK, I will edit this one as well to say that.
We're so screwed, aren't we?
Yes, but not because of MIRI. Along with FHI, they are doing more than anyone to improve our odds. As to whether writing code or any other strategy is the right one--I don't know, but I trust MIRI more than anyone to get that right.
Oh yes, I know that. It just says a lot that our best shot is still decades away from achieving its goal. Which, to be fair, isn't saying much.
Seeing as we are talking about speculative dangers coming from a speculative technology that has yet to be developed, it seems pretty understandable. I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.
And it will be quite likely at that point that we are much closer to having an AGI that will foom than to having an AI that won't kill us, and that it is too late.
I know it is a local trope that death and destruction are the apparent and necessary logical conclusion of creating an intelligent machine capable of self-improvement and goal modification, but I certainly don't share those sentiments. How do you estimate the probability that AGIs won't take over the world (people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors in the same boring, old-fashioned and safe way that 100% of our current technology is used? I am not saying that MIRI or FAI research is pointless, or anything like that. I just want to point out that they posture as if they were saving the world from imminent destruction, while it is nowhere near certain whether said danger is real.
1%? I believe that it is nearly impossible to use a foomed AI in a safe manner without explicitly trying to do so. That's kind of why I am worried about the threat of any uFAI developed before it is proven that we can develop a Friendly one, and without using whatever the proof entails. Anyway, I wasn't aware that we use 100% of our current technology in a safe way.
You may have a different picture of current technology than I do, or you may be extrapolating different aspects. We're already letting software optimize the external world directly, with slightly worrying results. You don't get from here to strictly and consistently limited Oracle AI without someone screaming loudly about risks. In addition, Oracle AI has its own problems (tell me if the LW search function doesn't make this clear). Some critics appear to argue that the direction of current tech will automatically produce CEV. But today's programs aim to maximize a behavior, such as disgorging money. I don't know in detail how Google filters its search results, but I suspect they want to make you feel more comfortable with links they show you, thus increasing clicks or purchases from sometimes unusually dishonest ads. They don't try to give you whatever information a smarter, better informed you would want your current self to have. Extrapolating today's Google far enough doesn't give you a Friendly AI, it gives you the making of a textbook dystopia.
What are the error bars around these estimates?
The first estimate: 50% probability between 2015 and 2020. The second estimate: 50% probability between 2020 and 2035. (again, taking into account all the conditioning factors).
Um. The distribution is asymmetric for obvious reasons. The probability for 2014 is pretty close to zero. This means that there is a 50% probability that a serious code project will start after 2020. This is inconsistent with 2017 being a median estimate.
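One way to see the objection concretely is to write the stated constraints down and compute the implied median. This is only a toy sketch, with an added assumption (the 50% mass is spread uniformly over 2015–2020) that the original estimate did not state:

```python
from fractions import Fraction

# Toy check of the consistency objection. Hypothetical assumptions: the 50%
# mass assigned to 2015-2020 is spread uniformly over those six years, there
# is negligible mass before 2015, and the remaining 50% falls after 2020
# (represented here by a single stand-in year).
probs = {year: Fraction(1, 12) for year in range(2015, 2021)}  # six years * 1/12 = 1/2
probs[2030] = Fraction(1, 2)  # stand-in for "some time after 2020"

cdf = Fraction(0)
median = None
for year in sorted(probs):
    cdf += probs[year]
    if median is None and cdf >= Fraction(1, 2):
        median = year

print(median)  # -> 2020: under these assumptions the median is 2020, not 2017
```

As the follow-up comment notes, the only way to rescue a 2017 median given these constraints is to concentrate nearly all of the 2015–2020 mass before 2017.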
Unless he thinks it's very unlikely the project will start between 2017 and 2020 for some reason.
Good point. I'll have to re-think that estimate and improve it.
If some rich individual were to donate 100 million USD to MIRI today, how would you revise your estimate (if at all)?
Can you elaborate on the types of toy code that you (or others) have tried in terms of illustrating theorems?
I have not tried any. Over the years, I have seen a few online comments about toy programs written by MIRI people, e.g., this, search for "Haskell". But I don't know anything more about these programs than those brief reports.

I've talked to a former grad student (fiddlemath, AKA Matt Elder) who worked on formal verification, and he said current methods are not anywhere near up to the task of formally verifying an FAI. Does MIRI have a formal verification research program? Do they have any plans to build programming processes like this or this?

I don't know anything about MIRI's research strategy beyond what is publicly available, but if you look at what they are working on, it is all in the direction of formal verification. I have spoken to experts in formal verification of chips and of other systems, and they have confirmed what you learned from fiddlemath. Formal verification is limited in its capabilities: often, you can only verify some very low-level or very specific assertions. And you have to be able to specify the assertion that you are verifying. So, it seems that they are taking on a very difficult challenge.
Your published dissertation sounds fascinating, but I swore off paper books. Can you share it in digital form?
Sure, I'll send it to you. If anyone else wants it, please contact me. I always knew that Semitic Noun Patterns would be a best seller :-)
Eliezer Yudkowsky · 10y · 5 points
(Problem solved, comment deleted.)
Meta: I think this was an important thing to say, and to say forcefully, but it might have been worth expending a sentence or so to say it more nicely (but still as forcefully). (I don't want to derail the thread and will say no more than this unless specifically asked.)
Will do.
Eliezer Yudkowsky · 10y · 2 points
What do you think is the likelihood of AI boxing being successful, and why (interested in reasons, not numbers)?
I don't think I have anything to say that hasn't been said better by others in MIRI and FHI, but I think that AI boxing is impossible because (1) it can convince any gatekeepers to let it out and (2) any AI is "embodied" and not separate from the outside world if only in that its circuits pass electrons, and (3) I doubt you could convince all AGI reseachers to keep their projects isolated. Still, I think that AI boxing could be a good stopgap measure, one of a number of techniques that are ultimately ineffectual, but could still be used to slightly hold back the danger.
My question is similar to the one that Apprentice posed below. Here are my probability estimates of unfriendly and friendly AI, what are yours? And more importantly, where do you draw the line, what probability estimate would be low enough for you to drop the AI business from your consideration?
Even a fairly low probability estimate would justify effort on an existential risk. And I have to admit, a secondary, personal, reason for being involved is that the topic is fascinating and there are smart people here, though that of course does not shift the estimates of risk and of the possibilities of mitigating it.
What probability would you assign to this statement: "UFAI will be relatively easy to create within the next 100 years. FAI is so difficult that it will be nearly impossible to create within the next 200 years."

I think that the estimates cannot be undertaken independently. FAI and UFAI would each pre-empt the other. So I'll rephrase a little.

I estimate the chances that some AGI (in the sense of "roughly human-level AI") will be built within the next 100 years as 85%, which is shorthand for "very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up."

And "100 years" here is shorthand for "as far off as we can make reasonable estimates/guesses about the future of humanity"; perhaps "50 years" should be used instead.

Conditional on some AGI being built, I estimate the chances that it will be unfriendly as 80%, which is shorthand for "by default it will be unfriendly, but people are working on avoiding that and they have some small chance of succeeding; or there might be some other unexpected reason that it will turn out friendly."
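Taken at face value, the two point estimates above multiply out to rough unconditional numbers. A back-of-the-envelope sketch (the figures are the commenter's; combining them this way is my addition and treats them as exact):

```python
# Combining the two point estimates quoted above: P(AGI within ~100 years)
# and P(unfriendly | AGI). The numbers come from the comment, not from me;
# the author stresses they are shorthand, not precise probabilities.
p_agi = 0.85         # P(some AGI is built within ~100 years)
p_unfriendly = 0.80  # P(it is unfriendly | some AGI is built)

p_ufai = p_agi * p_unfriendly        # unconditional chance of unfriendly AGI
p_fai = p_agi * (1 - p_unfriendly)   # unconditional chance it turns out friendly
p_none = 1 - p_agi                   # chance no AGI is built in that window

print(round(p_ufai, 2), round(p_fai, 2), round(p_none, 2))  # 0.68 0.17 0.15
```

So on these stated numbers, unfriendly AGI comes out as roughly a two-in-three outcome overall.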

Thank you. I didn't phrase my question very well but what I was trying to get at was whether making a friendly AGI might be, by some measurement, orders of magnitude more difficult than making a non-friendly one.
Yes, it is orders of magnitude more difficult. If we took a hypothetical FAI-capable team, how much less time would it take them to make a UFAI than a FAI, assuming similar levels of effort, and starting at today's knowledge levels? One-tenth the time seems like a good estimate.

Ask me anything. I'm the author of Singularity Rising.

What, if anything, do you think a lesswrong regular who's read the sequences and all/most of MIRI's non-technical publications will get out of your book?

Along with the views of EY (which such readers would already know) I present the singularity views of Robin Hanson and Ray Kurzweil, and discuss the intelligence-enhancing potential of brain training, smart drugs, and eugenics. My thesis is that there are so many possible paths to super-human intelligence, and such incredible military and economic benefits to developing super-human intelligence, that unless we destroy our high-tech civilization we will almost certainly develop it.

How much time did it take you to write the singularity book? How much money has it brought you?

Same question about your microeconomics textbook. Also, what motivated you to write it given that there must be about 2^512 existing ones on the market?

Hard to say about the time because I worked on both books while also doing other projects. I suspect I could have done the Singularity book in about 1.5 years of full time effort. I don't have a good estimate for the textbook. Alas, I have lost money on the singularity book because the advance wasn't all that big, and I had personal expenses such as hiring a research assistant and paying a publicist. The textbook had a decent advance, still I probably earned roughly minimum wage for it. Surprisingly, I've done fairly well with my first book, Game Theory at Work, in part because of translation rights. With Game Theory at Work I've probably earned several times the minimum wage. Of course, I'm a professor and part of my salary from my college is to write, and I'm not including this.

I wanted to write a free market microeconomics textbook, and there are very few of these. I was recruited to write the textbook by the people who published Game Theory at Work. Had the textbook done very well, I could have made a huge amount of money (roughly equal to my salary as a professor) indefinitely. Alas, this didn't happen but the odds of it happening were well under 50%. Since teaching microeconomics is a big part of my job as a college professor, there was a large overlap between writing the textbook and becoming a better teacher. My textbook publisher sent all of my chapters to other teachers of microeconomics to get their feedback, and so I basically got a vast amount of feedback from experts on how I teach microeconomics.

Why did you decide to run for Massachusetts State Senate in 2004? Did you ever think you had a chance of winning?

No. I ran as a Republican in one of the most Democratic districts in Massachusetts, my opponent was the second most powerful person in the Massachusetts State Senate, and even Republicans in my district had a high opinion of him.

Why did you run?

I wanted to get more involved in local Republican politics and no one was running in the district and it was suggested that I run. It turned out to be a good decision as I had a lot of fun debating my opponent and going to political events. Since winning wasn't an option, it was even mostly stress free.

I have a political question/proposition I have been pondering, and you, an intelligent semi-involved Massachusetts Republican, are precisely the kind of person who could answer it usefully. May I ask it to you in a private message?
Haven't read your book so not sure if you have already answered this. What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk? How much risk is increased for what increase in growth? Are there safe paths? (Maybe catch-up growth in India and China is safe?)
Greater economic growth means more money for AI research from companies and governments and if you think that AI will probably go wrong then this is a source of trouble. But there are benefits as well including increased charitable contributions for organizations that reduce existential risk and better educational systems in India and China which might produce people who end up helping MIRI. Overall, I'm not sure how this nets out. Catch up growth is not necessarily safe because it will increase the demand for products that use AI and so increase the amount of resources companies such as Google devote to AI. The only safe path is someone developing a mathematically sound theory of friendly AI, but this will be easier if we get (probably via China) intelligence enhancement with eugenics.
Did you see any shifts in opinion (even in a small audience) following on your book?
Not really. Someone (I forgot who) wrote that I helped them see the race to create AI as a potential existential risk. I promoted the book on numerous radio shows and I hope I convinced at least a few people to do further research and perhaps donate money to MIRI, but this is just a hope.

Why do you think that it is so hard to get through to people?

Not only you, but others involved in this, and myself, have all found that intelligent people will listen and even understand what you are telling them -- I probe for inferential gaps, and if they exist they are not obvious.

Yet almost no one gets on board with the MIRI/FHI program.


I have thought a lot about this. Possible reasons: most humans don't care about the far future or people who are not yet born; most things that seem absurd are absurd and are not worth investigating, and the singularity certainly superficially seems absurd; the vast majority is right and you and I are incorrect to worry about a singularity; it's impossible for people to imagine an intelligent AI that doesn't have human-like emotions; the Fermi paradox implies that civilizations such as ours are not going to be able to rationally think about the far future; and an ultra-AI would be a god and so is disallowed by most people's religious beliefs.

Your question is related to why so few sign up for cryonics.

I don't know about anyone else, but I find it hard to believe that provable Friendliness is possible. On the other hand, I think high-probability Friendliness might be possible.
I agree with you that a lot of people think that way, but I have spoken to quite a few smart people who understand all the points -- I probe to figure out if there are any major inferential gaps -- and they still don't get on the bandwagon. Another point is simply that we cannot all devote time to all important things; they simply choose not to prioritize this.
Do you think "The Singularity" is a useful concept, or would it be better to discuss the constituent issues separately?
Yes it is useful. I define the singularity as a threshold of time at which machine intelligence or increases in human intelligence radically transform society. As similar incentives and technologies are pushing us towards this, it's useful to lump them together with a single term.

I'm a PhD student in artificial intelligence, and co-creator of the SPARC summer program. AMA.

What do you feel are the most pressing unsolved problems in AGI? Do you believe AGI can "FOOM" (you may have to qualify what you interpret FOOM as)? How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?
In AGI? If you mean "what problems in AI do we need to solve before we can get to the human level", then I would say:

* Ability to solve currently intractable statistical inference problems (probably not just by scaling up computational resources, since many of these problems have exponentially large search spaces).
* Ways to cope with domain adaptation and model mis-specification.
* Robust and modular statistical procedures that can be fruitfully fit together.
* Large amounts of data, in formats helpful for learning (potentially including provisions for high-throughput interaction, perhaps with a virtual environment).

To some extent this reflects my own biases, and I don't mean to say "if we solve these problems then we'll basically have AI", but I do think it will either get us much closer or else expose new challenges that are not currently apparent.

I think it is possible that a human-level AI would very quickly acquire a lot of resources / power. I am more skeptical that an AI would become qualitatively more intelligent than a human, but even if it was no more intelligent than a human but had the ability to easily copy and transmit itself, that would already make it powerful enough to be a serious threat (note that it is also quite possible that it would have many more cycles of computation per second than a biological brain). In general I think this is one of many possible scenarios, e.g. it's also possible that sub-human AI would already have control of much of the world's resources and we would have built systems in place to deal with this fact. So I think it can be useful to imagine such a scenario but I wouldn't stake my decisions on the assumption that something like it will occur. I think this report does a decent job of elucidating the role of such narratives (not necessarily AI-related) in making projections about the future.

Not viable.
Do you have a handle on the size of the field? E.g. how many people, counting from PhD students and upwards, are working on AGI in the entire world? More like 100 or more like 10,000 or what's your estimate?
I don't personally work on AGI and I don't think the majority of "AGI progress" comes from people who label themselves as working on AGI. I think much of the progress comes from improved tools due to research and usage in machine learning and statistics. There are also of course people in these fields who are more concerned with pushing in the direction of human-level capabilities. And progress everywhere is so inter-woven that I don't even know if thinking in terms of "number of AI researchers" is the right framing.

That said, I'll try to answer your question. I'm worried that I may just be anchoring off of your two numbers, but I think 10^3 is a decent estimate. There are upwards of a thousand people at NIPS and ICML (two of the main machine learning conferences), only a fraction of those people are necessarily interested in the "human-level" AI vision, but also there are many people in the field who don't go to these conferences in any given year. Also many people in natural language processing and computer vision may be interested in these problems, and I recently found out that the program analysis community cares about at least some questions that 40 years ago would have been classified under AI. So the number is hard to estimate but 10^3 might be a rough order of magnitude.

I expect to find more communities in the future that I either wasn't aware of or didn't think of as being AI-relevant, and who turn out to be working on problems that are important to me.
Ben Pace · 10y · 4 points
How did you come up with the course content for SPARC?
We brainstormed things that we know now that we wished we had known in high school. During the first year, we just made courses out of those (also borrowing from CFAR workshops) and rolled with that, because we didn't really know what we were doing and just wanted to get something off the ground. Over time we've asked ourselves what the common thread is in our various courses, in an attempt to develop a more coherent curriculum. Three major themes are statistics, programming, and life skills. The thing these have in common is that they are some of the key skills that extremely sharp quantitative minds need to apply their skills to a qualitative world. Of course, it will always be the case that most of the value of SPARC comes from informal discussions rather than formal lectures, and I think one of the best things about SPARC is the amount of time that we don't spend teaching.
Could you talk about your graduate work in AI? Also, out of curiosity, did you weight possible contribution towards a positive singularity heavily in choosing your subfield/projects? (I am trying to figure out whether it would be productive for me to become familiar with AI in mainstream academia and/or apply for PhD programs eventually.)
I work on computationally bounded statistical inference. Most theoretical paradigms don't have a clean way of handling computational constraints, and I think it's important to address this since the computational complexity of exact statistical inference scales extremely rapidly with model complexity. I also have recently started working on applications in program analysis, both because I think it provides a good source of computationally challenging problems, and because it seems like a domain that will force us into using models with high complexity. Singularity considerations were a factor when choosing to work on AI, although I went into the field because AI seems like a robustly game-changing technology across a wide variety of scenarios, whether or not a singularity occurs. I certainly think that software safety is an important issue more broadly, and this partially influences my choice of problems, although I am more guided by the problems that seem technically important (and indeed, I think this is mostly the right strategy even if you care about safety to a fair degree). Learning more about mainstream AI has greatly shaped my beliefs regarding AGI, so it's something that I would certainly recommend. Going to grad school shaped my beliefs even further, even though I had already read many AI papers prior to arriving at Stanford.
Is there any uptake of MIRI ideas in the AI community? Of HPMOR?
I wouldn't presume to know what the field as a whole thinks, as I think views vary a lot from place to place and I've only spent serious time at a few universities. However, I can speculate based on the data I do have. I think a sizable number (25%?) of AI graduate students I know are aware of LessWrong's existence. Also a sizable (although probably smaller) number have read at least a few chapters of HPMOR; for the latter I'm mostly going off of demographics, I don't know that many who have told me they read HPMOR. There is very little actual discussion of MIRI or LessWrong. From what I would gather most people silently disagree with MIRI, a few people probably silently agree. I would guess almost no one knows what MIRI is, although more would have heard of the Singularity Institute (but might confuse it with Singularity University). People do occasionally wonder whether we're going to end up killing everyone, although not for too long.

To address your comment in the grandchild, I certainly don't speak for Norvig but I would guess that "Norvig takes these [MIRI] ideas seriously" is probably false. He does talk at the Singularity Summit, but the tone when I attended his talk sounded more like "Hey, you guys just said a bunch of stuff; based on what people in AI actually do, here's the parts that seem true and here's the parts that seem false." It's also important to note that the notion of the singularity is much more widespread as a concept than MIRI in particular. "Norvig takes the singularity seriously" seems much more likely to be true to me, though again, I'm far from being in a position to make informed statements about his views.
Thanks. I was basing my comments about Norvig on what he says in the intro to his AI textbook, which does address UFAI risk.
What's the quote? You may very well have better knowledge of Norvig's opinions in particular than I do. I've only talked to him in person twice briefly, neither time about AGI, and I haven't read his book.
Russell and Norvig, Artificial Intelligence: A Modern Approach. Third Edition, 2010, pp. 1037 - 1040. Available here.
I think the key quote here is:
Hm...I personally find it hard to divine much about Norvig's personal views from this. It seems like a relatively straightforward factual statement about the state of the field (possibly hedging to the extent that I think the arguments in favor of strong AI being possible are relatively conclusive, i.e. >90% in favor of possibility).
When I spoke to Norvig at the 2012 Summit, he seemed to think getting good outcomes from AGI could indeed be pretty hard, but also that AGI was probably a few centuries away. IIRC.
Interesting, thanks.
Like Mark, I'm not sure I was able to parse your question, can you please clarify?
Right, there was a typo. I've fixed it now. I'm just wondering if MIRI-like ideas are spreading among AI researchers. We see that Norvig takes these ideas seriously. And separately, I wonder if HPMOR is a fad in elite AI circles. I have heard that it's popular in top physics departments.
What does that question mean?
Sorry, typo now fixed. See my response to jsteinhardt below.

My primary interest is determining what the "best" thing to do is, especially via creating a self-improving institution (e.g., an AGI) that can do just that. My philosophical interests stem from that pragmatic desire. I think there are god-like things that interact with humans and I hope that's a good thing, but I really don't know. I think LessWrong has been in Eternal September mode for a while now, so I mostly avoid it. Ask me anything, I might answer.

Why do you believe that there are god-like beings that interact with humans? How confident are you that this is the case?

I believe so for reasons you wouldn't find compelling, because the gods apparently do not want there to be common knowledge of their existence, and thus do not interact with humans in a manner that provides communicable evidence. (Yes, this is exactly what a world without gods would look like to an impartial observer without firsthand incommunicable evidence. This is obviously important but it is also completely obvious so I wish people didn't harp on it so much.) People without firsthand experience live in a world that is ambiguous as to the existence or lack thereof of god-like beings, and any social evidence given to them will neither confirm nor deny their picture of the world, unless they're falling prey to confirmation bias, which of course they often do, especially theists and atheists. I think people without firsthand incommunicable evidence should be duly skeptical but should keep the existence of the supernatural (in the everyday sense of that word, not the metaphysical sense) as a live hypothesis. Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance. (I think LessWrong is es... (read more)

Can you please describe one example of the firsthand evidence you're talking about? Also, I honestly don't know what the everyday sense of supernatural is. I don't think most people who believe in "the supernatural" could give a clear definition of what they mean by the word. Can you give us yours? Thanks.
I realize it's annoying, but I don't think I should do that. I give a definition of "supernatural" here. Of course, it doesn't capture all of what people use the word to mean.
Why not?
Where does the 5% threshold come from?
Psychologically "5%" seems to correspond to the difference between a hypothesis you're willing to consider seriously, albeit briefly, versus a hypothesis that is perhaps worth keeping track of by name but not worth the effort required to seriously consider.
(nods) Fair enough. Do you have any thoughts about why, given that the gods apparently do not want their existence to be common knowledge, they allow selected individuals such as yourself to obtain compelling evidence of their presence?
I don't have good thoughts about that. There may be something about sheep and goats, as a general rule but certainly not a universal law. It is possible that some are more cosmically interesting than others for some reason (perhaps a matter of their circumstances and not their character), but it seems unwise to ever think that about oneself; breaking the fourth wall is always a bold move, and the gods would seem to know their tropes. I wouldn't go that route too far without expectation of a Wrong Genre Savvy incident. Or, y'know, delusionally narcissistic schizophrenia. Ah, the power of the identity of indiscernibles. Anyhow, it is possible such evidence is not so rare, especially among sheep whose beliefs are easily explained away by other plausible causes.
Do you think the available evidence, overall, is so finely balanced that somewhere between 5% and 95% confidence (say) is appropriate? That would be fairly surprising given how much evidence there is out there that's somewhat relevant to the question of gods. Or do you think that, even in the absence of dramatic epiphanies of one's own, we should all be way more than 95% confident of (something kinda like) theism? I think I understand your statement about social epistemic rationality but it seems to me that a better response to the situation where you think there are many many bits of evidence for one position but lots of people hold a contrary one is to estimate your probabilities in the usual way but be aware that this is an area in which either you or many others have gone badly wrong, and therefore be especially watchful for errors in your thinking, surprising new evidence, etc.
No, without epiphanies you probably shouldn't be more than 95% confident, I think; with the institutions we currently have for epistemic communication, and with the polarizing nature of the subject, I don't think most people can be very confident either way. So I would say yes, I think between 5% and 95% would be appropriate, and I don't think I share your intuition that that would be fairly surprising, perhaps because I don't understand it. Take cold fusion, say, and ask a typical college student studying psychology how plausible they think it is that it has been developed or will soon be developed, et cetera. I think they should give an answer between 5% and 95% for most variations on that question. I think the supernatural is in that reference class. You have in mind a better reference class?

I agree the response you propose in your second paragraph is good. I don't remember what I was proposing instead but if it was at odds with what you're proposing then it might not be good, especially if what I recommended requires somewhat complex engineering/politics, which IIRC it did.
What sort of hallucinations are we talking about? I sometimes have hallucinations (auditory and visual) with sleep paralysis attacks. One close friend has vivid hallucinatory experiences (sometimes involving the Hindu gods) even outside of bed. It is low status to talk about your hallucinations so I imagine lots of people might have hallucinations without me knowing about it. I sometimes find it difficult to tell hallucinations from normal experiences, even though my reasoning faculty is intact during sleep paralysis and even though I know perfectly well that these things happen to me. Here are two stories to illustrate.

Recently, my son was ill and sleeping fitfully, frequently waking up me and my wife. After one restless episode late in the night he had finally fallen asleep, snuggling up to my wife. I was trying to fall asleep again, when I heard footsteps outside the room. "My daughter (4 years old) must have gotten out of bed", I thought, "she'll be coming over". But this didn't happen. The footsteps continued and there was a light out in the hall. "Odd, my daughter must have turned on the light for some reason." Then through the door came an infant, floating in the air. V orpnzr greevsvrq ohg sbhaq gung V jnf cnenylmrq naq pbhyq abg zbir be fcrnx. V gevrq gb gbhpu zl jvsr naq pel bhg naq svanyyl znantrq gb rzvg n fhoqhrq fuevrx. Gura gur rkcrevrapr raqrq naq V fnj gung gur yvtugf va gur unyy jrer abg ghearq ba naq urneq ab sbbgfgrcf. "Fghcvq fyrrc cnenylfvf", V gubhtug, naq ebyyrq bire ba zl fvqr.

Here's another somewhat older incident: I was lying in bed beside my wife when I heard movement in our daughter's room. I lay still wondering whether to go fetch her - but then it appeared as if the sounds were coming closer. This was surprising since at that time my daughter didn't have the habit of coming over on her own. But something was unmistakeably coming into the room and as it entered I saw that it was a large humanoid figure with my daughter's face. V er
You are arguing, if I understand you aright, (1) that the gods don't want their existence to be widely known but (2) that encounters with the gods, dramatic enough to demand extraordinary explanations if they aren't real, are commonplace. This seems like a curious combination of claims. Could you say a little about why you don't find their conjunction wildly implausible? (Or, if the real problem is that I've badly misunderstood you, correct my misunderstanding?)
Could a future neuroscience in principle change this, or do you have a stronger notion of incommunicability?
It is possible the beings in question could have predicted such advances and accounted for them. But it seems some sufficiently advanced technology, whether institutional or neurological, could make the evidence "communicable". But perhaps by the time such technologies are available, there will be many more plausible excuses for spooky agents to hide behind. Such as AGIs.
Incommunicable in the anthropic sense of formally losing its evidence-value when transferred between people, in the broader sense of being encoded in memories that can't be regenerated in a trustworthy way, or in the mundane sense of feeling like evidence but lacking a plausible reduction to Bayes? And - do you think you have incommunicable evidence? (I just noticed that your last few comments dance around that without actually saying it.) (I am capable of handling information with Special Properties but only privately and only after a multi-step narrowing down.)
There might be anthropic issues, I've been thinking about that more the last week. The specific question I've been asking is 'What does it mean for me and someone else to live in the same world?'. Is it possible for gods to exist in my world but not in others, in some sense, if their experience is truly ambiguous w.r.t. supernatural phenomena? From an almost postmodern heuristic perspective this seems fine, but 'the map is not the territory'. But do we truly share the same territory, or is more of their decision theoretic significance in worlds that to them look exactly like mine, but aren't mine? Are they partial counterfactual zombies in my world? They can affect me, but am I cut off from really affecting them? I like common sense but I can sort of see how common sense could lead to off-kilter conclusions. Provisionally I just approach day-to-day decisions as if I am as real to others as they are to me. Not doing so is a form of "insanity", abstract social uncleanliness.

The memories can be regenerated in a mostly trustworthy way, as far as human memory goes. (But only because I tried to be careful; I think most people who experience supernatural phenomena are not nearly so careful. But I realize that I am postulating that I have some special hard-to-test epistemic skill, which is always a warning sign. Also I have a few experiences where my memory is not very trustworthy due to having just woken up and things like that.)

The experiences I've had can be analyzed Bayesianly, but when analyzing interactions with the supposed agents involved, a Bayesian game model is more appropriate. But I suspect that it's one of many areas where a Bayesian analysis does not provide more insight than human intuitions for frequencies (which I think are really surprisingly good when not in a context of motivated cognition (I can defend this claim later with heuristics and biases citations, but maybe it's not too controversial)). But it could be done by a sufficiently experienced Bayesian
As best I can tell, a full reduction of "existence" necessarily bottoms out in a mix of mathematical/logical statements about which structures are embedded in each other, and a semi-arbitrary weighting over computations. That weighting can go in two places: in a definition for the word "exist", or in a utility function. If it goes in the definition, then references to the word in the utility function become similarly arbitrary. So the notion of existence is, by necessity, a structural component of utility functions, and different agents' utility functions don't have to share that component.

The most common notion of existence around here is the Born rule (and less-formal notions that are ultimately equivalent). Everything works out in the standard way, including a shared symmetric notion of existence, if (a) you accept that there is a quantum mechanics-like construct with the Born rule, that has you embedded in it, (b) you decide that you don't care about anything which is not that construct, and (c) decide that when branches of the quantum wavefunction stop interacting with each other, your utility is a linear function of a real-valued function run over each of the parts separately. Reject any one of these premises, and many things which are commonly taken as fundamental notions break down. (Bayes does not break down, but you need to be very careful about keeping track of what your measure is over, because several different measures that share the common name "probability" stop lining up with each other.)

But it's possible to regenerate some of this from outside the utility function. (This is good, because I partially reject (b) and totally reject (c)). If you hold a memory which is only ever held by agents that live in a particular kind of universe, then your decisions only affect that kind of universe. If you make an observation that would distinguish between two kinds of universes, then successors in each see different answers, and can go on to optimize those
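Premise (c) above, a utility that is linear over decohered branches weighted by the Born rule, can be made concrete with a toy sketch. This is an editorial illustration, not the commenter's code; the branch amplitudes and per-branch utilities are invented numbers.

```python
# Toy sketch of premise (c): weight non-interacting branches by the Born
# rule and take a linear (weighted-sum) utility over them.
# Amplitudes and utilities below are hypothetical, chosen for illustration.

def born_weights(amplitudes):
    """Squared magnitudes of the amplitudes, normalized to sum to 1."""
    mags = [abs(a) ** 2 for a in amplitudes]
    total = sum(mags)
    return [m / total for m in mags]

def expected_utility(amplitudes, utilities):
    """Linear utility over branches: a Born-weighted sum of per-branch values."""
    return sum(w * u for w, u in zip(born_weights(amplitudes), utilities))

branches = [0.6 + 0.0j, 0.8j]   # hypothetical post-decoherence amplitudes
utils = [10.0, -5.0]            # hypothetical per-branch valuations
print(expected_utility(branches, utils))  # approximately 0.36*10 + 0.64*(-5) = 0.4
```

Rejecting (c), as the commenter says they do, amounts to replacing the weighted sum with some other functional of the per-branch values.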
Can you explain why you believe this? To me it doesn't seem like complex hallucination is that common. I know about 1% of the population is schizophrenic and hallucinates regularly, and I'm sure non-schizophrenics hallucinate occasionally, but it certainly seems to be fairly rare. Can you describe your own experience with these gods? ETA: To clarify, I'm saying that I don't think hallucination is common, and I also don't believe that gods are real. I don't see why there should be any tension between those beliefs.
I agree complex recurrent hallucination in otherwise seemingly psychologically healthy people is rare, which is why the "gods"/psi hypothesis is more compelling to me. For the hallucination hypothesis to hold it would require some kind of species-wide anosognosia or something like it.
I think you misunderstood me.... My position is: Most people don't claim to have seen gods, and gods aren't real. A small percentage of people do have these experiences, but these people are either frauds, hallucinating, or otherwise mistaken. I don't see why you think the situation is either [everyone is hallucinating] or [gods are real]. It seems clear to me that [most people aren't hallucinating] and [gods aren't real]. Are you under the impression that most people are having direct experiences of gods or other supernatural apparitions?
So how do you explain things like this?
Same as with Bigfoot/Loch Ness Monster. People (especially children) are highly suggestible, hallucinations and optical illusions occur, hoaxes occur. People lie to fit in. These are things that are already known to be true.
Well the miracle of the sun was witnessed by 30,000 to 100,000 people.
How many people witnessed this?
It looks to me as if the two of you are talking past each other. I think knb means "it doesn't seem to me like things that would have to be complex hallucination if there were no gods are that common", and is kinda assuming there are in fact no gods; whereas Will means "actual complex hallucinations aren't common" and is kinda assuming that apparent manifestations of gods (or something of the sort) are common. I second knb's request that Will give some description of his own encounters with god(s), but I expect him to be unwilling to do so with much detail. [EDITED to add: And in fact I see he's explicitly declined to do so elsewhere in the thread.] I think hallucination is more common than many people think it is (Oliver Sacks recently wrote a book that I think makes this claim, but I haven't read it), and I am not aware of good evidence that apparent manifestations of gods dramatic enough to be called "outright complex hallucination" are common enough to require a huge fraction of people to be anosognosic if gods aren't real -- Will, if you're reading this, would you care to say more?
Upon further reflection it is very difficult for me to guess what percentage of people experience what evidence and of what nature and intensity. I do not feel comfortable generalizing from the experiences of people in my life, for obvious reasons and some less obvious ones. I believe this doesn't ultimately matter so much for me, personally, because what I've seen implies it is common enough and clear enough to require a perhaps-heavy explanation. But for others trying to guess at more general base rates, I think I don't have much insight to offer.
A while back, you mentioned that people regularly confuse universal priors with coding theory. But minimum message length is considered a restatement of occam's razor, just like solomonoff induction; and MML is pretty coding theory-ish. Which parts of coding theory are dangerous to confuse with the universal prior, and what's the danger?
The difference I was getting at is that when constructing a code you're taking experiences you've already had and then assigning them weight, whereas the universal prior, being a prior, assigns weight to strings without any reference to your experiences. So when people say "the universal prior says that Maxwell's equations are simple and Zeus is complex", what they actually mean is that in their experience mathematical descriptions of natural phenomena have proved more fruitful than descriptions that involve agents; the universal prior has nothing to do with this, and invoking it is dangerous as it encourages double-counting of evidence: "this explanation is more probable because it is simpler, and I know it's simpler because it's more probable". When in fact the relationship between simplicity and probability is tautologous, not mutually reinforcing.

This error really bothers me, because aside from its incorrectness it's using technical mathematics in a surface way, as a blunt weapon in a verbose argument that makes people unfamiliar with the math feel like they're missing something, when in fact there is nothing there that they need to understand. (I've swept the problem of "which prefix do I use?" under the rug because there are no AIT tools to deal with that, and so if you want to talk about the problem of prefixes, you should do so separately from invoking AIT for some everyday hermeneutic problem. Generally if you're invoking AIT for some object-level hermeneutic problem you're Doing It Wrong, as has been explained most clearly by cousin_it.)
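The contrast drawn here, a code built from experience versus a prior fixed in advance, can be sketched concretely. The following toy snippet is an editorial illustration (not from the thread): an ideal code built from observed frequencies assigns each symbol roughly -log2(frequency) bits, so what counts as "simple" under it is just a restatement of which explanations have appeared before. The corpus of explanation-types below is invented.

```python
import math
from collections import Counter

def shannon_code_lengths(corpus):
    """Ideal codeword length in bits for each token, given observed frequencies.

    This is the Shannon information -log2(p) of each token under the
    empirical distribution; a code built this way encodes past experience.
    """
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: -math.log2(c / total) for tok, c in counts.items()}

# A hypothetical history in which mathematical explanations kept working:
experience = ["equation"] * 96 + ["agent"] * 4
lengths = shannon_code_lengths(experience)
print(lengths["equation"])  # short codeword, roughly 0.06 bits
print(lengths["agent"])     # long codeword, roughly 4.64 bits
```

A universal prior, by contrast, is defined before any such corpus exists; the asymmetry between "equation" and "agent" above comes entirely from the data fed in, which is the double-counting hazard the comment describes.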
I thought it meant that if you taboo "Zeus", the string length increases more dramatically than when you taboo "Maxwell's equations".
Except that's not the case. I can make any statement arbitrarily long by continuously forcing you to taboo the words you use.
Sure, but still somehow "my grandma" is more complex than "two plus two", even if the former string has only 10 characters and the latter has 12. So now the question is whether "Zeus" is more like "my grandma" or more like "two plus two".
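The point that raw character count is a poor proxy for complexity can be shown with a crude demonstration (an editorial sketch, using zlib compression as a rough stand-in for description length): a long but highly structured string has a much shorter description than a shorter string with no internal structure.

```python
import zlib

repetitive = b"ab" * 500      # 1000 bytes, but highly structured
jumbled = bytes(range(256))   # only 256 bytes, yet every byte is distinct

# The longer string nonetheless has the much shorter compressed description,
# because the compressor can exploit its structure.
print(len(zlib.compress(repetitive)) < len(zlib.compress(jumbled)))  # True
```

What matters for something like "my grandma" versus "two plus two" is not the string but how much machinery is needed to regenerate the referent, which surface length does not track.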
Attempting to work the dependence of my epistemology on my experience into my epistemology itself creates a cycle in the definitions of types, and wrecks the whole thing. I suspect that reformalizing as a fixpoint thing would fix the problem, but I suspect even more strongly that the point I'm already at would be a unique fixpoint and that I'd be wrecking its elegance for the sake of generalizing to hypothetical agents that I'm not and may never encounter. (Or that all such fixpoints can be encoded as prefixes, which I too feel like sweeping under the rug.)
...So, where in this schema does Minimum Message Length fit? Under AIT, or coding theory? Seems like it'd be coding theory, since it relies on your current coding to describe the encoding for the data you're compressing. But everyone seems to refer to MML as the computable version of Kolmogorov Complexity; and it really does seem fairly equivalent. It seems to me that KC/SI/AIT explicitly presents the choice of UTM as an unsolved problem, while coding theory and MML implicitly assume that you use your current coding; and that that is the part that gets people into trouble when comparing Zeus and Maxwell. Is that it?
I think more or less yes, if I understand it. And more seriously, AIT is in some ways meant not to be practical; the interesting results require setting things up so that technically the work is pushed to the "within a constant" part, which is divorced from praxis. Practical MML intuitions don't carry over into such extreme domains. That said, the same core intuitions inspire them; there are just other intuitions that emerge depending on what context you're working in or mathematizing. But this is still conjecture, 'cuz I personally haven't actually used MML on any project, even if I've read some results.
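For readers unfamiliar with MML, here is a toy two-part-code sketch (an editorial construction, not from the thread): the total message length is the bits needed to state a model plus the bits needed to encode the data given that model, and the candidate minimizing the total is preferred. The model class (a small grid of Bernoulli parameters) and the data are invented for illustration.

```python
import math

def mml_cost(data, p, n_models):
    """Two-part message length for a binary sequence under a Bernoulli model.

    model_bits: cost of naming one of n_models equally-likely candidates.
    data_bits:  -log2 probability of the data given the chosen parameter p.
    """
    model_bits = math.log2(n_models)
    data_bits = sum(-math.log2(p if bit else 1 - p) for bit in data)
    return model_bits + data_bits

data = [1] * 90 + [0] * 10                 # a mostly-ones sequence
candidates = [0.1, 0.3, 0.5, 0.7, 0.9]     # candidate Bernoulli parameters
best = min(candidates, key=lambda p: mml_cost(data, p, len(candidates)))
print(best)  # 0.9 -- the model matching the data's frequencies wins
```

Note how the construction depends on already having the data in hand, which is the "practical, experience-relative" flavor that separates MML praxis from the within-a-constant results of AIT.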
Where are you posting these days?
I mostly don't, but when I do, Twitter. @willdoingthings mostly; it's an uninhibited drunken tweeting account. I also participate on IRC in private channels. But in general I've become a lot more secretive and jaded so I post a lot less.
Any particular reason? I'd certainly be interested in some of the things you have to say. Incidentally, I've also had some experiences myself that could reasonably be interpreted as supernatural and wouldn't mind comparing notes (although mine are more along the lines of having latent psychic powers and not direct encounters with other entities).
What do you mean by the term god?
This is hard to answer. I mean something vague. A god is a seemingly transhumanly intelligent agent. (By this I don't mean something cheap like "the economy" or "evolution", I mean the obvious thing.) As to their origins I have little idea; aliens, simulators, programs simpler than our physical universe according to a universal prior, hypercompetent human conspiracies with seemingly inhuman motivations, whatever, I'm agnostic. For what it's worth (some of) the entity or entities I've interacted with seem to want to be seen as related to or identical with one or more of the gods of popular religions, but I'm not sure. In general it's all quite ambiguous and people are extremely hasty and heavy with their interpretations. Further complicating the issue is that it seems like the gods are willing to go along with and support humans' heavy-handed interpretations and so the interpretations become self-confirming. I say "gods", but for all I know it's just one entity with very diverse effects, like an author of a book.
Note that many folklore traditions posit paranormal entities that are basically capricious and mischievous (though not unfriendly or malevolent in any real sense) and may try to deceive people who interact with them, for their own enjoyment. Some parapsychologists argue that _if_ psi-related phenomena exist, then this is pretty much the best model we have for them. In your view, how likely is it that you may also be interacting with entities of this kind?
It seems likely that something like that is going on, but I wouldn't think of capriciousness and mischievousness as character traits, just descriptions of the observed phenomena that are agnostic regarding the nature of any agency behind them. Those caveats are too vague for me to give an answer more precise than "likely".
I'm curious about your experience with memantine; I vaguely remember you tweeting about it. What was it helping you with?

If you disagree in spirit with much of the Sequences, what would you recommend for new rationalists to start with instead?
Re memantine, it helped with overactive inhibition some, but not all that much, and it made my short term memory worse and spaced me out. Not at all like the alcohol-in-a-pill I was going for, but of course benzos are better for that anyway.

New rationalists... reminds me of New Atheism these days, for a rationalist to be new. They've missed out on x-rationalism's golden days, and the current currents are more hoi polloi and less interesting for, how should I put it, those who are "intelligent" in the 19th-century French sense. I don't really identify as a rationalist, but maybe I can be identified as one. I think perhaps it would mean reading a lot in general, e.g. in history and philosophy, and reading some core LW texts like GEB, while holding back on forming any opinions, and instead just keeping a careful account of who says what and why you or others think they said what they said. I haven't been to university but I would guess they encourage a similar attitude, at least in philosophy undergrad? I hope. Anyway I think just reading a bunch of stuff is undervalued; the most impressive rationalists according to the LW community are generally those who have read a bunch of stuff, they just have a lot of information at hand to draw from. Old books too: Wealth of Nations, Origin of Species; the origins of the modern worldview. Intelligence matters a lot, but reading a lot is equally essential.

Studying Eliezer's Technical Explanation of Technical Explanation in depth is good for Yudkowskology which is important hermeneutical knowledge if you plan on reading through all the Sequences without being overwhelmed (whether attractively or repulsively) by their particular Yudkowskyan perspective. I do think Eliezer's worth reading, by the way, it's just not the core of rationality, it's not a reliable source of epistemic norms, and it has some questionable narratives driving it that some people miss and thereby accept semi-unquestioningly. The subtext shapes the text mor
Crazy people and trolls exist. Some of them are eloquent. So why do you talk about it at all when it just makes you seem crazy to most of us? Are you looking for confirmation or agreement in others' hallucinations? Or perhaps you suspect your kind of experiences are more common than openly expressed? I assume I'd take seriously your crazy experiences if they were mine. Is there anything at all you can say that's of value to someone like me who just hears crazy?
When it comes to epistemic praxis I am not a friend of the mob. I want to minimize my credibility with most of LessWrong and semi-maximize my credibility with the people I consider elite. I'm very satisfied with how successful my strategy has been.

Indeed. I am somewhat proud of the care I've taken in interpreting my experiences. I think that even if people don't think there's anything substantial in my experiences, they might still appreciate and perhaps learn from my prudence. Interpreting the supernatural is extremely difficult and basically everyone quickly goes off the rails. Insofar as there is a rational way to really engage with the contents of the subject I think my approach is, if not rational, at least rational enough to avoid many of the failure modes. But perhaps I am overly proud.
Thanks for answering that as if it were a sincere question (it was). "Maybe this universe has invisible/anthropic/supernatural properties" is a fascinating line of daydreaming that seems a bit time-wasting to me, because I'm not at all confident I'd do anything healthy/useful if I started attempting to experiment. Looking at all the people who are stuck in one conventional religion or another, who (otherwise?) seem every bit as intelligent and emotionally stable as I am, I think, to the extent that you're predisposed to having any mystical experiences, that way is dangerous.

Discussion of this post goes here.

I think this is a really cool post idea. LW has a well-above-average user base, and sharing knowledge and ideas publicly can be a great boon to the community as a whole.

Yes, this is a really nice open thread that seems to be working well.

I have written various things, collected here, including what I think is the second most popular (or at least usually second-mentioned) rationalist fanfiction. I serve dinner to the Illuminati. AMA.

Some time ago you made the public offer to talk to depressed or otherwise seriously lonely people, even though you apparently really dislike phone calls. Did anybody take you up on it? How did it go?
I don't think anyone sought me out on the basis of that offer, or if they did, they chose not to tell me or I forgot the details of how we met. Unrelatedly, I have friends with various loneliness and mental health statuses who I talk to (mostly online).
Do you have a routine as a writer? Do you get writer's block, and if yes, any favorite methods of breaking it? How much do you rewrite your drafts?
I don't have a routine. I could be described as having writer's block right now; I was devoting pretty much all my creative output to Effulgence, which ground to a screeching halt due to coauthor brain problems, and now I am metaphorically upside-down like a particularly unfortunate turtle. I have been trying various things but nothing has produced good results yet (I have written, like, one short story, but no chapters). However, I have every expectation of being able to return to Effulgence full speed ahead when my coauthor can, even if I don't manage to budge my novels between now and then.

I do almost no revising after I've gotten an entire chapter down (though I will sometimes iterate a sentence a bit while it's in progress, and I will rearrange paragraphs if my beta readers suggest it while I'm writing for my test audience). I don't like revision after that; it slows me down and makes me second-guess myself and hate my output faster than I normally start to and leaves me with questionable mental maps of what has and has not happened. I will correct typos and grammatical errors and the like when I am made aware of them.

Elcenia as it currently stands is a complete reboot which I generate without directly consulting the original - I extracted a loose plot outline, massaged it into making somewhat better sense, and haven't opened the old documents since except to remind myself of how to spell things and various assignments of numerical value; I write from the plot outline and memory. In Effulgence I can't even fix typos because of the limitations of the Dreamwidth platform, so that's closer to literally no revision.
That's pretty interesting, thanks. More questions!

Suppose for the sake of the argument that copyright problems do not exist, and you're offered to publish Luminosity as a book. Would you then want to work with editors/copyeditors and change the text substantially according to their suggestions, or are you more like "this is done, feel free to fix typos but otherwise take it or leave it"?

Do you have a day job? A profession? What are they? Do you like them? (obviously feel free to ignore etc.)

I am Omega, and I intend to change humanity in such a way that some authors never really existed, their books are gone from collective memory and never influenced anyone. Because I liked Luminosity, I allow you to name up to 5 authors whom I won't even consider expunging. Who do you name? (don't waste a slot on yourself, you're safe)
I didn't do more than get a copyeditor to look over the text of the Elcenia books before self-publishing them. I would probably go the extra mile if we're talking published published, but my tolerance for Executive Meddling is negligible, so it'd have to be more like pointing things out that I might want to fix so I can fix them than changing things without my participation. And it would have to be more about wording, pruning or adding exposition, etc. than about macroscopic plot or character issues, because I don't know how to touch those in a complete work without doing a whole lot more work than I'm willing to or having things fall apart like wet tissue paper.

My most recent conventional employment was being the administrative manager at MetaMed, but I quit a few months ago, and now I am basically a house spouse, the "spouse" part pending till September. I'd take conventional employment if it dressed up pretty and knocked on my door with a bouquet of flowers (I have informed e.g. Louie that I exist, am unemployed, and like money) but it's not urgent. Irregularly, people will pay me to do things like write commissions (I am pretty bad about delivering in a timely manner though, I have one like half finished...) or make menus. Sometimes I get donations through my websites or somebody buys an Elcenia book.

I think I'd need to know more about how this hypothetical works. Are my personal friends and family safe too even though you've likely never heard of their writing, or do I need to expend slots on all my favorite people who happen to have written fiction (or whatever the "author" threshold is)? Is Stephenie Meyer safe (because you liked Luminosity) or is she in the line of fire and something weird happens to Luminosity if she gets got? Are huge linchpins of influence like Tolkien safe just because they'd have knock-on effects beyond their own works, or are those knock-on effects part of the point?
The idea is that Omega makes the world stay roughly as it is, but the individual beauty and other virtues of the books are lost. The books are replaced by something generic and drab that is still able to generate roughly the same large-scale effect due to Omega's tweaks. And everyone you know personally is exempt. So for example Tolkien may be expunged, and instead someone else wrote some epic fantasy that helped launch a genre and it had something like orcs in it, but it wasn't nearly as powerful and beautiful and everything as Tolkien was. Same for Stephenie Meyer: whatever you liked about Twilight is gone, replaced with some generic vampire love story that inexplicably became incredibly popular, and you're able to base Luminosity on it, and maybe add more of your personal imagination to offset the drabness, so large-scale effects added up to the same in your world.

Basically I'm trying, instead of asking the familiar "your top 5" or "the 5 books you'll take to an uninhabited island", to ask "which 5 books you find it most painful to contemplate being lost to the world as if they never existed, but everything else mostly stayed the same". It's an inherently self-contradictory question, I know, but maybe still worth asking.
Hmm. Taking this question at face value where I am only prioritizing by the individual flavor and character of the books and not their cultural significance, I'm going to say let's keep J.K. Rowling... Tamora Pierce... Sharon Shinn, Laini Taylor, John Scalzi. I was also tempted by Philip Pullman (but I think about 75% of what I'd miss is people putting daemons in arbitrary fanfiction, which it sounds like would get suitably replaced?) and Zenna Henderson (but I think losing her stories would probably be a smaller loss to me than the ones I picked). I did this by looking at my bookshelf which has actual books on it, so if I was supposed to interpret it to include screenwriters or anything the answer is invalid.
My impression of Luminosity, after reading it and before reading Radiance, was that it was essentially depicting the usefulness of luminosity more or less entirely by showing vampire-Bella completely losing her luminosity techniques/attitudes. To what degree did you intend this? Do you see it as accurate? Also what do you think of Syzygy, seven years down the line? (Me(highschool) quite liked it. Me(2014) was very surprised to discover that it was written by someone I encountered again elsewhere.)
I did not intend that interpretation, and have been repeatedly surprised to find people espousing it. There is a reduction in Luminosity's didacticism over the course of the book as I got caught up in the plot, and it's possible it happens to undergo a particularly noticeable drop around when Bella turns, which people are reading this way. However, I didn't intend to show Bella's various errors as being consequences of any abandonment of her interior luminosity, however much less narration I spent on it. She has plenty of other personality flaws and resource shortages to drive her mistakes. Oh man, Syzygy. That started closer to a decade ago, though I guess it did end around seven years ago. I don't hate it enough to break my rule that what goes up, stays up, so when I recovered the files from the unexpected cataclysm that caused the comic's end, up they went. But it's embarrassing, very noticeably amateur, both in the art and the writing. I'm still pleased with a couple of particularly nasty turns of plot, like Kulary's backstory, but they weren't presented to their best effect.
What's the status of Effulgence? I gave up on it soon after it branched out wildly around the Milliways part, and when I checked to see what's going on, there appeared to be no updates in 6 months or so. Anything else you've written recently that you may recommend?
My coauthor for Effulgence is suffering from an inability to can. It is slowly recovering (today we were able to do a not-in-Effulgence-continuity sandbox thread for a little more than thirty comments, and she's been writing an unrelated short story!) and we are continuing to make plans for what we will write when the ability to can is restored. The last new post was made in November 2013, though, so I'm not sure where you're getting "6 months or so". I periodically update curious parties about Effulgence behind-the-scenes goings-on in this TV Tropes forum thread which was originally about Elcenia but is now about my stuff in more generality. I have released two short stories relatively recently, though the latter (AU-fanfiction-of-sorts of Three Worlds Collide) was written back in 2012 and I just sat on it for a while. I have also been writing a series of social justice blog posts for alternate universes which have inspired some entertaining audience participation. I recommend subscribing to my general RSS feed if you are curious about my creative output. I have less than zero idea how far you got into Effulgence when you describe yourself as dropping it "after it branched out wildly around the Milliways part". But if wild branching and Milliways were turnoffs for you I don't think you're gonna like anything after that mysterious part.
What does it mean "to can"? Two uses spring to my mind: to discard material (as in "trashcan"); to declare work done (somehow from "film canister").
It's an internet-dialect neologism. Related to "I can't" without any subsequent verb, evolved into "I have lost the ability to can" etc.
Thanks! If I recall, I really liked the story as a standalone one, up until the Luminosity Bella showed up. Of course, given the name and the nature of the RP, I should have expected it.
Yeah, there are, um, lots of them. You can read some of their stories before they hit the "peal" as self-contained AUs, if you want - just go to the first instance of a new "symbella" in the index (except for the lower-case omega, that's a special case), and read only posts that have no other symbellas. (Some posts have no symbella and these are usually part of the same story as whatever's closest to them, it just means the relevant Bell isn't present in that particular thread.) These will sometimes cut off kind of awkwardly, of course...
Ah, thanks, I'll give it a try. I was confused about where the stories start.
If you don't mind me asking what do you do other than writing? Do you have any plans to make it a career or is it strictly recreational?
I don't currently have a day job, though I have in the past. I suppose you could call me a housefiancée, spousehood pending. I'm not interested in traditional publishing, but I certainly wouldn't object if my mini-fandom exploded and started showering me with money.

I'm an unemployed legally blind mostly white American who may have at one point been good at math and programming, who is just smart enough to get loads of spam from MIT, but not smart enough to avoid putting my foot in my mouth about once a month on LessWrong. I've been talking about blindness-related issues a lot over the past year mostly because I suddenly realized that they were relevant, but my aim is to solve these problems as quickly as possible so I can get back to getting better at things that actually matter. On the off chance that you have questions, feel free to AMA.

How blind are you, in layman terms of what you can/can't see? What's your prognosis?
I'm not-quite completely blind; what little vision I have tends to fluctuate between effectively nonexistent and good enough to notice vague details maybe once or twice a year. I could see better up until I was 14, but my vision was still too poor to get out of using braille and a cane (given thick glasses and enough time, I could possibly have read size 20 font; even with the much larger font used in movie subtitles, I had to pause the video and put my face against the screen to read them). I don't know my official acuity/diagnoses (It's been a few years since I saw an eye doctor), but I appear to have started out with retinal detachment and scarring, and later developed uveitis. The latter seems to be the primary cause for the dramatic decline starting from age 14.
Why is that? No healthcare policy? It seems that you have good reason to frequent an eye-doctor.
Most of my medical everything is handled by my parents, who are unlikely to do anything unless it is brought to their attention (though sometimes they do ask to make sure nothing's quietly going horribly wrong). My vision was awful enough when last I went, and the doctor only aware of a full-on bionic eye as a possible method for improvement, and what little I had left vulnerable enough to damage/severe discomfort from the sorts of things needed to examine my eyes (holding them open and shining a light in, basically) that it's mostly stopped being worth it. I did discover a possible treatment for my specific condition recently. I am unsure whether it would be of much value with my vision as it currently is, but it's something I aim to look into further when I've sorted out enough of this basic life stuff.
Are these problems likely to be correctable/improvable with medicine, but you have no money/insurance to get medical help? Or are they of a kind that basically can't be helped, and that's why you haven't been to a doctor in years? Or is it something else? Do you use a reader program to browse the web and this site? Do you touch-type or dictate your comments? (I realize that my questions are callous; please feel free to ignore if they're too invasive)
The retinal issues are unlikely to be fixable in the immediate future (though the latest developments on that front seem potentially promising). There may be a treatment for the more annoying issue, but I don't know if it's too late/what I should do to learn more, and so I'm waiting until life in general is more favorable to dig into it further. (Which I expect means I'll be putting it off until 2015, since I expect to be fairly occupied during most of 2014.) For using the internet/computers in general, I use NonVisual Desktop Access (NVDA), a free screen reader which only recently attained comparable status to JAWS for Windows, which I'd been using prior to 2011. These work well with plaintext, and have trouble with certain types of controls/labels and images and such (I had to Skype someone a screenshot to get past the CAPTCHA to register here. I was using a trial of a CAPTCHA-solving add-on at the time, but it was unable to locate the CAPTCHA on LessWrong.). Since NVDA is open source, users frequently develop useful add-ons and plugins, such as a CPU usage monitor and the ability to summon a Google translation of copied text with a single keystroke. (It supposedly includes an optical character recognition feature, but I've never figured out how to use it.) I touch-type. I'm not much of a fan of dictation, though I'm not sure why.
1. Why do you say "may have at one point been good at math and programming." Aren't you still good at that? Are opportunities for people like yourself -- blind, but with those aptitudes --, available in today's world, where so much is done in front of a computer screen, and adaptive technologies exist? Or do you think that in a competitive world, blindness puts you hopelessly behind sighted people? 2. Do you think that your level of ambition and drive are lessened by your disability, increased, or does it make no difference? 3. Does the CfAR-style philosophy of instrumental rationalism help you overcome your disability?
1. Issues in my first two years of college interfered with my Math/Physics/Computer Science courses, and I never got back into those. So my skills in each have remained only that which I've used most (for example, I've made some games, but the required qualifications for most programming jobs I've come across exceed what I can do without additional training). I think that, even had I not dropped the ball on those, competing with sighted programmers/scientists/mathematicians would require a decent amount of exceptionalism and/or luck. Mathematical notation is also tricky in terms of accessibility; there exist codes such as Nemeth that make math in braille relatively powerful, but on the computer side of things, graphs and LaTeX take some doing to use, which also makes trying to study anything with math online difficult (I once downloaded a web page and edited its source so I could read the equations). 2. It's hard to say. For roughly four years after my vision went from poor to useless, I think I was still fairly driven and ambitious (I did a lot of writing, half taught myself Japanese and JavaScript, self-published a terrible science fiction novel, learned to use a music composition program whose accessibility was poor, improvised some crude techniques for making simple images, got into and graduated from the state Math and Science school, and was taking plenty of notes on numerous other things I was hoping to do sooner than later). It all went to hell when I got to college, and has gone back and forth since, but I'm not sure if any of this compares favorably/unfavorably to the average person. There may be some contributing factors to the negative aspects that go back to my vision (I can't safely get up and go running, or do the all-important eye-contact thing, as examples), but I don't think the effect in the ambition/motivation area has been majorly significant. 3. I'm not sure what you mean, specifically? My exposure to CFAR consists primarily of LessWrong; I've be
Standard economics question: have you considered accepting lower pay?

I write about causality sometimes.

How significant/relevant is the mathematical work on causality to philosophical work/discussion? If someone was talking about causality in a philosophical setting and had never heard of the relevant math, how badly would/should that reflect on them? Does it make a difference if they've heard of it, but didn't bother to learn the math?
I am not up on my philosophical literature (trying to change this), but I think most analytic philosophers have heard of Pearl et al. by now. Not every analytic philosopher is as mathematically sophisticated as e.g. people at the CMU department. But I think that's ok! I don't think it's a wise social move for LW to beat on philosophers.
Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?) Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches? Does there exist a reasonably neutral high-level summary of the field?

Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)

On some level any empirical science cares, because the empirical sciences all care about cause-effect relationships. In practice, the 'penetration rate' is path-dependent (that is, depends on the history of the field, personalities involved, etc.)

To add to your list, there are people in public health (epidemiology, biostatistics), social science, psychology, political science, economics/econometrics, computational bio/omics that care quite a bit. Very few philosophers (excepting the CMU gang, and a few other places) think about causal inference at the level of detail a statistician would. CS/ML do not care very much (even though Pearl is CS).

Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches?

I think there is as much agreement as there can reasonably be for a concept such as causality (that is, a philosophically laden concept that's fun to argue about). People model it in ... (read more)

Can you point out some cool/insightful applications of broadly Pearlian causality ideas to applied problems in, say, epidemiology or econometrics?
"Pearlian causality" is sort of like "Hawkingian physics." (Not to dismiss the amazing contributions of both Pearl and Hawking to their respective fields). ---------------------------------------- I am not sure what cool or insightful is for you. What seems cool to me is that proper analysis of causality and/or missing data (these two are related) in observational data in epidemiology is now more or less routine. The use of instrumental variables for getting causal effects is also routine in econometrics. The very fact that people think about a causal effect as a formal mathematical thing, and then use proper techniques to get it in applied/data analysis settings seems very neat to me. This is what success of analytic philosophy ought to look like!
What you mention in your last paragraph is roughly what I had in mind when asking for examples. So I take it that IVs are a method inspired by causal graphs (or at least causal maths)? If so you've answered my question.
IVs were first derived by either Sewall Wright or his dad (there is some disagreement on this point). I don't think they formally understood interventions in general back in 1928, but they understood causality very well in the linear model special case. IVs can be used in more general models than linear, and the reason they work in such settings needed formal causal math to work out, yes. IVs recover interventionist causal effects.
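To make the instrumental-variables idea from this exchange concrete, here is a toy simulation (my illustration, not part of the thread — all variable names and numbers are invented): a hidden confounder biases the naive regression of the outcome on the treatment, while the simple IV (Wald) estimator recovers the true causal effect because the instrument influences the treatment but has no direct path to the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)                 # unobserved confounder
z = rng.normal(size=n)                 # instrument: affects x, not y directly
x = z + u + rng.normal(size=n)         # treatment, confounded by u
y = 2.0 * x + u + rng.normal(size=n)   # true causal effect of x on y is 2.0

# Naive OLS slope of y on x is biased upward, since u raises both x and y.
ols = np.cov(x, y)[0, 1] / np.var(x)   # roughly 2.33 in theory (7/3)

# Wald/IV estimator: cov(z, y) / cov(z, x) isolates the causal effect.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # close to 2.0
```

With 100,000 samples the IV estimate lands very near the true coefficient of 2.0, while OLS stays biased no matter how much data you add — which is the sense in which IVs "recover interventionist causal effects" from observational data.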
It's his job.
Nobody gets my jokes...
What caused your interest in the topic? What was the arc of your career leading up to that?
Thanks for your question. I got into AI/ML and graphical models as an undergrad. I thought graphical models were very pretty, but I didn't really understand them back then very well (probably still don't..). Causal inference is the closest we have to "applied philosophy," and that was very interesting to me because I like both philosophy and mathematics (not that I am any good at either!) Also I had an opportunity to study with a preeminent person and took it.
Are you aware of any attempts to assign a causality(-like?) structure to mathematics? There are certainly areas of mathematics where it seems like there is an underlying causality structure (frequently orthogonal or even inverse to the proof structure), but the probability based definition of causality fails when all the probabilities are 0 or 1.
Can you give a simple example of/pointer to what you mean?
I don't know if this is what Nier has in mind, but it reminds me of Cramér's random model for the primes. There is a 100 per cent chance that 758705024863 is prime, but it is very often useful to regard it as the output of a random process. Here's an example of the model in action.
I am aware of "logical uncertainty", etc. However I think uncertainty and causality are orthogonal (some probabilistic models aren't causal, and some causal models, e.g. circuit models, have no uncertainty in them).
Well, in analytic number theory, for example, there are many heuristic arguments that have a causality like flavor; however, the proofs of the statements in question are frequently unrelated to the heuristics. Also, this is a discussion about the causal relationship between a theorem and its proof.
I don't know much about analytic number theory, could you be more specific? I didn't follow the discussion you linked very well, because they say things like "Pearlian causality is not counterfactual", or think that there is any relationship between implication and causation. Neither is true.

Ask me almost anything. I'm very boring, but I have recovered from depression with the help of CBT + pills, am a lurker since back in the OB days and know the orthodoxy here quite well, started to enjoy running (real barefoot if >7 degrees Celsius) after 29 years of no physical activity, am chairman of the local hackerspace (software dev myself, soon looking for a job again), and somehow established the acceptance of a vegan lifestyle in my conservative family (farmers).

What steps did you take to start enjoying running?
This was surprisingly simple: I got myself to want to run, started running, and patted myself on the back every time I did it. The want part was a bit of luck: I always thought I "should" do some sports, for physical and more importantly mental health reasons, and think that being able to do stuff is better than not being able, ceteris paribus. So I was thinking about what kind of activity I might prefer. I like my alone time (so team- or pair-sports are out), I dislike spending money when I expect it to be wasted (like gym memberships, bikes, etc.). And I feel easily embarrassed and ashamed, and like to get myself at least somewhat up to speed on my own. Running fits those side requirements. By chance I got hold of "Born to Run", and even after the first quarter of the book I thought that it would be great if I could just go out on a bad day and spend an hour free of shit, or how it would be great that I could just reach some location a few kilometers away without any prep or machines or services. I then decided that I would start running, and that my primary goal should be that I like it and am able to do it even in old age, should I get there. With the '*' that I give myself an easy way out in case of physical pain or unexpected hatred of the activity, but not for any weasel reasons. I didn't start running for another one and a half years, because Schweinehund, subtype Innerer (the German "inner pig-dog" one has to overcome to exercise). When my mood was getting slightly better (I was again able to do productive work), I started, with the "habit formation" mind-set. Also didn't tell anyone in the beginning. I think it helped that I already had some knowledge of how to train and run correctly, which especially in the beginning meant that I always felt like I could run further than I was "allowed" to. And for good feedback: however it went, when I finished my training, I "said" to myself: I did good. I feel good. I feel better than before I started. I wrote every single run down on RunKeeper and Fitocracy, and always
That's not boring, it is impressive and admirable. Well done.
What's your motivation for veganism? What do you enjoy most in software development, and why are you going to be looking for a job again soon? What's your dream SW dev job?
Moral reasons. All else equal, I think that inflicting pain or death is bad, and that the ability to feel pain and the desire not to die are very widespread. I also think that the intensity of pain in simpler animals is still very strong (I don't think humans evolved large brains because the pain was otherwise not strong enough). I also think that our ability to manage pain slightly reduces the impact of our having the ability to suffer more strongly and with more variety. But I give, for sanity-check reasons, priority to the desires of "more complex" animals, like humans. Due to our technical ability we can now produce supplements for micronutrients which are missing or insufficiently available in plants[1], and so I see health concerns resolved. So all the pain and death that I would inflict would only be there for greater enjoyment of food. Although I love the taste of meat and animal products, the comparative enjoyment is not big enough that I would kill for it. That I can enjoy plant-based foods is partly based upon my not being afraid of using my kitchen, and having a good vegan/vegetarian self-service restaurant 100m from my apartment. And then there are the environmental reasons, and the antibiotic use, etc. etc. They count, and might even be sufficient on their own, but I'll only investigate those in case my other concerns/reasons were invalidated. [1] There is vegan vitamin B12, vitamin D3, EPA/DHA (omega-3), and creatine powder.
I cannot really answer what I enjoy most; I like almost every job that comes up, with only a few exceptions. I hate repeating myself, and I hate having to do things in a ... ... ... way against my better judgement. I prefer to spend more time (as in effort and calendar time) doing the architecture/design/coding parts, but I also prefer doing other stuff once in a while to being purely a lonely coder. I will give my notice in a few hours, and then search for a new job. I will have two months for that, though, and maybe I'll take some time off before starting at a new company. I'm ending this job because not one of money, project, or team is good enough to make me happy, and the job market for software developers allows for searching for improved conditions. My dream SW job would involve writing open source software which somehow tangibly improves the lives of some people (think better medical DAQ and analysis instead of the newest photo-sharing app), working with a team where competence and respect are widespread, as is friendliness, and pay which is not worse than what I got when I was still failing to drop out of college. Sadly, I do not think such a job exists, especially not for people like me (who do not have the necessary skills for anything fancy).
I'm also working on depression with CBT and pills. I find I function well when I have structure and external obligations but revert to inaction when left to my own devices, any similar experience? Any general advice?
Similar experience, and not much real advice. I mostly solve it by setting up obligations myself. However, I resort to this only for stuff that is important. Examples: I've announced and discussed doing some boring accounting and controlling for the hackerspace, and people now expect some specific results. On another note, instead of procrastinating about finding a better workplace, I gave my notice. Once I was out of the job, I simply had to start looking. Finally, I do not need to be perfect. More people than I expected have the odd off day or two during the workweek, and knowing this I have reset my expectations regarding my own performance to something more humane.
Could you go into a little more detail by what you mean by recovered from depression and what aspects of CBT assisted the most?
I'm sorry to have not answered for so long; I had some busy weeks. Depression: I'd suffered many months from a depression bad enough that I was not able to work the hours of a part-time job, let alone achieve any acceptable performance. I was using alcohol as a replacement for other diluted variants of H2O. This was also not the first time I'd been depressed, and needless to say, such things can fuck up your life, and are generally not very desirable. I recovered as well as I think possible: I feel well. I can work. I enjoy, and can concentrate on, stuff that piques my interest. I feel secure enough to make plans spanning more than two days, and expect to be somewhere between OK and very good for the foreseeable future. By most measures, I am now better functioning, healthier (physically and emotionally) than the average person. The sword of Damocles being that the next episode might break through my defenses so fast that I break down. Again. If I remember correctly, there is a four-in-five chance there will be one. I do not worry about that, though. Therapy: The most useful part of my therapy was the judicious choice of some small things to work on, and the frequent feedback from an outsider. Also, never underestimate by how much a therapist approaches problems differently than a damaged brain does. On my own I would either not do anything, and hate myself for it, or try something, and hate myself for failing (again), or do something, and hate myself for spending energy on such a worthless, embarrassingly tiny task. It was primarily option one. It took some months, but through repeated experience I came to accept slight progress as progress nevertheless, and many of the tasks I was given integrate very nicely into everyday activities now. I learned about saying "Well done!" to myself. I also learned about building habits, not as in 'scientist', but as applied to my own life. I also made it through some setbacks, faster and better than in past years, so there

I didn't think I had anything particularly interesting to offer, but then it occurred to me that I have a relatively rare medical disorder: my body doesn't produce any testosterone naturally, so I have to have it administered by injection. As a result I went through puberty over the age range of ~16-19 years old. If you're curious feel free to AMA.

(also, bonus topic that just came to mind: every year I write/direct a Christmas play featuring all of my cousins, which is performed for the rest of the family on Christmas Eve. It's been going on for over 20 years and now has its own mythology, complete with anti-Santa. It gets more elaborate every year and now features filmed scenes, with multi-day shoots. This year the villain won, Christmas was cancelled for seven years and Santa became a bartender (I have a weird family). It's...kind of awesome? If you're looking for a fun holiday tradition to start AMA)

Cool. Well, for starters, what are your thoughts on the experience? Presumably you were better-equipped to analyse the change than most.
Interesting, I had a very similar puberty, but was never diagnosed with a disorder. What were the symptoms that led to a diagnosis?
What's your favorite amount of testosterone? Why? Would the optimum shift according to purpose?
Well, I've been on the same dose for the past 8 years (set by my original endocrinologist and carried forward by all doctors since, who've basically shrugged and said "ehh, worked so far"). Last time I had my testosterone levels checked they were on the high end of normal, which suits me fine. I have a fairly high sex drive, which you might expect, but very low aggression, which you might not - although I've always been a very passive and non-aggressive person. So I guess to answer your question, I haven't really explored different amounts. I don't particularly plan to in the future, if for no other reason than I've been on my current dose long enough to self-identify with the range of behaviours it produces.
Other than wanting more sex, did you notice your mind changing? I also wonder if late puberty extends the pre-adult skill learning window (adults supposedly can't learn as much or as well).

Biology/genetics graduate student here, studying the interaction of biological oscillations with each other in yeast, quite familiar with genetic engineering due to practical experience and familiar with molecular biology in general. Fire away.

What's the current thinking on how to prevent physiological decay over time (id est ageing)? Figure a way to recover the bits of DNA cleaved in mitosis?

Shortening telomeres are a red herring. You need multiple generations of a mammal not having telomerase before you get premature ageing, and all the research you've heard about where they 'reversed ageing' with telomerase was putting it back into animals that had been engineered to lack it for generations. Plus lack of telomerase in most of your somatic cells is one of your big anti-cancer defenses.

Much more of a problem is things like nuclear pores never being replaced in post-mitotic cells (they're only replaced during mitosis) and slowly oxidizing and becoming leaky, extracellular matrix proteins having a finite lifetime, and all kinds of metabolic dysregulation and protein metabolism issues.

This isn't exactly my field, but there's a few interesting actual lines of research I've seen. One is an apparent reduction in protein-folding chaperone activity with age in many animals from C. elegans to humans [people LOVE C. elegans for ageing studies because they can enter very long-lived quiescent phases in their life cycle, and there are mutations with very different lifespans]. People still aren't quite sure what that means or where it comes from.

There's lots of interest in calor... (read more)

Intriguing, and thank you for the detailed reply. May I respond in the future should I have further queries?
Sure, why not. I might be able (in a less busy time) to dig up that protein chaperone research too, somebody came to the university I'm at to give a talk on it a month or two ago.
How stable is gene-to-protein translation in a relatively identical medium? I.e. if we abstract away all the issues with RNA and somehow neutralize any interfering products from elsewhere, will a gene sequence always produce the same protein, and always produce it, whenever encountered at a specific place? Or is there something deeper where changes to the logic in some other, unrelated part of the DNA could directly affect the way this gene is expressed (i.e. not through their protein interfering with this one)? Or maybe I don't understand enough to even formulate the right question here. Or perhaps this subject simply hasn't been researched and analyzed enough to give an answer to the above yet? If the answer is simple, are there any known ratios and reliability rates? There's no particular hidden question; I'm not asking about designer babies or gengineered foodstuffs or anything like that. I'm academically curious about the fundamentals of DNA and genetic expression (and any comparison between this and programming, which I understand better, would be very nice), but hopelessly out of my depth and under-informed, to the point where I can't even understand research papers or the ones they cite or the ones that those cite, and the only things I understand properly are by-order-of-historical-discovery-style textbooks (like traditional physics textbooks) that teach things that were obsolete long before my parents were born.

The dreaded answer: 'Well, it depends..."

The genetic code - the relationship between base triplets in the reading frame of a messenger RNA and amino acids that come out of the ribosome that RNA gets threaded through – is at least as ancient as the most recent common ancestor of all life and is almost universal. There are living systems that use slightly different codons though – animal and fungal mitochondria, for example, have a varied lot of substitutions, and ciliate microbes have one substitution as well. If you were to move things back and forth between those systems, you would need to change things or else there would be problems.

If you avoid or compensate for those weird systems, you can move reading frames wherever you want and they will produce the same primary protein sequence. The interesting part is getting that sequence to be made and making sure it works in its new context.
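Since the asker wanted a programming comparison: the code behaves like a fixed codon-to-amino-acid lookup table, and the variant systems behave like the same table with a few entries overridden. Here is a minimal Python sketch of that idea (illustrative only: the table is truncated to a handful of codons, the function names are mine, and the overrides shown are the vertebrate mitochondrial substitutions from NCBI translation table 2 – real translation involves far more machinery):

```python
# The (nearly) universal code: a fixed codon -> amino acid lookup.
# Only a handful of the 64 codons are shown here for brevity.
STANDARD = {
    "AUG": "Met", "UGG": "Trp", "UUU": "Phe", "UUC": "Phe",
    "GGA": "Gly", "AGA": "Arg", "AUA": "Ile",
    "UGA": "STOP", "UAA": "STOP", "UAG": "STOP",
}

# Vertebrate mitochondria override a few entries (NCBI translation table 2):
MITO_OVERRIDES = {"UGA": "Trp", "AUA": "Met", "AGA": "STOP", "AGG": "STOP"}

def translate(mrna, overrides=None):
    """Read a frame in triplets, stopping at the first stop codon."""
    table = dict(STANDARD, **(overrides or {}))
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = table[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

frame = "AUGUGAUUU"
print(translate(frame))                  # ['Met'] (UGA is a stop in the standard code)
print(translate(frame, MITO_OVERRIDES))  # ['Met', 'Trp', 'Phe'] (UGA reads as Trp)
```

The same reading frame yields a different protein under the mitochondrial table, which is the "you would need to change things or else there would be problems" point above.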

At the protein level, some proteins require the proper context or cofactors or small molecules to fold properly. For example, a protein that depends on disulfide bonds to hold itself in the correct shape will never fold properly if it is expressed inside a bacterium or in the cytosol of a ... (read more)

That was an awesome breakdown of things, thank you! I've learned way more from this than from all my previous reading, without even including the data about what I didn't know I don't know and other meta.
Any time. Feel free to message with other questions too.
Just for fun, here's a couple of good-enough animations of various eukaryotic systems. Shows nothing of the constant jiggering back and forth of the molecules and makes it look far too directed, but it gives an idea of many of the things going on. https://www.youtube.com/watch?v=yqESR7E4b_8

I'm a programmer at Google in Boston doing earning to give, I blog about all sorts of things, and I play mandolin in a dance band. Ask me anything.

1. What are you working on at google?
2. How much do you earn?
3. How much do you give, and to where?

What are you working on at google?

ngx_pagespeed and mod_pagespeed. They are open source modules for nginx and apache that rewrite web pages on the fly to make them load faster.

How much do you earn?

$195k/year, all things considered. (That's my total compensation over the last 19 months, annualized. Full details: http://www.jefftk.com/money)

How much do you give, and to where?

Last year Julia and I gave a total of $98,950 to GiveWell's top charities and the Centre for Effective Altruism. (Full details: http://www.jefftk.com/donations)

Did you ever get down to trying fumaric acid? How does it compare to citric and malic acids?
I've added an update to that post: http://www.jefftk.com/p/citric-acid
Adding citric acid to overly sweet jam is indeed wonderful.
I once had a one-pound bag of Sour Skittles, and after eating all of them, consumed the entirety of the white powder left over in the bag at once. Simply thinking about that experience is sufficient to produce a huge burst of saliva.
That powder is mostly citric acid mixed with sugar. Mmm.
Thanks! Will not order then.
If you're ever in Boston I'm happy to give you some to play with.
Uncertain how soon I will be able take you up on this, but thanks!

I like the idea.

Here we go, things that might be interesting to people to ask about:

  • born in Kharkov, Ukraine, 1975, Jewish mother, Russian father

  • went to a great physics/math school there (for one year before moving to US), was rather average for that school but loved it. Scored 9th in the city's math contest for my age group largely due to getting lucky with geometry problems - I used to have a knack for them

  • moved to US

  • ended up in a religious high school in Seattle because I was used to having lots of Jewish friends from the math school

  • Became an

... (read more)
How did your family handle your deconversion? Do you continue with the religious Jewish style of everyday life?

Do your kids speak Russian at all/fluently? If not, are you at all unhappy about that? What about Hebrew?

If you're comfortable discussing the HFA kid: at what age was he diagnosed? What kind of therapy did you consider/reject/apply? What are the most visible differences from the neurotypical norm now?
Hi Anatoly, Initially it was a shock to my wife, but I took things very slowly as far as dropping practices. This helped a lot and basically I do whatever I want now (3.5 years later). Also transferred my kids to a good public school out of yeshiva. My wife remains nominally religious; it might take another 10 years :) My kids don't speak Russian - my wife is American-born. I prefer English myself, so I'm not "unhappy" about them not speaking Russian in particular, although I'd prefer them to be bilingual in general. They read a bit of Hebrew. I'm happy to discuss my HFA kid via PM.
So glad to hear you got your kids out of yeshiva. Way to go! Did you meet your wife via shidduch or more traditionally? If you ever did shidduch: I'm curious if in the orthodox circles in the US a Baal Teshuva faces a tougher challenge in shidduch than someone who grew up in a frum family. This is very much the case in Israel. Here I've heard tales of severe discrimination and essentially second-class status. What's the attitude in orthodox circles towards Conservative/Reform Jews? (not the official one, but the "on the street" sort of thing, if it exists...). Is there any dialogue between the branches at all? (As you probably know, Conservative/Reform barely exist in Israel).
Met my wife through a Shidduch, though the Shadchan was my friend and both of us were BTs, so it wasn't quite Fiddler on the Roof. The BT thing made my transition out easier; now my in-laws love me even more :). I attended a modern and strangely rationalist Yeshiva - they really attempted to reconcile Torah with modern science ala Maimonides. I just concluded you can't pull that off in the end. The attitude to conservatives there was "well, they're wrong, but let's not make this personal", mostly treating them as "tinok shenishba". The guy who started it was mostly a nice guy, and he used most of the allowed vitriol to attack the stupidity and superstition of the right. I can't speak for other yeshivot or sects from personal experience, but I imagine this was somewhat unusual. Funny - my biological father's last name was Vorobyev. I guess that makes us cousins :-p
Yaakov T:
Is your wife still teaching your kids religion? How do you work out conflicts with your wife over religious issues (I assume she insists on a kosher kitchen, wants the kids to learn Jewish values etc)
Speaking as a nonexpert, I'm curious what similarities, parallels, and overlap you see between these two fields.
Modern NLP (Natural Language Processing) uses statistical methods quite a bit - http://nlp.stanford.edu/fsnlp/

Ask me anything. Like Vulture, I reserve the right to not answer.

Is your button business really functioning, do you get a nontrivial number of orders? What do your buttons look like and why isn't there a single picture of one on your website?
It's still functioning to some extent-- I'll be at Arisia next weekend. As far as I can tell, I'm neglecting the website because of depression and inertia.
Images of buttons

I understand ancient Greek philosophy really well. In case that has come up. I'm a PhD student in philosophy, and I'd be happy to talk about that as well.

What do you think of Epicurus? What do you think of Epicurean ethics?
Do you have a sense of how the proportion of philosophy varied with place and time, both the proportion written and the proportion surviving? My impression is that there was a lot more philosophy in Athens than in Alexandria.
I'm not sure I entirely understand the question. I'll try to give a history in three stages.

1) Roughly, the earliest stages of philosophy were mathematics, and attempts at reductive, systematic accounts of the natural world. This was going on pretty broadly, and only by virtue of some surviving doxographers do we have the impression that Greece was at the forefront of this practice (I'm thinking of the pre-Socratic Greek philosophers, like Thales and Anaxagoras and Pythagoras). It was everywhere, and the Greeks weren't particularly good at it. This got started with the Babylonians (very little survives), and when the Assyrian empire conquered Babylon (only to be culturally subjugated to it), they spread this practice throughout the Mediterranean and Near East. Genesis 1 is a good example of a text along these lines.

2) After the collapse of the Assyrians, locals on the frontiers of the former empire (like Greece and Israel) reasserted some intellectual control, often in the form of skeptical criticisms or radically new methodologies (like Parmenides' very important arguments against the possibility of change, or the Pythagorean claim that everything is number). Socrates engaged in a version of this by eschewing questions of the cosmos and focusing on ethics and politics as independent topics. Then came Plato, and Aristotle, who between them got the western intellectual tradition going. I won't go into how, for brevity's sake.

3) After Plato and Aristotle, a flurry of philosophical activity overwhelmed the Mediterranean (including and especially in Alexandria), largely because of the conquests of Alexander and the active spread of Greek culture (a rehash of the thing with the Assyrians). This period is a lot like ours now: widespread interest in science, mathematics, ethics, political theory, etc. Many, many people were devoted to these things, and they produced more work in a given year during this period than everything that had come before combined. But as a result o
Before I expand on my question, let me ask what I really should have asked before: is there a place I can look up what survives, with a rough classification; or better, what is believed to have existed? You seem to include all non-fiction in philosophy. Fine by me, but I just want to make it explicit. What I meant by proportion was the balance between fiction and non-fiction. I don't think I've heard of any Hellenistic fiction. Was it rarer than classical fiction? Was it less often preserved? Again because it was derivative?  But maybe we should distinguish science from philosophy. My understanding is that Hellenistic science was an awful lot better than classical science. Hipparchus was not lost because he was derivative of Aristotle, but, apparently, because Ptolemy was judged to supersede him, or at least be an adequate summary.
Ancient Greek novels
Well, with respect to mathematics, at least one difference between the Greeks and everybody else is that the Greeks provided proofs of the non-obvious results.
Yes, though that really got started with Euclid, who post-dates Aristotle. It's with Plato and Aristotle that the Greeks really set themselves apart. I don't think we'd be reading any of the rest of it if it weren't for them.
Euclid is merely the first whose work has survived to the modern day. If tradition is to be believed, Thales and Pythagoras provided proofs of non-intuitive results from intuitive ones. Furthermore, Hippocrates of Chios wrote a systematic treatment starting with axioms. All three predated Plato.
That's a good point about Hippocrates, I'd forgotten about him. Do you have a source handy on Thales and Pythagoras? I don't doubt it, it's just a gap I should fill. So far as I remember, a proof that the square root of two is irrational came out of the Pythagorean school, but that's all I can think of. I hadn't heard anything like that about Thales.
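Since the irrationality of the square root of two keeps coming up as the paradigm "proof of a non-intuitive result", here is the standard modern reconstruction of that Pythagorean argument (a sketch in today's notation, not a claim about how the ancient version was actually worded):

```latex
\text{Assume } \sqrt{2} = \tfrac{p}{q} \text{ with } p, q \text{ coprime integers.} \\
p^2 = 2q^2 \;\Rightarrow\; p^2 \text{ is even} \;\Rightarrow\; p \text{ is even, say } p = 2k. \\
(2k)^2 = 2q^2 \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; q \text{ is even as well.} \\
\text{Both even contradicts coprimality, so } \sqrt{2} \text{ is irrational.}
```

A reductio like this, starting from agreed premises and forcing a contradiction, is exactly the kind of argument the Babylonian tradition never seems to have produced.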
I linked to the relevant Wikipedia articles in my comment.
Ah, but note the 'history' section of the Thales article. It rather supports my picture, if it supports anything at all.
Why? If you mean that Thales learned the result from the Babylonians, the point is that he appears to have been the first to bother proving it.
Do you feel overworked and desperate as a PhD student, or is it basically fun? Have you published any articles yet, or are you planning to? What are your career plans?
I feel overworked, desperate, and very happy.

The desperation: This is a very hard field to work in, psychologically, because there's no reliable process for producing valuable work (this might be true generally, but I get the sense that in the sciences it's easier to get moving in a worthwhile direction). It's not rare that I doubt that anything I'm writing is valuable work. Since I'm at the (early) dissertation stage, these kinds of big picture worries play an important daily role.

The overwork: This is exacerbated by the fact that I have a family. I have much more to do than I can do, and I often have to cut something important. I grade papers on a 3 min per page clock, and that almost feels unethical. I just recently got a new dissertation advisor who wants to see work every two weeks.

The happy: I have a family! It makes this whole thing much, much easier. Most of my problem with being a grad student in the before time was terrible loneliness. Some people do well under those conditions, but I didn't. Also, I do philosophy, which is like happiness distilled. When everyone is uploaded, and science is complete, and a billion years or so have gotten all the problems and needs and video games and recreational space travel out of our system, we'll all settle into that activity that makes life most worth living: talking about the most serious things in the most serious way with our friends. That's philosophy, and I'm very happy to be able to do it even if I don't get a job out of it.

I haven't published anything, but someone recently footnoted me in an important journal. Small victories. I have a paper I'd like to publish, but it's a back-burner project.

As to my career, I will take literally anything they can give me, so long as I can be around my family (my wife is a philosopher too, so we need to both get jobs somewhere close). Odds are long on this, so my work has to be good.
I think you're right that philosophy is particularly difficult in this respect. In many fields you can always go out, gather some data and use relatively standard methodologies to analyze your data and produce publishable work from it. This is certainly true in linguistics (go out and record some conversations or whatever) and philology (there are always more texts to edit, more stemmas to draw etc.). I get the impression that this is also more or less possible in sociology, psychology, biology and many other fields. But for pure philosophy, you can't do much in the way of gathering novel data.
Interestingly, my field, mathematics, is similar to philosophy, probably for the same reason.

If anyone's interested (ha!), then sure, go ahead, ask me anything. (Of course I reserve the right not to answer if I think it would compromise my real-world identity, etc.)

N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.

For what it's worth I posted this with my main account and not with a sockpuppet precisely to ensure the exclusion of Eliezer.
Why are you hiding your real identity? Don't you fear that in a few years programs, available to the general public, will be able to match writing patterns and identify you?
I see it more as introducing a trivial inconvenience which keeps people I know in real life generally away from my (often frank) online postings. In some sense it's just psychological, since by nature I am a very reticent person and it makes me feel like I can jot out opinions and get feedback without having to agonize over it. (That's also why I'm not necessarily comfortable directly listing out personal details which could probably be inferred/collected from what I write.)
FWIW, this is the same as my rationale. It is theoretically possible to trace Alsadius back to me-the-human, since I'm sure I've given enough identifying details to narrow down the pool of candidates to one given perfect information, but it is sufficiently difficult that I doubt anyone will actually bother.
As someone who feels the same way, forestalling that possibility/making it take effort to identify me is somewhat worth it. And there's a substantial possibility that it won't take that long from development of programs-which-can-recognize to development of programs-that-can-hide.

Feel free to ask me (almost) anything. I'm not very interesting, but here are some possible conversation starters.

  1. I'm a licensed substance abuse counselor and a small business owner (I can't give away too many specifics about the business without making my identity easy to find, sorry about this.)
  2. I'm a transhumanist, but mostly pessimistic about the future.
  3. I support Seasteading-like movements (although I have several practical issues with the Thiel/Friedman Seasteading Institute).
  4. I'm an ex-liberal and ex-libertarian. I was involved in the anti-war mo
... (read more)
Maybe you can give some common misconceptions about how people recover from / don't recover from their addictions? That's the sort of topic you tend to hear a lot of noise about which makes it tough to tell the good information from the bad. Do you have any thoughts on wireheading? Have you tried any 19th/20th century reactionary authors? Everyone should read Nietzsche anyway, and his work is really interesting if a little dense. His conception of Master/slave morality and nihilism is a much more coherent explanation for how history has turned out than the Cathedral, not to mention that the superman (I always translate it as posthuman in my head) as beyond good and evil is interesting from a transhumanist perspective.
I'm not sure if these are misconceptions, but here are some general thoughts on recovery:

1. Neural genetics probably matters a lot. I don't know what to do with this, but I think neuroscience and genetics will produce huge breakthroughs in treatment of addiction in the next 20 years. People like me will probably be on the sidelines for this big change.
2. People who feel coerced into entering counseling will almost certainly relapse, and they'll relapse faster and harder compared to people who enter willingly. However...
3. ...this doesn't make coercion totally pointless--counselors can plant the seeds of a sincere recovery attempt, and give clients the mental tools to recognize their patterns.
4. People who willingly enter counseling still usually relapse, multiple times. The people who keep coming back after a relapse stand a much better chance of getting to a high level of functioning. People who reenter therapy every time they relapse will usually succeed eventually. (I realize this is almost a tautology.)
5. Clients with other diagnosed disorders are much less likely to fully recover.

Wireheading is somewhat fuzzy as a term. The extreme form (being converted into "Orgasmium") seems like it would be unappealing to practically everyone who isn't suicidally depressed (and even for them it would presumably not be the best option in a transhuman utopia in which wireheading is possible). I think a modest version of wireheading (changing a person's brain to raise their happiness set point) will be necessary if we want to bring everyone up to an acceptable level of happiness.

I've read a lot of excerpts and quotes, but not many full books. I read a large part of one of Carlyle's books and one late 19th Century travelogue of the United States which Moldbug approvingly linked to. (I've read a fair amount of Nietzsche's work, but I think calling him a reactionary is a bit like calling the Marquis de Sade a "libertarian.")
The one concept from Nietzsche I see everywhere around me in the world is ressentiment. I think much of the master-slave morality stuff was too specific and now feels dated 130 years later, but ressentiment is the important core that's still true and going to stay with us for a while; it's like a powerful drug that won't let humanity go. Ideological convictions and interactions, myths and movements, all tied up with ressentiment or even entirely based on it. And you're right, I would have everyone read Nietzsche - not for practical advice or predictions, but to be able, hopefully, to understand and detect this illness in others and especially oneself.
It's funny to me that you would say that, because the way I read it was mainly that slave morality is built on resentment whereas master morality was built on self-improvement. The impulse to flee suffering or to inflict it (even on oneself) is the difference between the lamb and the eagle, and thus the common and the aristocratic virtues. I wouldn't have thought to separate the two ideas. But again, one of the reasons why he ought to be read more; two people reading it come away with five different opinions on it.
Why are you pessimistic about the future? What are your practical issues about the Seasteading Institute? My major issue is that even if everything else works, governments are unlikely to tolerate real challenges to their authority. What political theories, if any, do you find plausible?
I worry about a regression to the historical mean (Malthusian conditions, many people starving at the margins) and existential risk. I think extinction or return to Malthusian conditions (including Robin Hanson's hardscrabble emulation future) are the default result, and I'm pessimistic about the potential of groups like MIRI.

As I see it, the main problem with SI is their over-commitment to small-size seastead designs because of their commitment to the principle of "dynamic geography." The cost of small-seastead designs (in complexity, coordination problems, additional infrastructure) will be huge. I don't think dynamic geography is what makes seasteading valuable as a concept. The ability to create new country projects by itself is the most important aspect. I think large seastead designs (or even land-building) would be more cost-effective and a better overall direction.

I've always thought the risk from existing governments isn't that big. I don't think governments will consider seasteading to be a challenge until/unless governments are losing significant revenues from people defecting to seasteads. By default, governments don't seem to care very much about things that take place outside of their borders. Governments aren't very agent-y about considering things that are good for the long term interests of the government.

Seasteads would likely cost existing governments mainly by attracting revenue-producing citizens away from them and into seasteads, and it will take a long time before that becomes a noticeable problem. Most people who move to seasteads will still retain the citizenship of their home country (at least in the beginning), and for the US that means you must keep paying some taxes. Other than the US, there aren't a lot of countries that have the ability to shut down a sea colony in blue water. By the time the loss of revenue becomes institutionally noticeable, the seasteads are likely to be too big to easily shut down (i.e. it would requir
What are warning signs someone should look out for (in themselves) in avoiding addiction?
My take on drug abuse is that it isn't primarily the drugs themselves that are the problem but the user. That is to say, the drugs have powerful and harmful effects, but the buck ultimately stops with the user who chooses to imbibe them. As physically addictive as some drugs can be, not everyone will: A) be addicted if they try it once, and B) actually want to use the drug to begin with. It's the people who are depressed, self-harming, etc., who have drug problems.

I think my point can be easily confused, so I'll give an analogy: a magnetic sea mine is terribly destructive and can blow me to pieces (swap for drugs), but being a human of flesh and blood (swap for healthy life and psychology), there will be no magnetic attraction and we won't be drawn towards each other. On the other hand, if I were a steel ship (depressed, etc.), the mine would be drawn to me and devastation would be the result.

To recap again in one sentence: the mainstream point of view seems to be that drugs are like a virus which can affect anyone and are the problem in themselves, whereas I see the users as the 'problem' and the drugs as one (of many) destructive outcomes of this.

My question is basically: do you agree with the above?

Some LW-folks have in the past asked me questions about my stroke and recovery when it came up, and seemed interested in my answers, so it might be useful to offer to answer such questions here. Have at it! (You can ask me about other things if you want, too.)

I'm a 24-year-old guy looking for a job and have a great interest in science and game design. I read a lot of LW but I rarely feel comfortable posting. I wished there was a LW meetup group in Belgium and when nobody seemed to want to take the initiative I set one up my self. I didn't expect anyone to show, but now, two years later it's still going. Ask me anything you want, but I reserve the right not to answer.

How hard did you find it to be to organize/run a meetup? How did that compare to what you expected?
How hard it is depends on what kind of meetup you're running; in my case it's very easy. The Brussels group is more of a social gathering. We start off with a topic for the day but go on wild tangents/play board games and generally just have fun. The only things I ever needed to do as an organizer were: pick a topic for the meetup, post the meetup on the site, arrive on time, make new members feel welcome and manage the mailing list.

When I started out I honestly didn't have any expectations of how hard it would be; I had no idea how things would turn out and had decided to just run with whatever happened. Once the meetup had a core group of regulars, some of them offered to help and I could delegate the stuff I'm not very good at (like the meetup posts on LW and coming up with topics). These days the only things I feel I have to do are put in an extra effort to involve new members and keep the atmosphere friendly (which, in two years of meetups, has only once been a problem; LW'ers are generally great people), and those are things I would do anyway.

I know there are other meetups where the organizer has more responsibility. For example, if you have a system where every month another person gives a short presentation, you have to manage that as well. For larger groups (Brussels rarely has more than 4 people) an official moderator type person might be handy to make sure quieter people get a chance to speak up. There is no one "right" way to run a meetup: see why people enjoy coming to yours and try to make that part as awesome as you can. Just keep an open mind about trying new things every now and then.

In short, how hard it is to run a meetup depends on the type (social, exercise focused, presentations, etc.). In my case, it's very easy, especially since I have others helping me out. If you're thinking of starting one yourself, don't worry too much about what type you want it to be; just see how the first few meetings go and it'll point itself out from there.

I'm heavily interested in instrumental rationality -- that is, optimizing my life by 1) increasing my enjoyment per moment, 2) increasing the quantity of moments, and 3) decreasing the cost per moment.

I've taught myself a decent amount and improved my life with: personal finance, nutrition, exercise, interpersonal communication, basic item maintenance, music recording and production, sexuality and relationships, and cooking.

If you're interested in possible ways of improving your life, I might have direct experience to help, and I can probably point you in the right direction if not. Feel free to ask me anything!

Do you think you had a high starting conscientiousness level or did you have to develop it? What do you mean about increasing enjoyment of moments? I guess some sort of mindfulness? Can you expand on sexuality and relationships? What techniques do you have for determining goals as opposed to fulfilling them? E.g. if I have no particular sense of what I want how would I determine it?
Have you become exceptionally good at anything, and if so what and how?
Improving skills is about deliberate practice, objective analysis (either by yourself or a teacher), and evaluating and fixing your weaknesses. I've been able to improve every skill I've tried with this method. I consider myself exceptionally good at creating metal music (playing guitar, vocals, recording/mixing/production), and I'm getting pretty good at weight lifting. I am beginning to develop the skill of computer programming, which I expect to take to that level. For most non-career and non-pleasure skills, I generally stop at the point of diminishing returns. I've learned to cook for myself better than most restaurants, but I don't care to invest the time and energy to become a real artist with it.
Do you use any quantitative self tools for this? If so, could you elaborate on your data tracking/analysis processes?
Yes, but incompletely. I'll track things precisely until a habit is established, at which point I stop tracking everything and check-in every once in a while to make sure I'm still on track. Some things I keep track of consistently, such as my budget, weight lifting numbers, bodyweight, etc. The process is different for different things. I usually start with a Google Drive spreadsheet, and then experiment with other more specific apps if they're better than spreadsheets (they rarely are). If you have any more specific questions, I'd be glad to answer them.

You can ask me anything.

Okay, I'll bite. Do you think any part of what MIRI does is at all useful?

Do you think any part of what MIRI does is at all useful?

It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).

I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI's position is extreme.

Consider the following hypothetical and actual positions people take with respect to AI risks, in ascending order of perceived importance:

  1. Someone should actively think about the issue in their spare time.

  2. It wouldn’t be a waste of money if someone was paid to think about the issue.

  3. It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.

  4. There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.

  5. Relevant researchers should be actively cautious and think about th

... (read more)
Upvoted solely for the handy scale.
How should I fight a basilisk?

How should I fight a basilisk?

Every basilisk is different. My current personal basilisk pertains to measuring my blood pressure. I have recently been hospitalized as a result of dangerously high blood pressure (220 systolic / 120 diastolic, mmHg). Since I left the hospital I have been advised to measure my blood pressure.

The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.

Should I stop measuring my blood pressure because the knowledge hurts me or should I measure anyway because knowing it means that I know when it reaches a dangerous level and thus requires me to visit the hospital?

The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.

Measure every hour. Or every ten minutes. Your hormonal system can't sustain the panic state for long, plus seeing high values and realizing that you are not dead yet will desensitize you to these high values.

As someone who's had both high blood pressure and excessive worrying — I second this advice.

Do you do any sort of meditation?
No. Do you have any recommendations on what to read/try? Given the side effects of anxiety disorder medications such as pregabalin, meditation was one of the alternatives I thought about besides marijuana.
I have a bunch of recommendations, but I'm no expert.

Generic advice: sit or stand with your back straight and unsupported. If sitting, your knees should be below your hips. This means a straight chair (soles of feet on the ground), cross-legged on a cushion, or full lotus. Pay attention to something low-stress: your breath (possibly just the feeling of it going in and out of your nostrils), a candle flame, your heart beat (if low stress), counting from one to four and back again. 20 minutes is commonly recommended, but I don't think it's crazy to work up from 5 or 10 minutes if 20 is intolerable.

Meditation isn't easy. One of the useful parts of the training is gently putting your attention back where you want it when you notice you're thinking about something else. It may help to have a few simple categories like thought, memory, imagination, sensation to just label thoughts as they go by.

I recommend The Way of Energy by Lam Kam Chuen -- it's an introduction to Daoist meditation (mostly standing). I'm not going to say it's the best ever (I haven't investigated the field), but it's got a good reputation and I've gotten good results from it.

There. Now that I've said some things, I predict that other meditators will come in with more advice.
One more thing: Only do 70% as much as you think you can. I think this applies to meditation as well as (non-emergency) physical activities. It improves the odds that you won't make yourself sick of it.

Looks like I was wrong about getting replies.
That advice is reasonable. The hospital/Doctor may be able to refer you to a local Mindfulness Based Stress Reduction course. Many people find the social support of meditating in a group, helpful. I hope you make a speedy recovery to full health, XiXiDu.

You can ask me things if you like. At Reddit, some of the most successful AMAs are when people are asked about their occupation. I have a PhD in linguistics/philology and currently work in academia. We could talk about academic culture in the humanities if someone is interested in that.

Can you talk about your specific field in linguistics/philology? What it is, what are the main challenges? Do you have a stake/an opinion in the debates about the Chomskian strain in syntax/linguistics in general?

Can you talk about your specific field in linguistics/philology?

I've mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word "change" when it is inflected?). To preserve some anonymity (though I am not famous) I'd rather not get too specific.

what are the main challenges?

There are lots of little problems I'm interested in for their own sake but perhaps the meta-problems are of more interest here. Those would include getting people to accept that we can actually solve problems and that we should try our best to do so. Many scholars seem to have this fatalistic view of the humanities as doomed to walk in circles and never really settle anything. And for good reason - if someone manages to establish "p" then all the nice speculation based on assuming "not p" is worthless. But many would prefer to be as free as possible to speculate about as much as possible.

Do you have

... (read more)
Is that really the standard term? You know, the LW party line is that it's a bad term, like selling non-apples. Google suggests to me that it is not the most popular term. The link below replaces "non-traditional" with "modern," which isn't an improvement on this dimension. Also, my first parsing was that "non-traditional" modified "authorship." This is actually a reasonable use of the prefix "non," since having a strong prior on the author makes a big difference (sociologically, if not technically). How bout that Marlowe?
You're right, it's a horrible term. For one thing, the methods involved are pretty well-established by now. I just use it by habit. As for that old Marlowe/Shakespeare hubbub, here's a recent study which finds their style similar but definitely not identical.
Does anyone use a better term? "Statistical author attribution" seems like an obvious term, but google tells me that no one has ever used it.
Have you read the study you link? People who have read it tell me that the conclusions drawn do not match the body of the paper.
I skimmed it and nothing seemed obviously wrong. If you're interested, you could try for yourself. If you download Marlowe's corpus, Shakespeare's corpus and stylo you can get a feel for how this works in a couple of hours.
Would love to read your post on the Chomskian approach, please do write it!
I would be extremely interested in your post on Chomsky. I almost but not quite majored in linguistics in America, which meant that I got the basic Chomskyan introduction but never got to the arguments against it. I am vaguely familiar with the probabilistic-learning models (enough to get why Chomsky's proof that they can't work fails), but not enough to get what predictions they make etc.
That's quite a broad field to plow! I'll keep asking questions, feel free to ignore those that are too specific/boring. I've always wanted to know more about how authorship attribution is done; is this, found with a quick search, a reasonable survey of current state of the art, or perhaps you'd recommend something else to read? Are your fields, and humanities in general, trying to move towards open publishing of academic papers, the way STEM fields have been trying to? As someone w/o a university affiliation, I'm intensely frustrated every time I follow an interesting citation to a JSTOR/Muse page. Do you plan to stay in academia or leave, and if the latter, for what kind of job? I think you should write that post about the Chomskyan approach.
The Stamatatos survey you linked to will do fine. The basic story is "back in the day this stuff was really hard but some people tried anyway, then in 1964 Mosteller and Wallace published a landmark paper showing that you really could do impressive stuff, then along came computers and now we have a boatload of different algorithms, most of which work just great".

The funny thing about stylometry is that it is hard to get wrong. Count up anything you like (frequent words, infrequent words, character n-grams, whatever) and use any distance measurement you like and odds are you'll get usable results. If you want to play around with this for yourself you can install stylo and turn it loose on a corpus of your choice. Gwern's little experiment is also a good read. My involvement with stylometry has not been to tweak the algorithms (they work just fine) but to apply them in some particular cases and to try to convince my fellow scholars that technological wizardry really can tell them things worth knowing.

Yes. Essentially every scholar I know is in favor of this. As far as I can see, it will happen and is happening.

I worked as an engineer for a few years but found I wasn't that into it and really missed school. So I went back and I'd like to stay.
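To see how forgiving the "count anything, use any distance" approach is, here is a minimal from-scratch sketch (not the stylo package, and not any particular published method): relative frequencies of common function words compared by cosine distance. The texts and word list are invented toy examples.

```python
from collections import Counter
import math

def word_freqs(text, vocab):
    """Relative frequencies of the given vocabulary words in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in vocab]

def cosine_distance(a, b):
    """1 - cosine similarity between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

# Function words carry little topical meaning but vary by author,
# which is why stylometry leans on them so heavily.
VOCAB = ["the", "of", "and", "to", "in", "that", "it", "is", "was", "he"]

known_a = "the cat sat on the mat and it was the cat that sat"
known_b = "of kings and of queens he spoke in praise of the realm"
disputed = "the dog sat by the door and it was the dog that barked"

fa = word_freqs(known_a, VOCAB)
fb = word_freqs(known_b, VOCAB)
fd = word_freqs(disputed, VOCAB)

# The disputed text's function-word profile matches author A's.
print(cosine_distance(fa, fd) < cosine_distance(fb, fd))  # True
```

Real studies use hundreds of features and a corpus of thousands of words per author, but the skeleton is the same: profile, distance, nearest neighbour.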

Sure, what the heck. Ask me stuff.

Professional stuff: I work in tech, but I've never worked as a developer — I have fifteen years of experience as a sysadmin and site reliability engineer. I seem to be unusually good at troubleshooting systems problems — which leaves me in the somewhat unfortunate position of being most satisfied with my job when all the shit is fucked up, which does not happen often. I've used about a dozen computer languages; these days I code mostly in Python and Go; for fun I occasionally try to learn more Haskell. I've occasionally tr... (read more)

What's the best programming language to learn in order to get a job? Or a good job, if the two answers would differ. (Open question; it's too bad there isn't an "ask everyone who works in tech" thread or somesuch. For background, I used to know Java, as well as BASIC and bits of assembly, but a series of unfortunate chance events distracted me from programming about five years ago and I haven't done any since.)
Eh, depends on what sort of job. In my line of work, Python or maybe Ruby — they're both widely used by major employers, and particularly for automation tools. But Java if you want to write for business computing; C# if you want to write for Windows; Objective-C if you want to write for the Mac or iGizmos; PHP if you want Great Cthulhu to rise from his tomb at R'lyeh. And Perl, Python, or Ruby and a smattering of shellscript if you want to do systems stuff.
Also C for a lot of embedded-systems things, and C++ ditto (and also for a fair amount of applications and a whole lot of what you might call scientific computing: computer vision, financial simulations, games engines, etc. -- but C++ is another Great Cthulhu Language).

Also, even if your only real interest is in getting a good job, it is very worthwhile learning more languages, preferably highly varied ones. The ideas that are natural or even necessary in one language may be useful to have in your mental toolbox when working in another. Consider, e.g., (1) some variety of assembly language to get a better idea of what the machine is actually doing, (2) a functional language like Haskell to show you a very different style of software design, (3) Common Lisp for its unusual (but good) approaches to OO and exception handling and to show you what a really powerful macro system looks like, (4) some languages with very different execution models -- Prolog (unification and backtrack-based searching), Forth or PostScript (stack machine), Mathematica (pattern-matching), etc.

Warning: the more different languages you are familiar with, the more you will notice the annoying limitations of each particular language.
You could start one.
How typical do you think your experience has been in this regard? IME, teaching programming to complete novices has been cruise-control stuff and one of the relatively few things where I know exactly what's going on and where I'm going within minutes of starting.

For context: I've had success in teaching a complete novice with vague memory of high-school-math usage of variables how to go from that to writing his own VB6 scripts to automate simple tasks, of retrieving and sending data to fields on a screen using predetermined native functions in the scripting engine (which I taught him how to search and learn to use from the available and comprehensive reference files). This was on maybe my third or fourth attempt at doing so.

What I actually want to know is how typical my experience is, and whether or not there's value in analyzing what I did in order to share it. I suspect I may have a relatively rare mental footing, perspective and interaction of skillsets in regards to this, but I may be wrong and/or this may be more common than I think, invalidating it as evidence for the former.
I think it would be a very good idea to analyse what you're doing, and probably valuable to have some transcripts of sessions -- what you think you're doing may not be what you actually do. Do you teach in person? By phone? I'm wondering how much you use subtle clues to find out what your student is thinking.
Usually, in person (either as a tag-team or "I'll be right over here, call me when you're stumped" approach; I've experimentally confirmed that behind-the-shoulder teaching has horrible success rates, at least for this subject), though a few times by chat / IM while passing the code back and forth (or better yet, having one of those rare setups where it's live-synch'ed).

TL;DR: Look at examples of wildly successful teaching recipes, take cues from them and from LW techniques and personal experience at learning, fiddle a little with it all, and bam, you've got a plan for teaching someone to program! Now you just need pedagogical ability.

My general approach is to feel out what dumb-basics they know by looking at it as if we were inventing programming piecemeal, naturally with my genius insight letting us work out most of the kinks on the spot. I also go straight for my list of Things I Wish Someone Would Have Told Me Sooner, the list of Things That Should Be In Every Single So-Called "Beginner's Tutorial To Programming" Ever, and the list of Kindergarten Concepts You Need To Know To Create Computer Programs -- written versions pending.

For instance, every "Beginner's Tutorial to Programming" I've ever seen fails to mention early enough that all this code and fancy stuff they're showing is nice and all, but to actually have meaningful user interactions and outputs from your program to other things (like the user's screen, such as making windows appear and put text and buttons in them!) you have to learn to find the right APIs, the right handles and calls to make, and I've yet to see a single tutorial, guide, textbook, handbook, "crash course" or anything that isn't trial-and-error or a human looking at what you did that actually teaches how to do that. So this is among the first things I hammer into them - "You want to display a popup with yes/no buttons? Open up the Reference here, search for "prompt", "popup", "window", "input" or anything else that seems relate
Do you have a view on Scala?
Never tried it.
How'd you get to be this way?
I'm not sure, but one of the techniques that seems most salient to me is breadth-first search. Partly this is to hold off on proposing solutions: take just a little bit longer to look at the problem and gather data before generating hypotheses. The second part is to find cheap tests to disprove your hypotheses instead of going farther down the path that an early hypothesis leads. Folks who use depth-first search, building up a large tree of hypotheses first or going down a long path of possible tests and fixes, seem more likely to get stuck.

I also really like troubleshooting out loud with colleagues who aren't afraid to contradict each other. Generating lots of hypotheses and quickly disconfirming most of them can quickly narrow down on the problem. "Okay, maybe the cause is a bad data push. But if that were so, it would be on all the servers, not just the ones in New York, because the data push logs say the push succeeded everywhere. But the problem's just in New York. So it's not the data push."
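The breadth-first style described above can be sketched as a pruning loop: keep every hypothesis alive, run the cheapest disconfirming observation first, and narrow the set before digging deeper into any one branch. The outage scenario and the checks below are invented purely for illustration.

```python
# Toy sketch of breadth-first troubleshooting: prune many hypotheses with
# cheap observations instead of chasing the first plausible one in depth.

def troubleshoot(hypotheses, observations):
    """Return the hypotheses consistent with every observation, cheapest first."""
    alive = list(hypotheses)
    for consistent_with in observations:
        alive = [h for h in alive if consistent_with(h)]  # prune, don't dig
        if len(alive) <= 1:
            break  # done: one suspect left (or we need fresh hypotheses)
    return alive

hypotheses = ["bad data push", "NY network partition", "bad config in NY"]

observations = [
    # "Push logs say it succeeded everywhere" -> a bad push would hit all regions.
    lambda h: h != "bad data push",
    # "NY hosts still answer pings" -> not a network partition.
    lambda h: h != "NY network partition",
]

print(troubleshoot(hypotheses, observations))  # ['bad config in NY']
```

The point of the sketch is the shape of the loop, not the lambdas: each observation is chosen to cut the live set, which is exactly the out-loud "but if that were so..." reasoning quoted above.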

I am asking everybody here.

Do you have a plan of your own, to ignite the Singularity, the Intelligence explosion, or whatever you want to call it?

If so, when?


I have a plan. Posts here have convinced me that the singularity will most likely be a lose condition for most people. So I'll only activate my plan if I think other actors are getting close.
becomes wildly curious Since you posted above that you're participating in the AMA, can you give some details of this plan? (Assuming step one isn't "tell people about this plan", in which case please don't end the world just because you precommitted to answering questions.)
I think sharing concrete details would be a bad idea, but it's not like I've come up with any clever trick. I'll do it the same way I'd do anything else - buy what I can, make what I can't. I am (rightly or not) very confident in my programming abilities.
This post reminds me of Denethor saying the Ring was only to be used in utmost emergency at the bitter end
Insert pun on the phrase 'ignite the Singularity'.
No. I have no particular skills in that field, and it's the sort of thing that's plagued by optimism. Besides, it's far too big a task for any one person - it'll be lit off by whole industries working for decades, not by one person turning on Skynet.
No, not by myself. Wouldn't have the skillset for it, anyways. So I only try to introduce people to things like MIRI, to improve the chances that future discussions might not stop dead in fatalistic and nihilistic clichés. Effective altruism is an angle where I try to get a sense if a worthwhile elaboration is possible, as steering the arguments is somewhat easier when not starting with the most crazy stuff first.

I believe that the things I do at any given time are reasonable for me to do, AMA.

How often do you use "It seemed like a good idea at the time!" as a defence unironically?
Do you mean that you evaluate the utility function for working out what things to spend time on? Have you assigned arbitrary numbers to the outcomes or is it an estimate?
I estimate time value of various things often, yes.

Sure, you can ask me anything.

IIRC you are interested in educational games, any new thoughts in that area?
Depends on what you mean by new: I elaborated on some of my core ideas about the field in the blog posts Why edugames don't have to suck, Videogames will revolutionize school (not necessarily the way you think), and also touched upon their role in society in Doing Good in the Addiction Economy. My thoughts have gotten somewhat more precise, but off-hand I can't think of any major recent insights that I wouldn't have mentioned in those posts. On the topic of the educational game that I'm doing for my Master's Thesis, I'm making slow but sure progress.
Yes, I had read those posts before which is why I knew you were involved in the field. Good luck with your thesis - I think games have huge potential in education, but it will be difficult because educational games are aiming at a smaller target than normal ones.
I have an idea for a video game that can teach microeconomics. It would create a persistent low-graphics world similar to what's in the game Travian and would require no artificial intelligence. Unfortunately, I can't program beyond the level of what they teach in codecademy. Do you have suggestions for people I could contact to get financial support for my game? I'm the author of a microeconomics textbook and so I think I have a credible background for this project.
Hmm. I haven't really looked into any actual funding agencies or the "getting money for this" side at this point, so I don't know much about that, but I can think of some researchers who might either have an interest in collaborating, or who could know more direct sources of funding. Two groups that come to mind who might be worth contacting in this regard are GAPS and Institute of Play. I'll let you know if I think of any others. (If you do contact them, I'd be curious to hear about the response.)
What is the intended audience for this game? Why, do you think, people will play it?
Students taking introductory or intermediate microeconomics. Instructors would require their students to play.
Ah, so this is purely non-commercial, a course teaching aid, basically. Can't you rope some grad students into doing this?
I would love to make money off of it, and have a revenue model but I would also be willing to do it for free. My school doesn't have econ grad students. Also, it wouldn't be a good career move for a grad student who wanted to become a professor to devote lots of time to this.
So the target market is economics departments at other colleges/universities? You're talking essentially about a piece of education software sold to institutions, not to end users/players. In this case, I think, you'll have to make a business case for the proposition. I am not sure enough people will find this idea fun enough to contribute their time for free.

Another point: do you really have to develop a new game from scratch? Doing a mod of an existing game or engine is likely to be vastly simpler and cheaper.
Why are you utilitarian? Inspired by this.
At heart, utilitarianism feels like what you get when you ask yourself, "would I rather see few people hurt than many, many people happy rather than few, and how important do I think that to be", answer "I'd rather see few people hurt, rather see many people happy, and this is important", and then apply that systematically. Or if you just imagine yourself as having one miserable or fantastic experience, and then ask yourself what it would be like to have that experience many times over, or whether the impact of that experience is at all diminished just because it happens to many different people. Basically, utilitarianism feels like applied empathy.
So, if someone lacks empathy, utilitarianism is senseless to them?
Well, the particular rationale that I gave might be. Possibly they might find it sensible for some other reason.
Indeed. "Utilitarianism feels like what you get when you ask" this, let your empathy take over, and think it to its 'logical conclusion'. The problem I have with this kind of reasoning is that it leads into extremes that don't match up with your other values. Oh, it might not look like a conflict. But I sometimes get the impression that this is because the doubt is compartmentalized away, because empathy is such a positively valued emotion and not following it feels wrong. I have to admit that, not being a utilitarian myself, I don't have a clear-cut answer of how to rationally act on my empathy either. The problem with complex value functions is that there are no simple answers, and utilitarianism suspiciously looks like another simplistic answer to a complex problem.

I'm a 30-year-old first-year medical student on a full tuition scholarship. I was a super-forecaster in the Good Judgment Project. I plan to donate a kidney in June. I'm a married polyamorous woman.

Before participating in the Good Judgment Project did you think you were a particularly good forecaster? Do you believe you have an entrepreneurial edge because of your ability, if you were to pursue it? Have you used your abilities to hack your life for the better?
I realize I could research this myself -- at least enough to ask a more informed version of this question -- but I've been procrastinating that since when I first read your comment, so: Could you talk about your decision to donate the kidney and what your judgments of the tradeoffs were? (I assume, since you didn't mention otherwise, that this donation is not to a friend or family member.)

Why not.

I attended CFAR's may 2013 workshop. I was the main organizer of the London LW group during approximately Nov 2012-April 2013, and am still an occasional organizer of it. I have an undergraduate MMath. My day job is software, I'm the only fulltime programmer on a team at Universal Pictures which is attempting to model the box office. AMAA.

I wrote a book about a new philosophy of empirical science based on large scale lossless data compression. I use the word "comperical" to express the idea of using the compression principle to guide an empirical inquiry. Though I developed the philosophy while thinking about computer vision (in particular the chronic, disastrous problems of evaluation in that field), I realized that it could also be applied to text. The resulting research program, which I call comperical linguistics, is something of a hybrid of linguistics and natural language processing, but (I believe) on much firmer methodological ground than either. I am now carrying out research in this area, AMA.
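One classical, concrete instance of compression-as-inference (not the book's "comperical" methodology itself, just the standard normalized compression distance of Cilibrasi and Vitányi, shown here as an illustration of the general principle) fits in a few lines. The sample strings are invented.

```python
import os
import zlib

def csize(data: bytes) -> int:
    """Size of data under a fixed lossless compressor (zlib, max effort)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 when x and y share structure
    the compressor can exploit, near 1 when they share nothing."""
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

english = b"the quick brown fox jumps over the lazy dog " * 20
similar = b"the quick brown fox leaps over the sleepy dog " * 20
noise = os.urandom(len(english))  # incompressible, shares nothing with english

# Related texts land closer together than unrelated ones.
print(ncd(english, similar) < ncd(english, noise))  # True
```

The same idea scales up: a model of a corpus is better, on this view, exactly to the extent that it lets a lossless compressor shave more bits off the corpus, which is what makes the criterion usable as a methodological yardstick.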

How do you expect this work to influence the fields of computer vision, NLP, etc. -- would it inspire new techniques?
First, I want people in computer vision and NLP to actually look at the data sets their algorithms apply to. Ask a physicist to tell you some facts about physical reality, and they will rattle off a lengthy list of concepts, like conservation of energy, isotropy of spacetime, Ohm's law, etc etc. Ask a vision scientist to tell you some things about visual reality, and my guess is they won't have much to say. Sure, a vision scientist can talk a lot about algorithms, machine learning techniques, feature sets, and other computational tools, but they can't tell you much about what's actually in the images. The same problem is true with NLP people to a lesser degree; they can talk about parsing algorithms and optimization procedures for finding MaxEnt parameters, but they can't tell you much about the actual structure of text. So, yes, I expect the approach to produce new techniques, but not because it supplies some kind of new mathematical framework. It suggests a new set of questions.

I am not interesting, but I've been here a few years.

Are there interesting reasons that some LW regulars feel disdain for RationalWiki, besides RW's unflattering opinion of LW/EY? Can you steelman that disdain into a short description of what's wrong with RW, from their point of view? (I'm asking as someone basically unfamiliar with RW).

I think the main reason is that basically nobody in the wider world talks about LW, and RW is the only place that talks about LW even that much. And RW can't reasonably be called very interested in LW either (though many RW regulars find LW annoying when it comes to their attention). Also, we use the word "rational", which LW thinks of as its own - I think that's a big factor.

From my own perspective: RW has many problems. The name is a historical accident (and SkepticWiki.com/org is in the hands of a domainer). Mostly it hasn't enough people who can actually write. It's literally not run by anyone (same way Wikipedia isn't), so is not going to be fixed other than organically. Its good stuff is excellent and informative, but a lot of it isn't quite fit for referring fresh outside readers to.

It surprises me how popular it is (as in, I keep tripping over people using a particular page they like - Alexa 21,000 worldwide, 8800 US - and Snopes uses us a bit) - it turns out there's demand for something that can set out "no, actually, that's BS and here's why, point for point". Raising the sanity waterline does in fact also involve dredging the swamps and cleaning up to... (read more)

Because RW sucks at actually being rational. Rather, they seem to have confused being "rational" with supporting whatever they perceive to be the official scientific position. Whereas LW holds a number of contrarian positions, most notably cryonics and the Singularity, where it is widely believed here that the mainstream position is likely wrong and the argument for it is just silly.
I'm downvoting you not because I disagree, but rather because the question was addressed to David, not you.
It is worth noting that Eugene's main concern is that RW has no patience with "race realism", as its proponents call it.
Back when you joined Wikipedia, in 2004, many articles on relatively basic subjects were quite deficient and easily improved by people with modest skills and knowledge. This enabled the cohort that joined then to learn a lot and gradually grow into better editors. This seems much more difficult today. Is this a problem and is there any way to fix it? Has something similar happened with LessWrong, where the whole thing was exciting and easy for beginners some years ago but is "boring and opaque" to beginners now?
My answer may be a bit generic :-)

Re: Wikipedia - This is pretty well-trodden ground, in terms of (a) people coming up with explanations (b) having little evidence as to which of them hold. There's all manner of obvious systemic problems with Wikipedia (maybe the easy stuff's been written, the community is frequently toxic, the community is particularly harsh to newbies, etc) but the odd thing is that the decline in editing observed since 2007 has also held for wikis that are much younger than English Wikipedia - which suggests an outside effect. We're hoping the Visual Editor helps, once it works well enough (at present it's at about the stage of quality I'd have expected; I can assure you that everyone involved fully understands that the Google+-like attempt to push everyone into using it was an utter disaster on almost every level). The Wikimedia Foundation is seriously interested in getting people involved, insofar as it can make that happen.

As for LessWrong ... it's interesting reading through every post on the site (not just the Sequences) from the beginning in chronological order - because then you get the comments. You can see some of the effect you describe. Basically, no-one had read the whole thing yet, 'cos it was just being written. I'm not sure it was easier for beginners at all. Remember there was only "main" for the longest time - and it was very scary to write for (and still is). Right now you can write stuff in discussion, or in various open threads in discussion.
Thank you. You brought up considerations I hadn't considered.

I'll answer anything that will not negatively affect my academic career or violate anyone's privacy but mine (I never felt like I had one). I waive my right not to answer anything else that could be useful to anyone. I'm finishing a master's on ethics of human enhancement in Brazil, and have just submitted an application for a doctorate in Oxford about moral enhancement.


I don't think I'm known around here, but sure why not. Ask me anything.

Why did you make this post, Will? Wait, I guess you didn't comment here volunteering to answer questions.

Anyway I guess I can answer questions but I'm pretty lazy and not very educated so ask at your own risk.

You're asking me why? I did it 'cause I was bored. I'll probably jump in if others do, otherwise it's too narcissistic as the creator of the post.

Will have you ever had an encounter with the divine?

I upvoted you because I misread it as "Will you ever had" and thought you were making a joke about eternity, but now I suspect you just forgot the comma after "Will". Keep the upvote, though, I want to know too.
Fo sho.
What happened?
See here for my explanation of why I'd rather not answer that.
I looked there and didn't see any explanation of why you'd rather not answer that. What did I miss?
I imagine it's because Right?
Might be. But I don't see how that would make it wrong for Will to describe his experiences, without also making it wrong for him to say he's had them and is very convinced by them. I mean, it could. The gods would need to think that the level of evidence present in the world without any comment from Will is too low, and the level of evidence present with a description of Will's experiences is too high. It would be quite a coincidence, wouldn't it, for the optimum level of evidence to fit into so narrow a region?
What is the counterargument to EA critics who say that if you take EA to its logical conclusion, your life will suck? If I donate 50% of my income I probably could donate 55%, then 65%; eventually, to be consistent, you'd have to donate 100%, because as an American I could probably dumpster-dive for food and live in a box and still have a better life than someone out there. What is the happy medium that is consistent and justified?
This has been written about by Julia Wise at Giving Gladly, and others. Two relevant considerations are:

* Major self-sacrifice tends to be unsustainable, leading to burnout.
* If an EA makes him or herself miserable, he or she is likely to repel bystanders, reducing other people's interest in being EAs.

Giving What We Can has set donating 10% of one's income as a threshold for membership. There's a historical precedent of this level of giving being sustainable for many people, coming from tithing practices in religion.

As for higher percentages: roughly speaking, it seems that marginal returns diminish very rapidly beyond $100k/year, so that one can give everything beyond that without substantially sacrificing quality of life. There are reasons why more can help: for example, to save extra money on the contingency that one is unemployed, or to be able to take care of many children. But I think that the level of sacrifice involved would be acceptable for many people. If one is living in an area with low cost of living, or doesn't want children, one can often live on a lot less than $100k/year without sacrificing quality of life.

Self deprecating observations about my knowledge and interestingness, etc, but I have been reading this site for a while. So on the off chance then sure why not, ask me anything

Sure. I run a Software Dev Shop called Purple Bit, based in Tel Aviv. We specialise in building Python/Angular.js webapps, and have done consulting for a bunch of different companies, from startups to large businesses.

I'm very interested in business, especially Startups and Product Development. Many of my closest friends are running startups, I used to run a startup, and I work with and advise various startups, both technically and business-wise.

AMA, although I won't/can't necessarily answer everything.

In terms of custom software, what do you see as the next big thing that businesses will want? More specifically, do you get the feeling that more people want to move away from cloud services to locally managed applications?
This really depends on the field. My experiences are probably only relevant to about 1% of software projects out there - there's a lot of software in the world. That said, in terms of Cloud vs. Local - definitely not. Most large (and small!) companies we've worked with use AWS. We also highly recommend Heroku/AWS to all our customers as the easiest and least expensive way to get started on building a custom application. Of course, there are a lot of places where cloud still doesn't make sense. We have one client who has custom software deployed in hospitals, where all of the infrastructure is of course local to their site, not in any kind of cloud. But for the majority of people who don't have such a use case, everyone understands that cloud makes everything easier.
can you explain your basic business model? Also, what is the hardest part of your business and/or the biggest barrier to entry?
So, we're what's called a "Professional Services" firm. This term is usually used when talking about e.g. Accountants, Lawyers, etc., but is just as relevant for a Software Consultancy. I'll go a little into the idea behind professional services firms in general, then get back to talking about us in particular.

There are many, many different types of Professional Services firms, but the basic business model is usually the same: you're selling your time for money, and people pay because of your expertise and experience in the field. But here's where large firms make their real money: the firm gets projects based on the expertise and experience of the "managing partners", and then a combination of the managing partners and juniors perform the actual work.

For example, a law office will win a contract because of the experience and expertise of its "Name Partners", and they'll charge, let's say, $500 an hour for an hour of Partner time. But they'll also charge $450 an hour for an Associate lawyer. The firm pays huge salaries to the name partners, so it's basically not making any profit there. But it pays tiny salaries to the Associates, for a large profit. This is called "leverage". This is how a professional services firm grows and makes a profit: leveraging the skills and reputation of key, highly paid employees to sell the work of lower-paid employees.

Most Professional Services firms can be placed on a moving scale as to how much expertise vs. leverage they have. An example of a highly skilled "firm": a team of brain surgeons. They're paid amazingly well, and have minimal leverage. An example of a consultancy with a lot of leverage: a company that builds websites for restaurants. Building a website for a restaurant is 90% repetitive work that can be given to junior employees, with senior employees focused on finding work and growing the reputation of the business.

So where do we fit in all this? In our case, as a rather small firm, we'r
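The leverage arithmetic above can be sketched in a few lines. The $500/$450 billing rates are the ones from the comment; the salary and billable-hours figures are invented round numbers purely for illustration, not data about any real firm.

```python
# Illustrative sketch of the "leverage" model: the firm roughly breaks
# even on partners and profits on associates. All salary and hours
# figures below are made-up assumptions.

def annual_margin(bill_rate: int, salary: int, billable_hours: int = 1500) -> int:
    """Yearly profit the firm makes on one fully billed employee."""
    return bill_rate * billable_hours - salary

# Name partner: high rate, but a huge (assumed) salary eats the revenue.
partner = annual_margin(bill_rate=500, salary=700_000)    # -> 50_000
# Associate: nearly the same rate, a fraction of the (assumed) salary.
associate = annual_margin(bill_rate=450, salary=120_000)  # -> 555_000

print(f"Partner margin:   ${partner:,}")
print(f"Associate margin: ${associate:,}")
```

The point of the sketch: profit scales with the number of associates per partner, which is why firms with repetitive, delegable work can run much higher leverage than a team of brain surgeons.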

Sure, ask me if you want. Programmer/anime fan/LW reader and commenter.

What's your favorite anime, and why?
Wandering Son (Hōrō Musuko)

Personal reasons: the story's relevant to my own and in a genre I don't normally pay much attention to, which might be why it stands out over other possible candidates (e.g. Puella Magi Madoka☆Magica). Also, by choosing an artsy show that tackles a serious dramatic subject, full of tragedy (and qbrfa'g erfbyir rirelguvat arngyl at the end), I sound more intellectual.

Pseudo-objective reasons: I feel it accurately captures the feelings of childhood and growing up. I particularly liked the portrayal of the sibling relationship, where you hate each other on a level that's superficial but no less genuine for that, but will stand by each other when you discover things the other really cares about. The conclusion also felt very true-to-life. I liked the visual style; the character designs are much more realistic than the animé norm (and for viewers who find it hard to tell them apart, serve as a demonstration of the valid reasons for the animé norm), and the whole setting and story feels like something you could do in live action. But at the same time this would be completely impossible to produce in live action, for a different reason than normal (child actors and ethical issues), so it shows off the ability of animé to do what other media can't. The slightly washed-out, watercolour visual style is distinctive, even among animé - but it's like that for a reason, the uncertain, blurry visuals aligning perfectly with the emotions this series is trying to convey. Likewise the light, childish-sounding soundtrack is distinctive - but it's not just style for the sake of style, it fits with the show as a whole.

Practical notes: I prefer the 11-episode (rather than 12-episode) release. I've avoided describing the premise because it's an episode 1 spoiler; if you think you'd like the show from this description I recommend watching it (or at least watching episode 1) rather than seeking out more information.
Many believe that the anime is a poor adaptation of the manga, or at the very least that the manga is the best medium the story is told in. What do you think about the subject?
I don't generally get on with manga as a medium. I tried to read this particular one and gave up after about three chapters. So depending on your perspective either I can't compare the two, or I found the anime to be much, much better.
Are you that lmm?

In case anyone has questions for me, I'm happy to answer.

What is the philosophy behind your prolific commenting?
In general, online commenting is something I do out of habit. It has a higher return on time than completely passive media consumption such as watching TV, but it's not something I'd book under time spent with maximum returns. I generally think that the shift to massive consumption of content via TV/radio in the 20th century was bad for the general discourse of ideas in society. Active engagement helps learning. I also prefer it over chatting in venues such as IRC, because it provides deeper engagement with ideas and leaves more of a footprint: created content is findable afterwards. LessWrong is also a choice to keep me intellectually grounded. These days I spend plenty of time thinking in mental frameworks that are not based on reductionist materialism. I see value in being pretty flexible about changing the map I use to navigate the world, and I don't want to lose access to the intellectual way of thinking. In total, however, I spend more time than optimal on LW and frequently use it to procrastinate on some other task.

I work as a software engineer, married with two kids, live in Israel and blog mostly in Russian. AMA.

Why do you even waste time on lj-russians? The level of the discourse is lagging roughly two hundred years behind the western world.
The quality of discourse in Russian LJ depends almost entirely on your immediate circle of readers. Incredible stupidity and mendacity happily coexist with fantastic blogs and interesting debates. The number and density of the latter have gone down over the years, but then again, blogging as a phenomenon has. It comes down to this: the main reason I blog on LJ in Russian is that I still have lots and lots of readers there who are smarter and more knowledgeable than I am in the many different areas I'm interested in. There's no single place I can blog or write in English that would give me as much, and as useful, feedback (and that certainly includes LW).
Do you believe that by living in Israel you are de facto green-lighting its history and current course of action (such as settlements, etc.)? If not, can you explain what you believe your involvement/non-involvement entails? [edit: I think this question might have come off sounding thorny when it's not supposed to be, especially given the charged emotions and such around the conflict there. I just want some perspective on what it's personally like for you to 'live in the middle' of such a well-known conflict]
Why would someone down vote me without commenting as to why? Why would my question warrant a down vote anyway?
Downvotes without comments are routine. I didn't vote, but I suspect the downvoter felt that a discussion of Middle-East politics was likely to follow from the question, and likely to be unpleasant or heated.
That would be an assumption and entirely irrational. I am not going to be unpleasant, nor engage in a lengthy debate about anything, least of all expanding the topic to other Middle East politics. I simply wanted to know what it's like to live amid such a controversial conflict. Where does he find himself in it (as in, does he feel like it's in another world, or is it a daily experience)? I really don't know whether the average person there feels like part of what is happening, or sees it in the news like everyone else in the world and feels disconnected from it. I'm an Australian, and I wouldn't have a problem if someone asked me the same line of questioning based around, say, the current (anti-)refugee policy, or even the white invasion and genocide of the indigenous people.
Yes, it's an assumption. An irrational assumption? No, not especially. In the absence of special information about you, it's rational enough to assume you are a typical commenter on this site. If they observe evidence of your exceptionality, a rational observer updates based on that information.
Hmm, I see your point. But if what they did was called 'rational', then there has to be another word for the part where they made the mistake. The mistake was that they came to so much of a conclusion about something that they acted on it. They were wrong. They caused negative utility. It negatively affected the world and also their understanding of it. What is that called?
All that about a single downvote..? X-D I recommend growing thicker skin, quickly.
lol, not negative utility to me, to him! It hasn't hurt my feelings or made me feel like a victim; I'm talking about how someone misinterpreted something and acted on it out in the world. Even then, it was such a minor incident that I'm not talking about this in terms of damage done. What I'm really saying is: why is someone acting irrationally on a rationality website?
The obvious answer is that people here are humans and not Vulcans. But I don't see the irrationality you are talking about. Rationality doesn't specify values or goals. You know nothing about the person who downvoted you or the reasons he did it. Given this, your accusation of irrationality seems... hasty. Even irrational, one might say :-)
The way people vote on politically contentious topics on this site is very far from some rational ideal. Politics is the mind-killer and all that. I don't think it's changing any time soon, so I'd recommend just getting used to it.
But why do people just accept the status quo? Politics doesn't kill my mind. I know how not to 'cheer for my team' and to think about topics in a balanced way. I expect people to act irrationally in the comment section of the news website I read, but why are people not rising above it on this website, of all places? Get used to it? It's very hard to accept that it's rare and unexpected for people to talk about a topic rationally. I don't see why people find it so hard, especially when they've apparently read articles highlighting common problems and where they come from.
When we find it hard to think that things are as they are, and we find it hard to see why things are as they are, that's often a good time to pay close attention to the behavior of the system. Often this has better results than expecting the behavior to be different and complaining when it isn't... though admittedly, sometimes complaining has good results. Or do you have a third alternative in mind?
To be honest, I guess my comment was just a complaint with no expected result. It really had no point other than some kind of emotional release
Heh. Are you quite sure of that? :-)
lol, ok, yes, as I typed that I had to ask myself that exact same question, since it's such a bold thing to say and exactly what someone with a problem might say. I could explain why I am sure, but I'm not sure anyone is interested in that explanation. I've got an ask-me-a-question comment on here, so I guess if anyone is interested, they can ask :-)
That's standard operating procedure around here. Most up- and down-votes are given without comment. Your question implies that Israel's "history and current course of action" are bad/shameful/immoral/etc.
Why do you live in Slovakia?
I was born here, and I never lived anywhere else (longer than two weeks). I dislike travelling, and I feel uncomfortable speaking another language (it has a cognitive cost, so I feel I sound more stupid than I would in my language). Generally, I dislike changes -- I should probably work on that, but this is where I am now. I could also provide some rationalization... uhh, I have friends here, I am familiar with how the society works here, maybe I prefer being a fish in a smaller pond -- okay the last one is probably honest, too.
Speaking in a language I'm not fluent in (and in a cultural context I'm not familiar with) makes me feel like an idiot savant, because it destroys my social skills while keeping my abstract reasoning/mental arithmetic skills intact.
Is it difficult being too smart and concerned about the right things where you live/lived? If yes, how you deal/dealt with it?

Well, it is sometimes difficult to be me, but I'm not sure how much of that is caused by being smart, how much by lack of some skills, and how much is simply the standard difficulty of human life. :D

Seems to me that most people around me don't care about truth or rationality. Usually they just don't comment on things outside of their kitchens, unless they are parroting some opinion from a newspaper or a TV. That's actually the less annoying part; I am not disappointed because I didn't expect more from them. More annoying are people who try to appear smart and do so basically by optimizing for signalling: they repeat every conspiracy theory, share on Facebook every "amazing" story without bothering to google for hoaxes or just use some basic common sense. When I am at Mensa and listen to people discussing some latest conspiracy theory, I feel like I might strangle them. Especially when they start throwing around some fully general arguments, such as: You can't actually know anything. They use their intelligence to defeat themselves. Also, I hate religion. That's a poison of the mind; an emotional electric fence in a mind that otherwise might have a chance to become sane. -- B... (read more)

I took a look at Mensa sometime in the 80s in the US, mostly through their publications. I was very underwhelmed: they had a very bad habit of coming up with a set of plausible-sounding definitions and basing an argument on them. I went to an event, and I could get at least as good conversation at a science fiction convention. On the other hand, one of my friends, an intelligent person, was very fond of DC-area Mensa, and it doesn't surprise me if there's a lot of local variation. I also know another very smart person who's very fond of Mensa. Perhaps it's not a coincidence that she also lives in the DC area. If the best company you've found was a math club, perhaps you should be looking for mathematicians and/or math clubs.
I suspect that local Mensas are different. But I also think that none of them even approaches the LW level. Maybe it's a question of size: if you have, say, 100 Mensans in one city, 10 of them can be rational and have a nice talk together, aside from the rest of the group. If you only have 10 Mensans in one city, you are out of luck there.

The mathematician club I was in as a child was one of a kind, and the lady who led it doesn't do this anymore. She has her own children now, and she works as a coordinator of correspondence competitions, which is not the same thing as having a club. Unfortunately, there was no long-term plan... If I could somehow restart this thing, I would try something like the Scouts do (okay, I don't know many details about the Scouts, but this is my impression): I would encourage some members to become new leaders, so that the whole thing does not fall apart when the main person no longer has time; I would try to make a self-reproducing system.

There is an interesting background to that mathematical club. It started with a Czech elementary-school teacher of mathematics, Vít Hejný, who taught himself from books some of the psychology of Piaget, and based on this + his knowledge of math + some experimenting in education he developed his own method of teaching mathematics. He later taught it to a group of interested students; one of them was the lady who organized my club. But until recently, there was no book explaining the concepts. And even with the book, this man was a psychology autodidact, so he invented a lot of unusual words to describe the concepts he used, so it would be difficult to read for someone without first-hand experience. And most psychologists wouldn't grok the mathematical aspect of the thing, because it is a theory of "how people think when they think about mathematical problems". So I am afraid the whole art will be forgotten. (Perhaps unless someone translates his book to English, substituting his neologisms with the proper psy

In the unlikely event that anyone is interested, sure, ask me anything.

Edit: Ethics are a particular interest of mine.

Would you rather fight one horse sized duck, or a hundred duck sized horses?

Depends on the situation. Do I have to kill whatever I'm fighting, or do I just have to defend myself? If it's the former, the horse-sized duck, because duck-sized horses would be too good at running away and hiding. If it's the latter, then the duck- horses, because they'd be easier to scatter.

Is this a fist-fight or can blacktrance use weapons?
Any topics of interest? Same goes for other 'whatever's
Ethics, I suppose. Most of my other interests are either probably too mindkilling for LW or are written about in the Sequences already, more clearly than I could write about them.