All of incariol's Comments + Replies

10-Step Anti-Procrastination Checklist

Here's another one: Skyrim soundtrack (a bit over 3.5 hours of epic fantasy music, with the last ~40 minutes being purely atmospheric/ambient).

Rationality Quotes March 2013

Choice of attention - to pay attention to this and ignore that - is to the inner life what choice of action is to the outer. In both cases, a man is responsible for his choice and must accept the consequences, whatever they may be.

W. H. Auden

Course recommendations for Friendliness researchers

Apart from Numerical Analysis and Parallel Computing, which seem a bit out of place here (*), and swapping Bishop's Pattern Recognition for Murphy's ML: A Probabilistic Perspective or perhaps Barber's freely available Bayesian Reasoning and ML, this is actually quite a nice list - if complemented with Vladimir Nesov's. ;)

(*) We're still in a phase that's not quite philosophy in a standard sense of the word, but nonetheless light years away from even starting to program the damn thing, and although learning functional programming from SICP is all good and we... (read more)

Godel's Completeness and Incompleteness Theorems

Given these recent logic-related posts, I'm curious how others "visualize" this part of math, e.g. what do you "see" when you try to understand Goedel's incompleteness theorem?

(And don't tell me it's kittens all the way down.)

Things like derivatives or convex functions are really easy in this regard, but when someone starts talking about models, proofs and formal systems, my mental paintbrush starts doing some pretty weird stuff. In addition to ordinary imagery like bubbles of half-imagined objects, there is also something machine-like ... (read more)

9 FeepingCreature 8y: Visual/imaginative modelling of mathematical tasks is not a universal trait.

You can think of the technical heart of the incompleteness theorem as being a fixed point theorem. You want to write down a sentence G that asserts "theory T does not prove G." In other words, there is a function which takes as input a sentence S and outputs the sentence "theory T does not prove S," and you want to find a fixed point of this function. There is a general fixed point theorem due to Lawvere which implies that this function does in fact have a fixed point. It is a more general version of what Wikipedia calls the diagonal le... (read more)
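The diagonal construction described above has a well-known computational cousin: the quine trick. A minimal Python sketch of diagonalization (the `diag` helper and the sentence wording are mine, purely illustrative, mirroring the arithmetized substitution function in Goedel's proof):

```python
def diag(template: str) -> str:
    """Substitute the template's own quoted source for the placeholder {q}.

    This is the "diagonalization" step: the result is a sentence that
    talks about the very template it was built from.
    """
    return template.format(q=repr(template))

# A sentence asserting something about its own quotation:
g = diag("T does not prove the sentence obtained by diagonalizing {q}")
print(g)
# The output contains the template itself, quoted, in place of {q},
# so g refers (indirectly) to g -- the fixed point the lemma guarantees.
```

The fixed-point theorem says this kind of self-reference is always available once the language can quote and substitute its own expressions.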

5 moshez 8y: Here's how I visualize Goedel's incompleteness theorem (I'm not sure how "visual" this is, but bear with me): I imagine the Goedel construction over the axioms of first-order Peano arithmetic. Clearly, in the standard model, the Goedel sentence is true, so we add G to the axioms. Now we construct G', a Goedel sentence for this new set, and add it as an axiom. We go on and on: G'', G''', etc. Luckily that construction is computable, so we can add G^w as a Goedel sentence for this new set. We continue on and on, until we reach the first uncomputable countable ordinal, at which point we stop, because we have an uncomputable axiom set. Note that Goedel is fine with that -- you can have a complete first-order Peano arithmetic (it would have non-standard models, but it would be complete!) -- as long as you are willing to live with the fact that you cannot know whether something is a proof or not with a mere machine (and yes, Virginia, humans are also mere machines).
8 Kindly 8y: Thinking about algebra (e.g. group theory) makes a lot of this make more sense. The definition of a group is a "theory"; any particular group is a "model". This isn't a huge revelation or anything, but it's easier to think about these ideas in the context of algebra (where different structures that behave similarly are commonplace) rather than arithmetic (where we like thinking about one "true" picture).
Checklist of Rationality Habits

Well, it has happened to me before - girls really can be pretty insistent. :) But this is not actually what concerns me - it's the distraction/wasted time induced by a pretty-girl-contact event, like apotheon explained below.

How can I reduce existential risk from AI?

When someone proposes what "we" should do, where by "we" he implicitly refers to a large group of people he has no real influence over (as in the banning AGI & hardware development proposal), I wonder what the value of this kind of speculation is - other than amusing oneself with a picture of "what would this button do" on a simulation of Earth under one's hands.

As I see it, there's no point in thinking about these kinds of "large scale" interventions that are closely interwoven with politics. Better to focus on what relatively sma... (read more)

4 ChristianKl 8y: I'm not sure whether that's true. Government officials who are tasked with researching future trends might read the article. Just because you yourself have no influence on politics doesn't mean that the same is true for everyone who reads the article. Even if you think that at the moment nobody with political power reads LessWrong, it's valuable to signal status. If you want to convince a billionaire to fund your project, it might be beneficial to speak about options that require a high amount of resources to pull off.
0 Bruno_Coelho 8y: In the early stages it's not easy to focus directly on organization X or Y, mostly because a good number of researchers are working on projects that could end in an AGI expert in numerous specific domains. Furthermore, large-scale coordination is important too, even if not a top priority. Slowing down one project or funding another is a guided intervention that could gain some time while technical problems remain unsolved.
Checklist of Rationality Habits

What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention for a few minutes on the issue at hand"?

It sounds so very simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem or another and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.

Checklist of Rationality Habits

Another example: as I don't feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don't expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can't do a spell or two and talk to dragons, she won't do ;-)).

It also helps being focused on math, programming and abstract philosophy - and spending time on LW, it seems. :)

6 inblankets 8y: I disagree with the commenters below -- I think you're fairly likely to find yourself wanting to be in a relationship if you're not careful. I'm a female, and I don't want to get married or have kids. Unfortunately, I'm 24, and some part of me/the body is really trying to marry me off and give me baybehs. So I try not to take in too much media that normalizes this vs. normalizing my goals, I don't babysit, and I am open about my intent so as not to attract invitations.
9 [anonymous] 8y: I don't think you'd be likely to find yourself in a relationship despite not wanting to by going to parties with lots of pretty girls around, let alone by walking on a street where girls also walk rather than through a forest. And not developing social skills may make things much harder should you ever decide to try and get into a relationship later in your life.
Logical Pinpointing

Due to all this talk about logic I've decided to take a little closer look at Goedel's theorems and related issues, and found this nice LW post that did a really good job dispelling confusion about completeness, incompleteness, SOL semantics etc.: Completeness, incompleteness, and what it all means: first versus second order logic

If there's anything else along these lines to be found here on LW - or for that matter, anywhere, I'm all ears.

Logical Pinpointing

So this is where (one of the inspirations for) Eliezer's meta-ethics comes from! :)

A quick refresher from a former comment:

Cognitivism: Yes, moral propositions have truth-value, but not all people are talking about the same facts when they use words like "should", thus creating the illusion of disagreement.

... and now from this post:

Some people might dispute whether unicorns must be attracted to virgins, but since unicorns aren't real - since we aren't locating them within our universe using a causal reference - they'd just be talking about

... (read more)
3 Nick_Tarleton 8y: Agreed, and disappointed that this comment was downvoted.
2012 Less Wrong Census/Survey

Done it all!

With all those personality tests and surveys it took me a bit more than an hour, but it was quite interesting (particularly CFAR questions) so I won't complain, much. :)

Causal Reference

"Mass-energy is neither created nor destroyed..." It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there's a ship that went over the cosmological horizon and now we can't see it any more.

It seems to me that this might be a point where logical reasoning takes over from causal/graphical models, which in turn suggests why there are some problems with thinking about the laws of physics as nodes in a graph, or even arrows, as opposed to... well, I'm not really sure what specifically ... (read more)

Original Research on Less Wrong

Um... perhaps Wei Dai's analysis of the absent-minded driver problem (with its subsequent resolution in the comments) and paulfchristiano's AIXI and existential despair would qualify?

Open Thread, October 16-31, 2012

Use your imaginary friend, to whom you try to explain the gist of what you've just read when, say, brushing your teeth. :)

(Actually writing down an explanation would certainly be more effective but not as fast).

Stuff That Makes Stuff Happen

Um, let's see if I get this (thinking to myself but posting here if anyone happens to find this useful - or even intelligible)...

claiming you know about X without X affecting you, you affecting X, or X and your belief having a common cause, violates the Markov condition on causal graphs

The causal Markov condition is that a phenomenon is independent of its non-effects, given its direct causes. It is equivalent to the ordinary Markov condition for Bayesian nets (any node in a network is conditionally independent of its nondescendants, given its parents) w... (read more)
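As a concrete sanity check, the Markov condition can be verified numerically on the simplest interesting graph, a chain X -> Y -> Z: given its parent Y, the node Z is independent of its nondescendant X. The probabilities below are arbitrary illustrative numbers, not from the post:

```python
import itertools

# Conditional probability tables for the chain X -> Y -> Z.
pX = {0: 0.6, 1: 0.4}
pY_given_X = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}   # pY_given_X[x][y]
pZ_given_Y = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}   # pZ_given_Y[y][z]

def joint(x, y, z):
    """Joint distribution factored according to the graph."""
    return pX[x] * pY_given_X[x][y] * pZ_given_Y[y][z]

# Markov condition: P(z | x, y) == P(z | y) for all values, i.e. Z ⊥ X | Y.
for x, y, z in itertools.product([0, 1], repeat=3):
    p_z_given_xy = joint(x, y, z) / sum(joint(x, y, zz) for zz in [0, 1])
    assert abs(p_z_given_xy - pZ_given_Y[y][z]) < 1e-12
print("Markov condition holds on the chain")
```

A claimed belief about Z that tracked X directly, without going through Y or some common cause, would break exactly this factorization.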

Stuff That Makes Stuff Happen

Look at it as an exercise for the actively disbelieving mini-skill. :)

5 loup-vaillant 9y: Mini-trick for the mini-skill: Pretend he's talking about a fictional universe where anything explicitly mentioned is arbitrary.
The Useful Idea of Truth

So... could this style of writing, with koans and pictures, be applied to transforming the majority of sequences into an even greater didactic tool?

Besides the obvious problems, I'm not sure how this would stand with Eliezer - they are, after all, his masterpiece.

6 thomblake 9y: Really, more like his student work. It was "Blog every day so I will have actually written something", not "Blog because that is the ultimate expression of my ideas".
The Useful Idea of Truth

Perhaps this: "accuracy" is a quantitative measure, while "truth" is only qualitative/categorical.

The Useful Idea of Truth

We know there's such a thing as reality due to the reasons you mention, not truth - that's just a relation between reality and our beliefs.

"Arrangements of atoms" play a role in the idea that not all "syntactically correct" beliefs actually are meaningful and the last koan asks us to provide some rule to achieve this meaningfulness for all constructible beliefs (in an AI).

At least that's my understanding...

Advice On Getting A Software Job

What about some kind of online employment like the one offered by e.g. oDesk? Some time ago I stumbled upon this recommendation that also gave a few tips on how to approach this kind of work.

I haven't yet found the time to try it out, but since I'm also in a similar situation (finishing a CS degree then planning to find a job that'll pay the bills and use my free time for personal projects) I treat it as one of the most promising alternatives...

1 maia 9y: Interesting tip, seems like it might work out well. That also looks like an interesting thread in general.
Suggest alternate names for the "Singularity Institute"


"The Mandate is a Gnostic School founded by Seswatha in 2156 to continue the war against the Consult and to protect the Three Seas from the return of the No-God.

... [it] also differs in the fanaticism of its members: apparently, all sorcerers of rank continuously dream Seswatha's experiences of the Apocalypse every night ...

...the power of the Gnosis makes the Mandate more than a match for schools as large as, say, the Scarlet Spires."

No-God/UFAI, Gnosis/x-rationality, the Consult/AGI community? ;-)

0 Multiheaded 9y: Haha, we're gonna see a lot more of such comparisons as the community extends.
Reaching young math/compsci talent

You can find a few suggestions here, for starters.

0 DaFranker 9y: I was reading this and preparing to post a questions-comment just like his, so thanks!