Yes, thank you for writing this; I've been meaning to write something like it for a while, and now I don't need to! I initially brushed Newcomb's Paradox off as an edge case, and it took me much longer than I would have liked to realize how universal it was. A discussion of this type should be included with every introduction to the problem to prevent people from treating it as just some pointless philosophical thought experiment.

As far as I can tell from the evidence given in the talk, contagious spreading of obesity is a plausible but not directly proven idea. Its plausibility comes from the more direct tests that he gives later in the talk, namely the observed spread of cooperation or defection in iterated games.

However, I agree that it's probably important not to be too quick to talk about contagious obesity, because (a) they haven't done the more direct interventional studies that would show whether it's real, and (b) publicly speculating about contentious social issues before you have a solid understanding of what's going on leads to bad outcomes. He could have been more explicit that we don't know which causal structure produces the correlations we see; I caught it, but I suspect people paying less attention would come away thinking the causal model had been proved.

The Moire Eel - move your cursor around and see all the beautiful, beautiful moiré patterns.

Social Networks and Evolution: a great Oxford neuroscience talk. I will also shamelessly plug this blog post I wrote about the connection between the work in the lecture and Jared Diamond's thesis that agriculture was the worst mistake in human history.

This is exactly what I was thinking the whole time. Is there any example of supposed "ambiguity aversion" that isn't explained by this effect?

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"? There may be a tiny handful of people wise enough to try it.

This is precisely why people should be encouraged to do it more. I've found that the more you admit to a lack of ability where you don't have the ability, the more people are willing to listen to you where you do.

I also see an interesting parallel to the relationship between skeptics and pseudoscience, with skeptics -> rationalists and pseudoscience -> religion. Namely, "things that look like politics are the mindkiller" maps onto "things that look like pseudoscience are obviously dumb". Both provide an opportunity to view yourself as smarter than other people without thinking too hard about the issue.

1) This is fantastic; I keep meaning to read more on how to actually apply Highly Advanced Epistemology to real data, and now I'm learning about it. Thanks!

2) This should be on Main.

3) Does the literature have an alternative to the notation Pr(A = a)? I hadn't realized until now how little sense the equal sign makes there. In standard usage, the equal sign refers either to literal equivalence (or isomorphism), as in functional programming, or to variable assignment, as in imperative programming. This operation is obviously not literal equivalence (the random variable A is not literally the value a), and it's only sort of like variable assignment: we don't erase our previous information about A, since we want A to still be around when we talk about observing other values from it.

By analogy with Pearl's "do" notation, I propose an "observe" notation, where Pr(A = a) would be written as Pr(obs_A (a)) and read as "the probability that the value a is observed for A," so that we don't overload our precious equal sign. (The overloading between equivalence and variable assignment is already stressful enough for the poor piece of notation.)
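To make it concrete, something like this (just a sketch; obs is my made-up symbol, and do(·) is Pearl's):

```latex
% Sketch only: \operatorname needs amsmath; obs is the proposed symbol, not standard notation.
\[
  \Pr(A = a) \;\longrightarrow\; \Pr(\operatorname{obs}_A(a))
\]
\[
  \Pr\bigl(Y = y \mid \operatorname{do}(X = x)\bigr) \;\longrightarrow\;
  \Pr\bigl(\operatorname{obs}_Y(y) \mid \operatorname{do}(X = x)\bigr)
\]
```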

I'm not proposing that you change your notation for this sequence, but I feel like something along these lines might make for clearer pedagogy in general.

That is the general approach I've been taking on the issue so far: basically, I'm interested in learning about consciousness, and I've been going about it by reading papers on the subject.

However, part of the issue is that I don't know what I don't know. I can look up unfamiliar terms that show up in papers, but the literature presumably makes unspoken inferences that rest on "obvious" background knowledge.

Furthermore, since I have a bias toward novelty and flashiness, I may fail to notice when something blatantly contradicts results that any well-trained neuroscientist or cognitive scientist would know, and end up believing something that couldn't be true.

Do you have recommendations for places where non-experts can ask more knowledgeable people about neuro/cog sci? There is a Cognitive Sciences Stack Exchange, but it appears to be poorly trafficked; it averages about one post per week.

(How many different DAGs are possible if you have 600 nodes? Apparently, >2^600.)

Naively, I would expect it to be much closer to 600^600: even if each node just picks a single parent (or none), you already get 600^600 directed graphs, and the full space of directed graphs on 600 nodes (2^(600·599) of them) is larger still.

And in fact, the exact count is given by a complicated recurrence and grows far faster than 2^n (faster even than n^n): http://en.wikipedia.org/wiki/Directed_acyclic_graph#Combinatorial_enumeration
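For the curious, here's a quick sketch in Python of the recurrence given there (Robinson's formula; OEIS A003024), comparing the exact count of labeled DAGs against 2^n and n^n for small n:

```python
from math import comb

def count_labeled_dags(n):
    """Number of DAGs on n labeled nodes (OEIS A003024), via the recurrence
    a(m) = sum_{k=1..m} (-1)^(k+1) * C(m, k) * 2^(k*(m-k)) * a(m-k), with a(0) = 1."""
    a = [1]  # a[0] = 1: the empty graph
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]

# Compare the exact count against 2^n and n^n for small n.
for n in range(1, 7):
    print(n, count_labeled_dags(n), 2 ** n, n ** n)
```

Even at n = 6 the exact count (3,781,503) already dwarfs both 2^6 = 64 and 6^6 = 46,656, and the gap only widens from there.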
