This is an idea that just occurred to me. We have a large community of people who think about scientific problems recreationally, many of whom are in no position to go around investigating them. Hopefully, however, some other community members are in a position to go around investigating them, or know people who are. The idea here is to allow people to propose relatively specific ideas for experiments, which can be upvoted if people think they are wise, and can be commented on and refined by others. Grouping them together in an easily identifiable, organized way in which people can provide approval and suggestions seems like it may actually help advance human knowledge, and with its high sanity waterline and (kind of) diverse group of readers, this community seems like an excellent place to implement this idea.

These should be relatively practical, with an eye towards providing some aspiring grad student or professor with enough of an idea that they could go implement it. You should explain the general field (physics, AI, evolutionary psychology, economics, psychology, etc.) as well as the question the experiment is designed to investigate, in as much detail as you are reasonably capable of.

If this is a popular idea, a new thread can be started every time one of these reaches 500 comments, or quarterly, depending on its popularity. I expect this to help people refine their understanding of various sciences, and if it ever gets turned into even a few good experiments, it will prove immensely worthwhile.

I think it's best to make these distinct from the general discussion thread because they have a very narrow purpose. I'll post an idea or two of my own to get things started. I'd also encourage people to post not only experiment ideas, but criticism and suggestions regarding this thread concept. I'd also suggest that people upvote or downvote this post if they think this is a good or bad idea, to better establish whether future implementations will be worthwhile. 


Does the Dual n-back increase intelligence?

Some evidence indicates that playing the videogame Dual n-back increases working memory and fluid intelligence.

A group of us would first take a memory test. Next, a randomly selected subgroup would play Dual n-back a few hours a week for, say, a month. Then, both groups would take another memory test. Next, we would wait, say, two months with no one playing the game. Finally, the two groups would again take a memory test. We could probably still learn a lot by omitting the control group.
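
To make the analysis concrete, here is a rough sketch in R of how the resulting scores might be compared (the data and column names below are placeholders I made up, not results from any actual run):

# Placeholder data: 20 trainees and 20 controls, with pre- and post-training memory scores.
set.seed(1)
scores <- data.frame(
  group = rep(c("nback", "control"), each = 20),
  pre   = rnorm(40, mean = 100, sd = 15),
  post  = rnorm(40, mean = 100, sd = 15)
)
scores$gain <- scores$post - scores$pre

# Did the n-back group improve more than the control group?
t.test(gain ~ group, data = scores)

The same comparison could be run again on the scores from the two-month follow-up to check whether any gain persists.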

Here is a free version of the game.

In addition to the memory test, we should also use some fluid intelligence test (like RAPM). It would probably be good to use unspeeded RAPM and other fluid intelligence tests (rather than speeded RAPM, which is controversial).

Also, we should investigate different modes, like multiple stimuli, arithmetic, and crab n-back.

Louie:

A few of us at Singularity Institute tested Dual N-Back last year. For 1 week, 13 people were tested on dissimilar metrics of intelligence while some of them performed the same kind of Dual N-Back done in the original Jaeggi study.

Conclusion: It doesn't make you smarter.

Bonus: You get better at Dual N-Back though!

Interestingly, at around the same time as we were doing our tests last year, the original researcher "replicated" her own results and published them again using new data. I'm sort of confused. I don't want to say Jaeggi doesn't understand training and practice effects... but I'm struggling to understand how else to explain this.

That said, it would still be cool to see LW folks test intelligence amplification (IA) interventions. I just recommend exploring more promising ones. Perhaps seeking to confirm the results of these studies instead?

Louie, I don't remember the details of this. I thought folks ended up with a very small and not-very-powerful study (such that the effect would have had to be very large to show up anyhow), with the main goal of the "study", such as it was, being to test our procedures for running future potential experiments?

Could you refresh my memory on what tests were run?

Also, speaking about how "the folks at SIAI believe X" based on a small-scale attempt run within the visiting fellows program last summer seems misleading; it may inaccurately seem to folks as though you're speaking for Eliezer, Michael Vassar, or others. I was there (unlike Eliezer or Michael) and I don't recall having the beliefs you mention.

This definitely agrees with my memory, too...

I mostly felt compelled to actually sign in and comment because I wanted to point out that designating something as "[t]he opinion of folks at Singularity Institute" or the like is often an annoying simplification/inaccuracy that allows people to thenceforth feel justified in using their social cognition to model a group roughly as they would a person.

When I asked back in September 2010, you said

They're ad hoc, we've used one for a dual n-back study which ended up yielding insufficient data....We didn't study long enough to get any statistically significant data. Like, not even close....really, there's no information there, no matter how much Bayes magic you use

So at least your memory has been consistent over the last 9 months.

Hmm. If your replication attempt was good science, you could help the world by publishing it. If it wasn't good science, you probably shouldn't update on it very strongly.

[anonymous]:

For 1 week, 13 people were tested on dissimilar metrics of intelligence

One week seems very short compared to the studies. I didn't check them all, but one mentioned in Wikipedia was 5 weeks and another whose abstract I found was 4 weeks. It seems probable that any effects on the brain would be slow to accumulate. As a point of comparison, it takes a lot longer than one week to assess the value of a particular muscle training program.

[anonymous]:

Conclusion: It doesn't make you smarter.

How much smarter does it not make you, specifically?

Here's an example. I'll review the comments on it and use them to develop a sort of standard structure, which I will then incorporate into the top-level post for future use. So please comment both on the idea and on my expression of it, and make suggestions for what basic info should be included, either in response to this or in response to the primary article.

Area: Evolutionary Psychology

Topic: Genetic versus "cultural" influence on mate attraction.

Specific problem: There's a belief that various aspects of physical attraction are genetically determined. It is difficult to separate genetic effects from cultural effects. This is an attempt to try to control for that, to see how (in)substantial the effects of culture are. The underlying idea is that, while different cultures also have different genetic makeups, different times in the same geographic area may see different cultures with much more related genetic makeups.

The actual experiment: Sample a group of people (possibly just one sex per experiment) and obtain their views on physical attractiveness. Show them images of people, or drawings, or ask questions about what physical qualities they would find desirable in a mate. (e.g. An attractive member of the opposite sex would be taller than me - strongly agree, agree, disagree, strongly disagree). Then, and this is the expensive part, use the exact same survey on people's offspring at about the same age. It may be ideal to compare people with aunts and uncles rather than parents, as parents are likely to have a more direct non-genetic effect on preferences.

This is a rather general description, but it should be perfectly adequate for someone in the relevant field to design a very effective and insightful experiment. It could even easily be incorporated as part of a larger experiment tracking qualities between generations.

Have you seen this study yet?

This is a very cool idea.

One minor protocol thing: if it's a good idea to limit the sample group based on sex, then it would also be a good idea to limit based on sexual orientation, since the cultural factors that affect opposite-sex attraction are quite different from those that affect same-sex attraction, and there may be a difference in the genetic factors as well.

Per this discussion post: do siblings who look alike tend to get along better and help each other more?

Ideally you'd get a bunch of (full, tested genetically if there's budget and this passes the ethics board) sibling pairs, probably controlling for number and variety of extra siblings in the family. Take pictures of all the pairs, get third parties to rate their visual similarity (I can name half a dozen features that don't match between me and my sister but that didn't stop six of her friends from mistaking me for her when I walked into her school building one day, so I'm obviously no judge of the matter), and then measure altruism between the siblings (exchanges of babysitting services, frequency of contact/social support, any material assistance that they provide one another, etc.)

Things I'd like to see tested, which may or may not have been tested before but which I haven't seen in the literature:

1) There's a lot of evidence that people are wildly overconfident. The classic version of this is that if you ask people to give a range for something such that they are 90% sure it contains the right answer, and do this for a long list of things (like, say, the populations of various countries), their hit rate will come out well below 90%. Will people be better calibrated when there is money at stake? (This is something that Mass Driver and I discussed a while back.) The way I'd test this is, after they've given their answers, to see what bets they are willing to take on being correct and how closely those match their estimated confidence. (A toy scoring sketch appears after this list.)

2) Are people who have learned about cognitive biases less likely to be subject to them to any substantial degree? The one I'm most curious about is the conjunction fallacy. The obvious way to test this is to take people who have just finished a semester of intro psychology or something similar and see if they show less of a conjunction bias than students who have not done so.

3) Can training make one better at the color-word version of the Stroop interference test?
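
For idea 1, here is a toy sketch in R of the scoring step (all of the data and names below are made up for illustration):

# Each row is one interval question; 'staked' says whether money was on the line,
# 'correct' whether the true value fell inside the subject's stated 90% interval.
set.seed(1)
answers <- data.frame(
  staked  = rep(c(FALSE, TRUE), each = 100),
  correct = c(rbinom(100, 1, 0.60), rbinom(100, 1, 0.75))   # placeholder hit rates
)

# Observed hit rates by condition; a well-calibrated subject would be near 0.90 in both.
aggregate(correct ~ staked, data = answers, FUN = mean)

# Is the difference between the two conditions more than noise?
hits <- tapply(answers$correct, answers$staked, sum)
prop.test(as.vector(hits), c(100, 100))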

3) Can training make one better at the color-word version of the Stroop interference test?

Yes. The Stroop test is, along with spaced repetition, one of the most confirmed and replicated tasks in all of psychology, so it would be deeply surprising if no one had come up with training to make you better at the Stroop test. (Heck, there's plenty of training available for IQ tests - like taking a bunch of IQ tests.)

I'd put a very high confidence on that, but as it happens, I don't have to since I recently saw discussion of one result on Stroop test and meditation:

After training, subjects were tested on a variety of cognitive and personality tests, including associate learning, word fluency, depression, anxiety, locus of control, and of course Stroop. Results showed that the TM and MF groups together scored significantly higher on associate learning and word fluency than the no-training and relaxation-training groups. Perhaps most surprisingly, over a 36 month period, the survival rate for the TM and MF groups was significantly higher than for the relaxation and no-training groups (p<.00025). But more to the point, both TM and MF scored higher than MR and no-training on the Stroop task (p<.1; one-tailed test).

Or:

Incredibly, behavioral data showed that the standard stroop effect (again, a cost in reaction time when reading incongruent words relative to congruent words) was completely eliminated in terms of both reaction time and accuracy for both the experimental and control groups. [ERP analyses revealed decreased visual activity under suggestions, including suppression of early visual effects commonly known as the P100 and N100, while fMRI showed reductions in a variety of regions including anterior cingulate]. The bottom line, then, is that even strong suggestion is enough to accomplish some amount of deprogramming, as measured through the Stroop task.

Thanks.

1) I'm surprised this hasn't already been done. Many economists like to argue that "people are rational when it counts", i.e. when there are stronger incentives. Similar to your proposal, I'm interested in seeing how priming affects decisions with incentives; to my knowledge, this hasn't been done either (but IIRC it has been done without incentives).

2) IIRC the results have been replicated with economics and/or psychology graduate students (citation needed).

1) Different but related: people who trade stuff a lot suffer much less from the endowment effect. Also, while people are normally crap at randomising, with money at stake they get better very quickly.

It is possible that 1) has been done but if so I haven't seen the studies.

Oooh, I tend to get these quite often; let's see if I can remember any that's actually workable...

I had this idea for a narrow AI experiment: you have two populations of algorithms, of many different and unrelated types, in a predator-prey-like arms race. One side tries to forge false sensory data (for example images or snippets of music), the other tries to distinguish those falsifications from human- or nature-supplied data, and the first group is scored on how well it fools the second. That's the basic idea. If anyone would actually be interested in trying it out, I've thought a bunch more about the details of how to implement it, possible problems, and further small things you could do to make it work even better than the raw version, and stuff like that.
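
Here is a very stripped-down sketch in R of the kind of loop I have in mind. Everything in it (the one-dimensional "sensory data", the two-parameter forgers, the logistic-regression discriminator) is a stand-in chosen for illustration, not part of the actual proposal:

set.seed(1)
n_real    <- 200   # "natural" samples per round
n_forgers <- 30    # population of forger algorithms
n_rounds  <- 50

# Stand-in for real sensory data (images, music, ...): a fixed 1-D distribution.
draw_real <- function(n) rnorm(n, mean = 3, sd = 1)

# Each forger is just a (mean, sd) pair and "forges" by sampling from it.
forgers <- data.frame(mean = runif(n_forgers, -5, 5),
                      sd   = runif(n_forgers, 0.5, 3))

for (iter in seq_len(n_rounds)) {
  real <- draw_real(n_real)
  fake <- unlist(lapply(seq_len(n_forgers),
                        function(i) rnorm(10, forgers$mean[i], forgers$sd[i])))
  x      <- c(real, fake)
  labels <- c(rep(1, length(real)), rep(0, length(fake)))

  # Discriminator: logistic regression on the value and its square.
  disc <- glm(labels ~ x + I(x^2), family = binomial)

  # Score each forger by how often its output gets classified as "real".
  fool_rate <- sapply(seq_len(n_forgers), function(i) {
    s <- rnorm(50, forgers$mean[i], forgers$sd[i])
    mean(predict(disc, newdata = data.frame(x = s), type = "response") > 0.5)
  })

  # Selection plus mutation: keep the better half, perturb copies of them.
  keep     <- order(fool_rate, decreasing = TRUE)[seq_len(n_forgers / 2)]
  parents  <- forgers[keep, ]
  children <- parents + matrix(rnorm(nrow(parents) * 2, sd = 0.2), ncol = 2)
  children$sd <- pmax(children$sd, 0.1)   # keep the sd positive
  forgers  <- rbind(parents, children)
}

# The surviving forgers should have drifted toward the real distribution's (3, 1).
colMeans(forgers)

In the real version each forger and discriminator would of course be a full algorithm of some unrelated type rather than a two-parameter distribution, but the scoring and selection loop would look much the same.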

Field: Software Engineering. Issue: what are the determinants of efficiency in getting stuff done that entails writing software.

At the Paris LW meetup, I described to Alexandros the particular subtopic about which I noticed confusion (including my own) - people call it the "10x thesis". According to this, in a typical workgroup of software professionals (people paid to write code), there will be a ten to one ratio between productivities of the best and worst. According to a stronger version, these disparities are unrelated to experience.

The research in this area typically has the following setup: you get a group of N people in one room, and give them the same task to perform. Usually there is some experimental condition that you want to measure the effect of (for instance "using design patterns" vs "not using design patterns"), so you split them into subgroups accordingly. You then measure how long each takes to finish the task.

The "10x" result comes from interpreting the same kind of experimental data, but instead of looking at the effect of the experimental condition, you look at the variance itself. (Historically, this got noticed because it vexed researchers that the variance was almost always swamping out the effects of the experimental conditions.)

The issue that perplexes me is that taking a best-to-worst ratio in each group, in such cases, will give a measurement of variance that is composed of two things: first, how variable the time required to complete a task is intrinsically, and second, how different people in the relevant population (which is itself hard to define) differ in their effectiveness at completing tasks.

When I discussed this with Alexandros I brought up the "ideal experiment" I would want to use to measure the first component: take one person, give them a task, measure how long they take. Repeat N times.

However this experiment isn't valid, because remembering how you solved the task the first time around saves a huge amount of time in successive attempts.

So my "ideal" experiment has to be amended: the same, but you wipe the programmer's short-term memory each time, resetting them to the state they were in before the task. Now this is only an impossible experiment.

What surprised me was Alexandros' next remark: "You can measure the same thing by giving the same task to N programmers, instead".

This seems clearly wrong to me. There are two different probability distributions involved: one is within-subject, the other inter-subject. They do not necessarily have the same shape. What you measure when giving one task to N programmers is a joint probability distribution, the shape of which could be consistent with infinitely many hypotheses about the shape of the underlying distributions.

Thus, my question - what would be a good experimental setup and statistical tools to infer within-subject variation, which cannot be measured, from what we can measure?

Bonus question: am I totally confused about the matter?

  1. Give one task to N programmers.
  2. Give a different task to the same N programmers.
  3. Repeat #2 several times.
  4. Say to self "I'll bet the same guy was a super-programmer on all of those tasks. He just is better at programming".
  5. Repeat #4 several times.
  6. Analyze the data by multiple regression. Independent variables are programmer ids and task ids. Intrinsic variability of tasks falls out of the analysis as unexplained variance, but what you are really interested in is the relative performance of programmers over all tasks.
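
In symbols, the model behind step 6 would be something like (the notation is mine, just to make the decomposition explicit):

Time_{ij} = \mu + \alpha_i + \beta_j + \varepsilon_{ij}

where \alpha_i is the effect of programmer i, \beta_j the effect of task j, and the residual \varepsilon_{ij} absorbs the intrinsic task variability; the 10x question is then about the spread of the \alpha_i.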

Bonus: I don't think you are confused. But you seem to be assuming that the 10x thesis applies to specific programming tasks (like writing a parser, or a diagram editor, or a pretty-printer). But I think the hypothesis is stronger than that. Some people are better at all types of programming than are lesser mortals. So, you can smooth the noise by aggregating several tasks without losing the 10x signal.

Analyze the data by multiple regression

I'd appreciate practical advice on how to do that in R/RStudio. I have data from an empirical study, loaded in RStudio as "29 observations of 8 variables". My variables are "Who, T1, T2, T3 (etc)" where "Who" is programmer id and T1, etc. are the times taken for tasks 1 through 8.

What R command will give me a multiple regression of times over programmer id and task id?

[ETA: OK, I figure what I've got to do is make this a data frame with 3 variables, those being Who, TaskId, Time. Right? Maybe I can figure it out. Worst case, I'll create a spreadsheet organized that way.]
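
Something like this might do it with base R's reshape (an untested sketch; "wide" and the column names are just my reading of the description above):

# From wide (Who, T1..T8) to long (Who, Task, Time), then regress on both factors.
long <- reshape(wide,
                direction = "long",
                varying   = paste0("T", 1:8),
                v.names   = "Time",
                timevar   = "Task",
                idvar     = "Who")
long$Task <- factor(long$Task)

fit <- lm(Time ~ Who + Task, data = long)   # Who and Task enter as factors
summary(fit)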

[ETA2: I've done the above, but I don't know how to interpret the results. What do you expect to see - in terms of coefficients of regression?]

I think you need one variable per programmer (value 0 or 1), one variable per task (value 0 or 1), and one variable for time taken to complete the task (real number). So, with 8 tasks and 29 programmers, you have 38 (= 29 + 8 + 1) variables, all but 3 of which are zero for each observation. And you have 232 observations.

Since you have 37 independent variables, you will have 37 regression coefficients (each presumably in units of hours) plus one additional parameter that applies to all observations. The results claim that you get a good estimate of the time required for programmer j to complete task k by adding together the j-th programmer coefficient, the k-th task coefficient and the extra parameter.

I'm not seeing why the ProgID and TaskID variables need to be booleans - or maybe R implicitly converts them to that. I've left them in symbolic form.

Here is a subset of the PatMain data massaged (by hand!) into the format I thought would let me get a regression, and the regression results as a comment. I got this into a data frame variable named z2 and ran the commands:

fit <- lm(Time ~ ., data = z2)
summary(fit)

I suck at statistics so I may be talking nonsense here, and you're welcome to check my results. The bottom line seems to be that the task coefficients do a much better job of predicting the completion time than do the programmer coefficients, with t-values that suggest you could easily not care about who performs the task with the exception of programmer A6 who was the slowest of the lot.

(For instance the coefficients say that the best prediction for the time taken is "40 minutes", then you subtract 25 minutes if the task is ST2. This isn't a bad approximation, except for programmer A4 who takes 40 minutes on ST2. It's not that A4 is slow - just slow on that task.)

You had asked for assistance and expertise on using R/RStudio. Unfortunately, I have never used them.

maybe R implicitly converts them

Judging from your results, I'm sure you are right.

The bottom line seems to be that the task coefficients do a much better job of predicting the completion time than do the programmer coefficients.

Yes, and if you added some additional tasks into the mix - tasks which took hours or days to complete - then programmer ID would seem to make even less difference. This points out the defect in my suggested data-analysis strategy. A better approach might have been to divide each time by the average time for the task (over all programmers), optionally also taking the log of that, and then exclude the task id as an independent variable. After all, the hypothesis is that Achilles is 10x as fast as the Tortoise, not that he takes ~30 minutes less time regardless of task size.

you seem to be assuming that the 10x thesis applies to specific programming tasks (like writing a parser, or a diagram editor, or a pretty-printer)

Where is that implied in what I wrote above?

Some people are better at all types of programming than are lesser mortals

Are you making that claim, or suggesting that this is what the 10x thesis means?

(Dijkstra once claimed that "the use of COBOL cripples the mind". If true, it would follow that someone who is a great COBOL programmer would be a poor programmer in other languages.)

Some people are better at all types of programming than are lesser mortals

Are you making that claim, or suggesting that this is what the 10x thesis means?

Both.

(Dijkstra once claimed that "the use of COBOL cripples the mind". If true, it would follow that someone who is a great COBOL programmer would be a poor programmer in other languages.)

Amusingly, that does not follow. A great COBOL programmer completes his COBOL tasks in 1/10 the time of lesser folk, and hence becomes 1/10 as crippled.

you seem to be assuming ...

Where is that implied in what I wrote above?

It appears that I somehow misinterpreted your point and thereby somehow offended you. That was not my intention.

You began by mentioning the problem of testing the 10x hypothesis, and then switched to the problem of trying to separate out "how variable the time required to complete a task is intrinsically". That is an odd problem to focus on, and my intuition tells me that it is best approached by identifying that variance as a residual rather than by inventing ideal thought experiments that measure it directly. But if someone else has better ideas, that is great.

somehow offended you

No offense taken. Just curious to know. I'm declaring Crocker's Rules in this thread.

You are asserting "some people are better at all types of programming than are lesser mortals". In that case I'd like to know what evidence convinced you, so that I can have a better understanding of "better at".

Some of the empirical data I have looked at contradicted your hypothesis "the same guy was a super-programmer on all of those tasks". In that study, some people finished first on one task and last on some other task. (Prechelt's "PatMain" study.)

the problem of testing the 10x hypothesis

One of my questions is, "is the 10x claim even a testable hypothesis?". In other words, do we know what the world would look like if it was false?

When I've brought this up in one venue, people asked me "well, have you seen any evidence suggesting that all people code at the same rate?" This is dumb. Just because there exists one alternate hypothesis which is obviously false does not immediately confirm the hypothesis being tested.

Rather, the question is "out of the space of possible hypotheses about how people's rates of output when programming differ, how do we know that the best is the one which models each individual as represented by a single numerical value, such that the typical ratio between highest and lowest is one order of magnitude".

This space includes hypotheses where rate of output is mostly explained by experience, which appear facially plausible - yet many versions of the 10x thesis explicitly discard these.

My reasons for believing the 10x hypothesis are mostly anecdotal. I've talked to people who observed Knuth and Harlan Mills in action. I know of the kinds of things accomplished more recently by Torvalds and Hudak. Plus, I have myself observed differences of at least 5x in industrial and college classwork environments.

I looked at the PatMain study. I'm not sure that the tasks there are large enough (roughly 3 hours) to test the 10x hypothesis. Worse, they are program maintenance tasks, and they exclude testing and debugging. My impression is that top programmers achieve their productivity mostly by being better at the design and debugging tasks. That is, they design so that they need less code, and they code so they need dramatically less debugging. So I wouldn't expect PatMain data to back up the 10x hypothesis.

My reasons for believing the 10x hypothesis are mostly anecdotal.

Do you see it as a testable hypothesis though, as opposed to an applause light calling out the programming profession as one where remarkable individuals are to be found?

I'm not sure that the tasks there are large enough ... they are program maintenance tasks

You said earlier that a great programmer is good at all types of programming tasks, and program maintenance certainly is a programming task. Why the reversal?

Anyway, suppose you're correct and there are some experimental conditions which make for a poor test of 10x. Then we need to list all such exclusion criteria prior to the experiment, not come up with them a posteriori - or we'll be suspected of excluding the experimental results we don't like.

My impression is that top programmers achieve their productivity mostly by being better at the design and debugging tasks ... they design so that they need less code

Now this sounds as if you're defining "productivity" in such a way that it has less to do with "rate of output". You've just ruled out, a priori, any experimental setup in which you hand programmers a fixed design and measure the time taken to implement it, for instance.

At this point ISTM we still have made surprisingly little headway on the two questions at hand:

  • what kind of claim is the 10x claim - is it a testable hypothesis, and if not, how do we turn it into one
  • what kind of experimental setup will give us a way to check whether 10x is indeed favored among credible alternatives

Do you see it as a testable hypothesis[?]

I believe it can be turned into one. For example, as stated, it doesn't take into account sample or population size. The reductio (N=2) is that it seems to claim the faster of two programmers will be 10x as fast as the slower. There is also a need to clarify and delimit what is meant by task.

You said earlier that a great programmer is good at all types of programming tasks, and program maintenance certainly is a programming task. Why the reversal?

Because you and I meant different things by task. (I meant different types of systems - compilers vs financial vs telephone switching systems for example.) Typing and attending meetings are also programming tasks, but I wouldn't select them out for measurement and exclude other, more significant tasks when trying to test the 10x hypothesis.

Now this sounds as if you're defining "productivity" in such a way that it has less to do with "rate of output". You've just ruled out, a priori, any experimental setup in which you hand programmers a fixed design and measure the time taken to implement it, for instance.

Yes, I have. And I think we are wasting time here. It is easy to refute a scientific hypothesis by uncharitably misinterpreting it so that it cannot possibly be true. So I'm sure you will succeed in doing so without my help.

It is easy to refute a scientific hypothesis by uncharitably misinterpreting it so that it cannot possibly be true.

Where specifically have I done that? (Is it the "applause light" part? Do you think it obviously false that the thesis serves as an applause light?)

And I think we are wasting time here.

Are you tapping out? This is frustrating as hell. Crocker's Rules, dammit - feel free to call me an idiot, but please point out where I'm being one!

Without outside help I can certainly go on doubting - holding off on believing what others seem to believe. But I want something more - I want to form positive knowledge. (As one fictional rationalist would have it, "My bottom line is not yet written. I will figure out how to test the magical strength of Muggleborns, and the magical strength of purebloods. If my tests tell me that Muggleborns are weaker, I will believe they are weaker. If my tests tell me that Muggleborns are stronger, I will believe they are stronger. Knowing this and other truths, I will gain some measure of power.")

For example, as stated, it doesn't take into account sample or population size.

Yeah, good catch. The 10x ratio is supposed to hold for workgroup-sized samples (10 to 20). What the source population is, that's less clearly defined. A 1983 quote from Mills refers to "programmers certified by their industrial position and pay", and we could go with that: anyone who gets full time or better compensation for writing code and whose job description says "programmer" or a variation thereof.

We can add "how large is the programmer population" to our list of questions. A quick search turns up an estimate from Watts Humphrey of 3 million programmers in the US about ten years ago.

So let's assume those parameters hold - population size of 3M and sample size of 10. Do we now have a testable hypothesis?

What is the math for finding out what distribution of "productivity" in the overall population gives rise to a typical 10x best-to-worst ratio when you take samples of that size? Is that even a useful line of inquiry?
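
One way to poke at that last question numerically is a simulation; here is a sketch in R (the lognormal shape and the value of sigma are assumptions of mine, purely for illustration):

# If log-productivity is normally distributed with spread sigma, what
# best-to-worst ratio do we typically see in a workgroup-sized sample?
set.seed(1)
sample_size <- 10
sigma <- 0.75                          # try different values of the assumed spread
ratios <- replicate(10000, {
  p <- rlnorm(sample_size, meanlog = 0, sdlog = sigma)
  max(p) / min(p)
})
quantile(ratios, c(0.25, 0.5, 0.75))   # is a ~10x ratio typical under this sigma?

Searching for the sigma (or the distribution family) under which a 10x ratio is typical for samples of this size would be one way of turning the claim into something with testable consequences.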

The misinterpretation that stood out to me was:

Now this sounds as if you're defining "productivity" in such a way that it has less to do with "rate of output". You've just ruled out, a priori, any experimental setup in which you hand programmers a fixed design and measure the time taken to implement it, for instance.

I'm not sure whether you meant "design" to refer to e.g. internal API or overall program behavior, but they're both relevant in the same way:

The important metric of "rate of output" is how fast a programmer can solve real-world problems. Not how fast they can write lines of code -- LOC is a cost, not an output. Design is not a constant. If Alice implements feature X using 1 day and 100 LOC, and Bob implements X using 10 days and 500 LOC, then Alice was 10x as productive as Bob, and she achieved that productivity by writing less code.

I would also expect that even having a fixed specification of what the program should do would somewhat compress the range of observed productivities compared to what actually happens in the wild. Because translating a problem into a desired program behavior is itself part of the task of programming, and is one of the opportunities for good programmers to distinguish themselves by finding a more efficient design. Although it's harder to design an experiment to test this part of the hypothesis.

LOC is a cost, not an output

Yes.

A great COBOL programmer completes his COBOL tasks in 1/10 the time of lesser folk, and hence becomes 1/10 as crippled.

That has unfortunately not been my experience with similarly crippling languages. A great programmer finishes their crippling-language tasks much quicker than a poor programmer... and their reward is lots and lots more tasks in the crippling language. :-\

That has unfortunately not been my experience with similarly crippling languages. A great programmer finishes their crippling-language tasks much quicker than a poor programmer... and their reward is lots and lots more tasks in the crippling language

I've seen this too - if something sucks it can be a good idea to make sure you appear to suck at it!

If being a good or bad programmer is an intrinsic quality that is independent of the task, then you could just give the same subject different tasks to solve. So you take N programmers, and give them all K tasks to solve. Then you can determine the mean difficulty of each task as well as the mean quality of each programmer. Given that, you should be able to infer the variance.

There are some details to be worked out, for example, is task difficulty multiplicative or additive? I.e. if task A is 5 times as hard as task B, will the standard deviation also be 5 times as large? But that can be solved with enough data and proper prior probabilities of different models.

Field: Electrical Engineering. No idea how practical this is though:

An important problem with increasing the number of cores on a chip is having enough bandwidth between the cores. Some people are working on in-silicon optical channels, which seems promising. Instead of this, would it be possible for the different cores to communicate with each other wirelessly? This requires integrated transmitters and receivers, but I believe both exist.

I am not an electrical engineer, but as far as I know, wireless communication requires a relatively large antenna. Also, the bandwidth is likely a lot worse than that of a wire. There is a good reason that people still use wires whenever possible.

I should have done some more due diligence before suggesting my idea:

http://www.cs.ucla.edu/~sblee/Papers/mobicom09-wnoc.pdf

Edit: I was originally concerned about bandwidth, but the above article claims

On-chip wireless channel capacity. Because of such low signal loss over on-chip wireless channels and new techniques in generating terahertz signals on-chip [14,31], the on-chip wireless network becomes feasible. In addition, it is possible to switch a CMOS transistor as fast as 500 GHz at 32 nm CMOS [21], thus allowing us to implement a large number of high frequency bands for the onchip wireless network. Following a rule of thumb in RF design, the maximum available bandwidth is 10% of the carrier frequency. For example, with a carrier frequency of 300 GHz, the data rate of each channel can be as large as 30 Gbps. Using a 32 nm CMOS process, there will be total of 16 available channels, from 100 GHz to 500 GHz, for the on-chip wireless network, and each channel can transmit at 10 to 20 Gbps. In the 1000-core CMPs design, the total aggregate data rate can be as high as 320 Gbps with 16 TX’s and 64 RX’s.

Inspired by this article on the effects of gut bacteria, I'd like to know whether chronically obese people starting a new diet would benefit from taking a course of antibiotics at the same time.

Of course, if this becomes widespread, it would increase antibiotic resistance (and other health risks) for a relatively low payoff, but I'm still curious.