# Wiki Contributions

This reminds me of a bit from Feynman's Lectures on Physics:

"What is this law of gravitation?  It is that every object in the universe attracts every other object with a force which for any two bodies is proportional to the mass of each and varies inversely as the square of the distance between them.  This statement can be expressed mathematically by the equation F=Gmm'/r^2.  If to this we add the fact that an object responds to a force by accelerating in the direction of the force by an amount that is inversely proportional to the mass of the object, we shall have said everything required, for a sufficiently talented mathematician could then deduce all the consequences of these two principles."

However, like Feynman, I think his next sentence is important:

"However, since you are not assumed to be sufficiently talented yet, we shall discuss the consequences in more detail, and not just leave you with these two bare principles."
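As a toy illustration of "deducing the consequences" from just those two principles, here is a short sketch applying them to the Earth-Moon system. The constants are standard published values, not figures from the quote:

```python
# Newton's law of gravitation plus "force produces acceleration inversely
# proportional to mass", applied to the Earth-Moon system.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # kg
m_moon = 7.342e22    # kg
r = 3.844e8          # mean Earth-Moon distance, m

# F = G m m' / r^2
force = G * m_earth * m_moon / r**2

# The Moon responds by accelerating by an amount inversely
# proportional to its own mass.
a_moon = force / m_moon

print(f"force  ~ {force:.2e} N")      # ~2.0e20 N
print(f"a_moon ~ {a_moon:.2e} m/s^2") # ~2.7e-3 m/s^2
```

The resulting acceleration matches the Moon's observed orbital acceleration, which is the sort of consequence Feynman says a sufficiently talented mathematician could deduce.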

"The average shareholder definitely does not care about the value of R&D to the firm long after their deaths, or I suspect any time at all after they sell the stock."

This was addressed in the post: the price of the stock today (when it's being sold) is a prediction of its future value.  Even if you only care about the price you can sell it at today, you still care about the things that lead to predictably greater value in the future, including R&D, because the person you're selling to cares about those things.

Also worth noting: the reason the 2% figure is meaningful is that if firms captured 100% of the value, they would be incentivized to produce the efficient amount (stop producing when the marginal cost equals the marginal value created).  When they capture only 2% of the value, that incentive stops far short of the efficient quantity.  This is basically why externalities lead to market inefficiencies.  The issue isn't that firms won't produce R&D at all; it's that they will underproduce it.
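A toy numeric sketch of the underproduction point, using made-up linear marginal value and cost curves (only the 2% figure comes from the post; everything else here is illustrative):

```python
# Marginal social value of the q-th unit of R&D: mv(q) = 100 - q (made up).
# Marginal cost of producing it: mc(q) = q (made up).

# Socially efficient quantity: produce until mc(q) = mv(q)  ->  q = 50.
q_social = 50.0

# A firm capturing only 2% of the value instead produces until
# mc(q) = 0.02 * mv(q):  q = 0.02 * (100 - q)  ->  q = 2 / 1.02.
capture = 0.02
q_firm = capture * 100 / (1 + capture)

print(q_social, round(q_firm, 2))  # 50.0 1.96
```

Under these made-up curves the firm still does some R&D, but around 2 units instead of the efficient 50: underproduction, not zero production.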

Spandrels certainly exist.  But note the context of what X is in the quoted text:

"a chunk of complex purposeful functional circuitry X (e.g. an emotion)"

A chunk of complex purposeful functional circuitry cannot be a spandrel.  There are edge cases that are perhaps hard to distinguish, but the complexity of a feature is a sign of its adaptiveness.  Eyes can't be spandrels.  The immune system isn't a spandrel.  Even if we didn't understand what they do, the sheer complexity and fragility of these systems implies that they are adaptive and were selected for (rather than just being byproducts of something else that was selected for).

Complex emotions (not specific emotional responses) fall under this category.

The wealthy may benefit from the existence of low-skilled labour, but compared to what?  Do they benefit more than they would from the existence of high-skilled labour?

Yes, they benefit from low-skilled labour as compared to no labour at all, but high-skilled labour, being more productive, is an even greater benefit.  If it weren't, it couldn't command a higher wage.

If "the wavefunction is real, but it is a function over potential configurations, only one of which is real," then you have the real configuration interacting with potential configurations.  I don't see how something that isn't real (if only one of them is real, then the others aren't) can interact with something that is.  If the "potential" part of the wave function can interact with the other parts of the wave function, then it's clearly real in every sense in which the word "real" means anything at all.

I know they're just cartoons and I get the gist, but the graphs labelled "naive scenario" and "actual performance" are a little confusing.

The X axis seems to be measuring performance, with benchmarks like "high schooler" and "college student", but in that case, what's the Y axis? Is it the number of tasks that the model performs at that particular level?  Something like that?

I think it would be helpful if you labeled the Y axis, even with just a vague label.

Re: the dark matter analogy.  I think the analogy works well, but I'd like to point out one wrinkle.  Even in theories where dark matter doesn't interact with the weak force, and there is some other force analogous to electromagnetism that lets it bind together to form an Earth-like planet, it still interacts with gravity.  If an Earth-sized dark matter planet really did overlap with ours, we'd feel its gravity, and the Earth would seem to be twice as massive as it is.  Or, to state it slightly differently: the actual Earth would be half as massive as we measure it to be.  But that would be inconsistent with what we know of its composition and density.  We know the mass of rocks, and the measurement of the mass of a rock of a particular size wouldn't be subject to this error, so we can rule out a dark matter Earth coincident with ours.

This isn't in any way a criticism of what I found to be a brilliant piece.  And I'm not even sure that it's reason enough not to use that particular analogy, which otherwise works great.
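The back-of-the-envelope version of that argument, using standard values for G, Earth's radius, surface gravity, and mean density (all assumptions of this sketch, not figures from the piece being discussed):

```python
import math

G = 6.674e-11   # gravitational constant, N m^2 / kg^2
g = 9.81        # measured surface gravity, m/s^2
R = 6.371e6     # Earth's mean radius, m
density = 5514  # mean density inferred from Earth's composition, kg/m^3

# Mass implied by the gravity we actually measure: g = G M / R^2
m_gravitational = g * R**2 / G

# Mass implied by composition: density * volume
m_compositional = density * (4 / 3) * math.pi * R**3

print(f"{m_gravitational:.2e} kg vs {m_compositional:.2e} kg")
```

The two estimates agree at about 5.97e24 kg, so there is no room for a second, coincident dark-matter Earth contributing another planet's worth of gravitational mass.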

Related to this topic, with a similar outlook but also more discussion of specific approaches going forward, is Vitalik's recent post on techno-optimism:

https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html

There is a lot at the link, but just to give a sense of the message, here's a quote:

"To me, the moral of the story is this. Often, it really is the case that version N of our civilization's technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires intentional human effort. The ozone layer is recovering because, through international agreements like the Montreal Protocol, we made it recover. Air pollution is improving because we made it improve. And similarly, solar panels have not gotten massively better because it was a preordained part of the energy tech tree; solar panels have gotten massively better because decades of awareness of the importance of solving climate change have motivated both engineers to work on the problem, and companies and governments to fund their research. It is intentional action, coordinated through public discourse and culture shaping the perspectives of governments, scientists, philanthropists and businesses, and not an inexorable "techno-capital machine", that had solved these problems."

I've no real insight to add, but would just like to comment that this generally lines up with the picture Steven Pinker paints in books like "Better Angels of Our Nature" and "Enlightenment Now".

Thanks for a good comment.  My oversimplified thought process was that a 10x increase in energy usage for the brain would equate to a ~2x increase in total energy usage.  Since we're able to maintain that kind of energy use during exercise, and elite athletes can maintain that for many hours/day, it seems reasonable that the heart and other organs could maintain this kind of output.
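Roughly the arithmetic behind that, assuming a ~20 W brain out of a ~100 W resting metabolism (textbook ballpark figures, which are my assumption here, not from the original post):

```python
brain_w = 20.0         # resting brain power, W (ballpark assumption)
rest_of_body_w = 80.0  # rest of resting metabolism, W (ballpark assumption)

total_now = brain_w + rest_of_body_w       # ~100 W
total_10x = 10 * brain_w + rest_of_body_w  # ~280 W

increase = (total_10x - total_now) / total_now
print(increase)  # 1.8 -- i.e. roughly a 2x increase over baseline
```

Whole-body metabolism during sustained exercise can run at several hundred watts, so the whole-body budget looks feasible; as discussed below, the hard part is delivering and cooling ~200 W inside the skull.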

However, the issues you bring up (actually getting that much blood to the brain, evacuating waste products, doing the necessary metabolism there, and dealing with so much heat localized in the small volume of the brain) are all valid.  While the rest of the body wouldn't seem to be constrained by this level of energy use, a 10x power output in the brain itself might well be a problem.

It would be worth a more detailed analysis of exactly where the maximum power output constraint on the brain, absent any major changes, lies.