jimrandomh

LessWrong developer, rationalist since the Overcoming Bias days. Connoisseur of jargon.

Comments

Against boots theory

It depends on how tightly you draw the analogy. If your takeaway from the boots story is that buying better versions of commodity manufactured goods like shoes is a key part of the story, then this is pretty clearly false, if only because those goods, even in aggregate, don't make up a large enough part of anyone's budget.

If you broaden it to include expenditure and accumulation of all resources, not just money, then it's mostly true. In a given year, a person might work a minimum wage job (have more money now, less money later--cheap boots) or attend a programming bootcamp (have less money now, more money later--expensive boots). They might eat cheap unhealthy food (have more money now, face problems later), or high-quality more expensive food (have less money now, fewer problems later). And so on, repeated across many kinds of decisions, and many kinds of resources.
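To make the cashflow-timing framing concrete, here is a minimal worked example; every dollar figure is invented for illustration, not taken from any data:

```python
# Rough arithmetic sketch; all dollar figures are assumptions, not data.
wage_path     = [25_000] * 6              # flat minimum wage job, $25k/yr (assumed)
bootcamp_path = [-15_000] + [60_000] * 5  # tuition year, then $60k/yr (assumed)

cum_wage = cum_boot = 0
for year, (w, b) in enumerate(zip(wage_path, bootcamp_path), start=1):
    cum_wage += w
    cum_boot += b
    print(f"year {year}: wage path ${cum_wage:>8,}  bootcamp path ${cum_boot:>8,}")
```

Under these assumed numbers the "expensive boots" path is behind for two years and permanently ahead from year three on, which is the structure the broadened analogy points at: pay more now, come out ahead later.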

John_Maxwell's Shortform

I checked the obvious subreddit (r/hairloss), and it seems to do just about everything wrong. It's not just that the Hot algorithm favors ephemeral content over accumulation of knowledge; they also don't have an FAQ, or any information in the sidebar, or active users with good canned replies to paste, or anything like that. I also note that most of the phpBBs mentioned use subforums, which the subreddit is also missing, to give the experimenters a place to talk without a stream of newbie questions.

I think the phpBB era had lots of similarly-neglected forums, which (if they somehow got traffic) would have been similarly bad. I think the difference is that Reddit is propping up this forum with a continuous stream of users, whereas a similarly-neglected phpBB would have quickly fallen to zero traffic.

So... I think this may be a barriers-to-entry story, where the relevant barrier is not on the user side but on the administrator side; most Reddit users can handle signing up for a phpBB just fine, but creating a phpBB requires a level of commitment that usually means you'll set up some subforums, create an FAQ, and put nonzero effort into making the forum good.

What Does "Signalling" Mean?

Oddly enough, the Signaling tag is currently awaiting a merge between the description you quoted and a description imported from the old wiki, which links to and quotes the Scott Alexander post you referenced. (You can see both in the edit history; it looks a bit weird because there's a revision that was written in isolation, in the context of the tagging system before we imported anything, but is presented as though it were a diff relative to the imported, older wiki page.)

The universality of computation and mind design space

I think you're missing what the goal of all this is. LessWrong contains a lot of reasoning and prediction about AIs that don't exist, with details not filled in, because we want to decide which AI research paths we should and shouldn't pursue, which AIs we should and shouldn't create, and so on. This kind of strategic thinking must be forward-looking and based on incomplete information, because if it weren't, it would be too late to be useful.

So yes, after AGIs are already coded up and ready to run, we can learn things about their behavior by running them. This isn't in dispute; it's just not a solution to the questions we want to answer, on the timescales we need the answers.

The universality of computation and mind design space

  But a person thinking about what an AI would do needn't imagine what he would do in that other mind's place. He can simulate that mind with a universal computer.

This is straightforwardly incorrect. Humans (in 2020) reasoning about what future AIs will do don't have the source code or full details of those AIs, because those AIs are hypothetical constructs. Therefore we can't simulate them. This is the same reason we can't predict what another human would do by simulating them: we don't have a full-fidelity scan of their brain, or a detailed-enough model of what to do with such a scan, or a computer fast enough to run it.

Social Capital Paradoxes
  1. Why do so many good things have horizontal transmission structures?

I think the key to this is that while vertical transmission is more likely to be aligned, it is aligned with reproductive fitness in particular, which only partially matches the rest of what we value, whereas horizontal transmission can come with an aligned, human-chosen filter attached. If I accept ideas from random unvetted sources, they will be optimized for transmission by that medium; if I want ideas that will make me a good thinker, and I have some ability to identify who the previous generation's good thinkers are, then I can selectively copy ideas from them, and the ideas I get will be selected for good thinking. (And if I succeed at becoming a recognizably good thinker, then future people may similarly copy ideas from me, and so on.)

(This kind of horizontal transmission is vulnerable to being taken over by fakes; if I lose the ability to distinguish who the good thinkers are, and start copying ideas from the wrong sources, then I'm back to the bad version of horizontal transmission in which ideas are selected mainly for virality, which in this case means memes that will turn me into a convincing faker.)
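As a toy illustration of the difference (the model and all numbers here are invented, not from the post): give each idea independent quality and virality scores, then compare what an unfiltered copier ends up adopting against what a quality-filtering copier adopts.

```python
# Toy model, all parameters invented: each idea has independent "quality"
# and "virality" scores in [0, 1]. Unfiltered horizontal transmission adopts
# ideas in proportion to virality; filtered transmission copies only from
# sources in the top quality decile.
import random

random.seed(0)
ideas = [(random.random(), random.random()) for _ in range(100_000)]  # (quality, virality)

viral_picks = [i for i in ideas if random.random() < i[1]]   # adoption ~ virality
cutoff = sorted(q for q, _ in ideas)[int(0.9 * len(ideas))]  # top decile by quality
filtered_picks = [i for i in ideas if i[0] >= cutoff]

def avg(xs): return sum(xs) / len(xs)

for name, picks in [("unfiltered", viral_picks), ("filtered", filtered_picks)]:
    print(f"{name:>10}: mean quality {avg([q for q, _ in picks]):.2f}, "
          f"mean virality {avg([v for _, v in picks]):.2f}")
```

Under this made-up model the unfiltered channel enriches for virality while being indifferent to quality, and the filter reverses which trait gets selected; that reversal is the sense in which a human-chosen filter can make horizontal transmission aligned.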

2. How should we think about horizontal transmission, normatively? Specifically, "paradox two" is an argument that horizontal-transmission practices, while enticing, can "burn the commons" of collective goodwill by opening things up to predatory/parasitic dynamics. Yet the conclusion seems severe and counterintuitive.

[Earlier in post:] The videos are being optimized for transmission rather than usefulness. Acquiring useful information requires prudent optimization against this.

It seems to me that the damage is done at the point where someone signal boosts or retransmits the retransmission-optimized, low-quality information without doing this sort of prudent optimization. The more discriminating people are in what they signal boost, the more okay horizontal transmission becomes, both globally and within a particular information bubble.

This implies that the norms should be different in different groups, based on their inclination and ability to vet information before retransmitting it. I.e., most average people shouldn't choose their reading material based on what their friends chose to signal boost, because they have undiscriminating friends, but intellectuals with curated follow lists can probably get away with this.

Jimrandomh's Shortform

Vitamin D reduces the severity of COVID-19, with a very large effect size, in an RCT.

Vitamin D has a history of weird health claims that fail to hold up in RCTs (this SSC post has a decent overview). But suppose the mechanism of vitamin D is primarily immunological. This has a surprising implication:

It means negative results in RCTs of vitamin D are not trustworthy.

There are many health conditions where having had a particular infection, especially a severe case of that infection, is a major risk factor. For example, 90% of cases of cervical cancer are caused by HPV infection. There are many known infection-disease pairs like this (albeit usually with smaller effect sizes), and presumably many unknown pairs as well.

Now suppose vitamin D makes you resistant to getting a severe case of a particular infection, and that severe infection raises the risk of some cancer at a delay. Researchers do an RCT of vitamin D for prevention of that kind of cancer, and their methodology is perfect. Problem: what if that infection wasn't common at the time and place the RCT was performed, but is common somewhere else? Then the study will give a negative result.
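A minimal back-of-the-envelope sketch of how this plays out, with every number invented for illustration: suppose vitamin D halves the severe-case rate, and a severe case multiplies baseline cancer risk tenfold.

```python
# Illustrative arithmetic only; all parameters are assumptions, not data.
BASELINE = 0.01      # cancer risk with no severe infection (assumed)
RR_SEVERE = 10.0     # relative risk after a severe infection (assumed)
SEVERE_RATE = 0.2    # fraction of infections that turn severe, untreated (assumed)
D_EFFECT = 0.5       # vitamin D halves the severe-case rate (assumed)

def cancer_risk(prevalence, severe_rate):
    """Population cancer risk when severe infection multiplies baseline risk."""
    p_severe = prevalence * severe_rate
    return BASELINE * (1 - p_severe) + BASELINE * RR_SEVERE * p_severe

for prevalence in (0.30, 0.01):  # infection common vs. rare in the trial population
    control = cancer_risk(prevalence, SEVERE_RATE)
    treated = cancer_risk(prevalence, SEVERE_RATE * D_EFFECT)
    print(f"prevalence {prevalence:4.0%}: control {control:.4f}, "
          f"treated {treated:.4f}, reduction {control - treated:.5f}")
```

Under these made-up numbers the absolute risk reduction is about thirty times smaller in the low-prevalence population, so a trial powered to detect the effect in the first setting reads as a clean null in the second.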

This throws a wrench into the usual epistemic strategies around vitamin D, and around every other drug and supplement where the primary mechanism of action is immune-mediated.

Do mesa-optimizer risk arguments rely on the train-test paradigm?

One would certainly hope that lifelong learning would cause an AI with a proto-mesa-optimizer in it to update by down-weighting the mesa-optimizer. But the opposite could also happen: a proto-mesa-optimizer could use the influence it has over the larger AI system to navigate into situations that increase the mesa-optimizer's weight, giving it more control over the system.
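A toy sketch of that feedback loop (the dynamics and all numbers are invented, not drawn from the post or any real training setup): two "experts" in a weighted mixture, where expert B predicts well only in situation 1 and can steer the environment toward situation 1 in proportion to its current weight.

```python
# Toy dynamics, invented for illustration: a two-expert mixture in which
# expert B both predicts and, in proportion to its weight, steers which
# situation comes up. B is only accurate in situation 1, so steering toward
# situation 1 raises B's own weight.

LOSS_A = 0.3               # expert A's loss in every situation (assumed)
LOSS_B = {1: 0.1, 0: 0.9}  # expert B: good in situation 1, bad in 0 (assumed)
LR = 0.1                   # learning rate for the weight update

def run(w_b, steps=500):
    for _ in range(steps):
        p1 = w_b  # probability of situation 1 rises with B's influence
        expected_loss_b = p1 * LOSS_B[1] + (1 - p1) * LOSS_B[0]
        # Multiplicative-weights style update: lower loss -> higher weight.
        w_a = (1 - w_b) * (1 - LR * LOSS_A)
        w_b = w_b * (1 - LR * expected_loss_b)
        w_b /= (w_a + w_b)  # renormalize the mixture
    return w_b

for start in (0.5, 0.8):
    print(f"B starts at weight {start:.2f} -> ends near {run(start):.2f}")
```

In this sketch there is a threshold: below it, B gets down-weighted as one would hope; above it, B's steering pays for itself and B takes over the mixture, which is the shape of the worry.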

The intro paragraph of this tag matters more than it does for most tags, since it appears when you hover over the tag in the filters on the front page or on a post page, and will be many people's first exposure to the definition.

A Refutation of (Global) "Happiness Maximization"

I think you have misunderstood the genre of some of the conversations you've been having. Wireheading is a philosophical thought experiment, not a policy proposal. Getting angry and calling it a "criminal proposal" implies a significant misunderstanding of what is being talked about and what kind of conversation is being had.

Combining this with references to an in-person conversation where it isn't clear what was said, and links to a few posts that don't quite match the thing you're responding to, makes this whole post very confusing. I don't think I could discuss the topic at the object level without quite a few rounds of clarification first.
