Ninety-Three

Comments

Good Heart Week: Extending the Experiment

Doh. I forgot how much faster strong upvotes scale with user karma; that resolves all my confusion.

Good Heart Week: Extending the Experiment

Looking at more random users, I think tokens earned via posting are being undercounted somehow. Users with only comments display exactly the number of tokens I would expect from total karma on eligible comments minus self-votes, but users with posts made after April 2 (to avoid complications from a changing post-value formula) consistently have fewer tokens than the "comment votes plus 3x post votes, not counting self-votes" formula would predict. For instance, Zvi has two posts (currently 52 and 68 karma) and zero comments in the last week. With strength-2 self-votes, (52-2+68-2)*3 = 348 expected tokens, a significant mismatch with his displayed 302. It doesn't seem to be stale data, since his displayed tokens change instantly when I vote on the posts. Is something going wrong, or is there some weird special-case way of voting on posts that doesn't get immediately reflected on the user page?
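For concreteness, here's the calculation I'm assuming, as a rough sketch; the 3x post multiplier and the strength-2 self-votes are my guesses about the formula, not anything confirmed:

```python
# Rough sketch of the token formula I'm assuming (my guesses, not confirmed):
# tokens = 3 * (post karma minus self-votes) + (comment karma minus self-votes)

def expected_tokens(post_karmas, comment_karmas, self_vote=2):
    post_total = sum(k - self_vote for k in post_karmas)
    comment_total = sum(k - self_vote for k in comment_karmas)
    return 3 * post_total + comment_total

# Zvi's two posts this week, zero comments:
print(expected_tokens([52, 68], []))  # 348, versus the 302 actually displayed
```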

Good Heart Week: Extending the Experiment

The numbers attached to posts and comments seem to be a straightforward reskin of karma: they include self-votes and do not include the 3x multiplier for posts. The token counter in user profiles seems to update instantly (I tried voting up and down on posts and comments, then refreshing the user's page to test), but it undercounts in ways I don't understand. For instance, this random user currently displays as 207 karma (edit: tokens, not karma, doh) based on a post with 68 karma and about a dozen comments with net karma-minus-self-votes of ~15. I can tell it's up to date because it went up by 3 when I upvoted his post, but it seems like it ought to be ~216 (post karma times three plus comment karma). I can't explain the missing tokens, and several of the random users I looked at showed this sort of obvious undercounting.

Good Heart Week: Extending the Experiment

As a lurker, I failed to understand this system in a way that led to me completely ignoring it (I probably would have engaged more with LW this week had I understood; now that I've noticed, it feels too late to bother), so I feel like I should document what went wrong for me.

I read several front-page posts about the system but did not see this one until today. The posts I read were having fun with it rather than focusing on communication, and the whole thing was obviously an extended April Fool's joke, so I managed to come away with a host of misconceptions, including total ignorance of the core "no really, karma equals actual money for you" feature. I assumed that if it were serious, people would be trying a lot harder to communicate the incentives (compare announcements of LW bounties, which I routinely manage to hear about even in periods where I've fallen out of the habit of checking this website).

On top of "karma equals money" being fundamentally implausible, an April 1st joke named Good Heart Tokens feels like it was designed to not be taken seriously. If the system was meant to incentivize posts from lurkers, more effort could have been put into making the incentives clear.

Edit: While making this comment I double-checked some things and thought I had come to a fully correct understanding of the system, but upon hitting submit I became confused again. This post says that self-votes don't count, yet my fresh comment displays as having 1 token. There is a token counter on the user profile page, but as far as I can tell from looking at the pages of a few random users, that counter is tracking neither karma nor any calculation I can imagine representing token count; I have no idea what it's doing.

It Looks Like You're Trying To Take Over The World

As an exercise in describing hard takeoff using only known effects, this story handwaves the part I always had the greatest objection to: What does Clippy do after pwning the entire internet? At the current tech level, most of our ability to manufacture novel goods is gated behind the physical labour requirements of building factories: even supposing you could invent grey goo from first principles plus publicly available research, how are you going to build it?

A quiet takeover could plausibly use crypto wealth to commission a bunch of specialized equipment to get a foothold in the real world a month later when it's all assembled, but going loud as Clippy did seems like it's risking a substantial chance that the humans successfully panic and Shut. Down. Everything.

"Moral progress" vs. the simple passage of time

"I don't care what future people think of my morality, I just care what's moral by the arbitrary standards of the time I live in."

As a moral super-anti-realist ("Morality is a product of evolutionary game theory shaping our brains plus arbitrary social input"), I'd say this doesn't represent my view.

I care about morality the same way I care about aesthetics: "I guess brutalism, rock music, and prosocial behaviour are just what my brain happens to be fond of, so I should go experience those if I want to be happy." I think this is heavily influenced by the standards of the time, but not exactly equal to those standards, probably because brains are noisy machines that don't learn standards perfectly. For instance, I happen to think jaywalking is not immoral, so I do it without regard for how local standards view it.

Concisely, I'd phrase it as "I don't care what future people think of my morality, I just care what's moral by the arbitrary standards of my own brain."

Petrov Day 2021: Mutually Assured Destruction?

I am opposed to the implementation of this exercise; I believe its basic concept seriously undercuts the moral lesson we should take from Petrov Day.

The best way to not blow ourselves up is to not make nuclear weapons. On a day dedicated to not blowing ourselves up, LW has decided to manufacture a bunch of completely unneeded nuclear weapons, hand them out to many people, and then hope really hard that no one uses them. This is like a recovering addict carrying drugs on his person in order to make a point about resisting temptation: he is at best bragging and at worst courting disaster so boldly that one should wonder if he really wants to avoid self-destruction. This makes a good allegory for the senseless near-apocalypse of the Cold War, but deliberately creating a senseless risk does not seem like an appropriate way of celebrating the time we narrowly avoided triggering one.

Communication Requires Common Interests or Differential Signal Costs

"it is impossible for there to be a language in which most sentences were lies"
Is it? Suppose that 40% of the time people truthfully described what colour a rock was, and 60% of the time they picked a random colour to falsely describe it as (perhaps some speakers benefit from obscuring the rock's true colour but derive no benefit from false belief in any particular colour). We would then have a case where most sentences describing the rock were lies, and yet listening to someone describe an unknown rock would still let you usefully update your priors. That ability to benefit from communication seems like all that should be necessary for a language to survive.
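To make that concrete, here's a quick sketch of the update under my toy model (the ten equally likely colours are an assumption I'm adding for illustration):

```python
# Toy model: 10 equally likely colours; the speaker tells the truth 40% of the
# time and names a uniformly random *wrong* colour the other 60% of the time.
n_colours = 10
p_truth = 0.4

prior = 1 / n_colours                                  # P(rock is red) = 0.1
p_red_given_red = p_truth                              # truthful "red"
p_red_given_not_red = (1 - p_truth) / (n_colours - 1)  # lying "red"

posterior = (p_red_given_red * prior) / (
    p_red_given_red * prior + p_red_given_not_red * (1 - prior)
)
print(round(posterior, 3))  # 0.4: hearing "red" moves you from 10% to 40%,
                            # even though 60% of all colour claims are lies
```

The update survives because the lies are spread thinly across every colour while the truths all concentrate on the real one.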

What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?

Without rejecting any of the premises in your question I can come up with:

Low tractability: you assign almost all of the probability mass to one or both of "alignment will be easily solved" and "alignment is basically impossible"

Currently low tractability: If your timeline is closer to 100 years than 10, it is possible that the best use of resources for AI risk is "sit on them until the field develops further", in the same sense that someone in the 1990s wanting good facial recognition might have been best served by waiting for modern ML.

Refusing to prioritize highly uncertain causes, in order to avoid the Winner's Curse outcome where your highest-priority cause turns out to have low true value and high noise

Flavours of utilitarianism that don't value the unborn and would not see it as an enormous tragedy if we failed to create trillions of happy post-Singularity people (depending on the details human extinction might not even be negative, so long as the deaths aren't painful)

Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness

I got all of the octopus questions right (six recalled facts, #6 intuitively plausible, #9 seems rare enough that humans should be unlikely to observe it, and #2 was uncertain until I completed the others and metagamed that a 7/2 split would be "too unbalanced" for a handcrafted test), so the only surprising fact I have to update on is that the recognition thing is surprising to others. My model was that many wild animals are capable of recognizing humans, that octopuses are particularly smart as animals go, and that no other factors weigh heavily. That octopuses evolved totally separated from humans didn't seem significant: although most wild animals were exposed to humans, I see no obvious incentive for most of them to recognize individual humans, so the cases should be comparable on that axis. I also put little weight on octopuses not being social creatures, because while there may be dedicated social-recognition modules, A: many animals are able to recognize humans, and all of them generalizing their social modules to our species seems intuitively unlikely, and B: at some level of intelligence it must be possible to distinguish individuals through sheer general pattern-recognition; for ten humans an octopus would only need four or five bits of information, and animal intelligence in general seems good at distinguishing between a few totally arbitrary bits.
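(For the bits claim, the back-of-envelope I have in mind is just the following; treating recognition as a labelling problem is of course a simplification:)

```python
from math import ceil, log2
print(ceil(log2(10)))  # 4 bits are enough to label ten individual humans
```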

The evolutionary theory of aging is interesting and seems to predict that an animal's maximum age will be proportional to its typical time-to-accidental-death. Just thinking of animals and their ages at random, this seems plausible, but I'm hardly being rigorous; have proper analyses of that been done?
