LESSWRONG
Eric B
Comments, sorted by newest
Arbital Needs A Mechanism For Defining Terms
Eric B · 8y

Seems like linking to the term gets you that, minus auto-suggest, which seems like it'd get unwieldy as we expand to lots of topics?

Arbital Needs A Mechanism For Defining Terms
Eric B · 8y

Yep, that is what I meant. Create New Page is less exposed now that we've moved the Arbital math home. It's still accessible via the hamburger menu in the top right, and you can link to it like this, but that's not particularly easy to find. I'm pretty sure we'll want to improve that, and generally make it easier to navigate between the wiki and discussion areas of Arbital.

Yep, I think when we've got good support for browsing by tag we'll be able to tag things with #term or #definition, and it'll work?

By splits I just meant if people disagree they could make alternate terms, maybe crosslinking with "maybe you want this term"? No special features.

Arbital Needs A Mechanism For Defining Terms
Eric B · 9y

Making a page and greenlinking to it (with comments / edits / splits available) seems fine to me?

Scalable Ways To Associate Evidence Pro Or Con With Claims Will Be More Valuable In Elevating Accuracy Than Complex Voting And Reputation Systems
Eric B · 9y

Reddit's reputation system gives new arrivals equal weight to long-standing, highly trusted members of the community, and does not incorporate priors about content quality based on the poster's history. It's the simplest thing that could barely work, and it does not allow high-quality discussion to scale without relying heavily on moderators or other social mechanisms that aren't present in all communities and can't resist certain forms of attack. It also lacks adequate indexing and browsing by topic, making discussions temporary rather than able to produce lasting artifacts and be continued easily.

SE's reputation system is a little better (you need to prove to the system that you can productively engage with the topic before your votes have any weight), but it's very focused on Q&A, which is not a great format for extended truth-seeking discussion.
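To make the contrast concrete, here's a toy sketch of the two aggregation schemes (purely illustrative; the function names and the `min_rep` threshold are made up, and neither site's real algorithm looks like this):

```python
# Toy contrast between two vote-aggregation schemes (illustrative only;
# not Reddit's or Stack Exchange's actual algorithm).

def equal_weight_score(votes):
    """Reddit-style: every vote counts equally, regardless of voter history."""
    return sum(v for v, _rep in votes)

def earned_weight_score(votes, min_rep=15):
    """SE-style: votes only count once the voter has earned some reputation."""
    return sum(v for v, rep in votes if rep >= min_rep)

# (vote, voter_reputation): two trusted upvotes vs. three brand-new downvotes
votes = [(+1, 500), (+1, 120), (-1, 0), (-1, 0), (-1, 0)]

print(equal_weight_score(votes))   # -1: newcomers swamp the trusted voters
print(earned_weight_score(votes))  # +2: only established voters are counted
```

The point of the sketch is just that under equal weighting, a handful of new accounts can flip the sign of a score that established members agree on, which is one reason the first scheme doesn't scale without moderators.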

Cool argument structuring seems like an optional bonus (still great to have, but not necessary for the thing to work); features that give users reason to expect their high-quality content will get more eyeballs (particularly the eyeballs that most need that specific content) seem core and essential.

Scalable Ways To Associate Evidence Pro Or Con With Claims Will Be More Valuable In Elevating Accuracy Than Complex Voting And Reputation Systems
Eric B · 9y

I foresee good reputation systems being extremely valuable (essentially necessary to scale while maintaining quality), with high credence on that being more important than argument structuring features.

Arbital Claims Are Significantly More Useful When They Are Fairly Well Specified And Unambiguous
Eric B · 9y

Yep, there's at least high variability, especially if the things it could be taken to mean are things people generally have similar credence in.

And, nods, this was partly a test of trying to disambiguate a claim, and I found it harder than expected / think I did not do very well. Maybe words would have been better than numbers, and more of them. Or maybe doing a simple version and having other people point out where it's ambiguous is easier than trying to clarify in a vacuum?

For Mitigating AI X Risk An Off Earth Colony Would Be About As Useful As A Warm Scarf
Eric B · 9y

It's in the same direction, yea. Even if relocating on Earth captured all the wins (I would guess in most scenarios it wouldn't, due to very different selection effects), that would still be way better than a warm scarf.

I don't expect the very early colony to be any use in terms of directing AGI research. The full, self-sustaining, million-person civilization made mostly of geniuses that it seeds is the interesting part, but the early stage is a prerequisite for something valuable.

Yea, that's not obvious to me either. It's totally plausible that this happens on Earth in another form and we get SV 2.0 with strong enough network effects that Mars ends up not being attractive. However, "better than a warm scarf" is a low bar.

If this claim is clarified to something like "For mitigating AI x-risk, an early-stage off-Earth colony would be very unlikely to help", I would switch.

More general point: I feel like this claim (and all the others we have) is insufficiently well-specified. I keep having the feeling of "this could be taken as a handful of different claims, for which I have very different credences". Perhaps there should be a queue for claims, where people can ask questions and make sure a claim is pinned down before it's opened for everyone to vote on? That adds an annoying wait, but it also saves people from running into poorly specified claims.

Oh, neat, I can use a claim. This is fun.

A Permanent Self Sustaining Off Earth Colony Would Be A Much More Effective Mitigation Of X Risk Than Even An Equally Well Funded System Of Disaster Shelters On Earth
Eric B · 9y

A bunch of specifics being pinned down would help. E.g.: Are the shelters inhabited, or just available? Are they isolated in a way that stops them being raided? Seasteading-based? Self-sustaining? What stops people forcing their way in if disaster strikes?

It may be easier to fund off-Earth colonies to this level, because they provide directly for individuals. Few would sell their house for a spot in a disaster shelter; some would for a ticket to Mars.

For Mitigating AI X Risk An Off Earth Colony Would Be About As Useful As A Warm Scarf
Eric B · 9y

Neat, I'm a contrarian. I guess I should explain why my credence is about 80% different from everyone else's :)

Obviously, being off Earth would provide essentially no protection from a uFAI. It may, however, shift the odds of us getting an aligned AI in the first place.

Maybe this is because I'm taking the claim to mean more than most do: I only think a colony helps if it's well-established and significant. But by my models, both the rate of technological progress and the ability to coordinate seem to be proportional to something like the density of awesome people under a non-terrible incentive structure. Filtering by "paid half a million dollars to get to Mars" and designing the incentive structure from scratch seems like an unusually good way to create a dense pocket of awesome people focused on important problems, in a way which is very hard to dilute.

I claim that if timelines are long enough for a self-sustaining off-Earth colony to be created, the first recursively self-improving AGI has a good chance of being built there. And a strongly filtered group, immersed in other hard challenges and intentionally setting up decision-making infrastructure rather than working with all the normal civilization cruft, is more likely to coordinate on safety than Earth-based teams.

I do not expect timelines to be long enough for this to be an option, so I don't endorse it as a sane use of funding. But having an off-Earth colony seems way, way more useful than a warm scarf.

I would agree with:

  • There are currently much better ways to reduce AI x-risk than funding off-earth colonies. (~96%)
  • It is unlikely that off-earth colonies will be sufficiently established in time to mitigate AI x-risk. (~77%)
Wikitag Contributions

  • Nick Bostrom's book Superintelligence — 3y (+9/-10)
  • Sequence: Why Social Dynamics are So Complicated — 8y
  • Sequence: Why Social Dynamics are So Complicated — 8y (-38)
  • Sequence: Why Social Dynamics are So Complicated — 8y (+1233)
  • Accelerator Project — 8y (+117)
  • The missing step between Zero and Hero — 8y
  • Accelerator Project — 8y (+58/-66)