Comments

For similar reasons I can’t speak to the ratio of ads or their quality. With everyone complaining about advertisers pulling their ads, it would be odd for ads to be increasing in frequency. You can’t have it both ways.

Seems perfectly possible for a market to simultaneously experience decreased demand and increased supply if exogenous forces make it so.
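A toy linear model makes this concrete (my own numbers, not from the exchange):

$$Q_d = a - bP, \qquad Q_s = c + dP, \qquad P^* = \frac{a - c}{b + d}, \qquad Q^* = a - bP^*$$

With $(a, b, c, d) = (100, 1, 20, 1)$, the equilibrium is $P^* = 40$, $Q^* = 60$. If an exogenous shock cuts demand to $a' = 90$ while expanding supply to $c' = 50$, the new equilibrium is $P^* = 20$, $Q^* = 70$: the quantity traded (here, ad load) rises even though demand fell, because supply expanded by more.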

Re nanotechnology, you link to Ben Snodin's post as agreeing that nanotechnology is feasible, and then ask where all the nanotechnology research institutions are, but fail to mention that Snodin recommends only "2-3 people spending at least 50% of their time on this by 3 years from now". I guess I agree that there should be more EA research on nanotechnology, but I think you overstate the amount of attention it deserves.

Re coordination failures, there is one group focused on them: the Game B community. However, they aren't EAs, and I have little confidence that they'll make any progress. EA does have people working on improving institutional decision-making, which seems closely related, such as the Effective Institutions Project. I think "solving coordination problems" more generally is not all that neglected or tractable, given that a lot of people and organisations already have strong incentives to work on them, but I may be wrong.

Could your follow-up poll with Collison's exact wording have been affected by people who had followed this discussion intentionally voting to reproduce Collison's results? Ideally, I guess, Twitter would let one send out two versions of the same poll to randomized subsets of one's followers.

Btw, you have a typo in a couple of places: "Agnus" instead of "Agnes".

A few months ago I wrote a post about Game B. The summary:

I describe Game B, a worldview and community that aims to forge a new and better kind of society. It calls the status quo Game A and what comes after it Game B. Game A is the activity we've been engaged in at least since the dawn of civilisation: a Molochian competition over resources. Game B is a new equilibrium, a new kind of society that isn't plagued by collective action problems.

While I agree that collective action problems (broadly construed) are crucial in any model of catastrophic risk, I think that

  • civilisations like our current one are not inherently self-terminating (75% confidence);
  • there are already many resources allocated to solving collective action problems (85% confidence); and
  • Game B is unnecessarily vague (90% confidence) and suffers from a lack of tangible feedback loops (85% confidence).

I think it may be of interest to some LW users, though it didn't feel on-topic enough to post in full here.

Honestly, I think the whole "build from the ground up"/"extending, modifying, and fixing" dichotomy here is a little confused, though. What scale are we even talking about?

I meant to capture something like "lines of code added or modified per unit of labour time", and to suggest that Copilot would reap more benefits the higher that number is (all else equal).
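For concreteness, here is a hypothetical sketch of how one might estimate that number from a repository's history, assuming commit diffs roughly track output and that labour time is supplied by hand (the approach and numbers are my own illustration, not from the thread):

```typescript
// Hypothetical sketch: rough "lines added/modified per hour" for a git repo.
import { execSync } from "child_process";

// --numstat prints "added<TAB>deleted<TAB>file" for each file in each commit.
const log = execSync('git log --numstat --format=""', { encoding: "utf8" });

let changedLines = 0;
for (const line of log.split("\n")) {
  const [added, deleted] = line.split("\t");
  if (added && deleted && added !== "-") {
    // "-" marks binary files, which have no line counts
    changedLines += Number(added) + Number(deleted);
  }
}

const hoursSpent = 120; // assumed labour time; git history can't tell you this
console.log(`~${(changedLines / hoursSpent).toFixed(1)} lines changed per hour`);
```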

This is interesting, though I expect it's an upper bound on Copilot productivity boosts:

  • Writing an HTTP server is a common, clearly defined task with lots of examples online (see the sketch after this list).
  • JavaScript is a popular language (meaning there's lots of training data for Copilot).
  • I imagine Copilot is better at building a thing from the ground up, whereas the programming most programmers do most days consists of extending, modifying, and fixing existing code, which means more thinking and reading and less typing.
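For the first bullet, this is roughly the kind of well-trodden boilerplate in question: a minimal "hello world" HTTP server on Node's built-in http module (a sketch in TypeScript, though the benchmark presumably used plain JavaScript):

```typescript
// Minimal "hello world" HTTP server using Node's built-in http module.
import { createServer } from "http";

const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello, world\n");
});

server.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```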

Yes, exactly. To me it makes perfect sense that an Optimal Decision Algorithm would follow a rule like this, though it's not obvious that it captures everything the other two formulations (the Formula of Humanity and the Kingdom of Ends) capture, and it's also not clear to me that this was the interpretation Kant had in mind.

Btw, I can't take credit for this -- I came across it in Christine Korsgaard's Creating the Kingdom of Ends, specifically the essay on the Formula of Universal Law, which you can find here (pdf) if you're interested.

This is really terrific!

As for the first bullet point, it basically goes like this: if what you are about to do isn't something you could will to be a universal law, that is, if you wouldn't want other rational agents to behave similarly, then it's probably not what the Optimal Decision Algorithm would recommend. An app that recommended it would face a dilemma. Either it also recommends that others in similar situations behave the same way, in which case it loses market share to apps that recommend more pro-social behaviour (cooperate-cooperate instead of defect-defect), or it makes an exception for you, telling everyone else to cooperate while you defect, in which case it predictably screws people over, loses customers, and is eventually outcompeted too.
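To make the market argument concrete, here is a toy illustration with standard prisoner's dilemma payoffs (my numbers, purely illustrative; nothing here is from the original comment):

```typescript
// Toy prisoner's dilemma payoffs (illustrative numbers, not from the post).
type Move = "C" | "D";

// payoff[mine][theirs] = my payoff
const payoff: Record<Move, Record<Move, number>> = {
  C: { C: 3, D: 0 }, // cooperate: 3 in mutual cooperation, 0 if exploited
  D: { C: 5, D: 1 }, // defect: 5 if the other cooperates, 1 in mutual defection
};

// An app with a large user base effectively chooses which (move, move)
// profile its users end up in when they interact with each other.
console.log("users of the universal-defection app get:", payoff.D.D); // 1
console.log("users of the cooperation app get:", payoff.C.C); // 3
```

Users of the app that coordinates on cooperation each get 3 rather than 1, which is the sense in which the universal-defection app predictably loses its customers.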

I think it's even simpler than that if you take the Formula of Universal Law to be a test of practical contradiction, i.e. a test of whether action X could be the universal method of achieving purpose Y. Then it's obvious why a central planner could not recommend action X: universalised, it would no longer achieve purpose Y. For example, recommending lying doesn't work because, if lying were the universal method, no one would trust anyone, and so lying would be useless.

I think usually Transformative AI.
