Ben Pace

I'm an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality.


Ben Pace's Comments

An overview of 11 proposals for building safe advanced AI

Wow. This is fantastic. It helps me so much to see the proposals set out next to each other concisely and straightforwardly, and analysed on the same four key dimensions. I feel like my understanding of the relative strengths and weaknesses is vastly improved by having read this post, and that there has been very little writing of this kind that I've been able to find, so thank you, and I've curated this post.

My main hesitation with curating the post is that I found it a bit intimidating and dense, and I expect many others will too. That said, it does actually do a great job of saying things clearly and simply, without relying on lots of technical jargon it doesn't explain, and I'm not sure of a clear way to improve it (maybe more paragraph breaks?).

Anyhow, I was so excited when I realised what this post is, thanks again.

A man dies and is sent to hell

Wow. And "This is our monthly coffee meetup" sure is a surprising next line.

AGIs as populations

(I also can't think of a clear reason why anyone would strong-downvote your comments. I liked reading this comment thread, even though I had some sticky-feeling sense that the conversation with Richard would be hard to resolve, for some reason I can't easily articulate.)

Should I self-variolate to COVID-19

I think the only way that it would be socially responsible for me to travel, is if I knew with very high confidence that I wasn't carrying the virus*. And the only way that I can have that confidence is if I have already caught it, recovered, and now have antibodies against the virus.

I didn't read all of your post, but I want to mention that people do buy tests (I think a couple hundred dollars per test, but maybe less), and I expect that if you looked into it you could find some. I think I know someone who bought some; I know the Joe Rogan podcast gives its guests a test before they come on the show; and I think that if you know the right people you can just make tests. It's probably illegal, because everything is illegal, but the government primarily finds out when labs try to do mass testing and publish results, so you could probably get a bunch for personal use and nobody would notice.

Then you can take a flight, go to a hotel for one night, use a test, and probably find out pretty quickly (I'd guess either within an hour or within a day but I don't know) whether you're infected. 

I expect this to cost more time and money, but if you're able to take careful precautions in travel (such as those I listed here) I expect there's a large probability that this would be better than getting the disease, especially with the unknown long-term effects and plausibility of a vaccine within 12 months.

The EMH Aten't Dead

Curated. This is a great and really grounding contribution to the ongoing conversation around this; it explains lots of the key arguments really well. I hope to see more discussion about whether the EMH is dead and what can be learned from the covid situation, and I liked much of the comments section as well.

Benito's Shortform Feed

I've been thinking lately that picturing an AI catastrophe is helped a great deal by visualising a world where critical systems in society are performed by software. I was spending a while trying to summarise and analyse Paul's "What Failure Looks Like", which led me this way. I think that properly imagining such a world is immediately scary, because software can deal with edge cases badly (like automated market traders causing major crashes), so that's already a big deal. Then you add ML in, and can talk about how crazy it is to hand critical systems over to code we do not understand and cannot make simple adjustments to, and you're already hitting catastrophes. Once you then argue that ML can become superintelligent, everything goes from "global catastrophe" to "obvious end of the world", but the first steps are already pretty helpful.

While Paul's post helps a lot, it still takes a fair bit of effort for me to concretely visualise the scenarios he describes, and I would be excited for people to take the time to detail what it would look like to hand critical systems over to software – for which systems would this happen, why would we do it, who would be the decision-makers, what would it feel like from the average citizen's vantage point, etc. A smaller version of Hanson's Age of Em project, just asking the question "Which core functions in society (food, housing, healthcare, law enforcement, governance, etc) are amenable to tech companies building solutions for, and what would it look like for society to transition to 1%, 10%, 50% and 90% of core functions to be automated with 1) human-coded software 2) machine learning 3) human-level general AI?"

Isn't Tesla stock highly undervalued?

Moved to answer, as it seems like clearly answering the question.
