As I understand it – with my only source being Ben's post and a couple of comments that I've read – Drew is also a cofounder of Nonlinear. Also, this was reported:
Alice and Chloe reported a substantial conflict within the household between Kat and Alice. Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn't mind polyamory "on the other side of the world", but couldn't stand
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
I don't think "we're currently living in a simulation" or "ASI would have effects beyond imagination, at least for the median human imaginer" are such weird beliefs among this crowd that their turning out to be true would qualify as a win of the bet for OP. Of course, they do specifically say that UAP being special cases within the simulation would count, but mere belief in a simulation would not.
Would you mind sharing how much you will win if the bet goes your way and everyone pays out?
Also, I would like to see more actions like yours, so I'd like to put money into that. I want to unconditionally give you $50; if you win the bet you may (but would be under no obligation to) return this money to me. All I'd need now is an ETH wallet to send money to. I would like this to be construed as a meta-level incentive for people to have this attitude of "put up or shut up" while offering immediate payouts; not as taking a stance on the object-level question.
I hear you, thank you for your comment. I guess I don't have a clear model of how big the pool is of people who:
As soon as someone managed to turn ChatGPT into an agent (AutoGPT), someone created an agent, ChaosGPT, with the explicit goal to destroy humankind. This is the kind of person that might benefit from having what I intend to produce: an overview of AI capabilities required to end the world, how far along we are in obtaining them, and so on. I want this information to be used to prevent an existential catastrophe, not precipitate it.
Thank you for your post. It is important for us to keep refining the overall p(doom) and the ways it might happen or be averted. You make your point very clearly, even in just the version presented here, condensed from your full posts on various specific points.
It seems to me that you are applying a sort of symmetric argument to values and capabilities and arguing that x-risk requires that we hit the bullseye of capability but miss the one for values. I think this has a problem and I'd like to know your view as to how much this problem affects your overall ...
Gwern has posted several of Kurzweil's predictions on Predictionbook and I have marked many of them as either right or wrong. In some cases I included comments on the bits of research I did.
I couldn't get things to work here, but thank you Elizabeth, Raymond and Ben for trying to help me! Have fun!
I'm thinking of a few things that are perhaps not super important individually, but that ought to have at least some weight in such an index:
Standardization and transportation
AlphaGo used about 0.5 petaflops (= trillion floating point operations per second)
Isn't peta- the prefix for quadrillion?
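For what it's worth, a quick sanity check of the prefixes (a throwaway sketch; none of the numbers below come from the original post):

```python
# SI prefixes as powers of ten (short-scale English names).
tera = 10**12   # trillion
peta = 10**15   # quadrillion

# So "0.5 petaflops" would be 500 teraflops, i.e. half a quadrillion FLOP/s,
# not half a trillion:
print(0.5 * peta == 500 * tera)  # True
print(f"{0.5 * peta:.0e}")       # 5e+14
```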
(Also, is there a reason there are almost no comments on these posts?)
They are reposts from slatestarcodex.com.
There's one factor bearing on this coincidence that is not referenced here, and that I couldn't find mentioned in the SSC post either: polar motion.
As a recap: latitude is the angle between a given point (like the tip of the Pyramid) and the Equator. The Equator is the set of points on the surface that are equidistant from both poles. And the poles are the points where the rotation axis intersects the surface: they're the points the Earth rotates around, sort of.
Well, it turns out that the axis of rotation is not fixed with respect to the surface. Thi...
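To make the recap concrete, here's a small sketch (the point and tilt values are illustrative, not from the post): latitude is 90° minus the angle between a point's position vector and the rotation axis, so if the axis moves relative to the surface, every point's latitude moves with it.

```python
import math

def latitude_deg(point, axis):
    """Latitude of a surface point: 90 degrees minus the angle
    between the point's position vector and the rotation axis."""
    dot = sum(p * a for p, a in zip(point, axis))
    norm = math.sqrt(sum(p * p for p in point)) * math.sqrt(sum(a * a for a in axis))
    return 90.0 - math.degrees(math.acos(dot / norm))

# An illustrative point at 30 degrees N (unit sphere, in the x-z plane).
point = (math.cos(math.radians(30)), 0.0, math.sin(math.radians(30)))

# Nominal axis straight through the poles...
print(round(latitude_deg(point, (0.0, 0.0, 1.0)), 6))  # 30.0

# ...versus the same axis tilted 0.1 degrees toward the point:
# the point itself hasn't moved, but its latitude has.
tilt = math.radians(0.1)
tilted_axis = (math.sin(tilt), 0.0, math.cos(tilt))
print(round(latitude_deg(point, tilted_axis), 6))      # 30.1
```

Real polar motion is far smaller than 0.1° (on the order of meters at the surface), but the mechanism is the same: the latitude of a fixed monument is defined relative to an axis that wanders.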
Hi, I'm Bruno from Brazil. I have been involved with stuff in the Lesswrongosphere since 2016. While I was in the US, I participated in the New Hampshire and Boston LW meetup groups, with occasional presence in SSC and EA meetups. I volunteered at EAG Boston 2017 and attended EAG London later that year. I did the CFAR workshop of February 2017 and hung out at the subsequent alumni reunion. After having to move back to Brazil I joined the São Paulo LW and EA groups and tried, unsuccessfully, to host a book club to read RAZ over the course of 2018. (We ...