Gwern has posted several of Kurzweil's predictions on PredictionBook, and I have marked many of them as either right or wrong. In some cases I included comments on the bits of research I did.
I couldn't get things to work here, but thank you Elizabeth, Raymond and Ben for trying to help me! Have fun!
I'm thinking of a few things that are perhaps not super important individually, but that ought to have at least some weight in such an index:
Standardization and transportation
Legal cooperation/integration
A caveat: while I've phrased all of these in a positive light, this does not preclude there being trade-offs. For example, expanding the freedoms of the air would likely boost air travel, which has bad environmental impacts.
AlphaGo used about 0.5 petaflops (= trillion floating point operations per second)
Isn't peta- the prefix for quadrillion?
(Also, is there a reason there are almost no comments on these posts?)
They are reposts from slatestarcodex.com.
There's one factor bearing on this coincidence that isn't referenced here, and I couldn't find it mentioned in the SSC post either: polar motion.
As a recap: latitude is the angle between a given point (like the tip of the Pyramid) and the Equator. The Equator is the set of points on the surface that are equidistant from both poles. And the poles are the points where the rotation axis intersects the surface; they're the points the Earth rotates around, sort of.
Well, it turns out that the axis of rotation is not fixed with respect to the surface. This is independent of plate tectonics, the fact that some parts of the surface move with respect to each other; the Earth's surface could be perfectly rigid and we could still have polar motion. To give a sense of scale: per Wikipedia, the pole has drifted about 20 m since 1900, and recently the direction of drift has shifted from toward the 80th meridian west to toward the Prime Meridian.
To illustrate this, imagine that some cosmic force turned the exact point where I'm currently sitting writing this comment into a pole. That is, the Earth now rotates around this very point in my bedroom. (I guess it's a good thing I'm snuggling under a blanket.) Then the Equator would be the line between the brighter and darker parts of the map (I used the nearest airport, São Paulo-Congonhas, as the pole): it runs somewhere near San Diego, just barely includes all of Great Britain and Antarctica, and crosses Egypt suspiciously close to the Pyramids. They're actually 209 km from it, in the opposite hemisphere from me, so their latitude would be just shy of negative two degrees.
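If you want to check the numbers yourself, here's a quick back-of-the-envelope sketch (assuming approximate coordinates for Congonhas and the Great Pyramid, a spherical Earth, and the spherical law of cosines):

```python
from math import radians, degrees, sin, cos, acos

# Approximate coordinates (illustrative assumptions):
pole_lat, pole_lon = -23.63, -46.66   # São Paulo-Congonhas, the hypothetical new pole
giza_lat, giza_lon = 29.98, 31.13     # Great Pyramid of Giza

# Angular distance between the new pole and the Pyramid (spherical law of cosines).
p1, p2 = radians(pole_lat), radians(giza_lat)
dlon = radians(giza_lon - pole_lon)
angle = acos(sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(dlon))

# Latitude relative to the new pole is 90° minus that angular distance.
new_lat = 90 - degrees(angle)
# Distance from the new equator, at roughly 111 km per degree of latitude.
dist_km = abs(new_lat) * 111.2

print(f"new latitude ≈ {new_lat:.2f}°, ≈ {dist_km:.0f} km from the new equator")
# Comes out to roughly -1.9° and a bit over 200 km, matching the figures above.
```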
Now, of course, at the time ~~the aliens~~ Khufu's slaves built the Pyramids, the North Pole was somewhere fairly close to its present location, and not in tropical South America. But it would be very unlikely for it to have been at precisely its current location! (Or wherever it was when the version of WGS84 that Google Maps uses was defined.) And since the pole can move 20 m in 1.2 centuries, it could have moved far more than the length of the pyramid's base (about 230 m) since the 26th century BCE: at that rate, roughly 45 centuries of drift comes to something like 750 m.
Hi, I'm Bruno from Brazil. I have been involved in the Lesswrongosphere since 2016. While I was in the US, I participated in the New Hampshire and Boston LW meetup groups, with occasional appearances at SSC and EA meetups. I volunteered at EAG Boston 2017 and attended EAG London later that year. I did the February 2017 CFAR workshop and hung out at the subsequent alumni reunion. After having to move back to Brazil, I joined the São Paulo LW and EA groups and tried, unsuccessfully, to host a book club to read RAZ over the course of 2018. (We made it as far as mid-February, I think.)
I became convinced of the need to sort out the AI alignment problem after first reading RAZ. I knew I needed to level up on lots of basic subjects before I could venture into doing AI safety research. Because leveling up in that way could also serve my goal of leaving Brazil for good, I studied at a Web development bootcamp and have been teaching there for a year now; I feel this has given me the confidence to acquire new tech skills.
I intend to start posting here in order to clarify my ideas, resolve my confusions, and eventually join the ranks of AI safety researchers. My more immediate goal is to be able to live somewhere other than Brazil while doing some sort of relevant work (even if it is just self-study, or something not directly related to AI safety that still allows me to study on the side, like my current gig does).
Thank you for your post. It is important for us to keep refining the overall p(doom) and the ways it might happen or be averted. You make your point very clearly, even in just the version presented here, condensed from your full posts on various specific points.
It seems to me that you are applying a sort of symmetry argument to values and capabilities, arguing that x-risk requires that we hit the bullseye of capabilities while missing the one for values. I think this has a problem, and I'd like to know your view on how much this problem affects your overall argument.
The problem, as I see it, is that goal-space is qualitatively different from capability-space. With capabilities, there is a clear ordering inherent to the capabilities themselves: if you can do the more demanding thing, you can also do the less demanding thing. Someone who can lift 100 kg can also lift 80 kg. It is not clear to me that this is the case for goal-space; I think it is only extrinsic evaluation by humans that makes "tile the universe with paperclips" a bad goal.
Do you think this difference between these spaces holds, and if so, do you think it undermines your argument?