Julian Bradshaw

and regardless, CEV merely re-allocates influence to the arbitrary natural preferences of the present generation of humans

I thought CEV was meant to cover the (idealized, extrapolated) preferences of all living humans in perpetuity. In other words, it would include future generations as they were born, and would also update if the wisdom of the current generation grew (or, less charitably, if its moral fashions changed).

I do recognize that classical CEV being speciesist in favor of humans is probably its central flaw (forget about hypothetical sentient AIs and friendly aliens, what about animals?), but I think it might at least be self-modifying on this front as well. For example, if we ran into some friendly Star Trek aliens and wanted them to join humanity as equals in a new Federation, our CEV would then become "also include these guys as sources of the CEV", and thus they would be.

I'm not sure whether a CEV-as-learned-by-AI would be flexible enough to make those aliens permanent voting shareholders in the CEV, such that they wouldn't suddenly be removed if humanity later regretted the decision to include them, but it at least seems plausible?


(Anyway I'm really liking this series, thanks for writing it!)

GitHub copilot is a great deal for the user at only $10 per month. It loses GitHub $20/user/month, says Wall Street Journal.

FWIW, former GitHub CEO Nat Friedman claims this is false and that Copilot was profitable. He was CEO at the time Copilot was getting started, but left in late 2021. So it's possible that costs have increased more than 3x since then, though unless they're constantly running GPT-4 under the hood, I would be surprised if they had.

Others have speculated that Copilot loses money on average because it's made available for free to students (among others), and free users heavily outnumber paying users. The WSJ article said that:

the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

Which doesn't exactly say that the median user is unprofitable.

On the other hand, Microsoft 365 Copilot is planned to cost $30 per month, conveniently exactly $20 more than GitHub Copilot's $10, so perhaps there is something to the figure.

(The OpenAI-Microsoft relationship seems like a big deal. Why haven't I heard more about this?)

It is a big deal, but it's been widely reported on and discussed here for years, and particularly within the last year, given that Microsoft keeps releasing AI products based on OpenAI tech. Not sure why you haven't heard about it.

I would be curious to see what the poll results for Question 1 look like, say, a week from now. I only saw the message in my inbox after Petrov Day was over, and still responded.

I don't think they're closely tied in the public mind, but I do think the connection is known to the organs of media and government that interact with AI alignment. It comes up often enough, in the background - details like FTX having a large stake in Anthropic, for example. And the opponents of AI x-risk and EA certainly try to bring it up as often as possible.

Basically, my model is that FTX seriously undermined the insider credibility of AINotKillEveryoneIsm's most institutionally powerful proponents, but the remaining credibility was enough to work with.

Why was the AI Alignment community so unprepared for engaging with the wider world when the moment finally came?

I reject the premise. Actually, I think public communication has gone pretty dang well since ChatGPT. Not only has AI existential risk become a mainstream, semi-respectable concern (especially among top AI researchers and labs, which count the most!), but this is obviously because of the 20 years of groundwork the rationality and EA communities have laid down.

We had well-funded organizations like CAIS able to get credible mainstream signatories. We've had lots and lots of favorable or at least sympathetic articles in basically every mainstream Western newspaper. Public polling shows that average people are broadly responsive. The UK is funding real AI safety to the tune of millions of dollars. And all this is despite the immediately-preceding public relations catastrophe of FTX!

The only perspective from which you can say there's been utter failure is the Yudkowskian one, where the lack of momentum toward strict international treaties spells doom. I grant that this is a reasonable position, but it's not the majority one in the community, so it's hardly a community-wide failure that it hasn't happened. (And I believe it is a victory of sorts that it's gotten into the Overton window at all.)

Nobody ever talks about the lack of drawdown after the Spanish-American war!

The proximate cause appears to be the occupation of the Philippines after the US decided to take them as a colony rather than liberate them. The unexpected insurgency that followed forced Congress to maintain the army's wartime size.

A complete explanation of why the army stayed large after the general end of the Philippine insurgency in 1902 is beyond me, however. I am seeing several general explanations along the lines of "the Spanish-American war revealed serious problems in the US military, and a man named Elihu Root was appointed to correct them; this led to a large reorganization, which included the de facto founding of the National Guard and the creation of many new military bases around the world".

I might chalk it up, more generally, to "the US decided it ought to become a major military power with colonies after it thrashed Spain", but that's just armchair history. I'd be interested to hear from anyone with solid historical knowledge of this period.

It's nice to see OpenAI, Anthropic, and DeepMind collaborating on a paper like this.

"Sufficiently advanced" tech could also plausibly identify all those hidden civilizations. For example, an underground civilization would produce unusual seismic activity, and occupying some inner portion of a gas giant or star would alter that body's outward behavior. Ultimately, civilizations use mass-energy in unnatural ways, and I don't see a fundamental physical principle that could protect that from all possible sensing.

More importantly, I don't think your suggestions address my point that hostile civilizations would get you before you even evolve.

But let's grant that you're the first civilization to evolve in your galaxy, or at least among the first, before someone starts sending out probes to prevent new civilizations from arising and threatening them. And let's grant that they will never find you. That is a victory, in that you survive. But the costs are astronomical: you only get to use the mass-energy of a single planet, or star, or Oort Cloud, while someone else gets the entire galaxy.

To put it another way: mass-energy is required for your civilization to exist and fulfill its preferences, so far as we understand the universe. If you redirect any substantial amount of mass-energy away from its natural uses (stars, planets, asteroids), that's going to be proportionally detectable. So, you can only hide by strangling your own civilization in its crib. Not everyone is going to do that; I seriously doubt that humanity (or any artificial descendant) will, for one.

(This comes back to my link about "no stealth in space" - the phrase is most commonly invoked when referring to starships. If your starship is at CMB temperatures and never moves, then yeah, it'd be hard to detect. But also you couldn't live in it, and it couldn't go anywhere! You want your starship—your civilization—to actually do something, and doing work (in a physics sense) is detectable.)

As applied to aliens, I think the Dark Forest frame is almost certainly wrong. Perhaps it's useful in other contexts, and I know you repeatedly disclaimed its accuracy in the alien context, but at least for others I want to explain why it's unlikely.

Basically, there are two reasons:

  1. The only technological civilization we know—humanity—hasn't tried at all to hide.
  2. There is no stealth in space.

To expand on the first, consider that humanity has consistently spammed out radio waves and sent out probes with the express hope that aliens might find them. Now, these are unlikely to actually give away Earth's location at any distance (the probes are not moving that fast and are hard to find, and the radio waves will fade to background noise quickly), but the important thing is that hiding is not on the agenda. Eventually, we are very likely to do something that really is visible, such as starting up a Dyson Swarm. Consider also that ancient humans were arguably often in situations analogous to a Dark Forest, and the dominant strategy was not indefinite hiding. Hiding is something of an unnatural act for a civilization that has already conquered its planet.

To expand on the second, the cost to send self-replicating probes to every star system in your galaxy to search for life is trivial for even a K-2 civilization, and doable within a few million years, and their origin could be masked if you were paranoid. Building enormous telescopes capable of spotting biosignatures, or even technosignatures, is also possible. (And even if there were some technology that allowed you to hide, you'd have to invent that technology before you're spotted, and given galactic timescales, other civilizations ought to have scoped out the entire galaxy long before you even evolved.)

For what it's worth, I think the two most likely Fermi Question answers are:

  1. We've fundamentally misunderstood the nature of the universe. (e.g. the simulation hypothesis)
  2. We're the only intelligent civilization in at least the Milky Way.