Julian Bradshaw

The problem with Dark Forest theory is that, in the absence of FTL detection/communication, it requires a very high density and an absurdly high proportion of hiding civilizations. Without that, expansionary civilizations dominate. The only known civilization, us, is expansionary for reasons that don't seem path-dependent, so it seems unlikely that the preconditions for Dark Forest theory exist.

To explain:

Hiders have limited space and mass-energy to work with. An expansionary civilization, once in its technological phase, can spread to thousands of star systems in mere thousands of years and become unstoppable by hiders. So, hiders need to kill expansionists before that happens. But if they're going to hide in their home system, their detection is still limited by lightspeed! So you need murderous hiding civs within roughly a thousand light years of every single habitable planet in the galaxy, all of which need to have evolved before any expansionary civs in the area. This is improbable unless basically every civ is a murderous hider. The fact that the only known civ is not a murderous hider, for generalizable reasons, is thus evidence against the Dark Forest theory.
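To make the timing concrete, here's a rough back-of-the-envelope sketch (my own illustration, not part of the original argument) with made-up parameters: expansion at 0.1c, a strike at 0.5c, and a commonly cited local stellar density of ~0.004 stars per cubic light-year.

```python
from math import pi

STAR_DENSITY = 0.004  # stars per cubic light-year (rough solar-neighborhood value)

def expansion_radius_at_strike(hider_distance_ly, expansion_speed_c=0.1, strike_speed_c=0.5):
    """How far (in light-years) the expansionists have spread by the time a hider
    at `hider_distance_ly` has even seen them and its lightspeed-limited strike arrives."""
    detection_delay = hider_distance_ly                  # light travel time to the hider
    strike_travel_time = hider_distance_ly / strike_speed_c
    return expansion_speed_c * (detection_delay + strike_travel_time)

def stars_within(radius_ly):
    """Approximate star count inside a sphere of the given radius."""
    return STAR_DENSITY * (4 / 3) * pi * radius_ly ** 3

radius = expansion_radius_at_strike(1000)
print(f"~{radius:.0f} ly settled, ~{stars_within(radius):,.0f} star systems")
# With these (made-up) numbers: ~300 ly and a few hundred thousand systems already
# settled before the strike even lands - hence the need for a trigger-happy hider
# within roughly a thousand light years of every habitable planet.
```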


Potential objections:

  • Hider civs would send out stealth probes everywhere.

Still limited by lightspeed; the expansionary civ would become overwhelmingly strong before the probes could report back.

  • Hider civs would send out killer probes everywhere.

If the probes succeed in killing every other civilization in the galaxy before those civilizations reach the stars, you didn't need to hide in the first place. (Also, note that hiding is a failed strategy for everyone else in this scenario; you can't do anything about a killer probe when you're the equivalent of the Han dynasty. Or the equivalent of a dinosaur.) If the probes fail, the civ they failed against will have no reason to hide, having already been discovered, and so will expand and dominate.

  • Hider civs would become so advanced that they could hide indefinitely from expansionary civs, possibly by retreating to another dimension.

Conceivable, but I'd rather be the expansionary civ here?

  • Hider civs would become so advanced that they could kill any later expansionary civ that controlled thousands of star systems.

I think this is the strongest objection. If, for example, a hider civ could send out a few ships that travel at a higher fraction of lightspeed than anything the expansionary civ can build, and those ships can detonate stars or something, and catching up to that tech would take millions of years, then just a few ships could track down and obliterate the expansionary civ within thousands or tens of thousands of years and win.

The problem is that the "hider civ evolved substantially earlier" part has to be true everywhere in the galaxy, or else somewhere an expansionary civilization wins and then snowballs with its resource advantages - this comes back to the "very high density and absurdly high proportion of hiding civilizations" requirement. The hiding civs have to always be the older party whenever they meet an expansionary civ, and older by enough that the expansionary civ's likely several-orders-of-magnitude advantage in resources and population doesn't counteract the age difference.

As a non-finance person who subscribes to Matt Levine, I'm not sure any other industry is as endlessly creative and public. 

Take for example a story from Levine's most recent newsletter, about a recently-popular retail trader strategy. You buy 1 share of a company about to do a reverse split to avoid being delisted from Nasdaq for falling under $1. If the company doesn't support fractional shares, anyone left holding a fractional share gets rounded up to a full share in the reverse split, so e.g. a $0.40 share becomes one full share worth $4.00 in a 1:10 split. This only makes sense with one share, since $0.80 of shares also becomes $4.00, and so on. In the three days from the recent announcement of a reverse split to its occurrence, one particular company went from 5,000 shareholders to nearly 200,000, all requesting their "round-up share". The company ended up forfeiting ~20% of its ownership to retail traders trying to make a couple bucks, all for maintaining the dignity of staying listed on Nasdaq.
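To make the arithmetic explicit, here's a minimal sketch of the round-up rule (my own illustration, not from Levine's newsletter):

```python
from math import ceil

def post_split_value(shares_held, pre_split_price, ratio=10):
    """Position value after a 1:`ratio` reverse split where any fractional
    position is rounded up to one whole post-split share."""
    post_split_shares = ceil(shares_held / ratio)   # the round-up rule
    post_split_price = pre_split_price * ratio
    return post_split_shares * post_split_price

print(post_split_value(1, 0.40))    # $0.40 in -> $4.00 out
print(post_split_value(2, 0.40))    # $0.80 in -> still $4.00, so one share is optimal
print(post_split_value(10, 0.40))   # $4.00 in -> $4.00 out, no free money
```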

(Levine tells the story better than I; he has a patented style of building up all the relevant financial instruments and market context, then laying down the punchline where the incentives you've just learned drive bizarre behavior.)

I'm sure there are wacky stories in other industries too (medtech? pharmaceuticals?), but they tend to stay private. Because in finance all of this happened on the public market, and everyone involved is required by law to publish lots of information, the story can be told.


EDIT: I just realized the actual linked article is much longer than the snippet presented and offers its own, similar-ish answer. I do want to respond to one point from Gwern's article:

OK, but surely there are still plenty of areas where the preconditions are met? (Particularly rapidly-developing ones, like cryptocurrency or AI recently?) So where are their Matt Levines?

Levine covers both cryptocurrency and AI anyway. Someone could probably focus more on the technical/political side of AI and do something comparable to Levine, though... and, come to think of it, that person is named @Zvi. But, no offense to Zvi, whom I also read, Levine is a superior writer, so it's not surprising he has much wider reach.

I read A Deepness in the Sky first, and haven't yet gotten around to A Fire Upon the Deep, and I enjoyed the former quite a lot.

Hm, I was interpreting 'pulls towards existential catastrophe' as meaning Leopold's map mismatches the territory because it overrates the chance of existential catastrophe.

If the argument is instead "Leopold publishing his map increases the chance of existential catastrophe" (by charging up race dynamics, for example), then I agree that's plausible. (Though I don't think the choice to publish it was inexcusable - the effects are hard to predict, and there's much to be said for trying to say true things.)

If the argument is "following Leopold's plan likely leads to existential catastrophe", same opinion as above.

I agree that it's a good read.

I don't agree that it "pulls towards existential catastrophe". Pulls towards catastrophe, certainly, but not existential catastrophe? He's explicitly not a doomer,[1] and is much more focused on really-bad-but-survivable harms like WW3, authoritarian takeover, and societal upheaval.

  1. ^

    Page 105 of the PDF, "I am not a doomer.", with a footnote where he links a Yudkowsky tweet agreeing that he's not a doomer. Also, he listed his p(doom) as 5% last year. I didn't see an updated p(doom) in Situational Awareness or his Dwarkesh interview, though I might have missed it.

I'm curious for opinions on what I think is a crux of Leopold's "Situational Awareness":

picking the many obvious low-hanging fruit on “unhobbling” gains should take us from chatbots to agents, from a tool to something that looks more like drop-in remote worker replacements.[1]

This disagrees with my own intuition - the gap between chatbot and agent seems stubbornly large. He suggests three main angles of improvement:[2]

  1. Large context windows allowing for fully "onboarding" LLMs to a job or task
  2. Increased inference-time compute allowing for building 'System 2' reasoning abilities
  3. Enabling full computer access

We already have pretty large context windows (which has been surprising to me, admittedly), but they've helped less than I expected - I mostly just don't need to move relevant code right next to my cursor as much when using Copilot. I haven't seen really powerful use cases; the closest is probably Devin, but that doesn't work very well. Using large context windows on documents works reasonably well, but LLMs are too unreliable, biased towards the generic, and memoryless to get solid benefit out of that, in my personal experience.

Put another way, I think large context windows are of pretty limited benefit when LLMs have poor working memory and can't properly keep track of what they're doing over the course of their output.

That leads into the inference-time compute argument, which is both the weakest and the most essential. By my understanding, the goal is to give LLMs a working memory, but how we get there seems really fuzzy. The idea presented is to produce OOMs more tokens and keep them on-track, but the "keep them on-track" part in his writing feels like merely a restatement of the problem to me. The only substantial suggestion I can see is this single line:

Perhaps a small amount of RL helps a model learn to error correct (“hm, that doesn’t look right, let me double check that”), make plans, search over possible solutions, and so on.[3]

And in a footnote on the same page he acknowledges:

Unlocking this capability will require a new kind of training, for it to learn these extra skills.

Not trivial or baked into current AI progress, I think? Maybe I'm misunderstanding something.

As for enabling full computer access - yeah, multi-modal models should allow this within a few years, but it remains of limited benefit if the working memory problem isn't solved.

EDIT 12/22/24: Well, it seems Leopold knew more than I did and just couldn't talk about it. We still don't have all the details on o3, but it really does seem like "more inference time compute" can be leveraged into reasoning capability.

  1. ^

    Page 9 of the PDF.

  2. ^

    Pages 34-37 of the PDF.

  3. ^

    Page 36 of the PDF.

and regardless, CEV merely re-allocates influence to the arbitrary natural preferences of the present generation of humans

I thought CEV was meant to cover the (idealized, extrapolated) preferences of all living humans in perpetuity. In other words, it would include future generations as they were born, and would also update if the wisdom of the current generation grew. (or less charitably, if its moral fashions changed)

I do recognize that classical CEV being speciesist in favor of humans is probably its central flaw (forget about hypothetical sentient AIs and friendly aliens - what about animals?), but I think it might at least be self-modifying on this front as well? For example, if we ran into some friendly Star Trek aliens, and we wanted to have them join humanity as equals in a new Federation, our CEV would then become "also include these guys as sources of the CEV", and thus they would be.

I'm not sure if a CEV-as-learned-by-AI would necessarily be flexible enough to make those aliens permanent voting shareholders of the CEV - such that if humanity later regretted its decision to include them, they wouldn't suddenly get removed from the CEV - but it at least seems plausible?


(Anyway I'm really liking this series, thanks for writing it!)

GitHub copilot is a great deal for the user at only $10 per month. It loses GitHub $20/user/month, says Wall Street Journal.


FWIW, the former GitHub CEO Nat Friedman claims this is false and that Copilot was profitable. He was CEO at the time Copilot was getting started, but left in late 2021. So, it's possible that costs have increased >3x since then, though unless they're constantly using GPT-4 under the hood, I would be surprised to learn that.

Others have speculated that maybe Copilot loses money on average because it's made available for free to students (among others), and free users heavily outweigh paying users. The WSJ article said that:

the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

Which doesn't exactly say that the median user is unprofitable.

On the other hand, Microsoft 365 Copilot is planned to cost $30 per month, conveniently exactly $20 more than the $10 GitHub Copilot costs per month - i.e., right at the ~$30/user/month cost implied by the WSJ figure - so perhaps there is something to it.

(The OpenAI-Microsoft relationship seems like a big deal. Why haven't I heard more about this?)

It is a big deal, but it's been widely reported on and discussed here for years, and particularly within the last year, given that Microsoft keeps releasing AI products based on OpenAI tech. Not sure why you haven't heard about it.

I would be curious to see what the poll results for Question 1 look like, say, a week from now. I only saw the message in my inbox after Petrov Day was over, and still responded.
