nc

Views my own, not my employers.

Comments

nc10

During the COVID-19 pandemic, this became particularly apparent. Someone close to response efforts told me that policymakers frequently had to ask academic secondees to access research articles for them. This created delays and inefficiencies during a crisis where speed was essential.

I wonder if this is why major governments pushed mandatory open access around 2022-2023. In the UK, all publicly funded research is now required to be open access; I think the coverage is different in the US.

How big of an issue is this in practice? For AI in particular, given that so much contemporary research is published on arXiv, it must be relatively accessible?

nc10

telic/partial-niche evodevo

This really clicked for me. I don't blame you for coining the term because, although I can see the theory and examples of papers on that topic, I can't think of an existing unifying term that isn't horrendously broad (e.g. molecular ecology).

nc20

I am surprised that you find funding for theoretical physics research less tight than funding for AI alignment [is this because the paths to funding in physics are well-worn, rather than better resourced?].

This whole post was a little discouraging. I hope that the research community can find a way forward.

nc52

I do think it's conceptually nicer to donate to PauseAI now rather than relying on the investment appreciating enough to offset the delay in donating. Not that investing is necessarily the wrong choice, but it injects much more uncertainty into the model, and that uncertainty is difficult to quantify.
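To illustrate (a minimal sketch, not anyone's actual model; the annual return, urgency discount, and horizon below are hypothetical numbers):

```python
# Minimal sketch of donate-now vs. invest-then-donate.
# Both rates are assumptions; small changes flip the conclusion.

def donate_now(amount: float) -> float:
    """Value of donating the full amount today."""
    return amount

def invest_then_donate(amount: float, annual_return: float,
                       urgency_discount: float, years: int) -> float:
    """Donation value after investing for `years`, discounted back
    to today at the rate at which delayed donations lose impact."""
    grown = amount * (1 + annual_return) ** years
    return grown / (1 + urgency_discount) ** years

amount = 10_000.0
print(donate_now(amount))                          # 10000.0
print(invest_then_donate(amount, 0.07, 0.10, 5))   # ~8709: donating now wins
print(invest_then_donate(amount, 0.15, 0.05, 5))   # ~15760: investing wins
```

The sign of the comparison flips entirely on two parameters that are hard to pin down, which is exactly the extra uncertainty in question.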

nc32

The fight for human flourishing doesn't end at the initiation of takeoff [echo many points from Seth Herd here]. More generally, it's very possible to win the fight and lose the war, and a broader base of people who are invested in AI issues will improve the situation.

(I also don't think this is an accurate simplification of the climate movement or its successes/failures. But that's tangential to the point I'd like to make.)

nc45

I think PauseAI would be more effective if it could mobilise people who aren't currently associated with AI safety, but from what I can see it largely draws from the same base as EA. It is important to involve as wide a cross-section of society as possible in the x-risk conversation, and activism could help achieve this.

nc51

The most likely scenario by far is that a mirrored bacteria would be outcompeted by other bacteria and killed by achiral defenses due to [examples of ecological factors]

I think this is the crux of the differing reactions to this paper. There are a lot of unknowns here. The paper does a good job of acknowledging this, and (imo) it justifies a precautionary approach, but I think the breadth of uncertainty is difficult to communicate in e.g. policy briefs or newspaper articles.

nc32

It's a good connection to draw - I wonder if increased awareness of AI is sparking increased awareness of safety concepts in related fields. It's a particularly good sign for awareness of, and action on, the safety concerns that sit in the overlap between AI and biotechnology.

I think you're right that mirror life offers very little benefit relative to its risks, which is not how AI is perceived - on top of the general truth that biotech is harder to monetise.

nc20

Can you explain more about why you think [AGI requires] a feature shared across mammals, rather than one particular to humans or another specific species?

nc10

It's very field-dependent. In ecology & evolution, advisor-student fit is very influential and most programmes admit students directly to a specific professor. The weighting seems different for CS programmes, many of which have you choose an advisor after admission (my knowledge is weaker here).

In the UK it's more funding-dependent - grant-funded PhDs are almost entirely dependent on the advisor's opinion, whereas DTPs/CDTs have different selection criteria and are (imo) more grades-focused.
