Davidmanheim

Comments

Expansive translations: considerations and possibilities

As the post notes, inferential distance relates to differing worldviews and life experiences. This was written for an audience that mostly understands what inferential distance has to do with different worldviews - how would you explain it to a different audience?

Well, a typical translation doesn't try to bridge the gap between languages; it just picks something on the far side of the gap that seems similar to the thing on the near side. But that leaves something out.

An example of this is in translations of Harry Potter, where Dumbledore's password is translated into a local sweet. The UK version has "Sherbet Lemon" while the US version has "Lemon drop." Are these the same? I assumed so, but it seems the UK version actually has a "fizzy sweet powder" on the inside. In Danish and Swedish, it's (mis?)translated as lemon ice cream - which isn't the same at all. And in Hebrew, it's translated as Krembo, which doesn't even get close to translating the meaning correctly - it's supposed to be an "equivalent children's dessert" - but the translation simply doesn't work, because you can't carry a Krembo around in your pocket; it would melt. Does this matter? Well, there is a difference between a kindly old wizard who carries around a sucking candy and one who carries around a fairly large marshmallow dessert. But that's beside the point - you don't translate the life experience that growing up eating sherbet lemons gives you; you find an analogue.

The only way to translate a specific word or term accurately may be to provide so much background that the original point is buried, and the only way to translate an idea is to find an analogue that the reader already understands. And that's why translation is impossible - but we do it anyway, accept that the results are fuzzy equivalents, and accept that worldviews are different enough that bridging the gap is impossible.

Puzzle Games

Tier 3, I think: Hoplite, on Android. 

The free game is basically a roguelike, but each level is full-information, with only a little strategy in picking which abilities to take. The Challenge mode, available in the paid version for $3, has a lot more straight puzzles.

Stripping Away the Protections

To what extent are these dynamics the inevitable result of large organizations?

I want to note that I've previously argued that many of these dynamics are significantly forced by structure - but not in this context, and I'm thinking about how much or how little of that argument applies here. (I'll need to see what you say in later posts in the series.)

Needed: AI infohazard policy

I think there needs to be individual decision-making (on the part of both organizations and individual researchers, especially in light of the unilateralists' curse), alongside a much broader discussion about how the world should handle unsafe machine learning and more advanced AI.

I very much don't think it would be wise for the AI safety community to debate and come up with shared, semi-public guidelines for, essentially, what to withhold from the broader public, without input from the wider ML / AI research community, whose members are affected and whose work is a big part of what we are discussing. That community needs to be engaged in any such discussions.

Needed: AI infohazard policy
"There's some intermediate options available instead of just 'full secret' or 'full publish'... and I haven't seen anyone mention that..."

OpenAI's phased release of GPT-2 seems like a clear example of exactly this. And there is a forthcoming paper from Toby Shevlane looking at the internal deliberations around this, in addition to his extant work on the question of how disclosure potentially affects misuse.

Needed: AI infohazard policy

The first thing I would note is that stakeholders need to be involved in making any guidelines, and that pushing for guidelines from the outside is unhelpful, if not harmful, since it pushes participants to be defensive about their work. There is also an extensive literature discussing the general issue of information dissemination hazards and the issues of regulation in other domains, such as nuclear weapons technology, biological and chemical weapons, and similar.

There is also a fair amount of ongoing work on synthesizing this literature and the implications for AI. Some of it is even on this site. For example, see: https://www.lesswrong.com/posts/RY9XYoqPeMc8W8zbH/mapping-downside-risks-and-information-hazards and https://www.lesswrong.com/posts/6ur8vDX6ApAXrRN3t/information-hazards-why-you-should-care-and-what-you-can-do

So there is already a ton of discussion about this, and plenty you should read on the topic - I suspect you can start with the paper that provided the name for your post, and continue with sections of GovAI's research agenda.

Are aircraft carriers super vulnerable in a modern war?

Noting that this is correct, but incomplete. Carriers are very important for force projection even in near-peer engagements, since otherwise you likely can't get your planes to where you need them. The question that matters here is who wins the area-denial / anti-aircraft battle, i.e. whether drones and similar can get close enough to sink anything - and this is the critical question anyway, since your carriers and planes are useless if you can't get close enough. This isn't my area, but my very preliminary impression is that AA/AD makes aerial combat fairly defense-dominant.

What's the best overview of common Micromorts?
"Could someone write a LW style book review of the Norm Chronicles?"

Endorsed.

Reflections on AI Timelines Forecasting Thread
"A good next step would be to create more consensus on the most productive interpretation for AGI timeline predictions. "

Strongly agree with this. I don't think the numbers are meaningful, since AGI could mean anything from "a CAIS system-of-systems that can be used to replace most menial jobs with greater than 50% success," to "a system that can do any one of the majority of current jobs given an economically viable (<$10m) amount of direct training and supervision" to "A system that can do everything any human is able to do at least as well as that human, based only on available data and observation, without any direct training or feedback, for no marginal cost."

What's the best overview of common Micromorts?

Scott's answer is a good one - you should read "The Norm Chronicles." But I think the question has a problem. Micromorts are a time-agnostic measure of the chance of dying, and the problem is that most risks you take don't actually translate well into micromorts.

Smoking a cigarette, which reduces your life expectancy by around 30 seconds, translates into either zero micromorts or one, depending on how you set up the question. Increasing your relative risk of dying from cancer in 30 years isn't really the same as playing Russian roulette with a 1-million-chamber gun. Similarly, a healthy 25-year-old getting COVID has about a 70-micromort risk based on direct mortality from COVID. But that number ignores the risks of chronic fatigue, later complications, or reduced life expectancy (all of which we simply don't know enough about to quantify well).
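To make the framing issue concrete, here is a rough back-of-the-envelope sketch in Python. The conversion method, the function name, and the 55-year remaining life expectancy are all illustrative assumptions on my part, not a standard calculation:

```python
# A hypothetical back-of-the-envelope conversion, just to illustrate why the
# "micromorts per cigarette" answer depends entirely on how you frame it.
# The remaining-life-expectancy figure is an assumption, not a sourced number.

SECONDS_PER_YEAR = 365.25 * 24 * 3600


def equivalent_micromorts(expected_seconds_lost: float, remaining_life_years: float) -> float:
    """Spread an expected loss of life over remaining life expectancy and express
    it as an 'equivalent' probability of immediate death, in micromorts."""
    remaining_seconds = remaining_life_years * SECONDS_PER_YEAR
    return (expected_seconds_lost / remaining_seconds) * 1_000_000


# One cigarette (~30 seconds of life expectancy lost) for a 25-year-old with
# roughly 55 years of remaining life expectancy:
print(equivalent_micromorts(30, 55))  # ~0.017 micromorts, i.e. effectively zero
```

Spread over a lifetime, the loss rounds to zero micromorts; treat the eventual cancer death as the event and it's one. Neither framing tells you what you actually want to know.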

The answer that health economists have to this question is the QALY, which has its own drawbacks. For example, QALYs can't uniformly measure the risks of Russian roulette, since the risk depends on the age and quality of life of the player.
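To illustrate that last point with made-up numbers (the quality weights, life expectancies, and the helper function below are all assumptions for the sake of the example, not a standard health-economics calculation): the same one-in-six chance of death corresponds to very different expected QALY losses for different players.

```python
# Illustrative only: expected QALY loss from one round of Russian roulette,
# assuming a six-chamber revolver and approximating remaining QALYs as
# remaining life-years times an average quality-of-life weight.

def expected_qaly_loss(remaining_life_years: float, quality_weight: float,
                       p_death: float = 1 / 6) -> float:
    """Expected quality-adjusted life-years lost from a single risk event."""
    return p_death * remaining_life_years * quality_weight


print(expected_qaly_loss(55, 0.9))  # healthy 25-year-old: ~8.25 QALYs expected loss
print(expected_qaly_loss(5, 0.6))   # frail 85-year-old:   ~0.5 QALYs expected loss
```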

What we're left with is that the actual question we want answered has a couple more dimensions than a single metric can capture - and as I have mentioned once or twice elsewhere, reductive metrics have various problems.
