All of Diffractor's Comments + Replies

"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

Re points 1, 2: Check this out. For the specific case of 0 to even bits, ??? to odd bits, I think Solomonoff can probably get that, but not more general relations.

Re: point 3, Solomonoff is about stochastic environments that just take your action as an input and aren't reading your policy. For infra-Bayes, you can deal with policy-dependent environments without issue: you can consider hard-coding in every possible policy to get a family of stochastic environments, and UDT behavior naturally falls out as a result of this encoding. There's still some op... (read more)

"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

Ah. So, low expected utility alone isn't too much of a problem. The amount of weight a hypothesis has in a prior after updating depends on the gap between the best-case values and worst-case values. Ie, "how much does it matter what happens here". So, the stuff that withers in the prior as you update consists of the hypotheses that are like "what happens now has negligible impact on improving the worst-case". So, hypotheses that are like "you are screwed no matter what" just drop out completely, as if it doesn't matter what you do, you might as well pick actions t... (read more)
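
A toy illustration of that "gap" point (my own made-up numbers, not the actual update rule):

```python
# Toy sketch: after updating, a hypothesis matters roughly in proportion to
# how much the agent's choices can swing its outcome (best-case minus
# worst-case), so "doomed no matter what" hypotheses drop out.
hypotheses = {
    "doomed-no-matter-what": (0.0, 0.0),   # (worst-case, best-case) utility
    "choices-matter-a-lot":  (0.2, 0.9),
    "choices-matter-a-bit":  (0.4, 0.5),
}
for name, (worst, best) in hypotheses.items():
    print(name, "effective weight ~", best - worst)
# The gap-0 hypothesis contributes nothing to decision-making.
```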

"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

"mixture of infradistributions" is just an infradistribution, much like how a mixture of probability distributions is a probability distribution.

Let's say we've got a prior $\zeta$, a probability distribution over indexed hypotheses.

If you're working in a vector space, you can take any countable collection of sets in said vector space, and mix them together according to a prior $\zeta$ giving a weight to each set. Just make the set of all points which can be made by the process "pick a point from each set, and mix the points together according to ... (read more)
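
Here's a minimal finite sketch of that mixing process (my own construction following the description above; two sets of probability vectors over a two-point space, mixed with made-up prior weights):

```python
# Mix two sets of probability vectors with prior weights by mixing one point
# from each set, for every possible pair of points.
import itertools
import numpy as np

set_a = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]   # hypothesis 1's set
set_b = [np.array([0.0, 1.0])]                          # hypothesis 2's set
prior = [0.7, 0.3]

mixture = [prior[0] * p + prior[1] * q
           for p, q in itertools.product(set_a, set_b)]
print(mixture)  # {0.7p + 0.3q : p in A, q in B} = [[0.7, 0.3], [0.35, 0.65]]
```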

"Infra-Bayesianism with Vanessa Kosoy" – Watch/Discuss Party

The concave functional view is "the thing you do with a probability distribution is take expectations of functions with it. In fact, it's actually possible to identify a probability distribution $\mu$ with the functional $f \mapsto \mathbb{E}_\mu[f]$ mapping a function to its expectation. Similarly, the thing we do with an infradistribution is taking expectations of functions with it. Let's just look at the behavior of the functional $f \mapsto \mathbb{E}[f]$ we get, and neglect the view of everything as a set of a-measures."

As it turns out, this view makes proofs a whole lot cleaner a... (read more)
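
A toy finite rendering of the concave-functional view (code and numbers mine; the crisp case, where the functional is a min over a set of distributions):

```python
# An infradistribution as the functional f |-> min_{mu in set} E_mu[f],
# mirroring how a distribution is the functional f |-> E_mu[f].
# A min of linear functionals is concave, hence "concave functional view".
import numpy as np

dist_set = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]  # a crisp infradistribution

def expectation(f_values):
    """The concave expectation functional: worst case over the set."""
    return min(float(mu @ f_values) for mu in dist_set)

print(expectation(np.array([1.0, 0.0])))  # 0.5: the inf picks the worse mu
```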

Stuart_Armstrong's Shortform

Sounds like a special case of crisp infradistributions (ie, all partial probability distributions have a unique associated crisp infradistribution).

Given some partial probability distribution $\mu$, we can consider the (nonempty) set of probability distributions equal to $\mu$ where $\mu$ is defined. This set is convex (clearly, a mixture of two probability distributions which agree with $\mu$ about the probability of an event will also agree with $\mu$ about the probability of an event).

Convex (compact) sets of probability distributions = crisp infradistributions.... (read more)
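
As a concrete sketch (example of mine, not from the comment): with a finite outcome space, the worst-case expectation over the set of distributions agreeing with a partial specification is just a linear program:

```python
# The set of distributions over {0,1,2} consistent with the partial
# specification P({0}) = 0.5 is convex; its worst-case expectation of f
# is a linear program over that crisp infradistribution.
import numpy as np
from scipy.optimize import linprog

f = np.array([1.0, 0.0, 2.0])          # function to take expectations of
# minimize E_mu[f] subject to: mu >= 0, sum(mu) = 1, mu[0] = 0.5
res = linprog(c=f,
              A_eq=[[1, 1, 1], [1, 0, 0]], b_eq=[1.0, 0.5],
              bounds=[(0, 1)] * 3)
print(res.fun)  # 0.5: mass 0.5 forced onto outcome 0, the rest flees to f=0
```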

Introduction To The Infra-Bayesianism Sequence

You're completely right that hypotheses with unconstrained Murphy get ignored because you're doomed no matter what you do, so you might as well optimize for just the other hypotheses where what you do matters. Your "-1,000,000 vs -999,999 is the same sort of problem as 0 vs 1" reasoning is good.

Again, you are making the serious mistake of trying to think about Murphy verbally, rather than thinking of Murphy as the personification of the "inf" part of the definition of expected value, and writing actual equations. ... (read more)
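
For concreteness, the equation being referred to, in notation I'm inferring (the symbol $\Psi$ for the set of distributions is my choice): for a crisp infradistribution,

$$\mathbb{E}_\Psi[f] = \inf_{\mu \in \Psi} \mathbb{E}_\mu[f]$$

and "Murphy" is nothing more than that $\inf$; the agent then maximins, picking the policy attaining $\max_\pi \inf_{\mu \in \Psi} \mathbb{E}_\mu[U(\pi)]$.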

Introduction To The Infra-Bayesianism Sequence

There's actually an upcoming post going into more detail on what the deal is with pseudocausal and acausal belief functions, among several other things; I can send you a draft if you want. "Belief Functions and Decision Theory" is a post that hasn't held up nearly as well to time as "Basic Inframeasure Theory".

DanielFilan (17d): Thanks for the offer, but I don't think I have room for that right now.
Introduction To The Infra-Bayesianism Sequence

If you use the Anti-Nirvana trick, your agent just goes "nothing matters at all, the foe will mispredict and I'll get -infinity reward" and rolls over and cries since all policies are optimal. Don't do that one, it's a bad idea.

For the concave expectation functionals: Well, there's another constraint or two, like monotonicity, but yeah, LF duality basically says that you can turn any (monotone) concave expectation functional into an inframeasure. Ie, all risk aversion can be interpreted as having radical uncertainty over some aspects of how the environment... (read more)

rohinmshah (18d): Sorry, I meant the combination of best-case reasoning (sup instead of inf) and the anti-Nirvana trick. In that case the agent goes "Murphy won't mispredict, since then I'd get -infinity reward which can't be the best that I do". Hmm, that makes sense, I think? Perhaps I just haven't really internalized the learning aspect of all of this.
Introduction To The Infra-Bayesianism Sequence

Maximin, actually. You're maximizing your worst-case result.

It's probably worth mentioning that "Murphy" isn't an actual foe where it makes sense to talk about destroying resources lest Murphy use them. It's just a personification of the fact that we have a set of options, any of which could be picked, and we want to get the highest lower bound on utility we can for that set of options, so we assume we're playing against an adversary with a perfectly opposite utility function for intuition. For that last paragraph, translating it back out from the "Murphy" t... (read more)
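
To make the "highest lower bound" point concrete, a toy payoff matrix (numbers mine):

```python
# Maximin: rows = our options, columns = the "Murphy" choices; we pick the
# row with the best worst case, ie the highest guaranteed lower bound.
import numpy as np

payoff = np.array([[3.0, 0.0],    # option A: great vs awful
                   [2.0, 1.5]])   # option B: decent either way
worst_case = payoff.min(axis=1)
print(worst_case)            # [0.0, 1.5]
print(worst_case.argmax())   # 1: option B maximizes the guaranteed lower bound
```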

awenonian (11d): I'm glad to hear that the question of what hypotheses produce actionable behavior is on people's minds. I modeled Murphy as an actual agent, because I figured a hypothesis like "A cloaked superintelligence is operating the area that will react to your decision to do X by doing Y" is always on the table, and is basically a template for allowing Murphy to perform arbitrary action Y.

I feel like I didn't quite grasp what you meant by "a constraint on Murphy is picked according to this probability distribution/prior, then Murphy chooses from the available options of the hypothesis they picked". But based on your explanation after, it sounds like you essentially ignore hypotheses that don't constrain Murphy, because they act as an expected utility drop on all states, so it just means you're comparing -1,000,000 and -999,999, instead of 0 and 1. For example, there's a whole host of hypotheses of the form "A cloaked superintelligence converts all local usable energy into a hellscape if you do X", and since that's a possibility for every X, no action X is graded lower than the others by its existence.

That example is what got me thinking, in the first place, though. Such hypotheses don't lower everything equally, because, given other Laws of Physics, the superintelligence would need energy to hell-ify things. So arbitrarily consuming energy would reduce how bad the outcomes could be if a perfectly misaligned superintelligence was operating in the area. And, given that I am positing it as a perfectly misaligned superintelligence, we should both expect it to exist in the environment Murphy chooses (what could be worse?) and expect any reduction of its actions to be as positive of changes as a perfectly aligned superintelligence's actions could be, since preventing a maximally detrimental action should match, in terms of Utility, enabling a maximally beneficial action. Therefore, entropy-bombs.

Thinking about it more, assuming I'm not still making a mistake, this might jus…
Dark Matters

I found this Quanta magazine article about it, which seems to indicate that it fits the CMB spectrum well but required a fair amount of fiddling with gravity to do so; I lamentably lack the physics capabilities to evaluate the original paper.

Dark Matters

If there's something wrong with some theory, isn't it quite odd that looking around at different parts of the universe seems to produce such a striking level of agreement on how much missing mass there is? If there were some out-of-left-field thing, I'd expect it to have confusing manifestations in many different areas and astronomers angsting about dramatically inconsistent measurements; I would not expect the CMB to end up explained away (and the error bars on those measurements are really really small) by the same 5:1 mix of non-baryonic matter vs baryon... (read more)

Davidmanheim (1mo): "isn't it quite odd that looking around at different parts of the universe seems to produce such a striking level of agreement on how much missing mass there is?"

But they don't. Dark matter, as a theory, posits that the amount of mass that "must be there somewhere" varies in amount and distribution in an ad-hoc fashion to explain the observations. I think it's likely that whatever is wrong with the theory, on the other hand, isn't varying wildly by where in the universe it is. Any such explanation would (need to) be more parsimonious, not less so.

And I agree that physics isn't obligated to make things easy to find - but when the dark matter theory was postulated, they guessed it was a certain type of WIMP, and then kept not finding it. Postulating that it must be there somewhere, and physics doesn't need to make it easy, isn't properly updating against the theory as each successive most likely but still falsifiable guess has been falsified.
Dark Matters

Yes, pink is gas and purple is mass, but also the gas there makes up the dominant component of the visible mass in the Bullet Cluster, far outweighing the stars.

Also, physicists have come up with a whole lot of possible candidates for dark matter particles. The supersymmetry-based ones took a decent kicking at the LHC, and I'm unsure of the motivations for some of the other ones, but the two that look most promising (to me, others may differ in opinion) are axions and sterile neutrinos, as those were conjectured to plug holes in the Standard Model, so they... (read more)

CellBioGuy (6d): Additionally, there's no reason to assume that all dark matter is just one thing. There could be multiple things going on, as long as most of the things going on don't self-interact. Heck, for that matter there could be a small (!) dark sector that DOES self-interact as long as its total mass was within the error bars for baryonic mass inferred from primordial nucleosynthesis.
Avoid Unnecessarily Political Examples

I'd go with number 2, because my snap reaction was "ooh, there's a 'show personal blogposts' button?"

EDIT: Ok, I found the button. The problem with that button is that it looks identical to the other tags, and it's at the right side of the screen when the structure of "Latest" draws your eyes to the left side of the screen. I'd make it a bit bigger and move it to the left side of the screen.

Sherrinford (3mo): Another way you can follow the new posts of all kinds is the RSS button on the frontpage (together with an RSS feed reader). You can also select to see all kinds of posts above a certain threshold of "karma", e.g. this [https://www.lesswrong.com/feed.xml?view=community-rss&karmaThreshold=45]. (I think that is independent of whether it's just a personal blogpost, but I currently have a technical problem and cannot really check that.)
Belief Functions And Decision Theory

So, first off, I should probably say that a lot of the formalism overhead involved in this post in particular feels like the sort of thing that will get a whole lot more elegant as we work more things out, but "Basic inframeasure theory" still looks pretty good at this point and worth reading, and the basic results (ability to translate from pseudocausal to causal, dynamic consistency, capturing most of UDT, definition of learning) will still hold up.

Yes, your current understanding is correct, it's rebuilding probability theory in more generality to be sui... (read more)

alexflint (3mo): Ah this is helpful, thank you. So let's say I'm estimating the position of a train on a straight section of track as a single real number and I want to do an update each time I receive a noisy measurement of the train's position. Under the theory you're laying out here I might have, say, three Gaussians N(0, 1), N(1, 10), N(4, 6), and rather than updating a single pdf over the position of the train, I'm updating measures associated with each of these three pdfs. Is that roughly correct? (I realize this isn't exactly a great example of how to use this theory since train positions are perfectly realizable, but I just wanted to start somewhere familiar to me.)

Do you by chance have any worked examples where you go through the update procedure for some concrete prior and observation? If not, do you have any suggestions for what would be a good toy problem where I could work through an update at a very concrete level?
Less Basic Inframeasure Theory

So, we've also got an analogue of KL-divergence for crisp infradistributions. 

We'll be using $H$ and $H'$ for crisp infradistributions, and $\mu$ and $\mu'$ for probability distributions associated with them. $D_{KL}(H\|H')$ will be used for the KL-divergence of infradistributions, and $D_{KL}(\mu\|\mu')$ will be used for the KL-divergence of probability distributions. For crisp infradistributions, the KL-divergence is defined as

I'm not entirely sure why it's like this, but it has the basic properties yo... (read more)

CO2 Stripper Postmortem Thoughts

It is currently disassembled in my garage and will be fully tested when the 2.0 version is built; construction of the 2.0 version has stalled this year because I've been working on other projects. The 1.0 version did remove CO2 from a room as measured by a CO2 meter, but the size and volume made it not worthwhile.

John_Maxwell's Shortform

Potential counterargument: Second-strike capabilities are still relevant in the interstellar setting. You could build a bunch of hidden ships in the Oort cloud to ram the foe and do equal devastation if the other party does it first, deterring a first strike even with tensions and an absence of communication. Further, while the "ram with high-relativistic objects" idea works pretty well for preemptively ending a civilization confined to a handful of planets, AIs would be able to colonize a bunch of little asteroids and KBOs and comets in the Oort cloud, and the higher level of dispersal would lead to preemptive total elimination being less viable.

John_Maxwell (5mo): That's possible, but I'm guessing that it's not hard for a superintelligent AI to suddenly swallow an entire system using something like gray goo.
Introduction to Cartesian Frames

I will be hosting a readthrough of this sequence on MIRIxDiscord again; PM me for a link.

The rationalist community's location problem

Reno has 90°F daily highs during summer. Knocking 10 degrees off is a non-negligible improvement over Las Vegas, though.

Needed: AI infohazard policy

So, here are some considerations (not an actual policy).

It's instructive to look at the case of nuclear weapons, and the key analogies or disanalogies to math work. For nuclear weapons, the basic theory is pretty simple and building the hardware is the hard part, while for AI, the situation seems reversed. The hard part there is knowing what to do in the first place, not scrounging up the hardware to do it.

First, a chunk from Wikipedia:

Most of the current ideas of the Teller–Ulam design came into public awareness after the DOE attempted to censor a magazine ar

... (read more)
hg00 (6mo): It seems to me that under ideal circumstances, once we think we've invented FAI, before we turn it on, we share the design with a lot of trustworthy people we think might be able to identify problems. I think it's good to have the design be as secret as possible at that point, because that allows the trustworthy people to scrutinize it at their leisure.

I do think the people involved in the design are liable to attract attention--keeping this "FAI review project" secret will be harder than keeping the design itself secret. (It's easier to keep the design for the bomb secret than hide the fact that top physicists keep mysteriously disappearing.) And any purported FAI will likely come after a series of lesser systems with lucrative commercial applications used to fund the project, and those lucrative commercial applications are also liable to attract attention. So I think it's strategically valuable to have the distance between published material and a possible FAI design be as large as possible. To me, the story of nuclear weapons is a story of how this is actually pretty hard even when well-resourced state actors try to do it.

Of course, that has to be weighed against the benefit of openness. How is openness helpful? Openness lets other researchers tell you if they think you're pursuing a dangerous research direction, or if there are serious issues with the direction you're pursuing which you are neglecting. Openness helps attract collaborators. Openness helps gain prestige. (I would argue that prestige is actually harmful because it's better to keep a low profile, but I guess prestige is useful for obtaining required funding.) How else is openness helpful?

My suspicion is that those papers on Arxiv with 5 citations are mostly getting cited by people who already know the author, and the Arxiv publication isn't actually doing much to attract collaboration. It feels to me like if our goal is to help researchers get feedback on their research direction or find collab…
ofer (7mo): Publishing under a pseudonym may end up being counterproductive due to the Streisand effect. Identities behind many pseudonyms may suddenly be publicly revealed following a publication on some novel method for detecting similarities in writing style between texts.
Vanessa Kosoy (7mo): Regarding making a policy ahead of time, I think we can have an evolving model of what ingredients are missing to get transformative AI, and some rule of thumb that says how dangerous your result is, given how much progress it makes towards each ingredient (relevant but clearly insufficient < might or might not be sufficient < plausibly a full solution), how concrete/actionable it is (abstract idea < impractical method < practical method) and how original/surprising it is (synthesis of ideas in the field < improvement on idea in the field < application of idea outside the field < completely out of the blue).

One problem is, the model itself might be an infohazard. This consideration pushes towards making the guidelines secret in themselves, but that would make it much harder to debate and disseminate them. Also, the new result might have major implications for the model. So, yes, certainly there is no replacement for the inside view, but I still feel that we can have guidelines that help focusing on the right considerations.
Davidmanheim (7mo): OpenAI's phased release of GPT-2 seems like a clear example of exactly this. And there is a forthcoming paper looking at the internal deliberations around this from Toby Shevlane [https://www.law.ox.ac.uk/people/toby-shevlane], in addition to his extant work [https://dl.acm.org/doi/10.1145/3375627.3375815] on the question of how disclosure potentially affects misuse.
maximkazhenkov (7mo): Another advantage AI secrecy has over nuclear secrecy is that there's a lot of noise and hype these days around ML both within and outside the community, making hiding in plain sight much easier.
CO2 Stripper Postmortem Thoughts

Person in a room: -35 g of O2/hr from room
Person in a room with a CO2 stripper: -35 g of O2/hr from room

How does the presence of a CO2 stripper do anything at all to the oxygen amount in the air?

Introduction To The Infra-Bayesianism Sequence

Do you think this problem is essentially different from "suppose Omega asks you for 10 bucks. You say no. Then Omega says 'actually I flipped a fair coin that came up tails; if it had come up heads, I would have given you 100 dollars if I predicted you'd give me 10 dollars on tails'"?

(I think I can motivate "reconsider choosing heads" if you're like "yeah, this is just counterfactual mugging with belated notification of what situation you're in, and I'd pay up in that circumstance")

Introduction To The Infra-Bayesianism Sequence

Maximin over outcomes would lead to the agent devoting all its efforts towards avoiding the worst outcomes, sacrificing overall utility, while maximin over expected value pushes towards policies that do acceptably on average in all of the environments that it may find itself in.

Regarding "why listen to past me", I guess to answer this question I'd need to ask about your intuitions on Counterfactual mugging. What would you do if it's one-shot? What would you do if it's repeated? If you were told about the problem beforehand, would you pay money for a commitment mechanism to make future-you pay up the money if asked? (for +EV)

Basic Inframeasure Theory

Yeah, looking back, I should probably fix the m- part and have the signs be consistent with the usual usage, where it's a measure minus another one instead of the sum of two signed measures (one a measure and one a negative measure). It may be a bit of a pain to fix, though; the proof pages are extremely laggy to edit.

Wikipedia's definition can be matched up with our definition by fixing a partial order where $m \le m'$ iff there's an sa-measure $m^*$ s.t. $m' = m + m^*$, and this generalizes to any closed c... (read more)

Basic Inframeasure Theory

We go to the trouble of sa-measures because it's possible to add a sa-measure to an a-measure, and get another a-measure where the expectation values of all the functions went up, while the new a-measure we landed at would be impossible to make by adding an a-measure to an a-measure.

Basically, we've gotta use sa-measures for a clean formulation of "we added all the points we possibly could to this set", getting the canonical set in your equivalence class.

Admittedly, you could intersect with the cone of a-measures again at the end (as we do in the next post... (read more)

Introduction To The Infra-Bayesianism Sequence

Can you elaborate on what you meant by locally distinguishing between hypotheses?

magfrump (7mo): I mean distinguishing between hypotheses that give very similar predictions--like the difference between a coin coming up heads 50% vs. 51% of the time. As I said in my other comment, I think the assumption that you have discrete hypotheses is what I was missing.

Though for any countable set of hypotheses, you can expand that set by prepending some finite number of deterministic outcomes for the first several actions. The limit of this expansion is still countable, and the set of hypotheses that assign probability 1 to your observations is the same at every time step. I'm confused in this case about (1) whether or not this set of hypotheses is discrete and (2) whether hypotheses with shorter deterministic prefixes assign enough probability to allow meaningful inference in this case anyway. I may mostly be confused about more basic statistical inference things that don't have to do with this setting.
Coronavirus: Justified Practical Advice Thread

If hospitals are overwhelmed, it's valuable to have a component of the hospital treatment plan for pneumonia on hand to treat either yourself or others who have it especially bad. One of these is oxygen concentrators, which are not sold out yet and are ~$400 on Amazon. This doesn't deal with especially severe cases, but for cases which fall in the "shortness of breath, low blood oxygen" class without further medical complications, it'd probably be useful if you can't or don't want to go to a hospital due to overload. http... (read more)

John_Maxwell (1y): Good idea. I purchased an oxygen concentrator. However, you might wish to use the device on a low pressure setting (or use it in "pulse" mode instead of "continuous" mode) because I'm hearing some rumors that oxygen therapy can be counterproductive: https://translate.google.com/translate?hl=en&sl=auto&tl=en&u=https%3A%2F%2Fwww.sohu.com%2Fa%2F376403787_162522
Steven Byrnes (1y): I think I would feel guilty if I bought one of those, and then learned later that they're sold out, and the people building temporary hospitals can't get them.
romeostevensit (1y): WHO report indicated that 75% of severe cases only needed oxygen. Only 25% needed forced respiration + suction. So this seems good.
(A -> B) -> A

I found a paper about this exact sort of thing. Escardo and Oliva call that type signature a "selection functional", and the type signature $(A \to R) \to R$ is called a "quantification functional", and there are several interesting things you can do with them, like combining multiple selection functionals into one in a way that looks reminiscent of game theory. (ie, if $\varepsilon$ has type signature $(A \to R) \to A$, and $\varepsilon'$ has type signature $(B \to R) \to B$, then $\varepsilon \otimes \varepsilon'$ has type signature $(A \times B \to R) \to A \times B$... (read more)
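
A hedged Python rendering of that combination (my own sketch of the Escardo-Oliva product of selection functions; the names `argmax_selection` and `product` are mine):

```python
# A selection functional for X has type (X -> R) -> X: given a way to score
# candidates, it returns a candidate.

def argmax_selection(candidates):
    """Selection functional picking the candidate that maximizes the score."""
    return lambda score: max(candidates, key=score)

def product(eps_a, eps_b):
    """Combine selection functionals for A and B into one for A x B
    (structured like backward induction: pick b optimally given a,
    then pick a anticipating that)."""
    def eps_ab(score):  # score : (A x B) -> R
        def best_b_given(a):
            return eps_b(lambda b: score((a, b)))
        a = eps_a(lambda a: score((a, best_b_given(a))))
        return (a, best_b_given(a))
    return eps_ab

# Tiny example: jointly pick (a, b) maximizing a*b - a.
eps = product(argmax_selection([0, 1, 2]), argmax_selection([0, 1, 2]))
print(eps(lambda ab: ab[0] * ab[1] - ab[0]))  # -> (2, 2)
```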

Counterfactual Induction

Oh, I see what the issue is. Propositional tautology given $A$ means $A \vdash_{PC} \phi$, not $\vdash_{PC} \phi$. So yeah, when $A$ is a boolean that is equivalent to $\bot$ via boolean logic alone, we can't use that $A$ for the exact reason you said, but if $A$ isn't equivalent to $\bot$ via boolean logic alone (although it may be possible to infer $\bot$ by other means), then the denominator isn't necessarily small.

Gurkenglas (1y): So the valuation of any propositional consequence of A is going to be at least 1, with equality reached when it does as much of the work of proving bottom as it is possible to do in propositional calculus. Letting valuations go above 1 doesn't seem like what you want?
Counterfactual Induction

Yup, a monoid, because $A \wedge \top = A$ and $\top \wedge A = A$, so $\top$ acts as an identity element, and we don't care about the order. Nice catch.

You're also correct about what propositional tautology given A means.

Gurkenglas (1y): Then that minimum does not make a good denominator because it's always extremely small. It will pick phi to be as powerful as possible to make L small, aka set phi to bottom. (If the denominator before that version is defined at all, bottom is a propositional tautology given A.)
Counterfactual Induction (Algorithm Sketch, Fixpoint proof)

Yup! The subscript is the counterfactual we're working in, so you can think of it as a sort of conditional pricing.

The prices aren't necessarily unique, we set them anew on each turn, and there may be multiple valid prices for each turn. Basically, the prices are just set so that the supertrader doesn't earn money in any of the "possible" worlds that we might be in. Monotonicity is just "the price of a set of possibilities is greater than the price of a subset of possibilities"

Counterfactual Induction

If there's a short proof of $B$ from $A$, and a short proof of $A$ from $B$, and they both have relatively long disproofs, then counterfacting on $A$, $B$ should have a high value, and counterfacting on $B$, $A$ should have a high value.

The way to read $A, \phi \vdash \psi$ is that the stuff on the left is your collection of axioms ($A$ is a finite collection of axioms, and $A, \phi$ just means we're using the stuff in $A$ as well as the statement $\phi$ as our axioms), and it proves some statement.

For the first formulation of the va... (read more)

CO2 Stripper Postmortem Thoughts

Yup, this turned out to be a crucial consideration that makes the whole project look a lot less worthwhile. If ventilation at a bad temperature is available, it's cheaper to just get a heat exchanger, ventilate away, and eat the increased heating costs during winter than to build a CO2 stripper.

There's still a remaining use case for rooms without windows that aren't amenable to just feeding an air duct outside, but that's a lot more niche than my original expectations. Gonna edit the original post now.

CO2 Stripper Postmortem Thoughts

Also, a paper on extremely high-density algal photobioreactors quotes algal concentration by volume as being as high as 6% under optimal conditions. The dry mass is about 1/8 of the wet mass of algae, so that's 0.75% concentration by weight percent. If the algal inventory in your reactor is 9 kg dry mass (you'd need to waste about 3 kg/day of dry weight or 24 kg/day of wet weight, to keep up with 2 people worth of CO2, or a third of the algae each day), that's 1200 kg of water in your reactor. Since a gallon is about 4 kg of water, that's... (read more)

df fd (1y): I vaguely remember the 4 g/L/day figure, but on further inspection I now realise that we can't ever reach that efficiency. If we use the 1 g/L/day, but a human uses 1 kg/day (they use less in sleep, don't they?), divided by 2/3 it would be about 1.5 m3 (still quite big), and we have to account for the actual footprint, which would be much higher (2x higher? 3 m3?). I am tempted to argue that we don't have to match the CO2 production rate so perfectly since the CO2 should naturally diffuse a bit, but let's leave some margin of error on our side.

For the total volume, can't we just continuously filter out the algae to retain the optimal condition while barely increasing total volume? And since we only use it at night, the total volume needed would be less. If we automatically drain it to a hidden larger tank or press it straight to oil in the basement, we may be saving effort here but increasing volume.
CO2 Stripper Postmortem Thoughts

[EDIT: I see numbers as high as 4 g/L/day quoted for algae growth rates, I updated the reasoning accordingly]

The numbers don't quite add up on an algae bioreactor for personal use. The stated growth rate for chlorella algae is 0.6 g/L/day, and there are about 4 liters in a gallon, so 100 gallons of algae solution is 400 liters, which is 240 g of algae grown per day. Since about 2/3rds of new biomass comes from CO2 via the 6CO2+6H2O->C6H12O6 reaction, that's 160 g of CO2 locked up per day, or... about 1/6 of a person worth of CO2 in a 24 hour peri... (read more)
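
Mirroring the comment's own figures in code (the 1 kg CO2 per person per day is the round number used elsewhere in this thread; everything here is back-of-envelope):

```python
# Back-of-envelope check of the bioreactor numbers above.
growth_rate = 0.6            # g dry algae per liter per day (chlorella, as stated)
volume_l = 100 * 4           # 100 gallons is roughly 400 liters
biomass_per_day = growth_rate * volume_l       # 240 g/day of new algae
co2_locked = biomass_per_day * (2 / 3)         # ~160 g/day, per the 2/3 figure
person_co2 = 1000            # a person exhales roughly 1 kg of CO2 per day
print(co2_locked / person_co2)  # ~0.16, ie about 1/6 of a person
```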

Diffractor (1y): Also, a paper on extremely high-density algal photobioreactors quotes algal concentration by volume as being as high as 6% under optimal conditions. The dry mass is about 1/8 of the wet mass of algae, so that's 0.75% concentration by weight percent. If the algal inventory in your reactor is 9 kg dry mass (you'd need to waste about 3 kg/day of dry weight or 24 kg/day of wet weight, to keep up with 2 people worth of CO2, or a third of the algae each day), that's 1200 kg of water in your reactor. Since a gallon is about 4 kg of water, that's... 300 gallons, or 6 55-gallon drums, footprint 4 ft x 6 ft x 4 ft high, at a bare minimum (probably 3x that volume in practice), so we get the same general sort of result from a different direction. I'd be quite surprised if you could do that in under a thousand dollars.
CO2 Stripper Postmortem Thoughts

I have the relevant air sensor, it'd be really hard to blind it because it makes noise, and the behavioral effects thing is a good idea, thank you.

It's not currently with me.

I think the next thing to do is build the 2.0 design, because it should perform better and will also be present with me, then test the empirical CO2 reduction and behavioral effects (although, again, blinding will be difficult), and reevaluate at that point.

Just randomizing would be useful; obviously, your air sensor doesn't care in the least if it is 'blinded' or not. And if it's placed in a room you don't go into, that may be enough. As well, maybe you can modify it to have a flap or door or obstruction which opens or closes, greatly changing the rate of CO2 absorption, and randomize that; or if you have someone willing to help, they can come in every n time u

... (read more)
So You Want to Colonize The Universe Part 5: The Actual Design

Good point on phase 6. For phase 3, smaller changes in velocity further out are fine, but I still think that even with smaller velocity changes, you'll have difficulty finding an engine that gets sufficient delta-V that isn't fission/fusion/antimatter based. (Also, in the meantime I realized that neutron damage over those sorts of timescales is going to be *really* bad.) For phase 5, I don't think a lightsail would provide enough deceleration, because you've got inverse-square losses. Maybe you could decelerate with a lightsail in t... (read more)

maximkazhenkov (2y): Assuming acceleration occurs over a 40 light year distance and uniform acceleration (because why not; we have a variable power source), the ship would experience a constant acceleration of ~0.3 m/s^2 (convertalot.com/relativistic_star_ship_calculator.html [http://convertalot.com/relativistic_star_ship_calculator.html]). If we wanted the same peak deceleration using only lightsail and a sun-like star, we'd get a deceleration of 83 km/s (back of envelope calculation analogizing photon pressure as a reversed gravitational well), so we'll need 72 stars in total. That is quite reasonable considering the star density in the galactic core. ~~The only problem here of course is that your lightsail might be so small that gravitation dominates, in which case you have to look for stars with higher photon-pressure-to-mass ratio~~, which are less densely populated. It's a trade-off between peak acceleration, destination constraint and sail size. Our sun for example would be among the worst targets for decelerating an incoming intergalactic spaceship.

Is it though? Radiation in general tends to attenuate exponentially in matter, so a merely linear increase in shielding should solve the problem completely.

Btw this sequence has been a very enjoyable read; I'm glad I'm not the only one speculating about Clarketech-level space travel in free time.
87,000 Hours or: Thoughts on Home Ownership

I'd be extremely interested in the quantitative analysis you've done so far.

So You Want to Colonize The Universe Part 4: Velocity Changes and Energy

I'm talking about using a laser sail to get up to near c (0.1 g acceleration for 40 lightyears is pretty strong) in the first place, and slowing down by other means.

This trick is about using a laser sail for both acceleration and deceleration.

Vanessa Kosoy (2y): Makes perfect sense, forget I asked.
So You Want to Colonize The Universe Part 4: Velocity Changes and Energy

Yeah, I think the original proposal for a solar sail involved deceleration by having the central part of the sail detach and receive the reflected beam from the outer "ring" of the sail. I didn't do this because IIRC the beam only maintains coherence over 40 lightyears or so, so that trick would be for nearby missions.

Vanessa Kosoy (2y): I'm confused. Wouldn't it mean that even without this trick laser sail is only for nearby missions?
So You Want To Colonize The Universe Part 3: Dust

For 1, the mental model for non-relativistic but high speeds should be "a shallow crater is instantaneously vaporized out of the material going fast", and for relativistic speeds, it should be the same thing but with the vaporization directed into a deeper hole (energy doesn't spread out as much; it keeps to a narrow cone) instead of in all directions. However, your idea of having a spacecraft as a big flat sheet that can tolerate having a bunch of holes shot in it is promising. The main issue that I see is that this approach is ... (read more)

Vanessa Kosoy (2y): Usual neutrinos or dark matter won't work, but if we go to the extremely speculative realm, there might be some "hidden sector" of matter that doesn't normally interact with ordinary matter but allows complex structure. Producing it and doing anything with it would be very hard, but not necessarily impossible.
qbolec (2y): I was afraid my questions might get ridiculed or ignored, but instead I've got very gentle and simply expressed explanations helping me get out of confusion. Thank you for taking your time to write your answer so clearly :)
So You Want to Colonize The Universe

Whoops, I guess I messed up on that setting. Yeah, it's ok.

habryka (2y): Almost certainly our fault. We had some janky behavior on that checkbox that caused a bunch of false positives. Sorry for that. (Will be fixed in the next few hours)
So You Want to Colonize The Universe Part 4: Velocity Changes and Energy

Actually, no! The activation energy for the conversion of diamond to graphite is about 540 kJ/mol, and using the Arrhenius equation to get the rate constant for diamond-graphite conversion, with a radiator temperature of 1900 K, we get that after 10,000 years of continuous operation, 99.95% of the diamond will still be diamond. At room temperature, the diamond-to-carbon conversion rate is slow enough that protons will decay before any appreciable amount of graphite is made.

Even for a 100,000 year burn, 99.5% of the diamond will still be intact at 1900 K.

Th... (read more)
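
A sketch reproducing those numbers (the Arrhenius prefactor of ~1/s is my assumption; the comment doesn't state one, but this value recovers the quoted survival fractions):

```python
# Arrhenius estimate for diamond -> graphite conversion at radiator temperature.
import math

Ea = 540e3          # J/mol, activation energy for diamond -> graphite (as stated)
R = 8.314           # J/(mol*K), gas constant
T = 1900.0          # K, radiator temperature (as stated)
A = 1.0             # 1/s, ASSUMED prefactor chosen to match the quoted figures

k = A * math.exp(-Ea / (R * T))            # rate constant, ~1.4e-15 per second
for years in (1e4, 1e5):
    t = years * 3.156e7                    # seconds of continuous operation
    print(years, math.exp(-k * t))         # ~0.9995 and ~0.9955 remaining
```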

So You Want to Colonize The Universe

Agreed. Also, there's an incentive to keep thinking about how to go faster until one more day of design work speeds the rocket up by less than one day of travel time, instead of launching; otherwise you'll get overtaken. There's also an incentive to agree on a coordinated plan ahead of time (you get this galaxy, I get that galaxy, etc...) to avoid issues with lightspeed delays.

So You Want to Colonize The Universe

Or maybe accepting messages from home (in rocket form or not) of "whoops, we were wrong about X, here's the convincing moral argument" and acting accordingly. Then the only thing to be worried about would be irreversible acts done in the process of colonizing a galaxy, instead of having a bad "living off resources" endstate.

So You Want to Colonize the Universe Part 2: Deep Time Engineering

Edited. Thanks for that. I guess I managed to miss both of those; I was mainly going off of the indispensable and extremely thorough Atomic Rockets site, which has extremely little discussion of intergalactic missions as opposed to interstellar missions.

It looks like there are some spots where Armstrong and I converged on the same strategy (using lasers to launch probes), but we seem to disagree about how big of a deal dust shielding is, how hard deceleration is, and what strategy to use for deceleration.

avturchin (2y): :) Ok, and now I will take a chance to advertise my two ideas of intergalactic colonisation. First is a SETI-attack - that is, sending AI-contaminated messages to possibly naive civilizations. LW post [https://www.lesswrong.com/posts/Jng2cZQtyuXDPihNg/risks-of-downloading-alien-ai-via-seti-search]. Not sure we should start it. Second is the use of a nanoprobe accelerator to send many nanoprobes with different speeds - such nanoprobes will reach each other in flight and organise into a large object, which will then be capable of deceleration - more details in the comment on the deceleration post.
So You Want to Colonize The Universe

Yeah, Atomic Rockets was an incredibly helpful resource for me, I definitely endorse it for others.

What makes people intellectually active?

This doesn't quite seem right, because just multiplying probabilities only works when all the quantities are independent. However, I'd put higher odds on someone having the ability to recognize a worthwhile result conditional on them having the ability to work on a problem than on them having the ability to recognize a worthwhile result unconditionally, so the product of probabilities will be higher than it seems at first.

I'm unsure whether this consideration affects whether the distribution would be lognormal or not.
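
A toy numerical rendering of the dependence point (numbers mine):

```python
# P(A and B) can be much larger than P(A) * P(B) when B is likely given A.
p_a = 0.05                  # can work on the problem
p_b = 0.05                  # can recognize a worthwhile result
p_b_given_a = 0.5           # the abilities are positively correlated
print(p_a * p_b)            # 0.0025: naive independence estimate
print(p_a * p_b_given_a)    # 0.025: ten times higher with the correlation
```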
