All of MakoYass's Comments + Replies

How do I use caffeine optimally?

And you haven't been able to reset your tolerance with a break? Or would it not be worth it? (I can't provide any details about what the benefits would be sry)

How do I use caffeine optimally?

Why work your way up at all? The lower you can keep your tolerance, the better, I'd guess?

I don't intend on ever switching away from my sencha/japanese green tea.

Unfortunately, sometimes your body doesn't give you a choice! If you use caffeine once a week, maybe you can avoid acclimating to it, but in my experience, drinking black tea went from "whoa, lots of caffeine" to "slight boost" over ~2 years of drinking it 5 days/week.
Crypto-fed Computation

Given this as a foundation, I wonder if it'd be possible to make systems that report potentially dangerously high concentrations of compute, places where an abnormally large amount of hardware is running abnormally hot, in an abnormally densely connected network (where members are communicating with very low latency, suggesting that they're all in the same datacenter).

Could it be argued that potentially dangerous ML projects will usually have that characteristic, and that ordinary distributed computations (EG, multiplayer gaming) will not? If so, a system like this could expose unregistered ML projects without imposing any loss of privacy on ordinary users.

I think this depends a lot on the use case. I envision for the most part this would be used in/on large known clusters of computation, as an independent check on computation usage and a failsafe. In that case it will be pretty easy to distinguish from other uses like gaming or cryptocurrency mining. If we're in the regime where we're worried about sneaky efforts to assemble lots of GPUs under the radar and do ML with them, then I'd expect there would be pattern analysis methods that could be used as you suggest, or the system could be set up to feed back more information than just computation usage.
Experience LessWrong without the Time-Wasting RabbitHole Effect

the less readable your posts become because the brain must make a decision with each link whether to click it for more information or keep reading. After several of these links, your brain starts to take on more cognitive load

I don't think it's reasonable to try to avoid the cognitive load of deciding whether to investigate subclaims or follow up on interesting ledes while reading. I think it's a crucial impulse for critical thinking and research and we have to have it well in hand.

The Transparent Society: A radical transformation that we should probably undergo

Wondering if radical transparency about (approximate) wealth + legalizing discriminatory pricing would sort of steadily, organically reduce inequality to an extent that would satisfy anyone.

Price discrimination is already all over the place, people just end up doing it in crappy ways, often by artificially crippling the cheaper versions of their products. If they were allowed to just see and use estimates of each customer's wealth or interests, the incentives to cripple cheap versions would become negative, perhaps more people would get the complete featu... (read more)

AGI Safety FAQ / all-dumb-questions-allowed thread

Since everything can fit into the "agent with utility function" model given a sufficiently crumpled utility function, I guess I'd define "is an agent" as "goal-directed planning is useful for explaining a large enough part of its behavior." This includes humans while excluding bacteria. (Hmm unless, like me, one knows so little about bacteria that it's better to just model them as weak agents. Puzzling.)

AGI Safety FAQ / all-dumb-questions-allowed thread

Most of what people call morality is conflict mediation: techniques for taking the conflicting desires of various parties and producing better outcomes for them than war.
That's how I've always thought of the alignment problem. The creation of a very very good compromise that almost all of humanity will enjoy.

There's no obvious best solution to value aggregation/cooperative bargaining, but there are a couple of approaches that're obviously better than just having an arms race, rushing the work, and producing something awful that's nowhere near the average human preference.
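One of those standard approaches is Nash's bargaining solution: pick the feasible outcome that maximizes the product of each party's gain over their disagreement payoff. A minimal sketch, with invented toy utilities (nothing here is from the original comment):

```python
# Toy sketch of Nash's cooperative-bargaining rule: among outcomes that
# leave everyone at least as well off as the disagreement point ("war"),
# pick the one maximizing the product of gains. Payoffs are illustrative.

def nash_bargain(options, disagreement):
    """options: list of (u1, u2) payoff pairs; disagreement: payoffs if talks fail."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in options if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda o: (o[0] - d1) * (o[1] - d2))

# Three candidate futures scored by two parties; (1, 1) is the arms-race outcome.
options = [(9, 2), (6, 6), (2, 9)]
print(nash_bargain(options, disagreement=(1, 1)))  # -> (6, 6), the compromise
```

The point is not that Nash's rule is uniquely correct (it isn't; Kalai–Smorodinsky and others exist), but that any rule in this family beats the disagreement point for everyone.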

AGI Safety FAQ / all-dumb-questions-allowed thread

Agreed. Humans are constantly optimizing a reward function, but it sort of 'changes' from moment to moment in a near-focal way, so it often looks irrational or self-defeating, but once you know what the reward function is, the goal-directedness is easy to see too.

Sune seems to think that humans are more intelligent than they are goal-directed. I'm not sure this is true; human truthseeking processes seem about as flawed and limited as their goal-pursuit. Maybe you can argue that humans are not generally intelligent or rational, but I don't think you can ju... (read more)

6 · Amadeus Pagel · 24d
Doesn't this become tautological? If the reward function changes from moment to moment, then the reward function can just be whatever explains the behaviour.
1 · DeLesley Hutchins · 24d
On the other hand, the development of religion, morality, and universal human rights also seem to be a product of civilization, driven by the need for many people to coordinate and coexist without conflict. More recently, these ideas have expanded to include laws that establish nature reserves and protect animal rights. I personally am beginning to think that taking an ecosystem/civilizational approach with mixture of intelligent agents, human, animal, and AGI, might be a way to solve the alignment problem.
AGI Safety FAQ / all-dumb-questions-allowed thread

Do not use FAIR as a symbol of villainy. They're a group of real, smart, well-meaning people who we need to be capable of reaching, and who still have some lines of respect connecting them to the alignment community. Don't break them.

AGI Safety FAQ / all-dumb-questions-allowed thread

Seems useless if the first system pretends convincingly to be aligned (which I think is going to be the norm) so you never end up deploying the second system?

And "defeat the first AGI" seems almost as difficult to formalize correctly as alignment, to me:

  • One problem is that when the unaligned AGI transmits itself to another system, how do you define it as the same AGI? Is there a way of defining identity that doesn't leave open a loophole that the first can escape through in some way?
  • So I'm considering "make the world as if neither of you had ever been made", t
... (read more)
We need a theory of anthropic measure binding

sort of incoherent and not definable in the general case

Why? Solomonoff inducting, producing an estimate of the measure of my existence (the rate of the occurrence of the experience I'm currently having) across all possible universe-generators weighted inversely to their complexity seems totally coherent to me. (It's about 0.1^10^10^10^10)

infra-Bayesianism would (I think) tell you to act as if you're the brain whose future you believe to have the lowest expected utility

I haven't listened to that one yet, but ... wasn't it a bit hard to swallow as a decisio... (read more)

3 · Nora Belrose · 1mo
I'll address your points in reverse order.

The Boltzmann brain issue is addressed in infra-Bayesian physicalism with a "fairness" condition that excludes worlds from the EU calculation where you are run with fake memories or where the history of your actions is inconsistent with what your policy says you would actually do. Vanessa talks about this in AXRP episode 14.

The "worlds that have somehow fallen under the rule of fantastical devils" thing is only a problem if that world is actually assigned high measure in one of the sa-measures (fancy affine-transformed probability distributions) in your prior. The maximin rule is only used to select the sa-measure in your convex set with lowest EU, and then you maximize EU given that distribution. You don't pick the literal worst conceivable world. Notably, if you don't like the maximin rule, it's been shown in Section 4 of this post that infra-Bayesian logic still works with optimism in the face of Knightian uncertainty; it's just that you don't get worst-case guarantees anymore. I'd suspect that you could also get away with something like "maximize 10th percentile EU" to get more tempered risk-averse behavior.

I'm not sure I follow your argument. I thought your view was that minds implemented in more places, perhaps with more matter/energy, have more anthropic measure? The Kolmogorov complexity of the mind seems like an orthogonal issue.

Maybe you're already familiar with it, but I think Stuart Armstrong's Anthropic Decision Theory paper (along with some of his LW posts on anthropics) does a good job of "deflating" anthropic probabilities and shifting the focus to your values and decision theory.
Distributed research journals based on blockchains

I don't really disagree with any of that, but yeah I think you might be missing the issue of curation, which is kind of most of the work that journals do.
A revolutionary publication, before being widely recognized, will look just like an error. Most of the time, it will turn out to be an error. Fully evaluating it takes time and energy, and if no one is paying reviewers to do that, it's generally totally unrewarding work; no one will do it, and the diamonds in the rough won't be made discoverable.
If you understand why we must reward the boring work of rep... (read more)

I'm not a published academic and I haven't done any serious analysis to validate this, but I think that improving transparency of academic contribution might provide motivation for peer review. The hard work of evaluating and filtering published research would be attractive if it were publicly recognized. If a reputation could be built around critically analyzing and responding to new research rather than just publishing, then more people would do it, whether it was paid or not.
Science-informed normativity

I'm generally a fan of pursuing this sort of moral realism of the ideals, but I want to point out one very hazardous amoral hole in the world that I don't think it will ever be able to bridge over for us, lest anyone assume otherwise, and fall into the hole by being lax and building unaligned AGI because they think it will be kinder than it will.
(I don't say this lightly: Confidently assuming kindness that we won't get, as a result of overextended faith in moral realism, and thus taking on catastrophically bad alignment strategies, is a pattern I see shockin... (read more)

I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background
  1. Hmm makes sense if you really don't care about energy. But how much energy will they need, in the end, to reorganize all of that matter?
  2. I don't think there's going to be a tradeoff between expansion and transcension for most agents within each civ, or most civs (let alone all agents in almost all civs). If transcension increases the value of any given patch of space by s^t, and you get more space from expansion at s·t^3, well, the two policies are nonexpansion: s^t vs expansion: s^t · (s·t^3) :/ there's no contest.
    If it's not value per region of
... (read more)
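A numeric version of the argument above, under my reading of the formulas (which were garbled in the source): transcension multiplies the value of each patch by s^t, expansion multiplies the number of patches by s·t^3, and an expander can also transcend, so expansion strictly dominates.

```python
# Compare total value under the two policies. Both parameters and the
# exact functional forms are my reconstruction from the comment's prose.

def nonexpansion_value(s, t):
    return s ** t                    # one patch of space, transcended

def expansion_value(s, t):
    return (s * t ** 3) * s ** t     # s*t^3 patches, each also transcended

s, t = 2, 20
ratio = expansion_value(s, t) / nonexpansion_value(s, t)
print(ratio)  # -> 16000.0: the expander is ahead by the full s*t^3 factor
```

Whatever the exact exponents, the ratio between the two policies is the spatial factor itself, so "no contest" holds for any s > 0, t > 1.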
I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background

In situations like that, I'd say, more... you should process it with reduced energy, in correct proportion. I wouldn't say you should completely deafen yourself to anyone (unless it's literally a misaligned AIXI).

I think even this slackened phrasing is not applicable to the current situation, because the people I'm primarily listening to are mostly just ordinary navy staff who are pretty clearly not wired up to any grand disinformation apparatus about UAP.

I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background

and we are either completely left alone or have been put in a simulation, in which case occasional UFO sightings don't seem like an optimal feature of the outcome.

Agreed. A way of using our matter (the earth) for something else, without killing us.

So I've been thinking about that. For any simulator, there are things they do and don't care about capturing accurately in the simulation. I'd guess that the simulation has a lot to do with whether we hold to the reciprocal kind-colonization pacts that they're committed to themselves. For that, it's important tha... (read more)

I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background

Is there writing about that? Last time I thought deeply about reversible computing, it didn't seem like it was going to be useful for really anything that we care about.

I'll put it this way... if you look at almost any subroutine in a real program, it consists of taking a large set of inputs and reducing them to a smaller output. In a reversible computer, iirc, the outputs have to be as big as the inputs, informationally (yeah that sounds about right). So you have to be throwing out a whole lot of useless outputs to keep the info balanced, that's what you h... (read more)
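A toy illustration of that point (the Toffoli gate is a standard example from the reversible-computing literature; the code is mine, not from the comment): to compute an AND reversibly, the gate must emit "garbage" copies of its inputs so that outputs carry as much information as inputs.

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c iff a and b are both 1. Reversible."""
    return (a, b, c ^ (a & b))

# With c=0 the third output is a AND b -- but the gate also emits
# copies of a and b, the "garbage" outputs that keep it reversible.
assert toffoli(1, 1, 0) == (1, 1, 1)
assert toffoli(1, 0, 0) == (1, 0, 0)

# Reversibility = bijectivity: all 8 input triples map to 8 distinct outputs.
outputs = {toffoli(a, b, c) for a, b, c in product([0, 1], repeat=3)}
assert len(outputs) == 8
print("AND computed reversibly; garbage bits:", toffoli(1, 1, 0)[:2])
```

An ordinary AND maps 4 input pairs to 2 outputs, destroying information; the Toffoli version destroys none, which is exactly why the garbage accumulates.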

Simulation timesteps compute a new similar-size model state from the previous one, and since physics is reversible, simulations tend to be roughly reversible as well. And more generally you can balance entropy-producing compression with entropy-consuming generation/sampling.
I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background

Even if stars only make up a small fraction of the matter in the universe, it's still matter, they'd still probably have something they'd prefer to do with it than this. I'm not really sure what kind of value system (that's also power-seeking enough to exert control over a broad chunk of the universe) could justify leaving it fallow.

Stars consist mostly of low value hydrogen/helium, but left to their own devices they cook that fuel into higher value heavier elements. But anyway that is mostly irrelevant - the big picture issue is whether future civs transcend vs expand. The current trajectory of civilization is exponential, and continuing that trajectory requires transcension. Spatial expansion allows for only weak quadratic growth.
I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background

I will politely decline to undergo epistemic learned helplessness as it seems transparently antithetical to the project of epistemic rationality

1 · Alex Hollow · 1mo
Less so under potentially adversarial conditions, when there are politics/culture-war aspects. For example, many people have large personal and social incentives to convince you of various ideas related to UFOs. In that case, it may not be the correct move to engage with the presented arguments, if they are words chosen to manipulate and not to inform. Do not process untrusted input. I'm curious if you think that this formulation of the above idea is still antithetical to epistemic rationality.
I just watched the Open C3 Subcommittee Hearing on Unidentified Aerial Phenomena (UFOs). Here's a succinct summary and commentary + some background

Even if it were true, how would they know it was a propulsion technology?

Uh, because there seemed to be a solid object (showed up in a kind of radar that we don't know how to spoof) that was moving around really fast in line with the visual. As stated, I still think it might not be a propulsion technology, but the witnesses don't tend to float any other possibility. I haven't seen them asked about the plasma image theory.


I wouldn't say I think that it's an alcubierre drive specifically, what I mean is I don't know what else to liken it to and it woul... (read more)

“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments

It's squarely relevant to the post, but it is mostly irrelevant to Eliezer's comment specifically, and I think the actual drives underlying the decision to make it a reply to Eliezer are probably not in good faith. Like, you have to at least entertain the hypothesis that they pretty much realized it wasn't relevant and just wanted Eliezer's attention, or wanted the prominence of being a reply to his comment.
Personally I hope they receive Eliezer's attention, but piggybacking messes up the reply structure and makes it harder to navigate discussions... (read more)

Sorry, I did not mean to violate any established norms. I posted as a reply to Eliezer's comment because they said that the "hardware-destroying capabilities" suggested by the OP is "obviously impossible in real life". I did not expect that my reply would be considered off-topic or irrelevant in that context.
Jeff Shainline thinks that there is too much serendipity in the physics of optical/superconducting computing, suggesting that they were part of the criteria of Cosmological Natural Selection, which could have some fairly lovecraftian implications

Compute is physically simpler than life. Where there is life, there is necessarily also compute. Where there is compute, there isn't necessarily also life.

Good, and cheap, is the thing. If we didn't have silicon computing, we would still have vacuum tubes; we'd still have computers. But as I understand it, vacuum tubes sucked, so I wouldn't expect that machine learning would be moving so quickly at this point.

If that were the case, there'd be more measure in the next year than in the next second, but you don't suddenly find yourself a year from now. (

... (read more)
20 Modern Heresies

Private information is evil. (Though I'm still on the fence as to whether it's a necessary evil to avoid world-sized preference falsification cascades.)

20 Modern Heresies

Clippy is not ideal, but better than humanity.

There's a weird genre of paranoia where people worry that the thing we value will turn out to be something we disvalue. But I guess you mean it's a case where the values of the average LWer diverge sharply from the values of the globe, right. (I don't see that, personally.)

Announcing Impact Island: A New EA Reality TV Show

I'm bullish on radical transparency at this point. Whoever is the most unrelentingly brash will seize the next moral aesthetics cycle.

Beyond Blame Minimization

Regarding moving beyond blame minimization, I think it's worth mentioning my Venture Granters, a system for protecting sane risk-takers in public funding institutions:

Manhattan project for aligned AI

Research that makes the case for AGI x-risk clearer

I ended up going into detail on this, in the process of making an entry to the FLI's aspirational worldbuilding contest. So, it'll be posted in full about a month from now. But for now, I'll summarize:

  • We should prepare stuff in advance for identifying and directly manipulating the components of an AGI that engage in ruminative thought. This should be possible, there are certain structures of questions and answers that will reliably emerge, "what is the big blank blue thing at the top of the image" "it's pr
... (read more)
What should rationalists think about the recent claims that air force pilots observed UFOs?

Rationalists should be deeply interested in the Princeton-Nimitz encounters, regardless of whether it was confusion, aliens, or a secret human technology, because cases of confusion on this level teach us a lot about how epistemic networks operate, and if it were aliens or a secret human technology that would be strategically significant.

So, since those were pretty much the only possibilities, I was deeply interested.

I eventually settled loosely into the theory that the tic-tacs were probably a test of a long-range plasma volumetric display decoy/spoofing t... (read more)

What should rationalists think about the recent claims that air force pilots observed UFOs?

The Princeton-Nimitz reports are unambiguously worth the oxygen it takes to contemplate them, given the consistency of the reports and the ramifications they would have even if it was "just" a human technology. So if you had the virtue of curiosity, you would contemplate it, and you would get led down the path that ends with the resolution that the "lie", "mistake", or "human technology" theories don't really make deep sense either, and a rationalist does indeed have to start considering the other theory, that some aliens end up being much stranger than we w... (read more)

What should rationalists think about the recent claims that air force pilots observed UFOs?

Doesn't land a hit on the story as it's always been told: they're piloted by intelligent beings who only want to be seen occasionally. They'd notice that we're all carrying cameras and deliberately appear less frequently (while still acting aloof).

Would (myopic) general public good producers significantly accelerate the development of AGI?

Btw, I'm open to the possibility that the answer is "yes, but it will accelerate alignment techniques more than capabilities, so it's still good to do."

(Note, though, not all acceleration of deployment is bad. Imagine that we manage to secure against the period of peril, where fully general capabilities have been pretty much found but aren't being deployed because the discoverers are too responsible to do it without a convincing alignment solution. That's a case where alignment work itself accelerates the deployment of AGI, but the acceleration is purely good.)

[Beta Feature] Google-Docs-like editing for LessWrong posts

Ah, yeah, link previews are good. I guess the problem with LW's is that they're difficult to discover on mobile; the user has to figure out to click and hold, then close the browser popup. I prefer gwern's way, where clicking a link on mobile will only open the preview, and you have to click again to traverse the link. Others have complained about that, though.

2 · Esben Kran · 4mo
I mostly use it from the computer, so that missed me, but it seems like a very good idea as well!
Grabby Aliens could be Good, could be Bad

This post is relevant, and has more to say about the benefits of neighbors in approaching lightspeed travel.

Apparently there's an Armstrong–Sandberg paper that found that getting to 99% of lightspeed is totally feasible with coil guns. So the benefits are mild.
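For a sense of scale of what 0.99c means (my own back-of-envelope, not a figure from the paper): the relativistic kinetic energy per kilogram of payload.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_per_kg(v_frac):
    """Relativistic kinetic energy (J) carried by 1 kg moving at v_frac * c."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac ** 2)
    return (gamma - 1.0) * C ** 2

# At 0.99c, gamma ~ 7.09, so each kg of probe carries ~5.5e17 J,
# on the order of a hundred megatons of TNT (1 Mt ~ 4.2e15 J).
e = kinetic_energy_per_kg(0.99)
print(f"{e:.2e} J/kg")
```

So "feasible with coil guns" is a claim about engineering a launcher that can deliver roughly this much energy per kilogram, which is why the launcher has to be enormous.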

Ethicality Behind Breaking the Glass Ceiling

I notice that pay transparency seems to be a key subproblem here. If we just knew how salary was distributed in these organizations, then we'd preeety much know how power is distributed. It would simplify the auditing pretty drastically.

There are pros and cons to pay transparency (I'm mostly pro, but I do fear that envy is a bigger problem in the US than in the Scandinavian countries where transparency is working well). But I'm not sure that's the key subproblem.

I'd expect it's cultural devaluation of women that's the key subproblem.  Even where women aren't a small minority, there's an amazing double-standard about appearance, presentation, and discussion style throughout most US and UK (and I presume elsewhere, but I know less about that) businesses.

There are so many benefits to pay transparency beyond this issue as well; however, it is heavily stigmatized (at least in more traditional companies).
Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers

Will that cause more harmful projects to succeed?

In reality I'm not sure the trap would remain effective for long enough for too many of those to start turning up. Humans aren't rigidly CDTish. They'll catch on. Perhaps many professional traders already have some principle against playing games of collective chicken.

I guess a good question here is... is the opportunity cost of assessing the failure rate, to the level of accuracy where the risk of project success is low enough that you can be sure that you'll get your refund bonus, actually lower than the re... (read more)

Ethicality Behind Breaking the Glass Ceiling

however there are professions that are more heavily concentrated by females and I would imagine that their dealings with sexism are more minimal.

Would it help at all to promote information about which finance firms are closest to having gender parity (cut through their PR), so that women who would strongly prefer not to be an extreme minority know which firms to give preference to, and apply to first?

This is an interesting topic on which I believe further and impartial ESG analysis would be of great use.
Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers

For posterity, the original title was 

Alex Tabarrok proposed improving crowdfunding mechanisms with Refund Bonuses. I think this might be a natural occurrence of a dutch book against Causal Decision Theory

I also removed these sections which I kind of left in by accident and had already decided at the time of posting that I couldn't really stand behind. Sorry about those.
Could be true, but I think I was probably understating the value of the credible signal that is sent by having refund bonuses, even for a LDT agent.

As a Logical Decision Theory (LDT) s

... (read more)
Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers

I see how that can be misleading. I'll try to clarify it. The reason it ended up looking like that was that "kickstarter with refund bonuses added" is, as he acknowledges, a really good way of describing it, even though it was not a product of taking kickstarter and adding refund bonuses.

Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers

Mm, to add context, you're mentioning this because it's a very anti-inductive market, yes? And yet people keep participating. So why wouldn't they keep participating in the refund-bonus extraction game of chicken?

Alex Tabarrok advocates for crowdfunding systems with *Refund Bonuses*. I think this might be a natural occurrence of a money pump against Causal Decision Theory pledgers

What do you think is wrong? I don't see any contradictions here.

Are you confused about who I'm saying is getting dutch booked? I'm saying pledgers dutch book themselves; the project will be more than fine, it would be extremely good for the project. It seems like a very good mechanism from the project's perspective, and I approve of it.

"Alex Tabarrok proposed improving crowdfunding mechanisms with Refund Bonuses." The proposal predates kickstarter.
I Want To Live In A Baugruppe

I'd love to participate, but I'm a mostly single male without US residency (just NZ/Au/UK), which I realize is unlikely to be a bottlenecked resource, so I'll un-vote my comment here heh

Grabby Aliens could be Good, could be Bad

For travel through neighboring grabby civs, mm, I guess you'd want to get to know them first. Are there ways they could prove that they're a certain kind of civ, with a certain trusted computing model, that lets them prove that they won't leak you?

For travel through neighboring primitive civs in the vulnerable stage... Maybe you'd send a warrior emissary who doesn't attribute negative utility to any of its own states of mind. If it's successful... Hmm... it establishes an encryption protocol with home, and only then do you start sending softer minds.

But tha... (read more)

Grabby Aliens could be Good, could be Bad

It would be worth writing, yeah. It would be an update for me.

P(any civilization in its early computing stage will run any code that is sent to them) ≈ 1 for me, not sure about the other terms. Transmission would also require that a civilization within the broadcast radius enters its computer age, and notices the message, before they mature and stop being vulnerable to being hacked, all before that region of space is colonized by a grabby civ (Oh, note, though, this model of spread, if it is practical, we might be able to assume that grabby civs can't othe... (read more)

Would (myopic) general public good producers significantly accelerate the development of AGI?

What are some of those components? We can put them on a list.

By the way, "myopic" means "pathologically short-term".

Good question. I don't have a list, just a general sense of the situation. Making a list would be a research project in itself. Also, different people here would give you different answers. That being said:

  • I occasionally see comments from alignment research orgs who do actual software experiments that they spend a lot of time just building and maintaining the infrastructure to run large-scale experiments. You'd have to talk to actual orgs to ask them what they would need most. I'm currently a more theoretical alignment researcher, so I cannot offer up-to-date actionable insights here.
  • As a theoretical researcher, I do reflect on what useful roads are not being taken by industry and academia. One observation here is that there is an under-investment in public high-quality datasets for testing and training, and in the (publicly available) tools needed for dataset preparation and quality assurance. I am not the only one making that observation. Another observation is that everybody is working on open-source ML algorithms, but almost nobody is working on open-source reward functions that try to capture the actual complex details of human needs, laws, or morality. Also, where is the open-source aligned content recommender?
  • On a more practical note, AI benchmarks have turned out to be a good mechanism for drawing attention to certain problems. Many feel that these benchmarks are having a bad influence on the field of AI, and I have a lot of sympathy for that view, but you might also go with the flow. A (crypto) market that rewards progress on selected alignment benchmarks may be a thing that has value. You can think here of benchmarks that reward cooperative behaviour, truthfulness, and morality in answers given by natural-language querying systems, or playing games ethically ( https:/
Continuous Minority Elimination

I like it. Less thoroughly descriptive, but it might generalize to more cases.

Would (myopic) general public good producers significantly accelerate the development of AGI?

you are trying to build an incentive structure that will accelerate the development of AGI.

No, I'm not sure how you got that impression (was it "failing to coordinate"?), I'm asking for the opposite reason.

I guess I got that impression from the 'public good producers significantly accelerate the development of AGI' in the title, and then looking at the impactcerts website. I somehow overlooked the bit where you state that you are also wondering if that would be a good idea. To be clear: my sense of the current AI open source space is that it definitely under-produces certain software components, software components that could be relevant for improving AI/AGI safety.
Grabby Aliens could be Good, could be Bad

Okay, no, the Teilhardian laser-as-nanomanufacturer idea is probably not workable. I read an extremely basic article about laser attenuation and, bad news: lasers attenuate.

The best a laser could do to any of the planets about the nearest star seems to be making a pulse of somewhat bright light visible to all of them.

I still wonder about sending packets of resilient self-organizing material that could survive a landing, though.
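The "lasers attenuate" point comes down to the diffraction limit: even a perfect beam spreads with distance. A quick estimate, with wavelength, aperture, and target distance being my own illustrative choices (not numbers from the post):

```python
# Diffraction-limited spot size: spot diameter ~ 2.44 * wavelength *
# distance / aperture (the Airy-disk formula). All numbers below are
# illustrative assumptions.

LY = 9.461e15  # metres per light-year

def spot_diameter(wavelength_m, aperture_m, distance_m):
    return 2.44 * wavelength_m * distance_m / aperture_m

# Green laser (500 nm), a generous 10 m aperture, nearest star (~4.2 ly):
d = spot_diameter(500e-9, 10.0, 4.2 * LY)
print(f"spot diameter: {d / 1e9:.1f} million km")  # -> ~4.8 million km
```

A spot millions of kilometres wide is why the best such a laser can do at interstellar range is signaling: the energy density at the target is diluted by the square of that diameter.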

5 · Donald Hobson · 4mo
There are two possible cheats I can think of for attenuating lasers. Firstly, attenuation depends on the radius of the emitter. If you have a 100 ly bubble of your tech, it should in principle be possible to do high-precision laser stuff 200 ly away: a whole bunch of lasers across your bubble, tuned to interfere in just the right way. Secondly, quantum entanglement. You can't target one photon precisely, but can you ensure two photons go in precisely the same direction as each other?
Yep. There are hints that you might be able to alleviate this somewhat with a very powerful laser (vacuum self-focusing is arguably a thing, although I don't believe it has been observed thus far), but good luck getting the accuracy necessary to do anything with it beyond signaling. (Ditto, a Bessel beam arguably doesn't attenuate... but requires infinite energy and beamwidth. Finite approximations do start attenuating eventually.)
Implications of the Grabby Aliens Model

On the other hand, even if we went extinct, the universe wouldn’t remain empty since some other civilization would be there to take our place.

Yes, but most of my existential risk comes from AGI misalignment, which would not follow this law, because a misaligned AGI is likely to spread out and fill our volume and be as immovable to alien civs as we would have been.

Moving quickly can allow humanity to gather a larger fraction of the universe for itself.

The incentives to move quickly were actually a lot greater before grabby aliens, due to accelerating cosmolo... (read more)

Grabby aliens and Zoo hypothesis

I guess they wouldn't need a firmament if they were doing a thing where... they just let life-supporting planets see them until intelligent life emerges, because unintelligent life would be indifferent to them, and then once intelligent life starts to build telescopes they descend and scan everyone and move it into a simulation. This would get them a completely accurate biological history. The simulation, from then on, might not be completely accurate, but if so, I am not sensitive to what would be missing from it.
