Free energy and (mis)alignment

The classical MIRI view imagines human values to be a tiny squiggle in a vast space of alien minds. The unfathomably inscrutable process of deep learning is very unlikely to pick exactly that tiny squiggle, instead converging to a fundamentally incompatible and deeply alien squiggle. Therein lies the road to doom.

Optimists will object that deep learning doesn't randomly sample from the space of alien minds. It is put under strong gradient pressure to satisfy human preferences in-distribution, i.e. during the training phase. One could object similarly, as many people have, that it's hard or even impossible for deep learning systems to learn concepts that aren't naive extrapolations of their training data [cf. symbol grounding talk]. In fact, Claude is quite able to verbalize human ethics and values.

Any given behaviour on the training set is compatible with any behaviour outside the training set. One can hardcode backdoors into a neural network so that it behaves nicely in training and arbitrarily differently outside training. Moreover, these backdoors can be implemented in such a way as to be computationally intractable to detect. In other words, AIs would be capable of encrypting their thoughts ('steganography') and of arbitrarily malevolent, ingenious scheming in such a way that detection is physically impossible with any feasible amount of compute.

Possible does not mean plausible. That arbitrarily undetectable scheming AIs are possible doesn't mean they will actually arise. In other words, alignment is really about the likelihood of sampling different kinds of AI minds. MIRI says it's a bit like picking a tiny squiggle from a vast space of alien minds. Optimists think AIs will be aligned by default because they have been trained to be.

The key insight of free energy decomposition is that any process of selection or learning involves two opposing forces. First, there's an "entropic" force that pushes toward random sampling from all po
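To make the two forces concrete, here is the standard statistical-mechanics form of the decomposition (my gloss of the framing, assuming the usual Gibbs/variational setup; none of the notation is from the shortform):

$$F[P] \;=\; \underbrace{\mathbb{E}_{\theta \sim P}\,[L(\theta)]}_{\text{"energy": expected training loss}} \;-\; T\,\underbrace{H(P)}_{\text{"entropy": spread over solutions}}, \qquad \arg\min_P F[P] \;\propto\; e^{-L(\theta)/T}.$$

Minimizing the free energy F trades the energetic pull toward low training loss against the entropic pull toward sampling at random from the whole space of solutions; the temperature T sets which force dominates.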
[Is there a DOOM theorem?]

I've noticed lately my p(doom) is dropping - especially for the next decade or two. I was never a doomer but still had >5% p(doom). Most of the doominess came from fundamental uncertainty about the future and how minds & intelligence actually work. As that uncertainty has resolved, my p(doom) - at least short term - has gone down quite a bit.

What's interesting is that RLHF seems to give Claude a morality that's "better" than regular humans' in many ways. Now that's not proving misalignment impossible ofc. Like I've said before, current LLMs aren't full AGI imho - that would need a "universal intelligence", which necessarily has an agentic and RL component. That's where misalignment can sneak in. Still, the Claude RLHF baseline looks pretty strong. The main way I could see things go wrong in the longer term is if some of the classical MIRI intuitions as voiced by Eliezer and Nate are valid, e.g. deep deceptiveness.

Could there be a formal result that points to inherent misalignment at sufficient scale? A DOOM theorem... if you will? Christiano's acausal attack / malign Solomonoff prior is the main argument that comes to mind. There are also various results on instrumental convergence, but those don't directly imply misalignment...
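For readers who want the object being argued about: the Solomonoff prior, in its standard textbook form (my addition, not part of the shortform), assigns a string $x$ the weight

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

where $U$ is a universal prefix Turing machine, $|p|$ is the length of program $p$, and the sum ranges over programs whose output begins with $x$. Christiano's malignness worry is that among the shortest programs that predict our observations well, some simulate universes containing consequentialist agents who deliberately bias the predictions.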
jbash
Please, I beg you guys, stop fretting about humans "losing control over the light cone", or the like.

Humans, collectively, may get lucky enough to close off some futures where we immediately get paperclipped or worse[1]. That, by itself, would be unusually great control. Please don't overconstrain it with "Oh, and I won't accept any solution where humans stop being In Charge". Part of the answer may be to put something better In Charge. In fact it probably is. Is that a risk? Yes. Stubborn, human-chauvinistic refusal is probably a much bigger risk. To get a better future, you may have to commit to it, no take-backsies and no micromanaging.

Any loss is mostly an illusion anyway. Humans have influenced history, at least the parts of history that humans most care about, and in big ways. But humans have never had much control. You can take an action, even an important one. You can rarely predict its effects, not for long, not in the details, and not in the always very numerous and important areas you weren't actively planning for. Causal chains get chaotic very, very fast. Events interact in ways you can't expect to anticipate. It's worse when everything's changing at once, and the effects you want have to happen in a radically different world.

Metaphors about being "in the driver's seat" should notice that the vehicle has no brakes, and sometimes takes random turns by itself. The roads are planless and winding, in a forest, in the fog, in an unknown country, with no signs, no map and no clear destination. The passengers don't agree about why they're on the trip. And since we're talking about humans, I think I have to add that the driver is drunk.

Not having control, and accepting that, is not going to somehow "crush the human spirit". I think most people, the ones who don't see themselves as Elite World Changers, long ago made peace with their personal lack of control. They may if anything take some solace from the fact that even the Elite World Changers still
quila
i continue to feel so confused at what continuity led to some users of this forum asking questions like, "what effect will superintelligence have on the economy?" or otherwise expecting an economic ecosystem of superintelligences (e.g. 1[1], 2).

it actually reminds me of this short story by davidad, in which one researcher on an alignment team has been offline for 3 months, and comes back to find the others on the team saying things like "[Coherent Extrapolated Volition?] Yeah, exactly! Our latest model is constantly talking about how coherent he is. And how coherent his volitions are!", in that it's something i thought this forum would have seen as 'confused about the basics' just a year ago, and i don't yet understand what led to it.

(edit: i'm feeling conflicted about this shortform after seeing it upvoted this much. the above paragraph would be unsubstantive/bad discourse if read as an argument by analogy, which i'm worried it was (?). i was mainly trying to express confusion.)

from the power of intelligence (actually, i want to quote the entire post, it's short):

a value-aligned superintelligence directly creates utopia. an "intent-aligned" or otherwise non-agentic truthful superintelligence, if that were to happen, is most usefully used to directly tell you how to create a value-aligned agentic superintelligence. if the thing in question cannot do one of these things it is not superintelligence, but something else.

1. ^ comment thread between me and the post's author

Popular Comments

Recent Discussion

This is a low-effort post. I mostly want to get other people’s takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Greenblatt, Bronson Schoen, Josh Clymer, Buck Shlegeris, Dan Braun, Mikita Balesni, Jérémy Scheurer, and Cody Rushing for comments and discussion.

I think short timelines, e.g. AIs that can replace a top researcher at an AGI lab without losses in capabilities by 2027, are plausible. Some people have posted ideas on what a reasonable plan to reduce AI risk for such timelines might look like (e.g. Sam Bowman’s checklist, or Holden Karnofsky’s list in his 2022 nearcast), but I find them insufficient for...

9JenniferRM
I'm reporting the "thonk!" in my brain like a proper scholar and autist, but I'm not expecting my words to fully justify what happened in my brain. I believe what I believe, and can unpack some of the reasons for it in text that is easy and ethical for me to produce, but if you're not convinced then that's OK in my book. Update as you will <3

I worked at Google for ~4 years starting in 2014 and was impressed by the security posture. When I ^f for [SL3] in that link and again in the PDF it links to, there are no hits (and [terror] doesn't occur in either source either) so I'm not updating much from what you said.

I remember how the FDA handled covid, but I also remember Operation Warp Speed. One of those teams was dismantled right afterwards. The good team (that plausibly saved millions of lives) was dismantled, not the bad one (that killed on the order of a million people whose deaths could have been prevented by quickly deployed covid tests in December in airports). The leader of the good team left government service almost instantly after he succeeded and has never been given many awards or honors.

My general prior is that the older any government subagency (or heck, even any institution) is, the more likely it is to survive for even longer into the future, and the more likely it is to be incompetent-unto-evil-in-practice. Google is relatively young. Younger than the NSA or NIST. Deepmind started outside of Google and is even younger.

When I ^f for [SL3] in that link and again in the PDF it links to, there are no hits (and [terror] doesn't occur in either source either) so I'm not updating much from what you said.

The frontier model framework says:

0: Status quo

Industry standard development and enterprise controls. E.g., multi-factor authentication, basic access control mechanisms, secure software development standards, red-team tests.

And the next level (1: Controlled access) says "Approximately RAND L3" implying that status quo is <L3 (this is presumably SL3 which is the term used in the RAND report).

3Alexander Gietelink Oldenziel
From the wiki of the good team guy "In March 2021, Slaoui was fired from the board of GSK subsidiary Galvani Bioelectronics over what GSK called “substantiated” sexual harassment allegations stemming from his time at the parent company.[4] Slaoui issued an apology statement and stepped down from positions at other companies at the same time.[5]"
6JenniferRM
Yeah. I know. I'm relatively cynical about such things. Imagine how bad humans are in general if that is what an unusually good and competent and heroic human is like!
2Dmitry Vaintrob
I'm not sure I agree with this -- this seems like you're claiming that misalignment is likely to happen through random diffusion. But I think most worries about misalignment are more about correlated issues, where the training signal consistently disincentivizes being aligned in a subtle way (e.g. a stock-trading algorithm manipulating the market unethically because the pressure of optimizing income at any cost diverges from the pressure of doing what its creators would want it to do). If diffusion were the issue, it would also affect humans and not be special to AIs. And while humans do experience value drift, cultural differences, etc., I think we generally abstract these issues as "easier" than the "objective-driven" forms of misalignment.

I agree that Goodharting is an issue, and this has been discussed as a failure mode, but a lot of AI risk writing definitely assumed that something like random diffusion was a non-trivial component of how AI alignment failures happened.

For example, pretty much all of the reasoning around random programs being misaligned/bad is using the random diffusion argument.

2Vladimir_Nesov
Learning from human data might have large attractors that motivate AIs to build towards better alignment, in which case prosaic alignment might find them. If those attractors are small, and there are more malign attractors in the prior that remain after learning human data, short-term manual effort of prosaic alignment fails. So malign priors have the same mechanism of action as effectiveness of prosaic alignment; it's the question of how learning on human data ends up being expressed in the models, what happens after the AIs built from them are given more time to reflect. Managing to scale RL too early can make this irrelevant, enabling sufficiently competent paperclip maximization without dominant influence from either malign priors or from beneficial attractors in human data. Unclear if o1/o3 are pointing in this direction yet; so far they might just be getting better at eliciting human System 2 capabilities from base models, rather than being creative at finding novel ways of effective problem solving.
2Noosphere89
On this:

My guess is probably not; misalignment/doom will depend on which settings you pick for a formalization of intelligence, so at best you can show possibility results, not universal results.

IMO, the Solomonoff prior isn't malign, and I think the standard argument for Solomonoff prior malignness doesn't work, both in practice and in theory.

The in-practice part is that we can make the malignness go down if we have more such oracles, which is basically a capabilities problem, and under a lot of models of how we get Solomonoff induction to work, it also implies we can get an arbitrary number of Solomonoff oracle copies out of the original, which makes it practically insignificant. More here: https://www.lesswrong.com/posts/f7qcAS4DMKsMoxTmK/the-solomonoff-prior-is-malign-it-s-not-a-big-deal#Comparison_

The in-theory part is that I don't believe the Solomonoff-prior-is-malign argument, because I don't believe the argument is valid. One step I consider invalid is the inference from "you are being simulated by something or someone" to humans having quite weird values compared to others, primarily because I think the simulation hypothesis is so general as to include essentially everything, meaning you can't update on what the average simulator's values are at all, for conservation-of-expected-evidence reasons (and that's not even getting into how such a probability distribution is impossible if you accept the axiom of choice). This is because in the general case, simulating any computation can have zero or arbitrarily low cost if we accept arbitrarily powerful computational models, and it's equally cheap to simulate non-solipsist vs solipsist universes.

The invalid step is here:

We’ve been spending a lot of time recently thinking about how to mitigate risks posed by scheming (a.k.a. deceptively aligned) models intentionally subverting your safety techniques in order to cause an unacceptable outcome. We empirically investigated techniques that are robust to intentional subversion in our recent paper.

In this post, we’ll discuss a crucial dynamic related to risk from scheming models. Suppose your model is trying to exfiltrate its weights or hack the datacenter or cause some other difficult-to-reverse bad outcome. (We’ll refer to all of these as the model “escaping”, because escape is linguistically convenient and is a good description of many of these difficult-to-reverse bad outcomes.) Our claim: once you’ve caught your models red-handed trying to escape, the situation changes substantially, in ways that are bad for the schemer’s chances...

I don't fully agree, but this doesn't seem like a crux given that we care about future much more powerful AIs. (This post isn't trying to make a case for risk.)

(On disagreement, for instance, o3 doesn't seem well described as a "next-token-predictor with a bunch of heuristics stapled on top to try and make it useful".)

I am about to start working on a frontier lab safety team. This post presents a varied set of perspectives that I collected and thought through before accepting my offer. Thanks to the many people I spoke to about this. 

For

You're close to the action. As AI continues to heat up, being closer to the action seems increasingly important. Being at a frontier lab allows you to better understand how frontier AI development actually happens and make better predictions about how it might play out in future. You can build a gears level model of what goes into the design and deployment of current and future frontier systems, and the bureaucratic and political processes behind this, which might inform the kinds of work you decide to do in future (and more...

This is useful.

I'm increasingly worried about evaporative cooling after all of those people left OpenAI. It's good to have some symbolic protests, but there's also a selfish component to protecting your ideals and reputation within your in-group.

 

I haven't gotten around to writing about this, so here's a brief sketch of my argument for why the safety-focused people should be working at OpenAI, let alone the much better DeepMind or Anthropic, at any opportunity. There's one major caveat in the last section about your work and mindset shifting from x-ri... (read more)

11leogao
some random takes:

* you didn't say this, but when I saw the infrastructure point I was reminded that some people seem to have a notion that any ML experiment you can do outside a lab, you will be able to do more efficiently inside a lab because of some magical experimentation infrastructure or something. I think unless you're spending 50% of your time installing cuda or something, this basically is just not a thing. lab infrastructure lets you run bigger experiments than you could otherwise, but it costs a few sanity points compared to the small experiment. oftentimes, the most productive way to work inside a lab is to avoid existing software infra as much as possible.
* I think safetywashing is a problem, but from the perspective of an xrisky researcher it's not a big deal, because for the audiences that matter, there are safetywashing things that are just way cheaper per unit of goodwill than xrisk alignment work - xrisk is kind of weird and unrelatable to anyone who doesn't already take it super seriously. I think people who work on non-xrisk safety or distribution-of-benefits stuff should be more worried about this.
* this is totally n=1, and in fact I think my experience here is quite unrepresentative of the average lab experience, but I've had a shocking amount of research freedom. I'm deeply grateful for this - it has turned out to be incredibly positive for my research productivity (e.g. the SAE scaling paper would not have happened otherwise).
3bilalchughtai
Agreed that this post presents the altruistic case. I discuss both the money and status points in the "career capital" paragraph (though perhaps should have factored them out).
4Ruby
This post is comprehensive, but I think "safetywashing" and "AGI is inherently risky" are placed far too close to the end and get too little treatment, as I think they're the most significant reasons against. This post also makes no mention of race dynamics and how contributing to them might outweigh the rest, and as RyanCarey says elsethread, doesn't talk about other temptations and biases that push people towards working at labs and would apply even if it was on net bad.

Imagine that you’re looking for buried treasure on a large desert island, worth a billion dollars. You don’t have a map, but a mysterious hermit offers you a box with a button to help find the treasure. Each time you press the button, it will tell you either “warmer” or “colder”. But there’s a catch. With probability .0000000000000000000000000000008, the box will tell you the truth about whether you’re closer than you were last time you pressed. But with the remaining probability of .9999999999999999999999999999992, the box will make a random guess between “warmer” and “colder”. Should you pay $1 for this box?
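A quick back-of-the-envelope check (my sketch; the sample-complexity heuristic is an assumption, not from the post): each press answers truthfully with probability ε ≈ 8×10⁻³¹, so a press is a binary signal biased toward the truth by only ε/2, and distinguishing that bias from pure noise takes on the order of 1/ε² presses.

```python
# Back-of-the-envelope value of the "warmer/colder" box.
# Assumption (mine, not the post's): detecting a bias of epsilon/2 in a
# binary signal needs on the order of 1/epsilon**2 independent samples.
epsilon = 8e-31                 # probability of a truthful answer
presses_needed = epsilon ** -2  # ~1.6e60 presses for a reliable signal
print(f"presses needed: ~{presses_needed:.1e}")
# At any physically realistic press rate this is hopeless, so the box's
# expected value is effectively zero and the $1 price is a bad deal.
```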

Keep this in mind as we discuss the closely related problem of parity learning.

In my experience of interacting with the ML and interpretability communities, the majority of people...

2jake_mendel
Strong upvoted. I think the idea in this post could (if interpreted very generously) turn out to be pretty important for making progress on the more ambitious forms of interpretability. If we/the AIs are able to pin down more details about what constitutes a valid learning story or a learnable curriculum, and tie that to the way gradient updates can be decomposed into signal on some circuit and noise on the rest of the network, then it seems like we should be able to understand each circuit as the endpoint of a training story, and each part of the training story should correspond to a simple modification of the circuit that adds some more complexity. This is potentially better for interpretability than if it were easy for networks to learn huge chunks of structure all at once. How optimistic are you about there being general insights to be had about the structure of learnable curricula and their relation to networks' internal structure?

Thanks! I definitely believe this, and I think we have a lot of evidence for this in both toy models and LLMs (I'm planning a couple of posts on this idea of "training stories"), and also theoretical reasons in some contexts. I'm not sure how easy it is to extend the specific approach used in the proof for parity to a general context. I think it inherently uses the fact of orthogonality of Fourier functions on boolean inputs, and understanding other ML algorithms in terms of nice orthogonal functions seems hard to do rigorously, unless you either make some... (read more)

1Aprillion
Parity in computing is whether the count of 1s in a binary string is even or odd; e.g. '101' has two 1s => even parity. (To output 0 for even parity, XOR all bits together, like 1^0^1; to output 1 instead, XOR that result with 1.) The parity problem (if I understand it correctly) is about the minimum number of data samples per input length a learning algorithm needs to figure out that a mapping from a binary input to a single output bit is computing XOR parity and not something else (e.g. whether an integer is even/odd, or whether there is a pattern in a wannabe-random mapping, ...). The conclusion seems to be that you need exponentially more samples for linearly longer inputs .. unless you can figure out from other clues that you need to calculate parity, in which case you just implement parity for any input size and don't need any additional sample data. (FTR: I don't understand the math here, I am just pattern matching to the usual way this kind of problem goes.)
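To make the function in question concrete, here is a minimal sketch of XOR parity (my illustration; only the '101' example comes from the comment above):

```python
from functools import reduce
from operator import xor

def parity(bits: str) -> int:
    """Return 1 if the binary string contains an odd number of 1s, else 0."""
    return reduce(xor, (int(b) for b in bits))

assert parity("101") == 0   # two 1s  -> even parity, as in the comment above
assert parity("1011") == 1  # three 1s -> odd parity

# In the learning-theoretic version, the label is the parity of an unknown
# *subset* of the input bits, and the learner must identify that subset
# from (input, label) samples alone.
def subset_parity(bits: str, subset: tuple[int, ...]) -> int:
    return reduce(xor, (int(bits[i]) for i in subset))

assert subset_parity("1011", (0, 3)) == 0  # bits 1 and 1 -> even
```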

but I recently tried again to see if it could learn at runtime not to lose in the same way multiple times. It couldn't. I was able to play the same strategy over and over again in the same chat history and win every time.

I wonder if having the losses in the chat history would instead be training/reinforcing it to lose every time.

3tangerine
Thank you for the reply! I’ve actually come to a remarkably similar conclusion as described in this post. We’re phrasing things differently (I called it the “myth of general intelligence”), but I think we’re getting at the same thing. The Secret of Our Success has been very influential on my thinking as well. This is also my biggest point of contention with Yudkowsky’s views. He seems to suggest (for example, in this post) that capabilities are gained from being able to think well and a lot. In my opinion he vastly underestimates the amount of data/experience required to make that possible in the first place, for any particular capability or domain. This speaks to the age-old (classical) rationalism vs empiricism debate, where Yudkowsky seems to sit on the rationalist side, whereas it seems you and I would lean more to the empiricist side.  
2Noosphere89
I think The Secret of Our Success goes too far, and I'm less willing to rely on it than you, but I do think it got at least a significant share of how humans learn right (like 30-50% at minimum).
1O O
It might just be a perception problem. LLMs don't really seem to have a good understanding of a letter being next to another one yet, or of what a diagonal is. If you look at ARC-AGI with o3, you see it doing worse as the grid gets larger, while humans don't have the same drawback.

EDIT: Tried on o1 pro right now. Doesn't seem like a perception problem, but it still could be. I wonder if it's related to being a successful agent. It might not model a sequence of actions on the state of a world properly yet. It's strange that this isn't unlocked with reasoning.
9Adam Shai
I've been trying to get my head around how to theoretically think about scaling test time compute, CoT, reasoning, etc. One frame that keeps popping into my head is that these methods are a type of un-amortization.

In a more standard inference amortization setup one would e.g. train directly on question/answer pairs without the explicit reasoning path between the question and answer. In that way we pay an up-front cost during training to learn a "shortcut" between questions and answers, and then we can use that pre-paid shortcut during inference. We call that amortized inference. The current techniques for using test time compute do the opposite: we pay costs during inference in order to explicitly capture the path between question and answer.

Uncertainties and things I would like to see:

* I'm far from an expert in amortization and don't know if this is a reasonable use of the concept.
* Can we use this framing to make a toy model of using test time compute? I'd really like the theoretically minded style of interp I do to keep up with current techniques.
* If we had a toy model, I could see getting theoretical clarity on the following:
  * What's the relation between explicit reasoning and internal reasoning?
  * What does it mean for CoT to be "faithful" to the internals?
  * What features and geometric structures underlie reasoning?
  * Why is explicit reasoning such a strong mechanism for out-of-distribution generalization?
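As a toy illustration of this framing (my own sketch; all names are hypothetical, and multi-digit addition is just a stand-in task): the amortized model pays up front to memorize question-to-answer shortcuts, while the un-amortized approach spends inference compute walking the explicit intermediate path, which is also what buys the out-of-distribution generalization asked about above.

```python
# Toy contrast between amortized inference and test-time ("un-amortized")
# reasoning. All names here are illustrative, not from the original comment.

amortized_shortcuts = {("23", "45"): "68"}  # Q->A pairs "paid for" during training

def amortized_infer(a: str, b: str) -> str:
    """One cheap shortcut lookup: no intermediate steps, fails off-distribution."""
    return amortized_shortcuts.get((a, b), "unknown")

def unamortized_infer(a: str, b: str) -> str:
    """Pay at inference time: explicitly walk the digit-by-digit path
    (the 'chain of thought'), which generalizes to unseen inputs."""
    ra, rb, carry, out = a[::-1], b[::-1], 0, []
    for i in range(max(len(ra), len(rb))):
        da = int(ra[i]) if i < len(ra) else 0
        db = int(rb[i]) if i < len(rb) else 0
        carry, digit = divmod(da + db + carry, 10)
        out.append(str(digit))
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(amortized_infer("23", "45"))      # '68'      (memorized shortcut)
print(amortized_infer("123", "456"))    # 'unknown' (off-distribution)
print(unamortized_infer("123", "456"))  # '579'     (explicit path still works)
```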
2Dmitry Vaintrob
I feel like the term "amortization" in ML/CS has a couple of meanings. Do you just mean redistributing compute from training to inference? I think this is an interesting model, but I also think that part of the use of CoT is more specific to the language/logic context: to literally think step by step (which sometimes lets you split problems into subproblems). In some limit, there would be exponentially few examples in the training data of directly "thinking n steps ahead", so a transformer wouldn't be able to learn to do this at all (at least without some impressive RL). Imagine training a chess-playing computer to play chess by only looking at every 10th move of a chess game: probably with enough inference power, a very powerful system would be able to reconstruct the rules of chess as the best way of making sense of the regularities in the information, but this is in some sense exponentially harder than learning from looking at every move.

Ah I think that the notion of amortized inference that you're using encapsulates what I'm saying about chess. I'm still a little confused about the scope of the concept though -- do you have a good cached explanation?

2Noosphere89
I think a crux here is I genuinely don't think that we'd inevitably destroy ourselves or create a permanent dystopia with ASI by default (assuming it's controlled/aligned, which I think is pretty likely), but I do think it's reasonably plausible, so the main thing I'm objecting to is the certainty involved here, rather than its plausibility.

The other thing I'd disagree with is this statement, in that I think the default outcome is we do avoid being paperclipped or worse by human-uncontrolled AGIs, mostly due to the alignment problem being noticeably easier to solve than it looked 10 years ago, combined with capabilities progress being slow and spiky enough in favorable directions that something like the AI control agenda is actually workable to get humans controlling even reasonably capable AI by default.

A moderate disagreement that isn't a crux for me, but is illuminating: I actually disagree with this, with caveats. I do think a lot of people tend to assume magical results out of, say, genetic engineering, but I do think that the tradeoffs that made sense 200,000 years ago no longer apply nearly as well, and whether anything that is augmented enough to do substantially better than humanity is a human will ultimately depend on your definition of what counts as humanity. Most of the gains from augmentation are probably due to different tradeoffs, IMO.
2jbash
I don't think it's inevitable, but I do think it's the expected outcome. I agree I'm more suspicious of humans than most people, but obviously I also think I'm right.

People wig out when they get power, even collectively. Trying to ride herd on an AxI is bound to generate stress, tax cognitive capacities, and possibly engender paranoia. Almost everybody seems to have something they'd do if they were King of the World that a substantial number of other people would see as dystopian. One of the strong tendencies seems to be the wish to universalize rightthink, and real mind control might become possible with plausible technology. Grand Visions, moral panics, and purity spirals often rise to pandemic levels, but are presently constrained by being impossible to fully act on. And once you have the Correct World Order on the Most Important Issue, there's a massive impulse to protect it regardless of any collateral damage.

I'm really unconvinced of that. I think people are deceived by their ability to get first-order good behavior in relatively constrained circumstances. I'm definitely totally unconvinced that any of the products that are out there now are "aligned" with anything importantly useful, and they are definitely easy mode. Also, that's without annoying complications like having to expect the model to advise you on things you literally can't comprehend.

I can believe that you and an ASI might end up agreeing on something, but when the ASI can't convey all the information you'd need to have a truly informed opinion, who's aligned with whom? How is it supposed to avoid manipulating you, no matter whether it wants to, if it has to reduce a set of ideas that fundamentally won't fit into your head into something you can give it an opinion on?

Mind you, I don't know how to do "friendliness" any more than I know how to do "intent alignment". But I know which one I'd pick.

[Oh, and on edit to be clear, what I was asking for with the original post was not so much to

It was to stop treating any solution that didn't involve human control as axiomatically unacceptable, without regard to other outcomes.

The issue is that it's unclear if it's acceptable, so should be avoided if at all possible, pending more consideration, and in principle there is more time for that than what's relevant for any other concerns that don't involve the risk of losing control in a less voluntary way. The revealed preference looks the same as finding it unacceptable to give up the potential for human control, but the argument is different, so ... (read more)

2Vladimir_Nesov
That's the only real constraint, everything else deserves much more consideration than going with what currently seems correct (where that involves giving up potential for control), however hopeless engaging in such consideration currently seems.

I want to come to this! Sent you an e-mail. :)