All of roystgnr's Comments + Replies

In economics, "we can model utility as logarithmic in wealth", even after adding human capital to wealth, feels like a silly asymptotic approximation that obviously breaks down in the other direction as wealth goes to zero and modeled utility to negative infinity.

In cosmology, though, the difference between "humanity only gets a millionth of its light cone" and "humanity goes extinct" actually does feel bigger than the difference between "humanity only gets a millionth of its light cone" and "humanity gets a fifth of its light cone"; not infinitely bigger,... (read more)

It's hard to apply general strategic reasoning to anything in a single forward pass, isn't it?  If your LLM has to come up with an answer that begins with the next token, you'd better hope the next token is right.  IIRC this is the popular explanation for why LLM output seems to be so much better when you just add something like "Let's think step by step" to the prompt.

Is anyone trying to incorporate this effect into LLM training yet?  Add an "I'm thinking" and an "I'm done thinking" to the output token set, and only have the main "predict t... (read more)

This is also basically an idea I had - I actually made a system design and started coding it, but haven't made much progress due to lack of motivation... Seems like it should work, though

I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".

The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else", the sooner it repurposes those atoms the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and "soo... (read more)

Thanks, but I am not convinced that the first AI that turns against humans and wins automatically has to be an AI that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn't the first AI that moves against humankind be one that controls a large part of the US nukes while not being able to manipulate germs?

That was astonishingly easy to get working, and now on my laptop 3060 I can write a new prompt and generate another 10-odd samples every few minutes.  Of course, I do mean 10 odd samples: most of the human images it's giving me have six fingers on one hand and/or a vaguely fetal-alcohol-syndrome vibe about the face, and none of them could be mistaken for a photo or even art by a competent artist yet.  But they're already better than any art I could make, and I've barely begun to experiment with "prompt engineering"; maybe I should have done that ... (read more)

we still need to address ocean acidification

And changes in precipitation patterns (I've seen evidence that reducing solar incidence is going to reduce ocean evaporation, independent of temperature).

There's also the "double catastrophe" problem to worry about. Even if the median expected outcome of a geoengineering process is decent, the downside variance becomes much worse.

I still suspect MCB is our least bad near- to medium-term option, and even in the long term the possibility of targeted geoengineering to improve local climates is a... (read more)

mako yass (+3, 4y)
Of course, it may help that the way MCB reduces solar incidence is mainly through artificially increasing ocean evaporation. But it would be good to make sure of that.
Alex has not skipped a grade or been put in some secret fast-track program for kids who went to preschool, because this does not exist.

Even more confounding: my kids have been skipping kindergarten in part because they didn't go to preschool. My wife works from home, and has spent a lot of time teaching them things and double-checking things they teach themselves.

Preschools don't do tracking any more than grade schools, so even if in theory they might provide better instruction than the average overworked parent(s), the output will be 100% totally... (read more)

Gah, of course you're correct. I can't imagine how I got so confused but thank you for the correction.

You don't need any correlation between X and Y to have E[XY]≠E[X]E[Y]. Suppose both variables are 1 with probability .5 and 2 with probability .5; then their mean is 1.5, but the mean of their products is 2.25.

Indeed, each has a mean of 1.5; so the product of their means is 2.25, which equals the mean of their product. We do in fact have E[XY]=E[X]E[Y] in this case. More generally we have this iff X and Y are uncorrelated, because, well, that's just how "uncorrelated" in the technical sense is defined. I mean if you really want to get into fundamentals, E[XY]-E[X]E[Y] is not really the most fundamental definition of covariance, I'd say, but it's easily seen to be equivalent. And then of course either way you have to show that independent implies uncorrelated. (And then I guess you have to do the analogues for more than two, but...)
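For anyone who wants to see it concretely, here's a quick sanity check of the example above (a sketch; the distribution is the one described, with the two draws independent):

```python
from itertools import product

# X and Y each take value 1 or 2 with probability 0.5, independently.
vals = [(1, 0.5), (2, 0.5)]

E_X = sum(v * p for v, p in vals)  # 1.5
E_XY = sum(x * y * px * py
           for (x, px), (y, py) in product(vals, vals))  # 2.25

# Independence implies uncorrelated, so the product of the means
# equals the mean of the product here.
print(E_X * E_X, E_XY)  # both 2.25
```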

Not quite. Expected value is linear but doesn't commute with multiplication. Since the Drake equation is pure multiplication, you could use point estimates of the means in log space and sum those to get the mean in log space of the result, but even then you'd *only* have the mean of the result, whereas what would really be a "paradox" is if the probability that we're alone, P(N = 1), turned out to be tiny.

The authors grant Drake's assumption that everything is uncorrelated, though.
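A toy Monte Carlo makes the point concrete. The log-uniform ranges below are made up for illustration, not the actual Drake parameter estimates; the point is only that the mean of a product of heavy-tailed factors can be large while most of the probability mass still sits below 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Three hypothetical Drake-style factors, each log-uniform over
# several orders of magnitude (invented ranges, for illustration).
f1 = 10 ** rng.uniform(-3, 2, n)
f2 = 10 ** rng.uniform(-4, 1, n)
f3 = 10 ** rng.uniform(-2, 3, n)

N = f1 * f2 * f3

# The mean is dominated by rare huge draws, yet most draws are tiny.
print(f"mean(N)  = {N.mean():.1f}")
print(f"P(N < 1) = {(N < 1).mean():.2f}")
```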
the thing you know perfectly well people mean when they say "chemicals"

I honestly don't understand what that thing is, actually.

To use an example from a Facebook post I saw this week:

Is P-Menthane-3,8-diol (PMD) a chemical? What about oil from the lemon eucalyptus tree? Oil of Lemon Eucalyptus is typically refined until it's 70% PMD instead of 2%; does that turn it into a chemical? What if we were to refine it all the way to 100%? What if, now that we've got 100% PMD, we just start using PMD synthesized at a chemical plant inst... (read more)

Perceived chemical-ness is a very rough heuristic for the degree of optimization a food has undergone for being sold in a modern economy (see elsewhere for why this might be something you want to avoid). Very, very rough--you could no doubt list examples of 'non-chemicals' that are more optimized than 'chemicals' all day, as well as optimizations that are almost certainly not harmful. And yet I'd wager the correlation is there.

It's actually an implicit two-place predicate. Part of what's meant by "chemical" is that it's suspicious, and whether something's suspicious or not depends on what you know about it. How things are labelled on food packages is related to their safety in such a way that treating "P-Menthane-3,8-diol" as more suspicious than "lemon eucalyptus extract" is actually correct.
I honestly don't understand what that thing is, actually.

This was also my first response when reading the article, but on second glance I don't think that is entirely fair. The argument I want to convey with "Everything is chemicals!" is something along the lines of "The concept that you use the word chemicals for is ill-defined and possibly incoherent and I suspect that the negative connotations you associate with it are largely undeserved.", but that is not what I'm actually communicating.

Suppose I successfully convinc... (read more)

So, does 1+ω make sense? It does, for the ordinals and hyperreals only.

It makes sense for cardinals (the size of "a set of some infinite cardinality" unioned with "a set of cardinality 1" is "a set with the same infinite cardinality as the first set") and in real analysis (if lim f(x) = infinity, then lim f(x)+1 = infinity) too.

What about -1+ω? That only makes sense for the hyperreals.

And for cardinals (the size of the set difference between "a set of some infinite cardinality" and "a subset of one element" is the same infinite cardinality) and in real analysis (if lim f(x) = infinity, then lim -1+f(x) = infinity) too.

Have rephrased as "So, does 1+ω make sense as something different from ω?".
Scott Garrabrant (+5, 5y)
The cardinal set difference one is not well defined. If I remove the evens from the integers, I have infinitely many left over. If I remove the integers from the integers, I have nothing. The limits are also not well defined with addition and subtraction, as you can add a function that goes to infinity with one that goes to negative infinity and get all sorts of stuff. Hyperreals are what you get when you take the limits and try to make them well defined under that stuff.

I believe the answer to your second question is probably technically "yes"; if there's any way in which AZ mispredicts relative to a human, then there's some Ensemble Learning classifier that weights AZ move choices with human move choices and performs better than AZ alone. And because Go has so many possible states and moves at each state, humans would have to be much, much, much worse at play overall for us to conclude that humans were worse along every dimension.

However, I'd bet the answer is practically "no". If Alp... (read more)

Oh, well in that case the point isn't subtly lacking, it's just easily disproven. Given any function from I^N to R, I can take the tensor product with cos(k pi x) and get a new function from I^{N+1} to R which has k times as many non-globally-optimal local optima. Pick a decent k and iterate, and you can see the number growing exponentially with higher dimension, not approaching 0.

Perhaps there's something special about the functions we try to optimize in deep learning, a property that rules out such cases? That could be. But you'... (read more)
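A numerical sketch of that construction (my own choice of f and k, counting strict interior extrema on a grid, so the exact counts depend on those choices):

```python
import numpy as np

def count_extrema(z):
    """Count strict interior local maxima and minima on a 1D or 2D grid."""
    if z.ndim == 1:
        c = z[1:-1]
        is_max = (c > z[:-2]) & (c > z[2:])
        is_min = (c < z[:-2]) & (c < z[2:])
        return int(is_max.sum() + is_min.sum())
    c = z[1:-1, 1:-1]
    nbrs = [z[:-2, 1:-1], z[2:, 1:-1], z[1:-1, :-2], z[1:-1, 2:],
            z[:-2, :-2], z[:-2, 2:], z[2:, :-2], z[2:, 2:]]
    is_max = np.all([c > n for n in nbrs], axis=0)
    is_min = np.all([c < n for n in nbrs], axis=0)
    return int(is_max.sum() + is_min.sum())

x = np.linspace(0, 1, 401)
f = np.sin(5 * np.pi * x)                        # 5 interior extrema on [0, 1]
k = 4
g = f[:, None] * np.cos(k * np.pi * x)[None, :]  # tensor product with cos(k pi y)

print(count_extrema(f), count_extrema(g))
```

Here the product picks up one extremum per interior extremum of cos(4πy) (k−1 = 3 of them), tripling the count from 5 to 15; larger k multiplies it further.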

I definitely intended the implied context to be 'problems people actually use deep learning for,' which does impose constraints which I think are sufficient. Certainly the claim I'm making isn't true of literally all functions on high dimensional spaces. And if I actually cared about all functions, or even all continuous functions, on these spaces then I believe there are no free lunch theorems that prevent machine learning from being effective at all (e.g. what about those functions that have a vast number of huge oscillations right between those two points you just measured?!)

But in practice deep learning is applied to problems that humans care about. Computer vision and robotics control problems, for example, are very widely used. In these problems there are some distributions of functions that empirically exist, and a simple model of those types of problems is that they can be locally approximated over an area with positive size by Taylor series at any point of the domain that you care about, but these local areas are stitched together essentially at random. In that context, it makes sense that maybe the directional second derivatives of a function would be independent of one another and rarely would they all line up. Beyond that I'd expect that if you impose a measure on the space of such functions in some way (maybe limiting by number of patches and growth rate of power series coefficients) then the density of functions with even one critical point would quickly approach zero, even while infinitely many such functions exist in an absolute sense.

I got a little defensive thinking about this since I felt like the context of 'deep learning as it is practiced in real life' was clear, but looking back at the original post it maybe wasn't outlined in that way. Even so I think your reply feels disingenuous because you're explicitly constructing adversarial examples rather than sampling functions from some space to suggest that functions with many local optima ar
a Leviathan could try to transcend/straddle these fruits/niches and force them upward into a more Pareto optimal condition, maybe even into the non-Nash E. states if we're extra lucky.

Remember that old Yudkowsky post about Occam's Razor, wherein he points out how "a witch did it" sounds super-simple, but the word "witch" hides a ton of hidden complexity? I'm pretty sure you're doing the same thing here with the word "could". Instead of trying to picture what an imaginary all-powerful leader could do, im... (read more)

In the BFR announcement, Musk promised tickets for intercontinental rocket travel at the price of a current economy ticket.

"Full-fare" economy, which is much more expensive than even the "typical" international economy seat tickets you're thinking of, but yes, and even outsiders don't think it's impossible. It is very sensitive to a lot of assumptions - third party spreadsheets I've seen say low thousands of dollars per ticket is possible, but it wouldn't take many assumptions to fall short before prices j... (read more)

Failing to follow good strategic advice isn't even the worst failure mode here; unless you're lucky you may not be given any strategic advice at all in response to a tactical question. If nobody notices that you're committing the XY Problem, then you may be given good advice for the tactical problem you asked about, follow it, and end up worse off than you were before with respect to the strategic problem you should have been asking about instead.

Said Achmiz (+3, 6y)
This is true. The converse problem also exists, however.

This argument doesn't seem to take into account selection bias.

We don't get into a local optimum because we picked a random point and wow, it's a local optimum, what are the odds!?

We get into a local optimum because we used an algorithm that specifically *finds* local optima. If they're still there in higher dimensions then we're still liable to fall into them rather than into the global optimum.

The point is that they really are NOT still there in higher dimensions.

Is there some more general limit to power begetting power that would also affect AGI?

The only one which immediately comes to mind is inflexibility. Often companies shrink or fail entirely because they're suddenly subject to competition armed with a new idea. Why do the new ideas end up implemented by smaller competitors? The positive feedback of "larger companies have more people who can think of new ideas" is dominated by the negative feedbacks of "even the largest company is tiny compared to its complement" and "companies

... (read more)

You should provide some more explicit license, if you don't want to risk headaches for others later. "yes [It's generally intended to be Open Source]" may be enough reassurance to copy the code once, but "yes, you can have it under the new BSD (or LGPL2.1+, or whatever) license" would be useful to have in writing in the repository in case others want to create derived works down the road.

Thanks very much for creating this!

Added the text of the "Unlicense" to the script file "toyscript.txt".

The simple illustration is geometry; defending a territory requires 360 degrees * 90 degrees of coverage, whereas the attacker gets to choose their vector.

But attacking a territory requires long supply lines, whereas defenders are on their home turf.

But defending a territory requires constant readiness, whereas attackers can make a single focused effort on a surprise attack.

But attacking a territory requires mobility for every single weapons system, whereas defenders can plug their weapons straight into huge power plants or incorporate mountains into th... (read more)

None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error. Any given popular military authority can be read, but if you'd like a specialist in defense try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; Von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.

$2 debt squared does make sense, though, it is $4 and no debt.

No, it is $$4.

If that's what you meant to write, and it's also obvious to you that you could have written 40000¢¢ instead and still been completely accurate, then I'd love to know if you have any ideas of how this computation could map to anything in the real world. I would have thought that "kilogram meters squared per second cubed" was utter nonsense if anyone had just tried to show me the arithmetic without explaining what it really meant.

If that's not what you meant to write, o... (read more)
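If it helps, here's a minimal units-tracking sketch (a hypothetical toy, not a real units library) showing that squaring $2 yields 4 square dollars, which is the same quantity as 40000 square cents:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    value: float
    dim: int = 1  # power of the currency unit: 1 = $, 2 = $^2, ...

    def __mul__(self, other):
        # Multiplying quantities multiplies values and adds unit exponents.
        return Money(self.value * other.value, self.dim + other.dim)

    def in_cents(self):
        # $1 = 100 cents, so each power of $ converts by a factor of 100.
        return Money(self.value * 100 ** self.dim, self.dim)

d = Money(2.0)   # $2
sq = d * d       # ($2)^2 = 4 $^2
print(sq)        # Money(value=4.0, dim=2)
print(sq.in_cents())  # 40000 cent^2
```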

Facebook has privacy settings, such that anyone who wants to limit their posts' direct visibility can.

Whether you should take someone else's settings as explicit consent should probably vary from person to person, but I think the "if he didn't want it to be widely seen he wouldn't have set it up to be widely seen" heuristic is probably accurate when applied to EY, even if it's not applicable to every Joe Grandpa.

Even in the Joe Grandpa case, it doesn't seem like merely avoiding citing and moving on is a good solution. If you truly fear that some... (read more)

EY was explicit that he posts certain views to Facebook instead of posting them to LW because of the reputational cost that comes from posting far-out ideas on LW. That's why he deleted his last big post (the April 1st post).

The form of the pathologies makes a difference, no?

IIRC the worst pathology with IRV in a 3 way race is basically: you can now safely vote for a third party who can't possibly win, but if they show real support then it's time to put one of the two Schelling party candidates in front of them. So it's not worse than plurality but it's not much of an improvement. Plus, with more candidates IRV becomes more and more potentially insane.

With Schulze or Ranked Pairs the pathology is DH3: If a third party can win, you can often help them win by voting a "da... (read more)

That's very, very hard to pull off in practice, and it's obvious to anyone who descends from the omniscient view to an actual campaign. Suppose there's the A party (45%), the B party (40%), and the generally-disliked C party (15%). In order for B not to be the Condorcet winner already, but to make it as easy as possible for DH3, let's suppose that C is evenly split on the A vs. B issue. This is nearly the ideal case for DH3; picking a lower- or higher-scoring party would present a somewhat different set of challenges.

The DH3 strategy is for B to lie and say that C is better than A, so as to create a cycle, and then win it. A has a lead over B of 45-40=5%, and a lead over C of 85-15=70%. B also has a lead over C of 85-15=70%. B needs to get A to lose to C by an amount greater than A's lead over B, but less than B's lead over C. Each percentage point of B's strategic votes of C over A shifts A's lead over C down by 2%. So, if they can get that +70 down to -7 or something, then A's lead over B will be the smallest one.

But look at what actually has to happen for B to win this cycle:

1) First, they need to successfully create the cycle, which means getting a huge number of their people on board (more than 15/16 of them). This cannot be done secretly.

2) A voters need to reliably prefer B over C. If B was campaigning in favor of C, then some A voters could be lured away legitimately. If B was issuing ballot instructions to put C above A, some A voters could think that's so bad they will put B at the bottom, again legitimately. The entire strategy relies on A voters cooperating with the B party in stealing the election, and B's pursuing this strategy could well make them worse in A's eyes than C would have been. Each percentage point of A voters who votes for C over B pushes B's lead over C down from 70% by 2%. Supposing that B got 90% of their voters on board with the strategy (good luck with that), then C's lead over A would be only 2%: still the smallest defeat in the cycle, so A would win anyway. A voters would have a smaller swing to achieve
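The pairwise arithmetic is easy to check mechanically. A sketch of the same 45/40/15 scenario, where `strategic_frac` is the fraction of B voters burying A below C (note the sincere A-over-C margin comes out to 85-15=70 points, and the break-even fraction is 15/16):

```python
def margins(strategic_frac):
    """Pairwise margins (A over B, A over C, B over C) in percentage points."""
    A, B, C = 45.0, 40.0, 15.0
    b_strat = B * strategic_frac   # B>C>A ballots (strategic)
    b_honest = B - b_strat         # B>A>C ballots (sincere)
    c_a, c_b = C / 2, C / 2        # C voters split evenly on A vs B
    a_over_b = (A + c_a) - (B + c_b)              # always +5
    a_over_c = (A + b_honest) - (C + b_strat)     # falls 2 pts per pt of strategy
    b_over_c = (A + B) - C                        # +70
    return a_over_b, a_over_c, b_over_c

for frac in (0.0, 0.9, 15/16, 1.0):
    print(frac, margins(frac))
```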

There's a high-stakes variational calculus problem here: for what seasonal temperature profile do we get the best long-term minimum for the sum of "deaths due to extreme cold" and "deaths due to tropical diseases whose vector insects are stymied by extreme cold"?

The magnitude of the variation isn't nearly the same in the O2 vs CO2 cases. "16% O2 reduction is lost in the noise" is devastating evidence against the theory "0.2% O2 reduction has significant cognitive effects", but "16% CO2 reduction is lost in the noise" is weaker evidence against the theory "66% and 300% CO2 increases have significant cognitive effects".

I'm not arguing with you about implausible effect sizes, though. We should especially see significant seasonal effects in every climate where people typically seal up buildings against the cold or the heat for months at a time.

You don't know the effect because the existing experiments do not vary or hold constant oxygen levels. All you see is the net average effect, without any sort of partitioning among causes.

Existing experiments do vary oxygen levels systematically, albeit usually unintentionally, by geography. Going up 100 meters from sea level gives you a 1% drop in oxygen pressure and density. If that was enough for a detectable effect on IQ, then even the 16% lower oxygen levels around Denver should leave Coloradans obviously handicapped. IIRC altitude sickness does show a strong effect on mental performance, but only at significantly lower air pressures still.
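For a rough check of those numbers, the isothermal barometric formula (with an assumed ~8.4 km scale height, a standard textbook approximation) gives about the right magnitudes:

```python
import math

H = 8400.0  # assumed atmospheric scale height in meters

def o2_fraction_remaining(altitude_m):
    """Fraction of sea-level O2 partial pressure remaining at altitude,
    using the simple exponential (isothermal) barometric model."""
    return math.exp(-altitude_m / H)

print(1 - o2_fraction_remaining(100))   # roughly a 1% drop per 100 m
print(1 - o2_fraction_remaining(1609))  # Denver: on the order of the 16% figure
```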

And they also vary CO2 levels systematically by geography as well; if that was enough for a detectable effect on IQ, then the lower CO2 levels around Denver should make the rest of us at lower altitudes, such as sea level, look obviously handicapped. If you believe the altitude point refutes effects of oxygen, then it must refute effects of carbon dioxide and nitrogen as well... Which is part of my original point about implausible effect sizes: the causal effect is underidentified, but whether it's oxygen or CO2 or nitrogen, it is so large that we should be able to see its repercussions all over in things like the weather (or altitude, yes).

Pakistan, for example, is so dysfunctional and clannish that iodization and polio programs have had serious trouble making any headway.

To be fair, that's not entirely Pakistanis' fault. Is paranoia about Communist fluoridation plots more or less dysfunctional than paranoia about CIA vaccination plots? Does it make a difference that only the latter has a grain of truth to it?


Fluoridation of drinking water has never been shown to be safe or effective in randomized trials and you could never get approval from the FDA today to use it. The claimed benefits are pretty small in both health and monetary terms and would be wiped out by even a fraction of an IQ point loss; the expected benefit is quite small and so conspiracy theorists incorrectly killing fluoridation would not cause much regret.

Polio vaccines on the other hand have been shown to be safe & effective, and even if the CIA were using the polio program to kill doz... (read more)

Less, because in the former case your kids could have a few more cavities and in the latter case your kids could grow up dumb and/or crippled.

This looks like a special case of a failure of intentionality. If a child knows where the marble is, they've managed first-order intentionality, but if they don't realize that Sally doesn't know where the marble is, they've failed at second order.

The orders go higher, though, and it's not obvious how much higher humans can naturally go. If

Bob thinks about "What does Alice think about Bob?" and on rare occasions "What does Alice think Bob thinks about Alice?" but will not organically reason "What does Alice think Bob thinks abo

... (read more)
That's actually pretty easy: Alice doesn't :-) Obligatory reference: Battle of Wits.

I'd agree that most of the best scientific ideas have been relatively simple... but that's at least partly selection bias.

Compare two possible ideas:

"People with tuberculosis should be given penicillium extract"

"People with tuberculosis should be given (2S,5R,6R)-3,3-dimethyl-7-oxo-6-(2-phenylacetamido)-4-thia-1-azabicyclo[3.2.0]heptane-2-carboxylic acid"

The first idea is no better than the second. But we'd have taken forever to come up with the second, complex idea by just sifting through all the equally-chemically-complex alternatives... (read more)

And possibly the simple ideas which look true are the shadows of the more complicated truth. And possibly they are the only path to the truths which we can find. The puzzle in science is not really 'the unreasonable effectiveness of mathematics'. It is 'the unreasonable effectiveness of simple ideas'. And the puzzle, as you say, is that there have always been simple ideas to lead us to the more complicated ideas. But as you point out, that may well have a simple explanation.

(the following isn't off-topic, I promise:)

Attention, people who have a lot of free time and want to found the next reddit:

When a site user upvotes and downvotes things, you use that data to categorize that user's preferences (you'll be doing a very sparse SVD sort of operation under the hood). Their subsequent votes can be decomposed into expressions of the most common preference vectors, and their browsing can then be sorted by decomposed-votes-with-personalized-weightings.

This will make you a lot of friends (people who want to read ramblings about phil... (read more)
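A minimal sketch of the idea, using a plain truncated SVD on a made-up vote matrix (a real site would need sparse, incremental methods, and all the data here is invented):

```python
import numpy as np

# Toy vote matrix: rows = users, columns = items; +1 upvote, -1 downvote,
# 0 unvoted. Two latent "taste" groups, hypothetically.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [-1, -1,  1,  1],
    [ 0, -1,  1,  1],
], dtype=float)

# Truncated SVD: keep the top-k singular vectors as "preference vectors".
U, s, Vt = np.linalg.svd(votes, full_matrices=False)
k = 2
user_prefs = U[:, :k] * s[:k]  # each user as a mix of k taste vectors
item_loads = Vt[:k, :]         # each item's loading on those tastes

# Personalized scores for user 0: rank items by the rank-k reconstruction.
pred = user_prefs[0] @ item_loads
print(np.argsort(-pred))  # items ordered for this user's tastes
```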

There is Omnilibrium, which does the vote SVD-ing thing.
+5 Insightful

I can imagine it. You just have to embed it in a non-Euclidean geometry. A great circle can be constructed from 4 straight lines, and thus is a square, and it still has every point at a fixed distance from a common center (okay, 2 common centers), and thus is a circle.

The four straight lines in your construction don't meet at right angles.

There exists an irrational number which is 100 minus delta where delta is infinitesimally small.

Just as an aside, no there isn't. Infinitesimal non-zero numbers can be defined, but they're "hyperreals", not irrationals.

I don't think this quite fits the Prisoner's Dilemma mold, since certain knowledge that the other player will defect makes it your best move to cooperate; in a one-shot Prisoner's Dilemma your payoff is improved by defecting whether the other player defects or not.

The standard 2x2 game type that includes this problem is Chicken.
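To make the distinction concrete, here's a sketch using standard payoff orderings for the two games (the particular numbers are just illustrative):

```python
# Row player's payoffs, higher is better. In a Prisoner's Dilemma, Defect
# strictly dominates; in Chicken it doesn't: against a known defector,
# your best reply is to cooperate (swerve).
PD      = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
CHICKEN = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0}

def best_reply(game, their_move):
    return max("CD", key=lambda mine: game[(mine, their_move)])

print(best_reply(PD, "D"))       # 'D': defect even against a defector
print(best_reply(CHICKEN, "D"))  # 'C': cooperate against a known defector
```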


The Deceptive Turn Thesis seems almost unavoidable if you start from the assumptions "the AI doesn't place an inhumanly high value on honesty" and "the AI is tested on inputs vaguely resembling the real world". That latter assumption is probably unavoidable, unless it turns out that human values can be so generalized as to be comprehensible in inhuman settings. If we're stuck testing an AI in a sandbox that resembles reality then it can probably infer enough about reality to know when it would benefit by dissembling.

My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications. We're saying there are five people on this track and one person on this other track, with no explanation of why? Unless the answer really is "quantum fluctuations", utilitarianism demands considering the long-term implications of that explanation. My utility function isn't "save as many lives as possible during the next five minutes", it's (still oversimplifying) "save as many lives as po... (read more)

In the least convenient possible world, you happen upon these people and don't know anything about them, their past, or their reasons.
The purpose of the trolley problem is to consider the clash between deontological principles and allowing harm to occur. So the best situation to consider is one which sets up the purest clash possible. Of course, you can always consider multiple variants of the trolley problem if you then want to explore other aspects.
"My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications." While limited knowledge is inconvenient, that's reality. We have limited knowledge. You place your bets and take your chances.

Patrilineal ancestor, not just ancestor. When talking about someone who lived 40 generations ago, there's a huge difference.

Were any of Silver's previous predictions generated by making a list of possibilities, assuming each was a coin flip, multiplying 2^N, and rounding? I get the impression that he's not exactly employing his full statistical toolkit here.

Isolated demands for rigor -- what do you think Adams is doing? (I think he's generating traffic.)

But sure, I agree, that's more of a reasonable prior than an argument. There's more info on the table now.

The obstacle to making a river is usually getting the water uphill to begin with. Regular cloud seeding of moist air currents that would otherwise head out to sea? Modifying land albedo to change airflow patterns? That's all dubious, but I can't think of any other ideas for starting a new river with new water.

If you've got a situation where the water you want to flow is already "uphill", then the technology is simply digging, and if you wanted to do enough of it you could make whole new seas.

Fair point-- I was thinking in terms of something more dramatic involving tectonics.

Careful, as new seas can sometimes backfire horribly or change rapidly depending on local hydrologic, agricultural, or geological conditions.

Hmm... I believe you're correct. It would be hard to revise that, too, without making the "Are you a cop? It's entrapment if you lie!" urban legend into truth. It does feel like "posing as a medical worker" should be considered a crime above and beyond "posing as a civilian".

There are always ways around these things for innovative people. With the cop one, you could playfully tell a potential undercover that they are not allowed to enter your premises. A true cop would be breaking the law if they did; an undercover would not. Alternatively, the potential undercover could be challenged to commit a petty crime like jaywalking.
I wouldn't expect it to apply more strongly during peace than during war, but conducting military or military-intelligence operations under the symbols of humanitarian/aid organizations has such vast externalities. I'd count handing out food/medical supplies/vaccines, not just medical work. Basically, military actions under the guise of amelioration of suffering.

That's one of the most amusing phrases on Wikipedia: "specific contexts such as decision making under risk". In general you don't have to make decisions and/or you can predict the future perfectly, I suppose.

"The feigning of civilian, non-combatant status" is already a subcategory of perfidy, prohibited by the Geneva Conventions. Perfidy is probably the least-prosecuted war crime there is, though.

I was under the impression those rules only applied during an active conflict/war.

Where did "pacifists" and the scare quotes around it come from?

The UFAI debate isn't mainly about military robots.

Intelligence must be very modular - that's what drives Moravec's paradox (problems like vision and locomotion that we have good modules for feel "easy", problems that we have to solve with "general" intelligence feel "hard"), the Wason Selection task results (people don't always have a great "general logic" module even when they could easily solve an isomorphic problem applied to a specific context), etc.

Does this greatly affect the AGI takeoff debate, though? So long as we can't create a module which is itself capa... (read more)

Regardless of the mechanism for misleading the oracle, its predictions for the future ought to become less accurate in proportion to how useful they have been in the past.

"What will the world look like when our source of super-accurate predictions suddenly disappears" is not usually the question we'd really want to ask. Suppose people normally make business decisions informed by oracle predictions: how would the stock market react to the announcement that companies and traders everywhere had been metaphorically lobotomized?

We might not even need... (read more)

I don't know if it's the mainstream of transhumanist thought but it's certainly a significant thread.

Information hazard warning: if your state of mind is again closer to "panic attack" and "grief" than to "calmer", or if it's not but you want to be very careful to keep it that way, then you don't want to click this link.

I read it. Your warning did the opposite of what you intended, and the fact that you posted it at all is an incredible error of judgment. Did you even take ten seconds to think this through? Anyway, the piece wasn't very convincing and I've already considered almost everything that was in it. No real harm done. This time. (Shame on you!)

Isn't using a laptop as a metaphor exactly an example

The sentence could have stopped there. If someone makes a claim like "∀ x, p(x)", it is entirely valid to disprove it via "~p(y)", and it is not valid to complain that the first proposition is general but the second is specific.

Moving from the general to the specific myself, that laptop example is perfect. It is utterly baffling to me that people can insist we will be able to safely reason about the safety of AGI when we have yet to do so much as produce a consumer operating syste... (read more)

Problems with computer operating systems do not do arbitrary things in the absence of someone consciously using an exploit to make them do arbitrary things. If Windows were a metaphor for unfriendly AI, then it would be possible for AIs to halt in situations where they were intended to work, but they would only turn hostile if someone intentionally programmed them to become hostile. Unfriendly AI as discussed here is not someone intentionally programming the AI to become hostile.

If everybody understood the problem, then allowing farmers to keep their current level of water rights but also allowing them to choose between irrigation and resale would be a Pareto improvement. "Do I grow and export an extra single almond, or do I let Nestle export an extra twenty bottles of water?" is a question which is neutral with respect to water use but which has an obvious consistent answer with respect to profit and utility.
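A toy calculation makes the Pareto point. All numbers below are hypothetical placeholders, not real almond or water prices; the point is only that with tradable rights the farmer's capped allotment is spoken for either way, so the choice between uses reduces to comparing profit per gallon while total water use stays fixed.

```python
# Hypothetical numbers only: a farmer with a fixed water right chooses between
# irrigating almonds and reselling the same gallon to a bottler. Water drawn is
# identical either way; only who profits (and by how much) changes.

ALMOND_PROFIT_PER_GAL = 0.10   # assumed $ profit from irrigating with one gallon
RESALE_PRICE_PER_GAL = 0.50    # assumed $ a bottler would pay for that gallon

def best_use(profit_irrigate, price_resell):
    """With tradable rights, pick whichever use of the same gallon pays more."""
    return "resell" if price_resell > profit_irrigate else "irrigate"

print(best_use(ALMOND_PROFIT_PER_GAL, RESALE_PRICE_PER_GAL))  # -> resell
```

Whichever branch wins, the aquifer is indifferent - which is why letting the farmer choose is a Pareto improvement rather than a change in water policy.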

But as is typical, beneficiaries of price controls benefit from not allowing the politicians' electorate to unde... (read more)

I think the public understands that there are farming subsidies and has been, in principle, okay with farming being subsidized since the New Deal.

Using the hypothetical optimal output-sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time.

For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).

Then simulate the observations, using your optimal (O(MCT) = O(n^{2p})) algorithm. Voila! You have the answe... (read more)
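The construction can be sketched as a toy program. Here the NP problem is SUBSET-SUM (an assumed stand-in for "any NP problem" - none of this data is from the comments above), the exponential candidate sweep plays the role of the size-N = O(2^n) universe, and the observer's check is the poly-time verification:

```python
from itertools import product

# Toy instance of the reductio: SUBSET-SUM over n weights (assumed example
# data). The "universe" of size N = O(2^n) verifies all candidate solutions
# in parallel; a single (M = 1) observer of complexity C = O(n^p) re-verifies
# whichever verified candidate it is handed.

weights = [3, 34, 4, 12, 5, 2]
target = 9

def verify(candidate):
    """Poly-time verification: the O(n^p) step each simulated computer runs."""
    return sum(w for w, used in zip(weights, candidate) if used) == target

# The exponentially large "universe": every candidate checked (serially here,
# standing in for the parallel computers of the thought experiment).
solutions = [c for c in product([0, 1], repeat=len(weights)) if verify(c)]

# The observer's observation is just one verified solution - a poly-size output.
first = solutions[0] if solutions else None
print(first, first is not None and verify(first))
```

An output-sensitive simulator charged only for the observer's observations - O(MCT) = O(n^{2p}) - would thus have extracted the answer to an NP problem in polynomial time, despite the exponential work done inside the simulated universe.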

I never claimed "hypothetical optimal output-sensitive approximation algorithms" are capable of universal emulation of any environment/Turing machine using constant resources. The use of the term approximation should have informed you of that. Computers are like brains and unlike simpler natural phenomena in the sense that they do not necessarily have very fast approximations at all scales (due to complexity of irreversibility), and the most efficient inference of one agent's observations could require forward simulation of the recent history of other agents/computers in the system. Today the total computational complexity of all computers in existence is not vastly larger than the total brain complexity, so it is still ~O(MCT).

Also, we should keep in mind that the simulator has direct access to our mental states. Imagine the year is 2100 and you have access to a supercomputer with a ridiculous amount of computation, say 10^30 flops, or whatever. In theory you could use that machine to solve some NP problem - verifying the solution yourself, and thus proving to yourself that you don't live in a simulation which uses less than 10^30 flops. Of course, as the specific computation you performed presumably had no value to the simulator, the simulation could simply slightly override neural states in your mind, such that the specific input parameters you chose were instead changed to match a previous cached input/output pair.