This is a special post for short-form writing by Alexander Gietelink Oldenziel.

Pockets of Deep Expertise 

Why am I so bullish on academic outreach? Why do I keep hammering on 'getting the adults in the room'? 

It's not that I think academics are all Super Smart. 

I think rationalists/alignment people correctly ascertain that most professors don't have much useful to say about alignment & deep learning and often say silly things. They correctly see that much of AI progress is fueled by labs and scale, not ML academia. I am bullish on non-ML academia, especially mathematics, physics, and to a lesser extent theoretical CS, neuroscience, and some parts of ML/AI academia. This is because, while I think 95% of academia is bad and/or useless, there are Pockets of Deep Expertise. Most questions in alignment are close to existing work in academia in some sense - but we have to make the connection!

A good example is 'sparse coding' and 'compressed sensing'. Lots of mech interp work has been rediscovering some of the basic ideas of sparse coding. But there is vast expertise in academia on these topics. We should leverage it!
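As a concrete handle, here is a minimal sketch (my own illustration, not from the post) of classic sparse coding: recovering a sparse code from a dense observation with ISTA, the basic iterative soft-thresholding algorithm from the sparse coding / compressed sensing literature. The dictionary, sizes and regularization strength are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_atoms, n_nonzero = 64, 256, 5
D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms

# Ground-truth sparse code and the dense signal it generates
z_true = np.zeros(n_atoms)
support = rng.choice(n_atoms, n_nonzero, replace=False)
z_true[support] = rng.normal(size=n_nonzero)
y = D @ z_true

def ista(y, D, lam=0.05, n_iter=500):
    """Minimize 0.5*||y - D z||^2 + lam*||z||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - y) / L                            # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft threshold
    return z

z_hat = ista(y, D)
print("true support:     ", sorted(support.tolist()))
print("recovered support:", sorted(np.flatnonzero(np.abs(z_hat) > 1e-2).tolist()))
```

With a 64-dimensional signal and a 5-sparse code over 256 atoms, the L1 penalty should typically recover the right support - exactly the kind of phenomenon mech interp keeps running into.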

Other examples are singular learning theory, computational mechanics, etc.

Abnormalised sampling?
Probability theory talks about sampling from probability distributions, i.e. normalized measures. However, non-normalized measures abound: weighted automata, infra-stuff, uniform priors on noncompact spaces, wealth in logical-inductor-esque math, quantum stuff?? etc.

Most probability-theory constructions go through just fine for arbitrary measures; they don't need the normalization assumption. Except, crucially, sampling.

What does it even mean to sample from a non-normalized measure? What is unnormalized abnormal sampling?

I don't know....
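One partial answer, for the finite-total-mass case only: Markov chain Monte Carlo never needs the normalization constant, so 'sampling from an unnormalized (but normalizable) measure' is routine. A minimal sketch (my own illustration; the target density is made up) is below; the genuinely non-normalizable case is the part that stays mysterious.

```python
import numpy as np

def unnormalized_density(x):
    # Made-up bimodal target, never normalized explicitly
    return np.exp(-(x - 2.0) ** 2) + 0.5 * np.exp(-(x + 2.0) ** 2)

def metropolis_hastings(n_samples=10_000, step=1.0, seed=0):
    """Random-walk Metropolis: only density *ratios* appear, so the
    normalization constant cancels and is never needed."""
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()
        if rng.random() < unnormalized_density(proposal) / unnormalized_density(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

print("sample mean:", metropolis_hastings().mean())
```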

Why don't animals have guns? 

Or why didn't evolution evolve the Hydralisk?

Evolution has found (sometimes multiple times) the camera, general intelligence, nanotech, electrolocation, aerial endurance better than any drone, robots more flexible than any human-made robot, highly efficient photosynthesis, etc.

First of all let's answer another question: why didn't evolution evolve the wheel like the alien wheeled elephants in His Dark Materials?

Is it biologically impossible to evolve?

Well, technically, the flagellum of various bacteria is a proper wheel.

No, the likely answer is that wheels are great when you have roads and suck when you don't. Roads are built by ants to some degree, but on the whole they probably don't make sense for an animal-intelligence species.

Aren't there animals that use projectiles?

Hold up. Is it actually true that there is not a single animal with a gun, harpoon or other projectile weapon?

Porcupines have quills, some snakes spit venom, a type of fish (the archerfish) spits water as a projectile to knock insects off leaves and then eats them. Bombardier beetles can produce an explosive chemical mixture. Skunks use some other chemicals. Some snails shoot harpoons from very c...

3 · Daniel Murfet · 2d
Please develop this question as a documentary special, for lapsed-Starcraft player homeschooling dads everywhere.
1 · nim · 1mo
Animals do have guns. Humans are animals. Humans have guns. Evolution made us, we made guns, therefore guns indirectly exist because of evolution. Or do you mean "why don't animals have something like guns but permanently attached to them instead of regular guns?" There, I'd start with wondering why humans prefer to have our guns separate from our bodies, compared to affixing them permanently or semi-permanently to ourselves. All the drawbacks of choosing a permanently attached gun would also disadvantage a hypothetical creature that got the accessory through a longer, slower selection process.
1 · Tao Lin · 1mo
Another huge missed opportunity is thermal vision. Thermal infrared vision is a gigantic boon for hunting at night, and you might expect e.g. owls and hawks to use it to spot prey hundreds of meters away in pitch darkness, but no animals do (some have thermal sensing, but only extremely short range).
2 · Alexander Gietelink Oldenziel · 1mo
Woah, great example, didn't know about that. Thanks Tao.
3 · quetzal_rainbow · 1mo
Thermal vision for warm-blooded animals has obvious problems with noise.
2 · Alexander Gietelink Oldenziel · 1mo
Care to explain? Noise?
1 · quetzal_rainbow · 1mo
If you are warm, any warm-detectors inside your body will detect mostly you. Imagine if blood vessels in your own eye radiated in visible spectrum with the same intensity as daylight environment.
4 · Alexander Gietelink Oldenziel · 1mo
Can't you filter that out? How do fighter planes do it?
2 · Nathan Helm-Burger · 1mo
Most uses of projected venom or other unpleasant substances seem to be defensive rather than offensive. One reason for this is that it's expensive to make the dangerous substance, and throwing it away wastes it. This cost is affordable if it is used to save your own life, but not easily affordable to acquire a single meal. This life-vs-meal distinction plays into a lot of offense/defense strategy expenses. The hunting options are usually also useful for defense, and they all seem cheaper to deploy: punching mantis shrimp, electric eel, fish spitting water...
My guess is that it's mostly a question of whether the intermediate steps to the evolved behavior are themselves advantageous. Having a path of consistently advantageous steps makes it much easier for something to evolve. Having to go through a trough of worse-in-the-short-term makes things much less likely to evolve. A projectile fired weakly is a cost (energy to fire, energy to produce the firing mechanism, energy to produce the projectile, energy to maintain the complexity of the whole system despite it not being useful yet). Where's the payoff of a weakly fired projectile? Humans can jump that gap by intuiting that a faster projectile would be more effective. Evolution doesn't get to extrapolate and plan like that.
2 · Alexander Gietelink Oldenziel · 1mo
Fair argument. I guess where I'm lost is that I feel I can make the same 'no competitive intermediate forms' argument for all kinds of wondrous biological forms and functions that have evolved, e.g. the nervous system. Indeed, this kind of argument used to be a favorite of ID advocates.
5 · Garrett Baker · 1mo
My naive hypothesis: once you're able to launch a projectile at a predator or prey such that it breaks skin or shell, if you want it to die, it's vastly cheaper to make venom at the ends of the projectiles than to make the projectiles launch fast enough that there's a good increase in the probability the adversary dies quickly.
4 · Alexander Gietelink Oldenziel · 1mo
Why don't lions, tigers, wolves, crocodiles, etc. have venom-tipped claws and teeth? (Actually, apparently many ancestral mammal species did have venom spurs, similar to the male platypus.)
7 · JBlack · 1mo
My completely naive guess would be that venom is mostly too slow for creatures of this size compared with gross physical damage and blood loss, and that getting close enough to set claws on the target is the hard part anyway. Venom seems more useful as a defensive or retributive mechanism than a hunting one.

Reasonable interpretations of Recursive Self Improvement are either trivial, tautological or false?

  1. (Trivial) AIs will do RSI by using more hardware - a trivial form of RSI.
  2. (Tautological) Humans engage in a form of (R)SI when they engage in meta-cognition, i.e. therapy is plausibly a form of metacognition. Meta-cognition is plausibly one of the remaining hallmarks of true general intelligence. See Vanessa Kosoy's "Meta-Cognitive Agents".
    In this view, AGIs will naturally engage in meta-cognition because they're generally intelligent. The...
3 · Vladimir_Nesov · 1mo
SGD finds algorithms. Before the DL revolution, science studied such algorithms. Now, the algorithms become inference without so much as a second glance. With sufficient abundance of general intelligence brought about by AGI, interpretability might get a lot out of studying the circuits SGD discovers. Once understood, the algorithms could be put to more efficient use, instead of remaining implicit in neural nets and used for thinking together with all the noise that remains from the search.
1 · lukehmiles · 1mo
I think the AI will improve (itself) via better hardware and algorithms, and it will be a slog. The AI will frequently need to do narrow tasks where the general algorithm is very inefficient.
2 · Alexander Gietelink Oldenziel · 1mo
As I state in the OP, I don't feel these are nontrivial examples of RSI.
2 · Michaël Trazzi · 1mo
I think most interpretations of RSI aren't useful. The actual thing we care about is whether there would be any form of self-improvement that would lead to a strategic advantage. The fact that something would "recursively" self-improve 12 times or 2 times doesn't really change what we care about. With respect to your 3 points: 1) could happen by using more hardware, but better optimization of current hardware / better architecture is the actually scary part (which could lead to the discovery of "new physics" that could enable an escape even if the sandbox was good enough for the model before a few iterations of the RSI). 2) I don't think what you're talking about in terms of meta-cognition is relevant to the main problem. Being able to look at your own hardware or source code is, though. 3) Cf. what I said at the beginning. The actual "limit" is, I believe, much higher than the strategic-advantage threshold.
2 · niplav · 1mo
:insightful reaction: I give this view ~20%: There's so much more info in some datapoints (curvature, third derivative of the function, momentum, see also Empirical Bayes-like SGD, the entire past trajectory through the space) that seems so available and exploitable!
2 · acertain · 1mo
What about specialized algorithms for problems (e.g. planning algorithms)?
2 · Alexander Gietelink Oldenziel · 1mo
What do you mean exactly? There are definitely domains in which humans have not yet come close to optimal algorithms.
2 · Thomas Kwa · 1mo
What about automated architecture search?
2 · Alexander Gietelink Oldenziel · 1mo
Architectures mostly don't seem to matter, see 3. When they do (like in Vanessa's meta-MDPs) I think it's plausible automated architecture search is simply an instantiation of the algorithm for general intelligence (see 2).

SLT and phase transitions

The morphogenetic SLT story says that during training the Bayesian posterior concentrates around a series of subspaces $W_1, W_2, \dots$ of parameter space with RLCTs $\lambda_1, \lambda_2, \dots$ and losses $L_1, L_2, \dots$. As the size $n$ of the data sample is scaled, the Bayesian posterior makes transitions $W_i \to W_{i+1}$, trading off higher complexity (higher $\lambda_i$) for better accuracy (lower loss $L_i$).
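A sketch of the free-energy asymptotics behind this tradeoff (the standard SLT expansion, in the notation above):
$$F_n(W_i) \;\approx\; n L_i + \lambda_i \log n,$$
so for smaller $n$ the $\lambda_i \log n$ term favours simpler (lower $\lambda$) phases even at somewhat higher loss, while as $n$ grows the $n L_i$ term dominates and the posterior jumps to more accurate phases; the crossovers are the phase transitions.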

This is the radical new framework of SLT: phase transitions happen i...

Alignment by Simulation?

I've heard this alignment plan that is a variation of 'simulate top alignment researchers' with an LLM. Usually the poor alignment researcher in question is Paul. 

This strikes me as deeply unserious, and I am confused why it is getting so much traction.

That AI-assisted alignment is coming (indeed, is already here!) is undeniable. But even somewhat accurately simulating a human from text data is a crazy sci-fi ability, probably not even physically possible. It seems to ascribe nearly magical abilities to LLMs.

Predicting...

Fractal Fuzz: making up for size

GPT-3 recognizes 50k possible tokens. For a 1000-token context window that means there are $50000^{1000}$ possible prompts. Astronomically large. If we assume the output of a single run of GPT is 200 tokens, then for each possible prompt there are $50000^{200}$ possible continuations.

GPT-3 is probabilistic, defining for each possible prompt $x$ (one of $50000^{1000}$) a distribution $p(\cdot \mid x)$ on a set of size $50000^{200}$, in other words a point in a $(50000^{200} - 1)$-dimensional space. [1]

Mind-bogglingly large. Compared to these numbers, the amount of data (40 trillion tokens??) and the size of the model (175 billion parameters) seem absolutely puny.
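For scale (simple arithmetic, my own aside): $\log_{10}\!\big(50000^{1000}\big) = 1000 \log_{10}(50000) \approx 4699$, so there are roughly $10^{4700}$ possible prompts and $50000^{200} \approx 10^{940}$ possible continuations per prompt, against only $\sim 10^{11}$ parameters and $\sim 10^{13}$ training tokens.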

I won't be talking about the data or 'overparameterization' in this shortform; that is well explained by Singular Learning Theory. Instead, I will be talking about nonrealizability.

Nonrealizability & the structure of natural data

Recall the setup of (parametric) Bayesian learning: there is a sample space $\Omega$, a true distribution $q$ on $\Omega$, and a parameterized family of probability distributions $\{p(x \mid w)\}_{w \in W}$.
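For reference, a sketch of the standard terminology in this setup: the model is realizable if $q = p(\cdot \mid w_0)$ for some $w_0 \in W$; it is nonrealizable if there is no such $w_0$, in which case typically $\inf_{w \in W} D_{\mathrm{KL}}\big(q \,\|\, p(\cdot \mid w)\big) > 0$.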

It is often assumed that the true distrib...

1 · Zach Furman · 2mo
Very interesting, glad to see this written up! Not sure I totally agree that it's necessary for W to be a fractal? But I do think you're onto something. In particular you say that "there are points y in the larger dimensional space that are very (even arbitrarily) far from W," but in the case of GPT-4 the input space is discrete, and even in the case of e.g. vision models the input space is compact. So the distance must be bounded. Plus if you e.g. sample a random image, you'll find there's usually a finite distance you need to travel in the input space (in L1, L2, etc) until you get something that's human interpretable (i.e. lies on the data manifold). So that would point against the data manifold being dense in the input space. But there is something here, I think. The distance usually isn't that large until you reach a human interpretable image, and it's quite easy to perturb images slightly to have completely different interpretations (both to humans and ML systems). A fairly smooth data manifold wouldn't do this. So my guess is that the data "manifold" is in fact not a manifold globally, but instead has many self-intersections and is singular. That would let it be close to large portions of input space without being literally dense in it. This also makes sense from an SLT perspective. And IIRC there's some empirical evidence that the dimension of the data "manifold" is not globally constant.
2 · Alexander Gietelink Oldenziel · 2mo
The input and output spaces etc. Ω are all discrete, but the spaces of distributions Δ(Ω) on those spaces are infinite (though still finite-dimensional). It depends on what kind of metric one uses, compactness assumptions, etc. whether or not you can be arbitrarily far. I am being rather vague here. For instance, if you use the KL-divergence, then K(q|p_uniform) is always bounded - indeed it equals log|Ω| - H(q), where H(q) is the entropy of the true distribution. I don't really know what ML people mean by the data manifold so won't say more about that. I am talking about the space W of parameter values of a conditional probability distribution p(x|w). I think that W having nonconstant local dimension doesn't seem that relevant, since the largest-dimensional subspace would dominate? Self-intersections and singularities could certainly occur here. (i) Singularities in the SLT sense have to do with singularities in the level sets of the KL-divergence (or loss function) - I don't immediately see how these are related to the singularities that you are talking about here. (ii) It wouldn't increase the dimensionality (rather the opposite). The fractal dimension is important basically because of space-filling curves: a space that has a low-dimensional parameterization can nevertheless have a very large effective dimension when embedded fractally into a larger-dimensional space.
1 · Zach Furman · 2mo
Sorry, I realized that you're mostly talking about the space of true distributions and I was mainly talking about the "data manifold" (related to the structure of the map x↦p(x∣w∗) for fixed w∗). You can disregard most of that. Though, even in the case where we're talking about the space of true distributions, I'm still not convinced that the image of W under p(x∣w) needs to be fractal. Like, a space-filling assumption sounds to me like basically a universal approximation argument - you're assuming that the image of W densely (or almost densely) fills the space of all probability distributions of a given dimension. But of course we know that universal approximation is problematic and can't explain what neural nets are actually doing for realistic data.
3 · Alexander Gietelink Oldenziel · 2mo
Obviously this is all speculation, but maybe I'm saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)? Curious what's your beef with universal approximation? Stone-Weierstrass isn't quantitative - is that the reason? If true, it suggests the fractal dimension (probably related to the information dimension I linked to above) may be important.
1 · Zach Furman · 2mo
Oh I actually don't think this is speculation, if (big if) you satisfy the conditions for universal approximation then this is just true (specifically that the image of W is dense in function space). Like, for example, you can state Stone-Weierstrass as: for a compact Hausdorff space X, and the continuous functions under the sup norm C(X,R), the Banach subalgebra of polynomials is dense in C(X,R). In practice you'd only have a finite-dimensional subset of the polynomials, so this obviously can't hold exactly, but as you increase the size of the polynomials, they'll be more space-filling and the error bound will decrease. The problem is that the dimension of W required to achieve a given ϵ error bound grows exponentially with the dimension d of your underlying space X. For instance, if you assume that weights depend continuously on the target function, ϵ-approximating all C^n functions on [0,1]^d with Sobolev norm ≤ 1 provably takes at least O(ϵ^{-d/n}) parameters (DeVore et al.). This is a lower bound. So for any realistic d universal approximation is basically useless - the number of parameters required is enormous. Which makes sense because approximation by basis functions is basically the continuous version of a lookup table. Because neural networks actually work in practice, without requiring exponentially many parameters, this also tells you that the space of realistic target functions can't just be some generic function space (even with smoothness conditions); it has to have some non-generic properties to escape the lower bound.
2 · Alexander Gietelink Oldenziel · 2mo
Ooooo okay so this seems like it's directly pointing to the fractal story! Exciting!

Trivial but important

Aumann agreement can fail for purely epistemic reasons because real-world minds do not do Bayesian updating. Bayesian updating is intractable so realistic minds sample from the prior. This is how e.g. gradient descent works and also how human minds work.

In this situation two minds can end up in two different basins with similar loss on the data, because of computational limitations. These minds can have genuinely different expectations for generalization.

(Of course this does not contradict the statement of the theorem which is correct.)
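A toy illustration of the point (my own sketch, with scikit-learn and made-up data, not from the post): two learners trained on the same data from different random initializations reach similar training loss but give different answers off-distribution.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Four training points, consistent with y = x**2 on [-1, 1]
X_train = np.array([[-1.0], [-0.5], [0.5], [1.0]])
y_train = np.array([1.0, 0.25, 0.25, 1.0])
X_far = np.array([[3.0]])   # far outside the training range

for seed in (0, 1):
    model = MLPRegressor(hidden_layer_sizes=(32, 32), random_state=seed,
                         max_iter=20000, tol=1e-7)
    model.fit(X_train, y_train)
    train_mse = np.mean((model.predict(X_train) - y_train) ** 2)
    print(f"seed {seed}: train MSE {train_mse:.4f}, "
          f"prediction at x=3: {model.predict(X_far)[0]:.2f}")
```

Both runs typically fit the four training points about equally well, yet their extrapolations at x = 3 can differ substantially: same data, different basins, different generalization.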

Optimal Forward-chaining versus backward-chaining.

In general, this is going to depend on the domain. In environments for which we have many expert samples and there are many existing techniques, backward-chaining is key (i.e. deploying resources & applying best practices in business & industrial contexts).

In open-ended environments, such as those arising in science, especially in pre-paradigmatic fields, backward-chaining and explicit plans break down quickly.

 

Incremental vs Cumulative

Incremental: 90% forward chaining 10% backward chaining f...

Corrupting influences

The EA AI safety strategy has had a large focus on placing EA-aligned people in A(G)I labs. The thinking was that having enough aligned insiders would make a difference on crucial deployment decisions & longer-term alignment strategy. We could say that the strategy is an attempt to corrupt the goal of pure capability advance & making money towards the goal of alignment. This fits into a larger theme that EA needs to get close to power to have real influence. 

[See also the large donations EA has made to OpenAI & Anthropic. ]

Whether this strategy paid off...  too early to tell.

What has become apparent is that the large AI labs & being close to power have had a strong corrupting influence on EA epistemics and culture. 

  • Many people in EA now think nothing of being paid Bay Area programmer salaries for research or nonprofit jobs.
  •  There has been a huge influx of MBA blabber being thrown around. Bizarrely, EA funds are often giving huge grants to for-profit organizations for which it is very unclear whether they're really EA-aligned in the long term or just paying lip service. It is highly questionable that EA should be trying to do venture...
5 · Daniel Murfet · 2d
As a supervisor of numerous MSc and PhD students in mathematics, when someone finishes a math degree and considers a job, the tradeoffs are usually between meaning, income, freedom, evil, etc., with some of the obvious choices being high/low along (relatively?) obvious axes. It's extremely striking to see young talented people with math or physics (or CS) backgrounds going into technical AI alignment roles in big labs, apparently maximising along many (or all) of these axes! Especially in light of recent events I suspect that this phenomenon, which appears too good to be true, actually is.
2 · Noosphere89 · 3mo
I'd arguably say this is good, primarily because I think EA was already in danger of its AI safety wing becoming unmoored from reality by ignoring key constraints, similar to how early LessWrong before the deep learning era, around 2012-2018, turned out to be mostly useless due to how much everything was stated in a mathematical way, without realizing how many constraints and conjectured constraints applied to stuff like formal provability, for example.
3 · RHollerith · 3mo
Yes!
8 · Thomas Kwa · 3mo
I'm not too concerned about this. ML skills are not sufficient to do good alignment work, but they seem to be very important for like 80% of alignment work and make a big difference in the impact of research (although I'd guess still smaller than whether the application to alignment is good).
  • Primary criticisms of Redwood involve their lack of experience in ML.
  • The explosion of research in the last ~year is partially due to an increase in the number of people in the community who work with ML. Maybe you would argue that lots of current research is useless, but it seems a lot better than only having MIRI around.
  • The field of machine learning at large is in many cases solving easier versions of problems we have in alignment, and therefore it makes a ton of sense to have ML research experience in those areas. E.g. safe RL is how to get safe policies when you can optimize over policies and know which states/actions are safe; alignment can be stated as a harder version of this where we also need to deal with value specification, self-modification, instrumental convergence etc.
2 · Alexander Gietelink Oldenziel · 3mo
I mostly agree with this. I should have said 'prestige within capabilities research' rather than ML skills, which seem straightforwardly useful. The former seems highly corruptive.

Thin versus Thick Thinking

 

Thick: aggregate many noisy sources to make a sequential series of actions in mildly related environments, model-free RL

carnal sins: failure of prioritization / not throwing away enough information, nerdsnipes, insufficient aggregation, trusting too much in any particular model, indecisiveness, overfitting on noise, ignoring consensus of experts/social reality

default of the ancestral environment

CEOs, generals, doctors, economists, police detectives in the real world, traders

Thin: precise, systematic analysis, preferably ...

[Thanks to Vlad Firoiu for helping me]

An Attempted Derivation of the Lindy Effect
Wikipedia:

The Lindy effect (also known as Lindy's Law[1]) is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age.

Laplace's Rule of Succession

What is the probability that the Sun will rise tomorrow, given that it has risen every day for 5000 years?

Let $p$ denote the probability that the Sun will rise tomorrow. A priori we have no information on the value of $p$...
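For reference, the standard Laplace calculation (a sketch, with a uniform prior on $p$ and $n$ observed risings):
$$P(\text{rises tomorrow} \mid n \text{ risings}) = \frac{\int_0^1 p \cdot p^n \, dp}{\int_0^1 p^n \, dp} = \frac{1/(n+2)}{1/(n+1)} = \frac{n+1}{n+2}.$$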

2 · JBlack · 3mo
I haven't checked the derivation in detail, but the final result is correct. If you have a random family of geometric distributions, and the density around zero of the decay rates doesn't go to zero, then the expected lifetime is infinite. All of the quantiles (e.g. median or 99%-ile) are still finite though, and do depend upon n in a reasonable way.
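A sketch of why (my gloss on the comment above): a geometric lifetime with decay rate $\lambda$ has mean $\approx 1/\lambda$, so if the decay rate is itself drawn with density $f$, the expected lifetime is (up to constants) $\int_0^1 \frac{f(\lambda)}{\lambda}\, d\lambda$, which diverges whenever $f(\lambda) \to c > 0$ as $\lambda \to 0$, even though every quantile of the lifetime remains finite.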

Imprecise Information theory 

Would like a notion of entropy for credal sets. Diffractor suggests the following:

Let $\mathcal{C}$ be a credal set.

Then the entropy of $\mathcal{C}$ is defined as

where $H$ denotes the usual Shannon entropy.

I don't like this since it doesn't satisfy the natural desiderata below. 


Instead, I suggest the following. Let $p_{\max}$ denote the (absolute) maximum entropy distribution, i.e. the distribution maximizing $H$, and let ...

Desideratum 1: ...

Generalized Jeffreys Prior for singular models?

For singular models the Jeffreys prior is not well-behaved, for the simple fact that it is zero at minima of the loss function.
Does this mean the Jeffreys prior is only of interest in regular models? I beg to differ.
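For reference, the standard definition, from which the singular behaviour follows: the Jeffreys prior is
$$\pi(w) \;\propto\; \sqrt{\det I(w)},$$
where $I(w)$ is the Fisher information matrix. In a singular model $I(w)$ degenerates exactly at the singular points (e.g. the minima of the loss), so $\det I(w) = 0$ and the prior vanishes right where the interesting geometry lives.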

Usually the Jeffreys prior is derived as the parameterization-invariant prior. There is another way of thinking about it: as arising from an 'indistinguishability prior'.

The argument is delightfully simple: given two weights $w_1, w_2 \in W$, if they encode the same distributi...

1 · Daniel Murfet · 2d
You might reconstruct your sacred Jeffries prior with a more refined notion of model identity, which incorporates derivatives (jets on the geometric/statistical side and more of the algorithm behind the model on the logical side).

Latent abstractions Bootlegged.

Let $X_1, \dots, X_n$ be random variables distributed according to a probability distribution $P$ on a sample space $\Omega$.

Defn. A (weak) natural latent of $X_1, \dots, X_n$ is a random variable $\Lambda$ such that

(i) $X_1, \dots, X_n$ are independent conditional on $\Lambda$

(ii) [reconstructability] $P[\Lambda \mid X_{\neq i}] = P[\Lambda \mid X_1, \dots, X_n]$ for all $i$

[This is not really reconstructability, more like a stability property. The information is contained in many parts of the system... I might al...

Inspired by this Shalizi paper defining local causal states. The idea is so simple and elegant I'm surprised I had never seen it before. 

Basically, starting with a factored probability distribution over a dynamical DAG, we can use Crutchfield's causal state construction locally to construct a derived causal model factored over the same dynamical DAG. For each point/variable we consider its past and forward light cones, defined as all those points/variables which influence it, respectively are influenced by it (in a causal, interventional sense). Now define the equivalence relation on realizations of the past light cone of a point (which includes the point itself by definition)[1]: two realizations are equivalent whenever the conditional probability distributions they induce on the future light cone are equal.

These factored probability distributions over dynamical DAGs are called 'fields' by physicists. Given any field we can define a derived local causal state field in the above way. Woah!
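In symbols (my paraphrase of Shalizi's definition, not a quote): writing $\ell^-(x)$ and $\ell^+(x)$ for the past and future light cones of a point $x$, two past-cone configurations are equivalent iff they predict the same future,
$$\ell^-(x) \sim \ell^-(x') \iff P\big(\ell^+ \mid \ell^-(x)\big) = P\big(\ell^+ \mid \ell^-(x')\big),$$
and the local causal state at $x$ is the equivalence class of its past light cone.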

...

8 · johnswentworth · 5mo
That condition doesn't work, but here's a few alternatives which do (you can pick any one of them):
  • Λ=(x↦P[X=x|Λ]) - most conceptually confusing at first, but most powerful/useful once you're used to it; it's using the trick from Minimal Map.
  • Require that Λ be a deterministic function of X, not just any latent variable.
  • H(Λ)=I(X,Λ)
(The latter two are always equivalent for any two variables X,Λ and are somewhat stronger than we need here, but they're both equivalent to the first once we've already asserted the other natural latent conditions.)

Reasons to think Lobian Cooperation is important

Usually modal Lobian cooperation is dismissed as not relevant for real situations, but it is plausible that Lobian cooperation extends far more broadly than what has currently been proved.

It is plausible that much of the cooperation we see in the real world is actually approximate Lobian cooperation rather than purely given by traditional game-theoretic incentives.
Lobian cooperation is far stronger in cases where the players resemble each other and/or have access to one another's blueprint. This is ...
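For reference, the theorem behind the 'modal' part (Löb's theorem, for a theory $T$ extending PA with provability predicate $\Box$): if $T \vdash \Box P \rightarrow P$ then $T \vdash P$; equivalently $T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P$. The modal-cooperation results apply this to agents that cooperate when they can prove the other cooperates.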

5 · Noosphere89 · 6mo
I definitely agree that cooperation can be way better in the future, and Lobian cooperation, especially with Payor's Lemma, might well be enough to get coordination across the entire solar system. That stated, it's much more tricky to expand this strategy to galactic scales, assuming our physical models aren't wrong, because light speed starts to become a very taut constraint under a galaxy-wide brain, and acausal strategies will require a lot of compute to simulate entire civilizations. Even worse, they depend on some common structure of values, and I suspect it's impossible to do in the fully general case.

Evidence Manipulation and Legal Admissible Evidence

[This was inspired by Kokotajlo's shortform on comparing strong with weak evidence]


In the real world the weight of many pieces of weak evidence is not always comparable to a single piece of strong evidence. The important variable here is not strong versus weak per se but the source of the evidence. Some sources of evidence are easier to manipulate in various ways. Evidence manipulation, either conscious or emergent, is common and a large obstacle to truth-finding.

Consider aggregating many ...

2 · ChristianKl · 1y
In other cases like medicine, many people argue that direct observation should be ignored ;)

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development.

Why Roko's basilisk probably doesn't work for simulation fidelity reasons: 

Roko's basilisk threatens to simulate and torture you in the future if you don't comply. Simulation cycles cost resources. Instead of following through on torturing our wo...

1 · Richard_Kennaway · 1y
I have always taken Roko's Basilisk to be the threat that the future intelligence will torture you, not a simulation, for not having devoted yourself to creating it.
1 · TAG · 1y
How do you know you are not in a low fidelity simulation right now? What could you compare it against?
2 · Vladimir_Nesov · 1y
If the agents follow simple principles, it's simple to simulate those principles with high fidelity, without simulating each other in all detail. The obvious guide to the principles that enable acausal coordination is common knowledge of each other, which could be turned into a shared agent that adjudicates a bargain on their behalf.

Imagine a data stream
$$\dots, x_{-2}, x_{-1}, x_0, x_1, x_2, \dots$$
assumed infinite in both directions for simplicity. Here $x_0$ represents the current state (the "present"), while $\dots, x_{-2}, x_{-1}$ represents the past and $x_1, x_2, \dots$ represents the future.

Predictable Information versus Predictive Information

Predictable information is the maximal information (in bits) that you can derive about the future given access to the past. Predictive information is the number of bits from the past that you need to make that optimal prediction.
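One standard way to formalize this pair (a sketch in computational-mechanics terms; I am assuming this matches the intended meaning): the first quantity is the excess entropy and the second is the statistical complexity,
$$E = I\big(\overleftarrow{X}; \overrightarrow{X}\big), \qquad C_\mu = H\big(\epsilon(\overleftarrow{X})\big),$$
where $\overleftarrow{X} = (\dots, x_{-1}, x_0)$ is the past, $\overrightarrow{X} = (x_1, x_2, \dots)$ the future, and $\epsilon$ maps a past to its causal state; one always has $E \le C_\mu$.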

Suppose you are...

"The links between logic and games go back a long way. If one thinks of a debate as a kind of game, then Aristotle already made the connection; his writings about syllogism are closely intertwined with his study of the aims and rules of debating. Aristotle’s viewpoint survived into the common medieval name for logic: dialectics. In the mid twentieth century Charles Hamblin revived the link between dialogue and the rules of sound reasoning, soon after Paul Lorenzen had connected dialogue to constructive foundations of logic." from the Stanford Encyclopedia ... (read more)

"I dreamed I was a butterfly, flitting around in the sky; then I awoke. Now I wonder: Am I a man who dreamt of being a butterfly, or am I a butterfly dreaming that I am a man?"- Zhuangzi

Questions I have that you might have too:

  • Why are we here?
  • Why do we live in such an extraordinary time?
  • Is the simulation hypothesis true? If so, is there a base reality?
  • How do we know we're not a Boltzmann brain?
  • Is existence observer-dependent?
  • Is there a purpose to existence, a Grand Design?
  • What will be computed in the Far Future?

In this shortform I will try and...

2 · Richard_Kennaway · 1y
In this comment I will try and write the most boring possible reply to these questions. 😊 These are pretty much my real replies. "Ours not to reason why, ours but to do or do not, there is no try." Someone must. We happen to be among them. A few lottery tickets do win, owned by ordinary people who are perfectly capable of correctly believing that they have won. Everyone should be smart enough to collect on a winning ticket, and to grapple with living in interesting (i.e. low-probability) times. Just update already. It is false. This is base reality. But I can still appreciate Eliezer's fiction on the subject. The absurdity heuristic. I don't take BBs seriously. Even in classical physics there is no observation without interaction. Beyond that, no, however many quantum physicists interpret their findings to the public with those words, or even to each other. Not that I know of. (This is not the same as a flat "no", but for most purposes rounds off to that.) Either nothing in the case of x-risk, nothing of interest in the case of a final singleton, or wonders far beyond our contemplation, which may not even involve anything we would recognise as "computing". By definition, I can't say what that would be like, beyond guessing that at some point in the future it would stand in a similar relation to the present that our present does to prehistoric times. Look around you. Is this utopia? Then that future won't be either. But like the present, it will be worth having got to. Consider a suitable version of The Agnostic Prayer inserted here against the possibility that there are Powers Outside the Matrix who may chance to see this. Hey there! I wouldn't say no to having all the aches and pains of this body fixed, for starters. Radical uplift, we'd have to talk about first.

The Vibes of Mathematics:

Q: What is it like to understand advanced mathematics? Does it feel analogous to having mastery of another language like in programming or linguistics?

A: It's like being stranded on a tropical island where all your needs are met, the weather is always perfect, and life is wonderful.

Except nobody wants to hear about it at parties.

Vibes of Maths: Convergence and Divergence

level 0: A state of ignorance. You live in a pre-formal mindset. You don't know how to formalize things. You don't even know what it would mean 'to prove something mathematically'. This is perhaps the longest stage. It is the default state of a human. Most anti-theory sentiment comes from this state. Since you've neve...

You can't productively read math books. You often decry that these mathematicians make books way too hard to read. If only they would take the time to explain things simply, you would understand.

level 1: all math is an amorphous blob

You know the basics of writing an epsilon-delta proof. Although you don't know why the rules of maths are this way or that, you can at least follow the recipes. You can follow simple short proofs, albeit slowly.

You know there are differen...

1 · Daniel Murfet · 2d
  You seem to do OK...  This is an interesting one. I field this comment quite often from undergraduates, and it's hard to carve out enough quiet space in a conversation to explain what they're doing wrong. In a way the proliferation of math on YouTube might be exacerbating this hard step from tourist to troubadour.
7 · PhilGoetz · 1y
I say that knowing particular kinds of math, the kind that let you model the world more precisely, and that give you a theory of error, isn't like knowing another language. It's like knowing language at all. Learning these types of math gives you as much of an effective intelligence boost over people who don't, as learning a spoken language gives you above people who don't know any language (e.g., many deaf-mutes in earlier times).
The kinds of math I mean include:
  • how to count things in an unbiased manner; the methodology of polls and other data-gathering
  • how to actually make a claim, as opposed to what most people do, which is to make a claim that's useless because it lacks quantification or quantifiers
  • A good example of this is the claims in the IPCC 2015 report that I wrote some comments on recently. Most of them say things like, "Global warming will make X worse", where you already know that OF COURSE global warming will make X worse, but you only care how much worse.
  • More generally, any claim of the type "All X are Y" or "No X are Y", e.g., "Capitalists exploit the working class", shouldn't be considered claims at all, and can accomplish nothing except foment arguments.
  • the use of probabilities and error measures
  • probability distributions: flat, normal, binomial, Poisson, and power-law
  • entropy measures and other information theory
  • predictive error-minimization models like regression
  • statistical tests and how to interpret them
These things are what I call the correct Platonic forms. The Platonic forms were meant to be perfect models for things found on earth. These kinds of math actually are. The concept of "perfect" actually makes sense for them, as opposed to for Earthly categories like "human", "justice", etc., for which believing that the concept of "perfect" is coherent demonstrably drives people insane and causes them to come up with things like Christianity.
They are, however, like Aristotle's Forms, in that the u...

Agent Foundations Reading List [Living Document]
This is a stub for a living document on a reading list for Agent Foundations. 

Causality

Book of Why, Causality - Pearl

Probability theory 
Logic of Science - Jaynes

Ambiguous Counterfactuals

[Thanks to Matthias Georg Mayer for pointing me towards ambiguous counterfactuals]

Salary is a function of eXperience and Education

We have a candidate with given salary, experience $X$ and education $E$.

Their current salary is given by 


We'd like to consider the counterfactual where they didn't have the education $E$. How do we evaluate their salary in this counterfactual?

This is slightly ambiguous - there are two counterfactuals:

 or  

In the second c...

Hopfield Networks = Ising Models = Distributions over Causal models?

Given a joint probability distribution, there famously might be many 'Markov' factorizations. Each corresponds to a different causal model.

Instead of choosing a particular one, we might have a distribution of beliefs over these different causal models. This feels basically like a Hopfield Network / Ising Model.

You have a distribution over nodes and an 'interaction' distribution over edges. 
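For concreteness, the shared form (the standard Hopfield/Ising energy; my sketch of how the node and edge distributions would enter):
$$P(s) \;\propto\; \exp\Big(\sum_i b_i s_i + \sum_{i<j} J_{ij} s_i s_j\Big), \qquad s_i \in \{-1, +1\},$$
with the biases $b_i$ playing the role of the per-node distribution and the couplings $J_{ij}$ the 'interaction' distribution over edges.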

The distribution over nodes corresponds to the joint probability di...

Insights as Islands of Abductive Percolation?

I've been fascinated by this beautiful paper by Viteri & DeDeo. 

What is a mathematical insight? We feel intuitively that proving a difficult theorem requires discovering one or more key insights. Before we get into what the DeDeo-Viteri paper has to say about (mathematical) insights, let me recall some basic observations on the nature of insights:

(see also my previous shortform)

  • There might be a unique decomposition, akin to prime factorization. Alternatively, there might be many roads to Rome: some theorems...