Aaro Salosensaari

Comments

Aaro Salosensaari's Shortform

Eventually, yes, it is related to arguments concerning people. But I was curious about what aesthetics remain after I try to abstract away the messy details. 

Aaro Salosensaari's Shortform

>Is this a closed environment, that supports 100000 cell-generations?

Good question! No. I was envisioning it as a system where a constant population of 100 000 would be viable. (An RA pipettes in a constant amount of nutrient fluid every day, or something.) Now that you ask, it might make sense to investigate this assumption further.

Aaro Salosensaari's Shortform

I have a small intuition pump I am working on, and thought maybe others would find it interesting.

Consider a habitat (say, a Petri dish) that at any given moment can support at most 100 000 units of life (say, cells), and two alternative scenarios.

Scenario A. An initial population of 2 cells grows exponentially, each cell dying but leaving two descendants every generation. After the 16th generation, the habitat overflows and all cells die of overpopulation. The population experienced a total of 262 142 units of flourishing.

Scenario B. A more or less stable population of x cells (x << 100 000, say approximately 20) continues until the habitat meets its natural demise after n generations, for a total of x * n units of flourishing.

For some reason or other, I find scenario B much more appealing even for relatively small values of n. For example, while n = 100 000 (2 000 000 units of total flourishing) would obviously be better for a utilitarian who cares about the total sum of flourishing units (utilitons), I personally find even a meager n = 100 (x*n = 2 000) more appealing than A.
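
For concreteness, a minimal Python sketch of the arithmetic behind the two scenarios (it just reproduces the figures above; the overflowing final generation in A is counted in the total, which is how 262 142 comes out):

```python
CAPACITY = 100_000

# Scenario A: 2 cells; each generation every cell dies and leaves two descendants.
pop, total_a, gen = 2, 2, 0
while pop <= CAPACITY:
    pop *= 2                   # next generation doubles
    gen += 1
    total_a += pop             # every cell that ever lived counts as one unit
print(gen, pop, total_a)       # 16 131072 262142: overflow at generation 16

# Scenario B: roughly stable population of x cells for n generations.
x = 20
for n in (100, 100_000):
    print(n, x * n)            # 100 -> 2000, 100000 -> 2000000
```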

Maybe this is just me assuming that because n = 100 is possible, larger n must also be possible. Or maybe I am utiliton-blind and simply think 100 > 17. Or maybe it is something else.

Background. In a recent discussion with $people, I tried to argue why I find the long-term existence of a limited human population much more important than the mere potential size of total experienced human flourishing, or something similarly abstract. I have not tried to "figure in" more details, but some things I have thought about adding are various probabilistic scenarios / uncertainty about the total carrying capacity. No, I have not read (or do not remember reading) previous relevant LW posts; if you can think of something useful / relevant, please link it!

How long does it take to become Gaussian?

I agree the non-IID result is quite surprising. A careful reading of the Berry-Esseen theorem gives some insight into the limit behavior. In the IID case, the approximation error is bounded by a constant / $\sqrt{n}$ (where the constant is proportional to the third absolute moment / $\sigma^3$).

The non-IID generalization for n distinct distributions bounds the error, more or less, by the sum of third absolute moments divided by $(\sum \sigma_i^2)^{3/2}$, which is surprisingly similar to the IID special case. My reading of it suggests that if the variances / third moments of all n distributions are bounded below / above by some common $\sigma$ / $\rho$ (which of course happens when you pick any finite number of distributions by hand), the error again diminishes at rate $1/\sqrt{n}$ if you squint your eyes.
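
Spelling the two bounds out as I read the standard statements (with $C$, $C_0$ the usual universal constants and $\rho_i = \mathbb{E}|X_i - \mu_i|^3$ the third absolute central moments):

$$\sup_x |F_n(x) - \Phi(x)| \le \frac{C\,\rho}{\sigma^3 \sqrt{n}} \quad \text{(IID case)}, \qquad \sup_x |F_n(x) - \Phi(x)| \le C_0\,\frac{\sum_{i=1}^n \rho_i}{\big(\sum_{i=1}^n \sigma_i^2\big)^{3/2}} \quad \text{(independent, non-identical case)}.$$

If $\sigma_i \ge \sigma$ and $\rho_i \le \rho$ for all $i$, the second bound is at most $n\rho / (n\sigma^2)^{3/2} = \rho / (\sigma^3 \sqrt{n})$, i.e. up to the constant the same as the IID rate.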

So, I would guess that for a series of non-IID distributions to sum into a Gaussian as poorly as possible (while Berry-Esseen still applies), one would have to pick a series of distributions with wildly small variances and wildly large third moments...? And dropping the assumptions of the CLT / its generalizations altogether means the theorem simply no longer applies.
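
As a rough numerical sanity check of that guess, here is a small numpy/scipy sketch of my own (the choice of Bernoulli summands with tiny, log-uniformly varying success probabilities, and the specific sizes, are arbitrary illustrations of the small-variance / large-skew regime):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 200, 20_000

# Independent, non-identical, heavily skewed summands: Bernoulli(p_i) with tiny,
# varying p_i; the skewness (1 - 2p)/sqrt(p(1 - p)) blows up as p -> 0.
p = 10.0 ** rng.uniform(-3, -1, size=n)
X = rng.binomial(1, p, size=(reps, n))

S = X.sum(axis=1)
Z = (S - p.sum()) / np.sqrt((p * (1 - p)).sum())   # standardized sum

# Sup-distance between the empirical CDF of Z and the standard normal CDF,
# i.e. the quantity Berry-Esseen bounds; it stays noticeably nonzero here
# even though n = 200.
print(stats.kstest(Z, "norm").statistic)
```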

Reason isn't magic

>It gets worse. This isn't a randomly selected example - it's specifically selected as a case where reason would have a hard time noticing when and how it's making things worse.

Well, the history of bringing manioc to Africa is not the only example. Scientific understanding of human nutrition (alongside disease) had several similar hiccups along the way, several of which have been covered in SSC (I can't remember in which posts):

There was a time when the Japanese army lost many lives to beriberi during the Russo-Japanese war, thinking it was a transmissible disease, decades [1] after one of the first prominent young Japanese scholars with Western medical training had shown, with a classic trial setup in the Japanese navy, that it was a nutrition-related deficiency (however, he attributed it -- wrongly -- to a deficiency of nitrogen). It took several more decades to identify vitamin B1. [2]

Earlier, scurvy was a problem in navies, including the British one, but then the British navy (or rather, the East India Company) realized in 1617 that citrus fruits were useful for preventing scurvy [3]. Unfortunately it didn't catch on. Then it was discovered again with an actual trial, and the results were published, in the 1740-50s [4]. Unfortunately it again didn't catch on, and the underlying theory was in any case as wrong as the others. Finally, against the scientific consensus of the time, the usefulness of citrus was demonstrated by a Navy rear admiral in 1795 [5]. Unfortunately there was still no proper theory of why citrus was supposed to work, so when the Navy switched to a lime juice with minimal vitamin C content [6], they managed to reason themselves out of the use of citrus, and scurvy was attributed to food gone bad [7]. Thus Scott's Antarctic expedition was ill-equipped to prevent scurvy, and soldiers at Gallipoli in 1915 suffered from it as well.

The story of the discovery of vitamin D does not involve failings quite as dramatic, but prior to the discovery of UV treatment and of vitamin D itself, John Snow suggested the cause of rickets was adulterated food [8]. Of course, even today one can easily find internet debates about the "correct" amount of vitamin D to supplement when there is no sunlight in winter. Solving B12-deficiency anemia appears a true triumph of science, as a Nobel prize was awarded for the dietary recommendation to include liver in the diet [9] before B12 (present in liver) was identified [10].

Some may notice that we have now covered many of the significant vitamins in the human diet. And I have not even started on the story of Semmelweis.

And anyway, I dislike the whole premise of casting the matter as "being for reason" or "against reason". The issue with manioc, scurvy, beriberi, and hygiene was that people had an unfortunate overconfidence in their pre-existing model of reality. With sufficient overconfidence, rationalization or mere "rational speculation", they could explain how seemingly contradictory experimental results actually fitted into their model, and thus dismiss the nutrition-based explanations as unscientific hogwash, until the actual workings of vitamins were discovered. (The article [1] is very instructive about the rationalizations the Japanese army could come up with to dismiss the navy's apparent success in fighting beriberi: ships were easier to keep clean, beriberi was correlated with time spent in contact with damp ground, etc.)

While looking up food-borne diseases for this comment, I was reminded of BSE [11], which is hypothesized to cause vCJD in humans because people thought it was a good idea to feed dead animals to cattle to improve nutrition (which I suppose it does, prion disease aside). I would view this as a failure of not having a model complete enough to anticipate the side effects of the behavior the partial model suggested.

On the positive side, sometimes the partial model works well enough: it appears that the miasma theory of diseases like cholera was the principal motivation for building modern sewage systems. While it is obvious today that cholera is not caused by miasma, getting rid of smelly sewage in an orderly fashion turned out to be a good idea nevertheless [12].

I am not sure I have any firm conclusion to offer, except that, in general, mistakes of reason are possible and possibly fatal, and social dynamics may prevent proper corrective action for a long time. This is important to keep in mind when making decisions, especially novel and unprecedented ones, and when evaluating the consequences of actions. (The consensus does not necessarily budge easily.)

Maybe a more specific conclusion could be: if one has only an evidently partial scientific understanding of some issue, it is very possible that acting on it will have unintended consequences. It may not even be obvious where the holes in the scientific understanding are. (Paraphrasing the response to Semmelweis: "We don't exactly know what causes childbed fever; it manifests in many different organs, so it could be several different diseases, but the idea of invisible corpse particles that defy water and soap is simply laughable.")

 

[1] https://pubmed.ncbi.nlm.nih.gov/16673750/

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3725862/

[3] https://en.wikipedia.org/wiki/John_Woodall

[4] https://en.wikipedia.org/wiki/James_Lind

[5] https://en.wikipedia.org/wiki/Alan_Gardner,_1st_Baron_Gardner 

[6] https://en.wikipedia.org/wiki/Scurvy#19th_century 

[7] https://idlewords.com/2010/03/scott_and_scurvy.htm 

[8] https://en.wikipedia.org/wiki/Rickets#History 

[9] https://www.nobelprize.org/prizes/medicine/1934/whipple/facts/

[10] https://en.wikipedia.org/wiki/Vitamin_B12#Descriptions_of_deficiency_effects 

[11] https://en.wikipedia.org/wiki/Bovine_spongiform_encephalopathy 

[12] https://en.wikipedia.org/wiki/Joseph_Bazalgette 

Developmental Stages of GPTs

(Reply to gwern's comment but not only addressing gwern.)

Concerning the planning question:

I agree that next-token prediction is consistent with some sort of implicit planning of multiple tokens ahead. I would phrase it a bit differently. Also, "implicit" is doing a lot of work here.

(Please, someone correct me if I say something obviously wrong or silly; I do not know how GPT-3 works in detail, but I will try to say something about it after reading some sources [1].)

>The bigger point about planning, though, is that the GPTs are getting feedback on one word at a time in isolation. It's hard for them to learn not to paint themselves into a corner.

To recap what I have thus far gotten from [1]: GPT-3-like transformers are trained by a regimen where the loss function evaluates the prediction error for the next word in the sequence given the previous words. However, I am less sure one can say they do it in isolation. During training (by SGD, I figure?), (i) the transformer decoder layers have access to the previous words in the sequence, (ii) both the attention and the feedforward parts of each transformer layer have weights (which are being trained) to compute the output predictions, and (iii) the GPT transformer architecture considers all words in each training sequence, left to right, masking out the future. And this is done for many meaningful Common Crawl sequences, though the exact same sequences won't repeat.
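
To make (i)-(iii) concrete, here is a stripped-down numpy sketch of one masked self-attention step plus the next-token loss, loosely in the spirit of the decoder-only setup described in [1]. The toy vocabulary, the dimensions, the random (untrained) weights, and the omission of multi-head attention, residuals, layer norm and the feedforward block are all my own simplifications, not GPT-3's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, T = 50, 16, 8                 # toy vocabulary, embedding size, sequence length
tokens = rng.integers(0, vocab, T)      # one training sequence of token ids

E = rng.normal(0, 0.02, (vocab, d))     # token embeddings (trained in reality)
Wq, Wk, Wv = [rng.normal(0, 0.02, (d, d)) for _ in range(3)]
W_out = rng.normal(0, 0.02, (d, vocab)) # projection back to next-token logits

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

X = E[tokens]                           # (T, d) input representations
Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)           # (T, T) attention scores
mask = np.triu(np.ones((T, T), dtype=bool), k=1)
scores[mask] = -1e9                     # (iii) causal mask: no peeking at the future
attn = softmax(scores)                  # position t attends only to positions <= t
H = attn @ V                            # (i) each position mixes information from its past

logits = H @ W_out                      # (T, vocab) scores for the *next* token
probs = softmax(logits)
# (ii) / the loss: position t is scored on how well it predicts token t+1, for all t at once.
loss = -np.log(probs[np.arange(T - 1), tokens[1:]]).mean()
print(f"next-token cross-entropy: {loss:.3f}")
```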

So, it sounds almost trivial that GPT's trained weights allow "implicit planning": if, given a sequence of words w_1 ... w_{i-1}, GPT would output word w at position i, this is because a trained GPT model (loosely speaking, abstracting away many details I don't understand) "dynamically encodes" many plausible "word paths" to word w, and [w_1 ... w_{i-1}] is such a path; by iteration, it also encodes many word paths from w to other words w', where some words are likelier to follow w than others. The representations in the stack of attention and feedforward layers allow it to generate text much better than, say, the good old Markov chain. "Self-attending" to some higher-level representation that allows it to generate text in a particular prose style seems a lot like a kind of plan. And GPT generating text that is then fed back to it as input, to which it can again selectively "attend", seems like a kind of working memory, which will trigger the self-attention mechanism to take certain paths, and so on.

I also want to highlight oceainthemiddleofanisland's comment in another thread: breaking complicated generation tasks into smaller chunks (getting GPT to output intermediate text from an initial input, which is then fed back to GPT to reprocess, finally enabling it to produce the desired output) sounds quite compatible with this view.

(On this note, I am not sure what to think of the role of the human in the loop here, or, in general, of how it apparently requires non-trivial work to find a "working" prompt that gets GPT to produce the desired results for some particularly difficult tasks. Is it that there are useful, rich world models "in there somewhere" in GPT's weights, but it is difficult to activate them? And are these difficulties because humans are bad at prompting GPT to generate text that accesses the good models, or because GPT's overall model is not always so impressive and easily slides into building answers on gibberish models instead of the good ones, or maybe because GPT has a bad internal model of the humans attempting to use it? Gwern's example concerning bear attacks was interesting here.)

This would be "implicit planning". Is it "planning" enough? In any case, the discussion would be easier if we had a clearer definition of what would constitute planning and what would not.

Finally, a specific response to gwern's comment:

>During each forward pass, GPT-3 probably has plenty of slack computation going on as tokens will differ widely in their difficulty while GPT-3's feedforward remains a fixed-size computation; just as GPT-3 is always asking itself what sort of writer wrote the current text, so it can better imitate the language, style, format, structure, knowledge limitations or preferences* and even typos, it can ask what the human author is planning, the better to predict the next token. That it may be operating on its own past completions and there is no actual human author is irrelevant - because pretending really well to be an author who is planning equals being an author who is planning! (Watching how far GPT-3 can push this 'as if' imitation process is why I've begun thinking about mesa-optimizers and what 'sufficiently advanced imitation' may mean in terms of malevolent sub-agents created by the meta-learning outer agent.)

Language about GPT-3 "pretending" and "asking itself what a human author would do" can maybe be justified as metaphor, but I think it is a bit fuzzy and may obscure the differences between what transformers do when we say they "plan" or "pretend", and what people would assume of beings who "plan" or "pretend". For example, using a word like "pretend" easily carries the implication that there is some true, hidden, unpretended thinking or personality going on underneath. That appears quite unlikely given a fixed model and a generation mechanism that starts anew from each seed prompt. I would rather say that GPT has a model (is a model?) that is surprisingly good at natural-language extrapolation, and also that it is surprising what can be achieved by extrapolation.


[1] http://jalammar.github.io/illustrated-gpt2/ , http://peterbloem.nl/blog/transformers and https://amaarora.github.io/2020/02/18/annotatedGPT2.html in addition to skimming original OpenAI papers

Developmental Stages of GPTs

I contend it is not an *implementation* in a meaningful sense of the word. It is more a prose elaboration / expansion of the first generated bullet-point list (and an inaccurate one: the "plan" mentions chopping vegetables, putting them in a fridge and cooking meat; the prose version tells of chopping a set of vegetables, skips the fridge, cooks beef, and then tells an irrelevant story where you go to sleep early and find it is a Sunday with no school).

Mind, substituting abstract category words with sensible, more specific ones (vegetables -> carrots, onions and potatoes) is an impressive NLP task for an architecture where the behavior is not hard-coded in (because that is how some previous natural-language generators worked), and it is even more impressive that it can produce the said expansion from a natural-language prompt, but it is hardly a useful implementation of a plan.

An improved experiment in "implementing plans" that could be within the capabilities of GPT-3 or a similar system: get GPT-3 to first output a plan for doing $a_thing, and then the correct keystroke-sequence input for UnReal World, Dwarf Fortress, The Sims or some other similar simulated environment to carry it out.

Self-sacrifice is a scarce resource

At the risk of stating very much the very obvious:

The trolley problem (or the fat-man variant) is the wrong metaphor for nearly any ethical decision anyway, as there are very few real-life ethical dilemmas that are as visceral, require immediate action from such a limited set of options, and nevertheless have consequences that are as clear.

Here is a couple of somewhat more realistic matters of life and death. There are many stories (I could probably find factual accounts, but I am too lazy to search for sources) of soldiers who make the snap decision to save the lives of the rest of their squad by jumping on a thrown hand grenade. Yet I doubt many would cast much blame on someone who had a chance to take cover and did that instead. (I wouldn't.) Moreover, generals who order prisoners (or agitate impressionable recruits) to clear a minefield without proper training or equipment are to be much frowned upon. And of course, there are untold possibilities for a dumb self-sacrifice that achieves nothing.

In general, a military force cannot be very effective without people willing to put themselves in danger: if one finds oneself in agreement with the existence of states and armies, some amount of self-sacrifice follows naturally. For this reason, there are acts of valor that are viewed positively and are to be cultivated. Yet there are also common Western moral sentiments which hold that it is questionable or outright wrong to require the unreasonable of other people, especially if the beneficiaries, or the people doing the requiring, contribute relatively little themselves (a sentiment demonstrated here by Blackadder Goes Forth). And in some cases drawing a judgement is generally considered difficult.

(What should one make of the Charge of the Light Brigade? I am not a military historian, but going by the popular account, the order to charge was stupid, negligent, a mistake, or all three. Yet to some people there is something inspirational in the foolishness of soldiers carrying out the order; others would see such views as abhorrent legend-building propaganda that devalues human life.)

In summary, I do not have many concrete conclusions to offer, and anyway, details from one context (here, the military) do not necessarily translate very well into other aspects of life. In some situations, (some amount of) self-sacrifice may be a good option, maybe even the best or only option for obtaining some outcomes, and it can be a good thing to have around. On the other hand, in many situations it is wrong or contentious to require large sacrifices from others, and people who do so (including via extreme persuasion leading to voluntary self-sacrifice) are condemned as taking unjust advantage of others. Much depends on the framing.

As the reader may notice, I am not arguing from any particular systematic theory of ethics, but rehashing my moral intuitions about what is considered acceptable in the West, assuming there is some ethical signal in there.

Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid

"Non-identifiability", by the way, is the search term that does the trick and finds something useful. Please see: Daly et al. [1], section 3. They study indentifiability characteristics of logistic sigmoid (that has rate r and goes from zero to carrying capacity K at t=0..30) via Fisher information matrix (FIM). Quote:

>When measurements are taken at times t ≤ 10, the singular vector (which is also the eigenvector corresponding to the single non-zero eigenvalue of the FIM) is oriented in the direction of the growth rate r in parameter space. For t ≤ 10, the system is therefore sensitive to changes in the growth rate r, but largely insensitive to changes in the carrying capacity K. Conversely, for measurements taken at times t ≥ 20, the singular vector of the sensitivity matrix is oriented in the direction of the growth rate K[sic], and the system is sensitive to changes in the carrying capacity K but largely insensitive to changes in the growth rate r. Both these conclusions are physically intuitive.

Then Daly et al. proceed with an MCMC scheme to show numerically that samples from different parts of the time domain result in different identifiability of the rate and carrying-capacity parameters (their Figure 3).

[1] Daly, Aidan C., David Gavaghan, Jonathan Cooper, and Simon Tavener. “Inference-Based Assessment of Parameter Identifiability in Nonlinear Biological Models.” Journal of The Royal Society Interface 15, no. 144 (July 31, 2018): 20180318. https://doi.org/10.1098/rsif.2018.0318
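
For a rough numerical flavor of the same point, here is a scipy sketch of my own (not from Daly et al.; it fits the analytic logistic solution to noisy samples from an early vs. a late time window and compares the parameter uncertainties reported by curve_fit; the true parameters, the known initial condition and the 2% noise level are all arbitrary choices):

```python
import numpy as np
from scipy.optimize import curve_fit

N0 = 1.0                          # assumed known initial population

def logistic(t, r, K):
    """Analytic solution of dN/dt = r*N*(1 - N/K) with N(0) = N0."""
    return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

rng = np.random.default_rng(1)
r_true, K_true = 0.3, 100.0       # inflection around t ~ 15, i.e. inside a t = 0..30 window

def fit(times):
    data = logistic(times, r_true, K_true) * (1 + 0.02 * rng.normal(size=times.size))
    popt, pcov = curve_fit(logistic, times, data, p0=[0.2, 60.0], maxfev=10_000)
    return popt, np.sqrt(np.diag(pcov))        # estimates and their standard errors

for label, times in [("early, t <= 10", np.linspace(1, 10, 20)),
                     ("late,  t >= 20", np.linspace(20, 30, 20))]:
    (r_hat, K_hat), (r_err, K_err) = fit(times)
    print(f"{label}: r = {r_hat:.2f} +/- {r_err:.2f}, K = {K_hat:.0f} +/- {K_err:.0f}")

# Expected pattern, per the Fisher-information argument quoted above: the early
# window should constrain r but leave K very uncertain, while the late window
# should constrain K well.
```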

EDIT.

To clarify, because someone might miss it: this is not only a reply to shminux. Daly et al. 2018 is (to some extent) the paper Stuart and others are looking for, at least if you are satisfied with their approach of looking at what happens to the effective Fisher information of the logistic dynamics before and after the inflection point, supported by numerical inference methods showing that identification is difficult. (Their reference list also contains a couple of interesting articles about optimal design for logistic, harmonic, etc. models.)

The only thing missing that one might want, AFAIK, is a general analytical quantification of the amount of uncertainty, a comparison specifically to the exponential (maybe along the lines Adam wrote there), and maybe a write-up in an easy-to-digest format.

Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid

I was momentarily confused about what k is (it sometimes denotes the carrying capacity in the logistic population growth model), but apparently it is the step size (in the numerical integrator)?

I do not have enough expertise here to speak like an expert, but it seems that stiffness is related only in a roundabout way. It seems to describe the difficulties some numerical integrators have with systems like this: the integrator can veer far off the true logistic curve if the steps are not small enough, because the derivative changes fast.

The phenomenon here seems to be more about the non-sensitivity than the sensitivity of the solution to the parameters (or, to be precise, the non-identifiability of the parameters): the part of the solution before the inflection point changes very little in response to changes in the "carrying capacity" (curve maximum) parameter.
