Oliver Sourbut

Oliver - or call me Oly: I don't mind which!

Currently based in London, I'm in my early career working as a software engineer ('minoring' as a data scientist). I'm particularly interested in sustainable collaboration and the long-term future of value. I'd love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.

I enjoy meeting new perspectives and growing my understanding of the world and the people in it. I also love to read - let me know your suggestions! Recently I've enjoyed

  • Ord - The Precipice
  • Pearl - The Book of Why
  • Bostrom - Superintelligence
  • McCall Smith - The No. 1 Ladies' Detective Agency
  • Abelson & Sussman - Structure and Interpretation of Computer Programs
  • Stross - Accelerando

Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites

  • Hanabi (can't recommend enough; try it out!)
  • Pandemic (ironic at time of writing...)
  • Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
  • Overcooked (my partner and I enjoy the foody themes and frantic realtime coordination playing this)

People who've got to know me only recently are sometimes surprised to learn that I'm a pretty handy trumpeter and hornist.

Posts

Comments

There’s no such thing as a tree (phylogenetically)

I identify strongly with the excitement of discovery and enquiry in this post!

OP or readers may enjoy some additional examples of extinct or living-fossil tree-strategizing clades:

  • https://en.m.wikipedia.org/wiki/Cycad (extant, includes larger extinct tree species)
  • https://en.m.wikipedia.org/wiki/Glossopteris (extinct 'seed fern' tree group)
  • https://en.m.wikipedia.org/wiki/Tree_fern (a few extant, includes larger extinct tree species)
  • https://en.m.wikipedia.org/wiki/Lepidodendrales (extinct tree 'club mosses' - not really mosses)
  • https://en.m.wikipedia.org/wiki/Prototaxites (probably not even a plant!)

When I came across these facts, upon a little wider reading I had a similar additional mind-blowing moment around the whole set of circumstances of the 'alternation of generations' (https://en.m.wikipedia.org/wiki/Alternation_of_generations) exhibited by plants, fungi and a few other groups. For me, this exploded my conception of what reproduction strategies can look like (and my conception was probably already not even that narrow by most standards). Wait til you read about seed development and ploidy! https://en.m.wikipedia.org/wiki/Seed

AMA: Paul Christiano, alignment researcher

I'm talking about relationships like

AGI with an explicitly represented utility function which is a reified part of its world- and self-model

or

sure, it has some implicit utility function, but it's about as inscrutable to the agent itself as it is to us

AMA: Paul Christiano, alignment researcher

What kind of relationships to 'utility functions' do you think are most plausible in the first transformative AI?

How does the answer change conditioned on 'we did it, all alignment desiderata got sufficiently resolved' (whatever that means) and on 'we failed, this is the point of no return'?

Your Cheerful Price

Interesting thought. Could I crudely summarize the above contribution like this?

If the mutual willing price range includes $0 for both parties, in some situations there is a discrete cheerfulness downside to settling on $nonzero

It has the interesting corollary that

Even if there exists a mutual cheerful price range excluding $0, in some situations it might be more net cheerful to settle on $0

Where does the discrete downside come from?

The following is pure speculation and introspection.

I guess we have 'willing price ranges' (our executive would agree in this range) and 'cheerful price ranges' (our whole being would agree in this range).

If we all agree (perhaps implicitly) that some collective fun thing should entail a $0 transaction, then (even if we all say it's a cheerful price) some of us may be cheerful and others merely willing. It's a shame, but not too socially damaging, if someone is willing but pretending to be cheerful. There is at least common knowledge of a reasonable guarantee that everyone partaking (executively) agrees the thing is intrinsically fun and worth doing, which is a socially safe state.

On the other hand, if we agree that some alleged 'collective fun thing' should entail $nonzero transaction, similarly (even if we all say it's a cheerful price) some of us may be cheerful and others merely willing at that price point. But while it's still consistent that we all executively agree the thing is intrinsically fun and worthwhile it's no longer guaranteed (because it's consistent to believe that someone's willing price excludes $0 and they are only coming along because of the fee). Perhaps even bringing up the question of a fee raises that possibility? And countenancing that possibility can be socially/emotionally harmful? (Because it entails disagreement about preferences? Especially if the collective fun thing is an explicitly social activity, like your party example.)

Further speculative corollary

More cheerful outcomes can be expected if the mutual willing price range obviously (as shared knowledge) excludes $0 than if it ambiguously excludes $0. So be careful about feeding your guests ambiguously-expensive pizza?

Great minds might not think alike

Good point. I guess a good manager in the right context might reduce that conflict by observing that having both a Constance and a Shor can, in many cases, be best of all? And working well together, such a team might 'grow the pie' such that salary isn't so zero-sum...?

In that model, being a Constance (or Shor) who is demonstrably good at working with Shors (Constances) might be a better strategy than being a Constance (or Shor) who is good at convincing managers that the other is a waste of money.

Is Success the Enemy of Freedom? (Full)

This resonated a lot with me! (And I'm far from as successful as I would 'like' to be - or would I??? :angst:)

Speculative and fuzzy comparison-drawing

I was reminded, I'm not sure exactly why, of this interesting entry I recently came across (I recall I was led there by a link buried in a comment in Slate Star Codex somewhere...) https://plato.stanford.edu/entries/capability-approach/

While I wouldn't necessarily endorse all of it, it's an interesting read. As I understand it, the capability approach advocates a certain way of drawing lines in practical policy-making. Its emphases are on

  • ‘functionings’ ('beings' and 'doings')

    various states of human beings and activities that a person can undertake

    example beings: ...being well-nourished, being undernourished, being housed in a pleasantly warm but not excessively hot house, being educated, being illiterate, being part of a supportive social network, being part of a criminal network, and being depressed

    example doings: ...travelling, caring for a child, voting in an election, taking part in a debate, taking drugs, killing animals, eating animals, consuming lots of fuel in order to heat one's house, and donating money to charity

  • 'capabilities'

    a person's real freedoms or opportunities to achieve functionings

Here's a passage which we can hold alongside the premise of the original post

The ends of well-being freedom, justice, and development should be conceptualized in terms of people's capabilities. Moreover, what is relevant is not only which opportunities are open to me each by themselves, hence in a piecemeal way, but rather which combinations or sets of potential functionings are open to me.

For example, suppose I am a low-skilled poor single parent who lives in a society without decent social provisions. Take the following functionings: (1) to hold a job...to properly feed myself and my family; (2) to care for my children at home and give them all the attention, care and supervision they need. ... (1) and (2) are opportunities open to me, but they are not both together open to me... forced to make some hard, perhaps even tragic choices between two functionings which both reflect basic needs and basic moral duties?

[emphases in source]

Although that summary of the approach and the excerpt I've copied don't articulate this 'success as enemy of freedom' idea, I wonder if it would be helpful to consider the idea with the lens of the capability approach? There's a certain paradoxical symmetry of the examples, which I think is what the OP is drawing attention to. The challenge would be to draw out whether the mechanism is societal or part of human nature or some combination thereof or (...) and what measures we might take (individually or collectively) to mitigate it!

Probability vs Likelihood

I like the emphasis on a type distinction between likelihoods and probabilities, thanks for articulating it!
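The type distinction can be sketched in code. Here's a toy binomial example of my own (not from the post): a probability fixes the parameter and varies the data, while a likelihood fixes the data and varies the parameter.

```python
from math import comb

def binomial_pmf(k: int, n: int, theta: float) -> float:
    """P(k heads | n flips, bias theta)."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Probability: theta fixed at 0.5, varying the data k. Sums to 1 over k.
probs = [binomial_pmf(k, n=10, theta=0.5) for k in range(11)]
print(sum(probs))  # ~1.0

# Likelihood: data fixed at k=7, varying the parameter theta.
# This is NOT a distribution over theta -- it need not sum/integrate to 1.
thetas = [i / 10 for i in range(11)]
likelihoods = [binomial_pmf(7, n=10, theta=t) for t in thetas]
best_likelihood, best_theta = max(zip(likelihoods, thetas))
print(best_theta)  # maximum-likelihood estimate: 0.7
```

The same function of two arguments yields objects of different types depending on which argument you hold fixed - which is exactly the distinction the post draws.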

You seem to ponder a type distinction between prior and posterior probabilities (and ask for English language terminology which might align with that). I can think of a few word-pairings which might be relevant.

Credibility ('credible'/'incredible')

Could be useful for talking about posterior, since it nicely aligns with the concept of a credible interval/region on a Bayesian parameter after evidence.

After gathering evidence, it becomes credible that...

...strongly contradicts our results, and as such we consider it incredible...

Plausibility ('plausible'/'implausible')

Not sure! To me it could connote a prior sort of estimate

It seems implausible on the face of it that the chef killed him. But let's consult the evidence.

The following are plausible hypotheses:...

But perhaps unhelpfully I think it could also connote the relationship between a model and some evidence, which I think would correspond to likelihood.

Ah, that's a more plausible explanation of what we're seeing!

Completely implausible: the chef would have had to pass the housekeeper in the narrow hallway without her noticing...

Aleatoric and epistemic uncertainty

There's also a probably-meaningful type distinction between aleatoric uncertainty (aka statistical uncertainty) and epistemic uncertainty. Aleatoric uncertainty refers to things which are 'truly' random (at the level of abstraction we are considering them), even if we knew the 'true underlying distribution' (like rolling dice); epistemic uncertainty refers to aspects of the domain which may in reality be fixed and determined, but which we don't know (like the weighting of a die).

I find it helpful to try to distinguish these, though in the real world the line is not necessarily clear-cut and it might be a matter of level of abstraction. For example it might in principle be possible to compute the exact dynamics of a rolling die in a particular circumstance, reducing aleatoric uncertainty to epistemic uncertainty about its exact weighting and starting position/velocity etc. The same could be said about many chaotic systems (like weather).
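As a toy sketch of the distinction (my own illustration, assuming a biased coin with a Beta prior): observing more flips shrinks the epistemic uncertainty about the fixed-but-unknown bias, while the aleatoric uncertainty about the next flip is irreducible.

```python
import random

random.seed(0)
true_bias = 0.7          # fixed but unknown to the observer (epistemic)
alpha, beta = 1.0, 1.0   # uniform Beta prior over the bias

posterior_vars = []
for n in [10, 100, 1000]:
    flips = [random.random() < true_bias for _ in range(n)]
    heads = sum(flips)
    a, b = alpha + heads, beta + (n - heads)
    # Beta posterior variance: epistemic uncertainty, shrinks as n grows.
    post_var = a * b / ((a + b) ** 2 * (a + b + 1))
    posterior_vars.append(post_var)
    print(f"n={n:5d}  posterior mean={a / (a + b):.3f}  posterior var={post_var:.5f}")

# Even if the bias were known exactly, the next flip is still random:
# aleatoric variance p*(1-p) is irreducible at this level of abstraction.
aleatoric_var = true_bias * (1 - true_bias)
print(f"aleatoric variance at p={true_bias}: {aleatoric_var:.3f}")
```

Running this, the posterior variance falls steadily with more data while the aleatoric variance stays put - mirroring the point that only epistemic uncertainty responds to evidence.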

When Money Is Abundant, Knowledge Is The Real Wealth

Great summary! A nit:

our lizard-brains love politics

It's more likely our monkey (or ape) brains that love politics, e.g. https://www.bbc.co.uk/news/uk-politics-41612352

On the note of monkey-business - what about investments in collective knowledge and collaboration? If you've not come across this, you might like it https://80000hours.org/articles/coordination/

EDIT to add some colour to my endorsement of the 80000hours link: I've personally found it beneficial in a few ways. One such is that, although the value of coordination is 'obvious', I have nevertheless recognised in myself some of the traits of 'single-player thinking' it describes.