Alexander Gietelink Oldenziel

(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
                                                                                                           - Saharon Shelah

 

As a true-born Dutchman I endorse Crocker's rules.

For most of my writing see my shortforms (new shortform, old shortform)

Twitter: @FellowHominid

Personal website: https://sites.google.com/view/afdago/home

Sequences

Singular Learning Theory

Wiki Contributions

Comments

Trivial but important

Aumann agreement can fail for purely epistemic reasons because real-world minds do not do exact Bayesian updating. Bayesian updating is intractable, so realistic minds sample from the prior instead. This is how e.g. gradient descent works, and also how human minds work.

In this situation, because of computational limitations, two minds can end up in two different basins with similar loss on the data. These minds can have genuinely different expectations about generalization.

(Of course this does not contradict the statement of the theorem, which is correct.)
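
A minimal sketch of this point (assuming numpy and scikit-learn; the toy data, architecture, and seeds are made up for illustration): the same small network trained twice on the same data from different random initializations reaches similar training loss, yet the two runs disagree away from the training distribution.

```python
# Illustrative only: two training runs on identical data, differing solely in
# initialization seed, end up in different basins with similar training loss
# but genuinely different generalization behaviour.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 1))
y_train = np.sin(3 * X_train).ravel() + 0.05 * rng.normal(size=200)

# Same data, same architecture, different random initialization.
models = [
    MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=5000, random_state=seed)
    .fit(X_train, y_train)
    for seed in (1, 2)
]

# Similar loss on the training distribution...
for i, m in enumerate(models):
    print(f"model {i} train MSE: {np.mean((m.predict(X_train) - y_train) ** 2):.4f}")

# ...but different expectations off-distribution.
X_far = np.array([[2.5], [3.0], [4.0]])
print("model 0 predictions far from the data:", models[0].predict(X_far))
print("model 1 predictions far from the data:", models[1].predict(X_far))
```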

Nothing to add. Just wanted to say it's great to see this is moving forward!

Optimal forward-chaining versus backward-chaining.

In general, this is going to depend on the domain. In environments for which we have many expert samples and many existing techniques, backward-chaining is key (i.e. deploying resources & applying best practices in business & industrial contexts).

In open-ended environments such as those arising in science, especially in pre-paradigmatic fields, backward-chaining and explicit plans break down quickly.

 

Incremental vs Cumulative

Incremental: 90% forward chaining, 10% backward chaining from an overall goal.

Cumulative: predominantly forward chaining (~60%) with a moderate amount of backward chaining over medium lengths (30%) and only a small amount of backward chaining (10%) over long lengths.

I would additionally argue that the chief issue of AI alignment is not that AIs won't know what we want.

Getting them to know what you want is easy; getting them to care is hard.

A superintelligent AI will understand what humans want at least as well as humans do, possibly much better. It might just not truly, intrinsically, care.

I have no regrets after reading your post. Thank you, namebro.

I mostly agree with this.

I should have said 'prestige within capabilities research' rather than ML skills, which seem straightforwardly useful. The former seems highly corrupting.

Corrupting influences

The EA AI safety strategy has had a large focus on placing EA-aligned people in A(G)I labs. The thinking was that having enough aligned insiders would make a difference in crucial deployment decisions & longer-term alignment strategy. We could say that the strategy is an attempt to corrupt the goal of pure capability advancement & money-making towards the goal of alignment. This fits into a larger theme that EA needs to get close to power to have real influence.

[See also the large donations EA has made to OpenAI & Anthropic.]

Whether this strategy has paid off... it is too early to tell.

What has become apparent is that the large AI labs & proximity to power have had a strong corrupting influence on EA epistemics and culture.

  • Many people in EA now think nothing of being paid Bay Area programmer salaries for research or nonprofit jobs.
  • There has been a huge influx of MBA blabber being thrown around. Bizarrely, EA funds are often giving huge grants to for-profit organizations for which it is very unclear whether they're really EA-aligned in the long term or just paying lip service. It is highly questionable that EA should be trying to do venture capitalism in the first place.
  • There is a questionable trend to equate ML skills & prestige within capabilities work with the ability to do alignment work.
  • For various political reasons there has been an attempt to put x-risk AI safety on a continuum with more mundane AI concerns, like AI saying bad words. This means there is lots of 'alignment research' that is at best irrelevant and at worst a form of insidious safetywashing.

The influx of money and professionalization has not been entirely bad. Early EA suffered much more from virtue-signalling spirals and analysis paralysis. Current EA is much more professional, largely for the better.

The canonical examples are NP problems.

Another interesting class is problems that are easy to generate but hard to verify.

John Wentworth told me the following delightfully simple example: generating a Turing machine program that halts is easy; verifying that an arbitrary TM program halts is undecidable.
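
A toy sketch of that asymmetry (the function names are made up for illustration, and `halts` is deliberately left unimplemented): producing a program that halts by construction takes one line, whereas a general verifier would let you build the classic paradoxical program, which is why none can exist.

```python
import inspect

def generate_halting_program(n: int) -> str:
    # Easy direction: this program halts by construction.
    return f"for _ in range({n}): pass"

def halts(source: str) -> bool:
    """Hypothetical general verifier: decides whether `source` halts."""
    raise NotImplementedError  # cannot exist in general

def paradox():
    # If halts() existed, paradox() would halt exactly when halts() says it
    # doesn't: the standard diagonal argument for undecidability.
    if halts(inspect.getsource(paradox)):
        while True:
            pass

print(generate_halting_program(10))  # the easy direction works fine
```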
