Hoagy

Comments

Big picture of phasic dopamine

Cheers for the post, I find the whole series fascinating.

One thing I was particularly curious about is how these 'proposals' are made. Do you have a picture of what kind of embedding is used to present a potential action? 

For example, is a proposal encoded in the activations of a set of neurons that are isomorphic to the motor neurons, so that it could propose tightening a set of finger muscles through specific neurons? Or is the embedding jointly learned between the two through some large unstructured set of connections, or a smaller latent space, or something completely different?

Testing The Natural Abstraction Hypothesis: Project Intro

Another little update: the speed issue is solved for now by using SymPy's Fortran wrappers for the derivative calculations - calculating the SVD isn't (yet?) the bottleneck. I can now quickly get results from 1,000+ step simulations of hundreds of particles.
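
(For anyone curious about the setup, this is roughly what I mean - a minimal sketch using SymPy's autowrap with the f2py backend on a toy one-particle spring step. The expressions and names are illustrative rather than the actual project code, and it needs a Fortran compiler such as gfortran installed.)

```python
import sympy as sp
from sympy.utilities.autowrap import autowrap

# Toy example: one-step Euler update for a single particle on a spring.
x, v, k, dt = sp.symbols('x v k dt')
next_state = sp.Matrix([x + v * dt, v - k * x * dt])

# Symbolic Jacobian of the one-step update with respect to the current state.
jac_expr = next_state.jacobian(sp.Matrix([x, v]))

# Compile to Fortran via f2py; the compiled callable is far faster than
# evaluating the SymPy expression with subs/evalf at every frame.
jac_func = autowrap(jac_expr, args=(x, v, k, dt), backend='f2py')

print(jac_func(1.0, 0.0, 2.0, 0.01))
```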

Unfortunately, even for the pretty stable configuration below, the values are indeed exploding. I need to go back through the program and double-check the logic, but I don't think the system should be chaotic; if anything I would expect the values to hit zero.

It might be that there's some kind of quasi-chaotic behaviour where the residual motion of the particles is extremely sensitive to the initial conditions, even as the macro state is very stable and has a well-defined derivative with respect to the initial conditions. Not yet sure how to deal with this.
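
One way I might try to tell genuine sensitivity from a numerical blow-up is to track the growth rate of the largest singular value of the accumulated Jacobian as the number of steps increases - essentially a finite-time Lyapunov-exponent estimate. A minimal sketch, assuming the per-step Jacobians are already available as numpy arrays (function name hypothetical):

```python
import numpy as np

def largest_lyapunov_estimate(step_jacobians, dt=1.0):
    """Estimate the largest finite-time Lyapunov exponent from per-step Jacobians.

    step_jacobians: list of (d, d) arrays, J_t = d(state_{t+1}) / d(state_t).
    A clearly positive result suggests real sensitivity to initial conditions;
    a result near zero suggests the explosion is numerical rather than dynamical.
    """
    d = step_jacobians[0].shape[0]
    product = np.eye(d)
    log_scale = 0.0
    for J in step_jacobians:
        product = J @ product
        # Rescale as we go so the raw product never overflows,
        # while keeping track of the accumulated log-magnitude.
        norm = np.linalg.norm(product)
        log_scale += np.log(norm)
        product /= norm
    sigma_max = np.linalg.svd(product, compute_uv=False)[0]
    return (log_scale + np.log(sigma_max)) / (len(step_jacobians) * dt)
```

The incremental rescaling also makes it easy to check whether the exploding values are just overflow in the raw matrix product.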

Wheels are the best object I've been able to make so far - they bounce against each other quite nicely. Video at imgur.com/QxddkZK

Testing The Natural Abstraction Hypothesis: Project Intro

It's been a while, but I thought the idea was interesting and had a go at implementing it. Houdini was too much for my laptop, let alone my programming skills, but I found a simple particle simulation in pygame which shows the basics - see below.

[Plot: exponents of the Jacobian of a 5-particle, 200-step simulation, with groups of 3 and 2 connected by springs]

The planned next step is to work on the run-time speed (even this took a couple of minutes to run - calculating the frame-to-frame Jacobian is a pain, probably more so than necessary) and then add some utilities for creating larger, densely connected objects. I'll write this up as a fuller post once done.
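
In case it's useful to anyone else trying this, the core calculation is: take the Jacobian of each frame-to-frame update, chain them together with the chain rule, and read the exponents off the singular values of the product. Below is a rough sketch using finite differences for the per-frame Jacobians (just the simplest illustrative version, not necessarily how it should be done), assuming a hypothetical `step(state) -> next_state` function over a flat numpy array; for long runs you'd want to rescale as you go to avoid overflow.

```python
import numpy as np

def frame_jacobian(step, state, eps=1e-6):
    """Finite-difference Jacobian of a single simulation step at `state`."""
    state = np.asarray(state, dtype=float)
    base = step(state)
    J = np.empty((base.size, state.size))
    for i in range(state.size):
        bumped = state.copy()
        bumped[i] += eps
        J[:, i] = (step(bumped) - base) / eps
    return J

def trajectory_exponents(step, state, n_steps):
    """Per-step log singular values of the end-to-end Jacobian over n_steps."""
    state = np.asarray(state, dtype=float)
    J_total = np.eye(state.size)
    for _ in range(n_steps):
        J_total = frame_jacobian(step, state) @ J_total
        state = step(state)
    sigmas = np.linalg.svd(J_total, compute_uv=False)
    return np.log(sigmas) / n_steps
```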

Curious if you've got any other uses for a set-up like this.

Testing The Natural Abstraction Hypothesis: Project Intro

Reading this after Steve Byrnes' posts on neuroscience suggests a potentially unfortunate view.

The general impression is that a lot of our general understanding of the world is carried in the neocortex, which runs a fairly uniform statistical algorithm, and the fact that humans converge on similar abstractions about the world could be explained by the statistical regularities of the world as discovered by this system. At the same time, the other parts of the brain have a huge variety of structures, with functions that are the product of evolution at a much more precise level, and the brain is directly exposed to, and working in response to, this higher level of complexity. Of course, that doesn't mean these systems can't be reliably compressed, and presumably they have structure of their own, but that structure may be very complex and not discoverable without high-definition measurement, so progress on values wouldn't follow easily from progress in understanding world-modelling abstractions.

This would suggest that successes in reliably measuring abstractions would be of greater use to general capability and world modelling than to understanding human values. It would also potentially give some scientific backing to the impression from introspection and philosophy that the core concepts of human values are particularly difficult to point at.

I guess one lesson would be to try to focus on the case where at least part of the complexity of a system's goal lives in a subsystem directly in contact with the cognitive system, rather than observed at a distance.

Also interested in helping on this - if there's modelling you'd want to outsource.

How can I bet on short timelines?

I have roughly similar beliefs and have thought about the same question before.

The hope is that you could make more specific bets based on trends which are not currently clear to the world as a whole but will become apparent relatively soon. For example, I think I remember Gwern asking whether, if the returns to scaling larger NNs continue, Nvidia will become the most valuable company in the world as the power of truly massive models/training volumes becomes apparent and they're in prime position to profit.

The problem is that shares of companies at the frontier of AI development are already subject to a lot of hype driven by somewhat similar beliefs (e.g. anyone who is a major blockchain believer, or a big AI believer but in a purely positive sense). These stocks are therefore already significantly overvalued by traditional metrics, and it's not obvious whether NN progress is enough to generate major share-price growth - at least not with high enough probability to overcome the presumably very high discount rates that you have - even within the next 10 years (e.g. Nvidia's market cap is $360B, so even becoming the largest company in the world only implies a ~6x price increase, and it's hard to give that more than 15% credence in the next decade).
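
For a rough sense of the ceiling here (assuming the largest company at the time, Apple, was worth roughly $2T):

$$ \frac{2{,}000\ \text{B}}{360\ \text{B}} \approx 5.6 $$

which is roughly where the ~6x figure comes from.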

It seems that if you believe specifically in short timelines then there may be companies that are particularly likely to succeed given the importance of massive models (if indeed that's the way you expect things to play out). At the moment, though, most of those in a position to take advantage seem either to be embedded in larger companies (DeepMind, big tech AI divisions) or just not public (OpenAI, most startups).

Ideally, I guess, there would be a venture capital fund you could put money into that invests in the most promising companies betting on being in a position to take commercial advantage of ML breakthroughs. I'm not aware of any such fund, but I'd certainly be interested if one exists or is being created.

Hoagy's Shortform

Question about error-correcting codes that's probably in the literature but I don't seem to be able to find the right search terms:

How can we apply error-correcting codes to logical *algorithms*, as well as bit streams?

If we want to check that a bit stream is accurate, we know how to do this with manageable overhead - but what happens if there's an error in the hardware that does the checking? I can't see how to construct a system with no single point of failure - you can run the correction algorithm multiple times, but how do you compare the results without ending up back at a single point of failure?
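
To illustrate the single point of failure I mean - a toy sketch of triple modular redundancy, where the majority vote is itself assumed to run on reliable hardware (all names and the failure model here are made up for illustration):

```python
import random

def unreliable(f, p_fail=0.01):
    """Wrap an integer-valued computation so it sometimes returns a corrupted result."""
    def wrapped(x):
        result = f(x)
        if random.random() < p_fail:
            return result ^ 1  # flip the low bit to simulate a hardware fault
        return result
    return wrapped

def triple_modular_redundancy(f, x):
    """Run three unreliable copies of f and take a majority vote.

    The vote itself is assumed to run on perfect hardware - which is exactly
    the problem: if the comparator can also fail, you need to vote on the
    voters, and then something has to compare those results, and so on.
    """
    results = [unreliable(f)(x) for _ in range(3)]
    return max(set(results), key=results.count)
```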

Anyone know any relevant papers or got a cool solution?

I'm interested because of the relevance to the stability of computronium-based futures!

Developmental Stages of GPTs

I agree that this is the biggest concern with these models, and the GPT-n series running out of steam wouldn't be a huge relief. It looks likely that we'll have the first human-scale (in terms of parameters) NNs before 2026 - Metaculus, 81% as of 13.08.2020.

Does anybody know of any work that analyses the rate at which, once the first NN crosses the n-parameter barrier, other architectures are also tried at that scale? If no-one's done it yet, I'll have a look at scraping the data from Papers With Code's databases on e.g. ImageNet models; that might also be able to answer your question about how many have been tried at >100B.
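
If I do get around to it, the analysis itself should be simple once the data is in a table of (model, architecture, parameter count, date). A sketch of what I have in mind, with entirely hypothetical column names and placeholder rows rather than real Papers With Code data:

```python
import pandas as pd

# Placeholder data - the real export would need mapping into this shape.
models = pd.DataFrame({
    "name": ["Model A", "Model B", "Model C"],
    "architecture": ["transformer", "mixture-of-experts", "transformer"],
    "n_params": [1.7e11, 2.3e11, 5.3e11],
    "date": pd.to_datetime(["2020-05-28", "2021-01-11", "2021-10-11"]),
})

def lag_to_scale(models, threshold=1e11):
    """Days between the first model above `threshold` params and each architecture reaching that scale."""
    big = models[models["n_params"] > threshold]
    first_crossing = big["date"].min()
    first_per_architecture = big.groupby("architecture")["date"].min()
    return (first_per_architecture - first_crossing).dt.days

print(lag_to_scale(models))
```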

Preparing for "The Talk" with AI projects

Hey Daniel, I don't have time for a proper reply right now, but I'm interested in talking about this at some point soon. I'm currently in the UK Civil Service and will be trying to speak to people in its Office for AI to get a feel for what's going on there, and perhaps plant some seeds of concern. I think some similar things apply.

Predicted Land Value Tax: a better tax than an unimproved land value tax

As I understand it, one of the biggest issues with a land value tax is that the existence of the tax instantly makes owning land much less desirable - the land's value is reduced by the net present value of all future taxation. This is obviously in some sense part of the plan, but it causes some pretty large sudden shifts in wealth - in particular away from anyone who has a mortgage, but also from homeowners in general.
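
As a rough sketch of the size of that effect, treating the tax as a perpetuity with constant annual amount T and discount rate r (illustrative numbers only):

$$ \text{NPV of future tax} = \sum_{t=1}^{\infty} \frac{T}{(1+r)^t} = \frac{T}{r} $$

So, for example, an annual tax of 1% of the land's value at a 5% discount rate knocks roughly 20% off the land's value the moment the policy is announced.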

Implementing it in a fair and politically acceptable way then seems to require either a far-off start date, a very slow taper-in, or a very large series of compensating handouts, and all of these are difficult for a government to implement given the time horizon of elections and a large, wealthy group who will be opposed, likely including people inside the governing party.

This isn't especially relevant to your variant, but if you're thinking about how to get to efficient taxation then it's a problem worth trying to solve :)

162 benefits of coronavirus

On the numbers from The Precipice - I think the point is that the next 100 years have an estimated 1/6 chance of extinction, but also contain the power to protect us from future harm and to facilitate the human race flourishing across the universe. Extrapolating the risk from the next 100 years to an expected 600-year lifespan, and using current population forecasts for the number of humans involved, therefore seems not in the spirit of his model.
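
For concreteness, the naive extrapolation being objected to would treat each century as carrying the same hazard, giving a cumulative risk of

$$ 1 - \left(\tfrac{5}{6}\right)^{6} \approx 0.67 $$

over six centuries, whereas Ord's framing is that the 1/6 is concentrated in the current century (the "Precipice"), with the risk expected to fall sharply if we get through it.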
