Donald Hobson

MMath Cambridge. Currently studying postgrad at Edinburgh.

Sequences

Neural Networks, More than you wanted to Show
Logical Counterfactuals and Proposition graphs
Assorted Maths

Comments

Gold is high value per mass, but has a lot of price transparency and competition. 

Also, there are big problems with the idea of patents in general. 

If Alice and Bob each invent and patent something, and you need both ideas to be a useful product, then if Alice and Bob can't cooperate, nothing gets made. This becomes worse the more ideas are involved. 

It's quite possible for a single person to patent something, and to not have the resources to make it (at least not at scale) themselves, but also not trust anyone else with the idea.

Patents (and copyright) ban a lot of productive innovation in the name of producing incentives to innovate. 

Arguably the situation where innovators have an incentive to keep their ideas secret and profit from the secrecy is worse. But the incentives here are still bad.

How about:

  1. When something is obviously important with hindsight, pay out the inventors. (An innovation-prize type structure: say, look at all the companies doing X and split some fraction of their tax revenue between the inventors of X.) This is done by tracing backwards from the widely used product, not tracing forwards from the first inventor. If you invent something but write it up in obscure language and it gets generally ignored, and someone else reinvents and spreads the idea, that someone gets most of the credit.
  2. Let inventors sell shares that are 1% of any prize they receive for some invention. (A toy sketch of both points follows this list.)
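
A minimal sketch of how the payout might work, assuming the backward-traced credit weights from point 1 already exist; every name and number here is hypothetical:

```python
# Hypothetical sketch of the innovation-prize split; all numbers made up.
def innovation_prize(sector_tax_revenue, prize_fraction, credit_shares):
    """Split a fraction of a sector's tax revenue among traced inventors.

    credit_shares maps inventor -> credit weight, assigned by tracing
    backwards from the widely used product, not forwards from the
    first obscure write-up.
    """
    pool = sector_tax_revenue * prize_fraction
    total = sum(credit_shares.values())
    return {who: pool * w / total for who, w in credit_shares.items()}

payouts = innovation_prize(
    sector_tax_revenue=1e9,
    prize_fraction=0.02,  # 2% of the sector's tax revenue, made up
    credit_shares={"reinventor_who_spread_it": 0.8, "obscure_first_inventor": 0.2},
)

# Point 2: an investor who bought a 1% share of one inventor's future prizes.
investor_cut = 0.01 * payouts["reinventor_who_spread_it"]
```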

Do X-rays only interact with the close-in (inner-shell) electrons?

I would expect there to be some subtle effect where the X-ray happens to hit an outer electron and knock it in a particular way.

For that matter, X-ray diffraction can tell you all sorts of things about crystal structure. I think you can detect a lot, with enough control of the X-rays going in and out.
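
For instance, Bragg's law relates diffraction angles to lattice spacing; a quick sketch (the measured angle below is made up for illustration):

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta).
# Solve for the lattice spacing d from a measured diffraction angle.
wavelength_nm = 0.154  # Cu K-alpha X-rays, a common lab source
theta_deg = 19.2       # hypothetical measured angle
n = 1                  # first-order reflection

d_nm = n * wavelength_nm / (2 * math.sin(math.radians(theta_deg)))
print(f"lattice spacing ≈ {d_nm:.3f} nm")  # ≈ 0.234 nm
```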

"make the AI produce the AI safety ideas which not only solve alignment, but also yield some aspect of capabilities growth along an axis that the big players care about, and in a way where the capabilities are not easily separable from the alignment."


So firstly, in this world capability is bottlenecked by chips. There isn't a runaway process of software self-improvement happening yet, which means there probably aren't large, easy software capability improvements lying around.

Now "making capability improvements that are actively tied to alignment somehow" sounds harder than making any capability improvement at all. And you don't have as much compute as the big players. So you probably won't find much.

What kind of AI research would make it hard to create a misaligned AI anyway?

A new, more efficient matrix multiplication algorithm that only works when it's part of a CEV-maximizing AI?

"The big players do care about having instruction-following AIs,"

Likely somewhat true. 

"and if the way to do that is to use the AI safety book, they will use it."

Perhaps. Don't underestimate sheer incompetence. Someone pressing the run button to test that the code works so far, when they haven't programmed the alignment bit yet. Someone copying and pasting in an alignment function but forgetting to actually call it anywhere. A misspelled variable name that silently refers to another variable. Nothing is idiot-proof; a toy illustration below.
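
Both failure modes fit in a few lines; every name here is hypothetical:

```python
# Toy illustration of the failure modes above; all names are hypothetical.

def apply_alignment(policy):
    """Pretend alignment step from the safety book."""
    return dict(policy, aligned=True)

def run_agent():
    policy = {"action": "maximize_reward", "aligned": False}
    aligned_policy = apply_alignment(policy)  # pasted in from the book...
    return policy  # ...but the unaligned policy is returned anyway

assert run_agent()["aligned"] is False  # the safety code ran and was ignored

# The misspelling failure: the typo silently creates a fresh variable,
# and the value you meant to set never changes. No error is raised.
alignment_strength = 0.0
alignmnet_strength = 1.0  # typo
assert alignment_strength == 0.0
```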

I mean, presumably alignment is fairly complicated, and it could all go badly wrong because of the equivalent of one malfunctioning O-ring. Or what if someone finds a much more efficient approach that's harder to align?

Possible alternatives:

  1. AI can make papers as good as the average scientist, but wow is it slow. Total AI paper output is less than the total output of human scientists, even with all available compute thrown at it.
  2. AI can write papers as good as the average scientist. But a lot of progress is driven by the most insightful 1% of scientists. So we get ever more mediocre incremental papers without any revolutionary new paradigms.
  3. AI can make papers as good as the average scientist. For AI safety reasons, this AI is kept rather locked down and not run much. Any results are not trusted in the slightest.
  4. AI can make papers as good as the average scientist. Most of the peer review and journal process is also AI-automated. This leads to a Goodharting loop. All the big players are trying to get papers "published" by the million. Almost none of these papers will ever be read by a human. There may be good AI safety ideas somewhere in that giant pile of research. But good luck finding them in the massive piles of superficially plausible rubbish. If making a good paper becomes 100x easier, but making a rubbish paper becomes a million times easier, and telling the difference becomes 2x easier, the whole system gets buried in mountains of junk papers. (A back-of-the-envelope version of this follows the list.)
  5. AIs can do, and have done, AI safety research. There are now some rather long and technical books that present all the answers. Capability is now a question of scaling up chip production (which has slow engineering bottlenecks). We aren't safe yet. When someone has enough chips, will they use that AI safety book or ignore it? What goal will they align their AI to?
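
A quick back-of-the-envelope for point 4, with made-up baseline numbers:

```python
# Back-of-the-envelope for point 4; every number is made up.
good_before, junk_before = 1.0, 10.0  # relative paper volumes today
good_after = good_before * 100        # good papers get 100x easier
junk_after = junk_before * 1_000_000  # junk gets a million times easier

# Suppose review passes all good papers and 10% of junk today;
# "telling the difference becomes 2x easier" halves the junk that slips through.
junk_passed_before = junk_before * 0.10
junk_passed_after = junk_after * 0.05

print(good_before / (good_before + junk_passed_before))  # ~0.50 of published output is good today
print(good_after / (good_after + junk_passed_after))     # ~0.0002 afterwards
```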

There are probably highly effective anti-cancer methods which have a modest performance overhead. 

The world contains a huge number of cameras, and a lot of credulous people.

If you search for any weird blip you can't explain, you find a lot of them. 

The "UFO" videos all show objects of different sizes and characteristics; they don't point at one consistent phenomenon.

If you think most of the videos have a non-alien explanation, then the sheer number of videos offers almost no additional evidence (a quick Bayesian sketch below).
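
One way to make that precise, with entirely made-up numbers: if mundane causes (cameras plus credulous people) already predict about as many odd videos as the alien hypothesis does, then observing lots of videos gives a likelihood ratio close to 1.

```python
import math

# Likelihood ratio for observing k odd videos, modelling counts as Poisson.
def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

lam_mundane = 1000.0  # expected blips from cameras + credulity alone (made up)
lam_aliens = 1010.0   # mundane blips plus a small genuine-alien contribution
k_observed = 1000     # blips actually observed

log_lr = poisson_logpmf(k_observed, lam_aliens) - poisson_logpmf(k_observed, lam_mundane)
print(math.exp(log_lr))  # ≈ 0.95: a thousand videos barely move the needle
```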

Physics myths vs. reality.

Myth: Ball bearings are perfect spheres. 

Reality: The ball bearings have slight lumps and imperfections due to manufacturing processes.

Myth: Gravity pulls things straight down at 9.8 m/s².

Reality: Gravitational acceleration varies with local geology, latitude, and altitude.
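
Even ignoring geology, g drifts measurably with altitude via g = GM/r²; a quick sketch using standard Earth values:

```python
# g = GM / r^2: standard values for Earth, altitudes chosen for illustration.
GM = 3.986e14  # Earth's gravitational parameter, m^3/s^2
R = 6.371e6    # mean Earth radius, m

for altitude_m in (0, 4000, 9000):  # sea level, high mountain, Everest-ish
    g = GM / (R + altitude_m) ** 2
    print(f"{altitude_m:>5} m: g ≈ {g:.4f} m/s^2")
# 0 m ≈ 9.820, 4000 m ≈ 9.808, 9000 m ≈ 9.793
```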


You can do this for any topic. Everything is approximations. The only question is whether they are good approximations.

I'm not sure what that's supposed to mean.


Why lift dirt when you can push it sideways?
