Logan Zoellner

Comments

The EMH Aten't Dead
I'm still curious if you would be willing to bet against a fund run exclusively by founders vs the S&P 500.

Those do underperform the S&P 500.

Oh yeah, I definitely agree that mutual funds are terrible. Pretty sure they're optimizing for management fees, though, not to actually outperform the market.


I'm still curious if you would be willing to bet against a fund run exclusively by founders vs the S&P 500. Saying the management fee for such a fund would be ridiculously high seems like a reasonable objection though.

For that matter, would you be willing to bet against SpaceX vs the S&P 500?

The EMH Aten't Dead
No, you would only assume that if you value the capacity of that founder to work at zero. Successful founders have skill at managing companies that is distinct from having access to private information.

Care to elucidate the difference between "skilled at managing companies" and "skilled at investing"? Do you really claim that if I restricted the same set of people to buying/selling publicly tradable assets they would underperform the S&P 500?

The EMH Aten't Dead
Plenty of Venture Capitalists underperform the market. Saying that every one of them beats the market is not based on real data.

I didn't say every Venture Capitalist beats the market. Venture Capital in particular seems like a hobby for people who are already rich. I said every founder of a $1B startup beat the market.

I propose the following bet: take any founder of a $1B startup that you please, strip them of all of their wealth, and give them $1M cash. What percent of them do you think would see their net worth grow by more than the S&P 500 over the next 10 years? If the EMH is true, the answer should be 50%. Would you really be willing to bet that 50% of them will underperform the market?
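To make that 50% figure concrete: under an EMH-style null hypothesis, a founder's portfolio is just the market plus symmetric zero-mean noise, so roughly half of them should end up ahead of the index. A minimal simulation sketch of that null (the 7% market return and 20% noise level are illustrative assumptions, not data):

```python
import random

def fraction_beating_market(n_founders=10_000, years=10,
                            market_mu=0.07, noise_sigma=0.20):
    # EMH-style null: each year the founder's log-return equals the market's
    # (market_mu) plus symmetric zero-mean noise, so ~half beat the index.
    wins = 0
    for _ in range(n_founders):
        founder = sum(market_mu + random.gauss(0.0, noise_sigma)
                      for _ in range(years))
        market = market_mu * years
        if founder > market:
            wins += 1
    return wins / n_founders

print(fraction_beating_market())  # hovers around 0.50
```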

The EMH Aten't Dead
Private information should be very hard to come by; it is not something that can be learned in a few minutes from an internet search.

I think we have different definitions of private information.

I have private information if I disagree with the substantial majority of people, even if everything I know is in principle freely available. The market is trading on the consensus expectation of the future. If that consensus is wrong and I know so, I have private information.

Specifically, when Tesla was trading at $600 or so, it was publicly available that they were building cars in a way that no other company could, but the public consensus was not that they were therefore the most valuable car company in the world.

Similarly, SpaceX is currently valued at $44B according to the public consensus. But I would be willing to bet a substantial sum of money that they are worth 5-10x that and people just haven't fully grasped the implications of Starlink and Starship.

When you think about private information this way, in order to have private information all you have to do is:

1) Disagree with the general consensus

2) Be right

Incidentally, those are precisely the skills that rationality is training you for. Most people aren't optimizing for the truth, they're optimizing for fitting in with their peers.


To me it doesn't look trivial nor easy at all: there are orders of magnitude more intelligent people than rich intelligent people.

Very few intelligent people are optimizing for "make as much money as possible". A trivial example of this: almost anyone working in academia could get a massive pay raise by switching to private industry. In addition, people can be very intelligent without being rational, so even if they claim to be optimizing for wealth they might not be doing a very good job of it. There are hordes of very intelligent people who are goldbugs or young earth creationists or global warming deniers. Why should we expect these people to behave rationally when it comes to financial self-interest when they so blatantly fail to do so in other domains?

I'm not even sure I buy the idea that there are more intelligent people than rich people. The 90th percentile for wealth in the USA is north of $1M. Going by the "MENSA" definition of highly intelligent, only 2% of people qualify. That means there are 5x as many millionaires as geniuses.
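The arithmetic behind that 5x, spelled out with the rough shares quoted above:

```python
millionaire_share = 0.10  # top 10% of US households, i.e. wealth above the 90th percentile
genius_share = 0.02       # MENSA-style cutoff: 98th percentile of IQ, i.e. top 2%
print(millionaire_share / genius_share)  # -> 5.0, roughly 5x as many millionaires as geniuses
```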

The EMH Aten't Dead

I think you're understating the amount of private information available to anyone with a reasonable level of intelligence. If you have a decent level of curiosity, chances are that you know some things that the rest of the world hasn't "caught onto" yet. For example, most fans of Tesla probably realized that EVs are going to kill ICEs and that Tesla is at least 4 years ahead of anyone else in terms of building EVs long before the sudden rise in Tesla stock in Jan 2020. Similarly, people who nerd out about epidemics predicted the scale of COVID-19 before the general public.

The extreme example of this is Venture Capital. People who are a bit "weird" and follow their hunches routinely start companies worth millions or billions of dollars. Every single one of them "beat the market" by tapping private information.

None of this invalidates the EMH (which as you pointed out is unfalsifiable). The key is figuring out how to take your personal unique insights and translate them into meaningful investments (with reasonable amounts of leverage and appropriate stop-losses). Of course, the easier it is to trade something, the more likely someone has "already had that idea", so predicting the S&P500 is harder than predicting an individual stock. But starting your own company is a power move so difficult that it's virtually unbeatable.

AI Boxing for Hardware-bound agents (aka the China alignment problem)
You are still being stupid, because you are ignoring effective tools and making the problem needlessly harder for yourself.

I think this is precisely where we disagree. I believe that we do not have effective tools for writing utility functions and we do have effective tools for designing at least one Nash Equilibrium that preserves human value, namely:

1) All entities have the right to hold and express their own values freely

2) All entities have the right to engage in positive-sum trades with other entities

3) Violence is anathema.

Some more about why I think humans are bad at writing utility functions:

I am extremely skeptical about anything of the form "we will define a utility function that encodes human values". Machine learning is really good at misinterpreting utility functions written by humans. I think this problem will only get worse with a super-intelligent AI.

I am more optimistic about goals of the form "Learn to ask what humans want". But I still think these will fail eventually. There are lots of questions even ardent utilitarians would have difficulty answering. For example, "Torture 1 person or give 3^^^3 people a slight headache?".

I'm not saying all efforts to design friendly AIs are pointless, or that we should willingly release paperclip maximizers on the world. Rather, I believe we boost our chances of preserving human existence and values by encouraging a multi-polar world with lots of competing (but non-violent) AIs. The competing plan of "don't create AI until we have designed the perfect utility function and hope that our AI is the dominant one" seems like it has a much higher risk of failure, especially in a world where other people will also be developing AI.

Importantly, we have the technology to deploy "build a world where people are mostly free and non-violent" today, and I don't think we have the technology to "design a utility function that is robust against misinterpretation by a recursively improving AI".


One additional aside

Suppose the AI has developed the tech to upload a human mind into a virtual paradise, and is deciding whether to do it or not.

I must confess the goals of this post are more modest than this. The Nash equilibrium I described is one that preserves human existence and values as they are; it does nothing in the domain of creating a virtual paradise where humans will enjoy infinite pleasure (and in fact actively avoids forcing this on people).

I suspect some people will try to build AIs that grant them infinite pleasure, and I do not begrudge them this (so long as they do so in a way that respects the rights of others to choose freely). Humans will fall into many camps: those who just want to be left alone, those who wish to pursue knowledge, those who wish to enjoy paradise. I want to build a world where all of those groups can co-exist without wiping out one another or being wiped out by a malevolent AI.

What does a positive outcome without alignment look like?
You clearly have some sort of grudge against or dislike of China. In the face of a pandemic, they want basically what we want: to stop it spreading and someone else to blame it on. Chinese people are not inherently evil.

I certainly don't think the Chinese are inherently evil. Rather I think that from the view of an American in the 1990's a world dominated by a totalitarian China which engages in routine genocide and bans freedom of expression would be a "negative outcome to the rise of China".

This is a description of Nash equilibria in human society. Their stability depends on humans having human values and capabilities.

Yes. Exactly. We should be trying to find a Nash equilibrium in which humans are still alive (and ideally relatively free to pursue their values) after the singularity. I suspect such a Nash equilibrium involves multiple AIs competing with strong norms against violence and a focus on positive-sum trades.

But I don't see why any of the Nash equilibria between superintelligences will be friendly to humans.

This is precisely what we need to engineer! Unless your claim is that there is no Nash equilibrium in which humanity survives, which seems like a fairly hopeless standpoint to assume. If you are correct, we all die. If you are wrong, we abandon our only hope of survival.

Why would one AI start shooting because the other AI did an action that benefited both equally?

Consider deep seabed mining. I would estimate the percent of humans who seriously care about (or are even aware of the existence of) the sponges living at the bottom of the deep ocean at <1%. Moreover, there are substantial positive economic gains that could potentially be split among multiple nations from mining deep sea nodules. Nonetheless, every attempt to legalize deep sea mining has run into a hopeless tangle of legal restrictions because most countries view blocking their rivals as more useful than actually mining the deep sea.

If you have several AI's and one of them cares about humans, it might bargain for human survival with the others. But that implies some human managed to do some amount of alignment.

I would hope that some AIs have an interest in preserving humans for the same reason some humans care about protecting life on the deep seabed, but I don't think this is a necessary condition for ensuring humanity's survival in a post-singularity world. We should be trying to establish a Nash equilibrium in which even insignificant actors have their values and existence preserved.

My point is, I'm not sure that aligned AI (in the narrow technical sense of coherently extrapolated values) is even a well-defined term. Nor do I think it is an outcome to the singularity we can easily engineer, since it requires us to both engineer such an AI and to make sure that it is the dominant AI in the post-singularity world.

AI Boxing for Hardware-bound agents (aka the China alignment problem)
A lot of the approaches to the "China alignment problem" rely on modifying the game theoretic position, given a fixed utility function, i.e. having weapons and threatening to use them. This only works against an opponent to which your weapons pose a real threat. If, 20 years after the start of Moof, the AIs can defend against all human weapons with ease, and can make any material goods using less raw materials and energy than the humans use, then the AIs lack a strong reason to keep us around.

If the AIs are a monolithic entity whose values are universally opposed to those of humans then, yes, we are doomed. But I don't think this has to be the case. If the post-singularity world consists of an ecosystem of AIs whose mutually competing interests cause them to balance one another and engage in positive-sum games, then humanity is preserved not because the AIs fear us, but because that is the "norm of behavior" for agents in their society.

Yes, it is scary to imagine a future where humans are no longer at the helm, but I think it is possible to build a future where our values are tolerated and allowed to continue to exist.

By contrast, I am not optimistic about attempts to "extrapolate" human values to an AI capable of acts like turning the entire world into paperclips. Humans are greedy, superstitious and naive. Hopefully our AI descendants will be our better angels and build a world better than any that we can imagine.

AI Boxing for Hardware-bound agents (aka the China alignment problem)

I really like this response! We are thinking about some of the same math.

Some minor quibbles, and again I think "years", not "weeks", is an appropriate time-frame for "first human AI -> AI surpasses all humans":

Therefore, in a hardware limited situation, your AI will have been training for about 2 years. So if your AI takes 20 subjective years to train, it is running at 10x human speed. If the AI development process involved trying 100 variations and then picking the one that works best, then your AI can run at 1000x human speed.

A three-year-old child does not take 20 subjective years to train. Even a 20-year-old adult human does not take 20 subjective years to train. We spend an awful lot of time sleeping, watching TV, etc. I doubt literally every second of that is mandatory for reaching the intelligence of an average adult human being.
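For reference, the arithmetic in the quoted argument is just the ratio of subjective to wall-clock training time, multiplied by the number of variants trained in parallel; the disagreement is over the "20 subjective years" input. A quick sketch (the 5-year figure below is an illustrative lower guess, not a claim):

```python
def implied_speedup(subjective_years, wallclock_years=2.0, parallel_variants=1):
    # Speed-up implied by the quoted argument: subjective years of experience
    # per wall-clock year of training, scaled by how many variants run at once.
    return (subjective_years / wallclock_years) * parallel_variants

print(implied_speedup(20))                          # 10x, the quoted figure
print(implied_speedup(20, parallel_variants=100))   # 1000x, the quoted figure
print(implied_speedup(5, parallel_variants=100))    # 250x if far fewer subjective years suffice
```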

At the moment, current supercomputers seem to have around enough compute to simulate every synapse in a human brain with floating point arithmetic, in real time (based on 10^14 synapses at 100 Hz, 10^17 FLOPs). I doubt using accurate serial floating point operations to simulate noisy analogue neurons, as arranged by evolution, is anywhere near optimal.

I think just the opposite. A synapse is not a FLOP. My estimate is closer to 10^19. Moreover, most of the top slots in the TOP500 list are vanity projects by governments or used for stuff like simulating nuclear explosions.
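Spelling out where the two estimates diverge: both start from ~10^14 synapses firing at ~100 Hz; the difference is how many floating point operations you charge per synaptic event. The per-event costs below are illustrative assumptions chosen to reproduce the two figures, not measured values:

```python
def brain_flops_estimate(synapses=1e14, firing_rate_hz=100, flops_per_event=1):
    # Back-of-envelope: synaptic events per second, times the FLOPs you
    # assume it takes to simulate each event.
    return synapses * firing_rate_hz * flops_per_event

print(f"{brain_flops_estimate(flops_per_event=10):.0e}")    # ~1e17, the quoted figure
print(f"{brain_flops_estimate(flops_per_event=1000):.0e}")  # ~1e19, if a synapse is far more than a FLOP
```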

Although, to be fair, once this curve collides with Moore's law, that 2nd objection will no longer be true.
