Self-Embedded Agent


Discussion with Eliezer Yudkowsky on AGI interventions

People disagree about the degree to which formal methods will be effective, or will arrive quickly enough. I'd like to point out that Paul Christiano, one of the best-known proponents of less formal thinking and a focus on existing ML methods, still has a very strong traditional math/CS background (i.e. Putnam Fellow, a series of very solid math/CS papers). His research methods and thinking are also very close to how theoretical physicists might think about problems.

Even a nontraditional thinker like EY did very well on math contests in his youth.

Discussion with Eliezer Yudkowsky on AGI interventions

I'm open to having a high-bandwidth double-crux talk about this. Would you be up for that?


I think 

  1. you are underestimating how much Very Smart Conventional People in Academia are Generically Smart and how much they know about philosophy/big picture/many different topics. 
  2. overestimating how novel some of the insights due to prominent people in the rationality community are, and how correlated believing and acting on Weirdo Beliefs is with the ability to find novel solutions to (technical) problems - i.e. the WeirdoPoints=g-factor belief prevalent in Rationalist circles. 
  3. underestimating how much better a world-class mathematician is than the average researcher, i.e. there is the proverbial 10x programmer. Depending on how one measures this, some of the top people might easily be >1000x.
  4. "By contrast, mathematical cognition is about exploring an already known domain. Maybe forecasting, especially mid-range political forecasting during times of change, comes closer to measuring the skill." This jumps out to me. The most famous mathematicians are famous precisely because they came up with novel domains of thought. Although good forecasting is an important skill and an obvious sign of intelligence & competence, it is not necessarily a sign of a highly creative researcher. Much of forecasting is about aggregating data and expert opinion; being "too creative" may even be a detriment. Similarly, many of the famous mathematical minds of the past century had rather naive political views; this is almost completely uncorrelated, even anti-correlated, with their ability to come up with novel solutions to technical problems.
  5. "test-of-fit trial project" also jumps out to me. Nobody has successfully aligned a general artificial intelligence. The field of AGI safety is in its infancy, and many people disagree on the right approach. It is absolutely laughable to me that in the scenario where, after much work, we get Terry Tao on board, some group of AI safety researchers (who?) would decide he's not "a good fit for the team", or even that the research time of existing AGI safety researchers is so valuable that they couldn't find the time to evaluate his output.
Discussion with Eliezer Yudkowsky on AGI interventions

I disagree. Predicting who will make the most progress on AI safety is hard. But the research is very close to existing mathematical/theoretical CS/theoretical physics/AI research. Getting the greatest mathematical minds on the planet to work on this problem seems like an obvious high EV bet. 

I might also add that Eliezer Yudkowsky, despite his many other contributions, has made only minor direct contributions to technical AI Alignment research. [His indirect contribution, by highlighting & popularising the work of others, has had high EV impact.]

Discussion with Eliezer Yudkowsky on AGI interventions

This seems noncrazy on reflection.

10 million dollars will probably have a very small impact on Terry Tao's decision to work on the problem.

OTOH, setting up an open invitation for all world-class mathematicians/physicists/theoretical computer scientists to work on AGI safety through some sort of sabbatical system may be very impactful.

Many academics, especially in theoretical areas where funding for even the very best can be scarce, would jump at the opportunity of a no-strings-attached sabbatical. The no-strings-attached part is crucial to my mind. Despite LW/Rationalist dogma equating IQ with weirdo-points, the vast majority of brilliant (mathematical) minds are fairly conventional - see Tao, Euler, Gauss.

EA cause area?

Discussion with Eliezer Yudkowsky on AGI interventions

There will be no partial credit on Humanity's AI Alignment Exam. I like that!

Discussion with Eliezer Yudkowsky on AGI interventions

[I am a total noob on history of deep learning & AI] 

From a cursory glance I find Schmidhuber's take convincing. 

He argues that the (vast) majority of conceptual & theoretical advances in deep learning have been understood decades before - often by Schmidhuber and his collaborators. 

Moreover, he argues that many of the current leaders in the field improperly credit previous discoveries.

It is unfortunate that the above poster is anonymous. It is very clear to me that there is a big difference between theoretical & conceptual advances and the great recent practical advances due to stacking MOAR layers. 

It is possible that the remaining steps to AGI consist of just stacking MOAR layers: compute + data + comparatively small advances in data/compute efficiency + something something RL Metalearning will produce an AGI. Certainly, not all problems can be solved [fast] by incremental advances and/or iterating on previous attempts. Some can. It may be the unfortunate reality that creating [but not understanding!] AGI is one of them.

Successful Mentoring on Parenting, Arranged Through LessWrong

Sorry to be a buzzkill, but what are you trying to achieve here?

It is my impression from the literature that, once genetic confounders are controlled for, the long-term effect of parents on cognition [and a host of other factors] is 0. Why spend so much effort if the net effect is nil?

All Possible Views About Humanity's Future Are Wild

We can't build Von Neumann probes in the real world - though we can in the digital world. 
What kind of significant (!) new information have we obtained about the feasibility of galaxy-wide colonization through Von Neumann probes?

Self-Embedded Agent's Shortform

Failure of convergence to social optimum in high frequency trading with technological speed-up

Possible market failures in high-frequency trading are of course a hot topic recently, with various widely publicized Flash Crashes. There have been loud calls to rein in high-frequency trading, and several regulatory bodies are moving towards heavier regulation. But it is not immediately clear whether or not high-frequency trading firms are a net cost to society. For instance, it is sometimes argued that high-frequency trading firms are simply very fast market makers. One would want a precise analytical argument for a market failure.

There are two features that make this kind of market failure work: the first is a first-mover advantage in arbitrage; the second is the possibility for high-frequency trading firms to invest in capital, technology, or labor that increases their effective trading speed.

The argument runs as follows.

Suppose we have a market without any fast traders. There are many arbitrage opportunities open to very fast traders. The resulting inaccurate pricing inflicts a deadweight loss D on total production P; the net production N equals P - D. Now a group of fast traders enters the market. At first they provide arbitrage, which gives more accurate pricing, and net production rises to N = P.

Fast traders gain control of a part S of the total production. However, there is a first-mover advantage in arbitrage, so any firm will want to invest in technology, labor, and capital that speed up its ability to engage in arbitrage. This is a completely unbounded process, meaning that trading firms are incentivized to trade faster and faster, beyond what is beneficial to real production - a race-to-the-bottom phenomenon. In the end a part A of S is invested in 'completely useless' technology, capital, and labor. The new net production is N = P - A, and the market does not achieve a locally maximal Pareto-efficient outcome.

As an example, suppose the real economy R consults market prices every minute. Trading firms invest in technology, labor, and capital, and eventually reach perfect arbitrage within one minute of any real market movement or consult (so this includes any new market information, consults by real firms, etc.). At this point the real economy R clearly benefits from more accurate pricing. But any one trading firm is incentivized to be faster than the competition. Suppose that by investing further in technology and capital, trading firms can achieve perfect arbitrage within 10 microseconds of any real market movement. This clearly does not help the real economy R achieve any higher production at all, since R does not consult the market more than once a minute, but there is a large attached cost.
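The accounting above can be sketched numerically. In the snippet below, all numbers (P, D, the sunk cost, and the one-minute consult interval) are illustrative assumptions, not calibrated to any real market; the point is only that once arbitrage latency beats the real economy's consultation interval, further speed investment subtracts from net production without improving effective pricing.

```python
# Toy model of the race-to-the-bottom argument. All parameters are
# illustrative assumptions, not empirical values.

P = 100.0                 # total production
D = 10.0                  # deadweight loss from inaccurate pricing (no fast traders)
CONSULT_INTERVAL = 60.0   # real economy consults prices once per minute (seconds)

def net_production(arb_latency, speed_investment):
    """Net production N, given the fastest firm's arbitrage latency (seconds)
    and the total cost A sunk into speed-increasing technology/labor/capital."""
    # Pricing is accurate enough once arbitrage beats the consult interval;
    # being faster than that yields no further benefit to the real economy.
    pricing_loss = 0.0 if arb_latency <= CONSULT_INTERVAL else D
    return P - pricing_loss - speed_investment

# No fast traders: N = P - D
print(net_production(arb_latency=float("inf"), speed_investment=0.0))  # 90.0

# Fast traders arbitrage within one minute at negligible cost: N = P
print(net_production(arb_latency=60.0, speed_investment=0.0))          # 100.0

# Race to the bottom: microsecond latency, with A = 5 sunk into speed: N = P - A
print(net_production(arb_latency=1e-5, speed_investment=5.0))          # 95.0
```

The last call shows the claimed failure: the microsecond-latency market has strictly lower net production than the one-minute-latency market, even though pricing is (from R's point of view) equally accurate in both.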
