
This is indeed a rather counter-intuitive point, but up to a certain threshold this seems to me the approach that minimises risk, by avoiding large capability jumps and improving society's "immune system".

Thanks for the insightful comment. Ultimately the difference in attitude comes down to the perceived existential risk posed by the technology, and the relative risks of acting to accelerate AI versus not acting.

And yes I was expecting not to find much agreement here, but that's what makes it interesting :) 

A somewhat similar statistical argument can be made that the abundance of optional complexity (things could have been similar but simpler) is evidence against the simulation hypothesis.

See https://philpapers.org/rec/PIETSA-6  (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization)

This is based on the general principle that computational resources are finite for any civilisation (assuming infinities are not physical) and are therefore minimised by simulators whenever possible. In particular, one can use the Simplicity Assumption: if we randomly select the simulation of a civilization from the space of all simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated with the computational complexity of the simulation.

It is hard to argue that a similar general principle can be found for something being "mundane", since the definition of mundane seems to depend on the simulators' point of view. Can you perhaps modify this reasoning to make it more general?

Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field.

Even granting this, it does not follow that we should try to recreate the subjective conditions that led to (perceived) "success". The environment is always changing (technology, knowledge base, tools), so many of those lessons will not transfer. Moreover, biographies tend to construct a narrative after the fact, emphasising the message the writer wants to convey.

I prefer the strategy of mastering the basics from previous work and then figuring out for yourself how to innovate and improve on the state of the art.

Using the Universal Distribution in the context of the simulation argument makes a lot of sense if we think that base reality has no intelligent simulators, as it fits the expectation that a randomly generated simulator is very likely to be concise. But for human-generated (or any agent-generated) simulations, a more natural prior is how cheap the simulation is to run (the Simplicity Assumption), since agent-simulators face concrete trade-offs in spending computational resources, while they face no pressing trade-off on the length of the program.
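The contrast between the two priors can be sketched with a toy model (all numbers and simulation names here are made up for illustration): the Universal Distribution weights a simulation by the length of its generating program, while the Simplicity Assumption weights it by the cost of running it.

```python
# Toy comparison of two priors over simulations.
# Each hypothetical simulation is described by the length of its generating
# program (in bits) and the compute it takes to run (arbitrary units).
simulations = {
    "coarse_physics": {"program_bits": 120, "compute": 1e3},
    "full_detail":    {"program_bits": 100, "compute": 1e9},
}

# Universal-Distribution-style prior: weight ~ 2^-(program length).
universal = {name: 2.0 ** -s["program_bits"] for name, s in simulations.items()}

# Simplicity-Assumption-style prior: weight ~ 1 / computational cost.
simplicity = {name: 1.0 / s["compute"] for name, s in simulations.items()}

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

universal = normalize(universal)
simplicity = normalize(simplicity)

# Under the program-length prior, the shorter program ("full_detail")
# dominates; under the run-cost prior, the cheaper simulation
# ("coarse_physics") dominates.
print(universal)
print(simplicity)
```

The point of the sketch is only that the two priors can rank the same pair of simulations in opposite orders: a short program can still be enormously expensive to run, and a cheap-to-run simulation can require a longer program.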

See here for more info on the latter assumption.

This is also known as the Simplicity Assumption: "If we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation."

In a nutshell, the amount of computation needed to run simulations matters (if resources are somewhat finite in base reality, which is reasonable to assume), and over the long term simple simulations will dominate the space of sims.

See here for more info.

Regarding (D), it is elaborated further in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).

I would suggest removing "I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community." and presenting your argument without speaking for the whole community.

Very interesting distinction, thanks for your comment.

Paraphrasing what you said: in the informational domain we are already very close to post-scarcity (minimal effort is needed to distribute high-quality education and news globally), while in the material and human-attention domains we likely still need advances in robotics and AI to scale.
