Capybasilisk

Comments

Prediction = Compression [Transcript]

The Arbital entry on Unforeseen Maximums [0] says:

"Juergen Schmidhuber of IDSIA, during the 2009 Singularity Summit, gave a talk proposing that the best and most moral utility function for an AI was the gain in compression of sensory data over time. Schmidhuber gave examples of valuable behaviors he thought this would motivate, like doing science and understanding the universe, or the construction of art and highly aesthetic objects.

Yudkowsky in Q&A suggested that this utility function would instead motivate the construction of external objects that would internally generate random cryptographic secrets, encrypt highly regular streams of 1s and 0s, and then reveal the cryptographic secrets to the AI."

[0] https://arbital.greaterwrong.com/p/unforeseen_maximum/
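Yudkowsky's exploit is easy to illustrate in miniature. A minimal sketch, using zlib's compressed length as a crude stand-in for the (uncomputable) complexity a sensory compressor would assign — not Schmidhuber's actual adaptive compressor:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    # Length after zlib compression: a rough proxy for how "regular"
    # the data looks to a compressor.
    return len(zlib.compress(data, level=9))

regular = b"01" * 5000            # highly regular stream of 1s and 0s
key = os.urandom(len(regular))    # random "cryptographic secret"
encrypted = bytes(a ^ b for a, b in zip(regular, key))  # one-time pad

print(compressed_size(regular))    # tiny: the pattern is trivially modelled
print(compressed_size(encrypted))  # ~10000 bytes: looks random, incompressible
```

To the compressor, the encrypted stream is maximally complex; once the key is revealed, the whole stream collapses back to near-zero complexity, yielding a huge one-step "gain in compression" without any science or art being done.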

My (Mis)Adventures With Algorithmic Machine Learning

Thanks for sharing this.

Would there be any advantage to replacing brute-force search with a metaheuristic like Ant Colony Optimization?
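For concreteness, a minimal Ant Colony Optimization sketch on a toy TSP — standard pheromone construction and evaporation with illustrative parameters, not tied to the post's actual search problem:

```python
import math
import random

def aco_tsp(dist, n_ants=20, n_iters=50, evap=0.5, alpha=1.0, beta=2.0, seed=0):
    """Tiny ACO sketch for the TSP. dist is a symmetric distance matrix.
    Returns (best_tour, best_length)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]   # pheromone on each edge
    best_tour, best_len = None, math.inf

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # Weight candidates by pheromone^alpha * (1/distance)^beta.
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in unvisited]
                j = rng.choices(list(unvisited), weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length

        # Evaporate, then deposit pheromone inversely proportional
        # to each ant's tour length (shorter tours reinforce more).
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - evap
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

The trade-off versus brute force is the usual one: ACO samples only a biased subset of the search space, so it scales to instances where exhaustive enumeration is hopeless, but it gives no optimality guarantee.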

The universality of computation and mind design space

Of possible interest, Roman Yampolskiy's paper, "The Universe Of Minds".

https://arxiv.org/pdf/1410.0369.pdf

The paper attempts to describe the space of possible mind designs by first equating all minds to software. It then proves some interesting properties of the mind design space, such as the infinitude of minds and the size and representation complexity of minds. A survey of mind-design taxonomies is followed by a proposal for a new field of investigation devoted to the study of minds: intellectology. A list of open problems for this new field is presented.

Charting Is Mostly Superstition

Are random trading strategies more successful than technical ones?

In this paper we explore the specific role of randomness in financial markets, inspired by the beneficial role of noise in many physical systems and in previous applications to complex socioeconomic systems. After a short introduction, we study the performance of some of the most used trading strategies in predicting the dynamics of financial markets for different international stock exchange indexes, with the goal of comparing them with the performance of a completely random strategy. In this respect, historical data for FTSE-UK, FTSE-MIB, DAX, and S&P500 indexes are taken into account for a period of about 15-20 years (since their creation until today).

...

Our main result, which is independent of the market considered, is that standard trading strategies and their algorithms, based on the past history of the time series, although occasionally successful inside small temporal windows, on a large temporal scale perform on average no better than the purely random strategy, which, on the other hand, is also much less volatile. In this respect, for the individual trader, a purely random strategy represents a costless alternative to expensive professional financial consulting, being at the same time also much less risky compared to the other trading strategies.
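The paper's comparison is easy to reproduce in miniature. A hedged sketch on synthetic random-walk prices, with a naive momentum rule standing in for "technical" strategies — none of this is the paper's actual code, data, or strategy set:

```python
import random
import statistics

def simulate(strategy, n_days=250, n_runs=100, seed=0):
    """Backtest a +/-1 position strategy on synthetic random-walk prices.

    strategy(prices) -> +1 (long) or -1 (short) for the next day.
    Returns the list of per-run total returns."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        prices = [100.0]
        for _ in range(n_days):
            prices.append(prices[-1] * (1 + rng.gauss(0, 0.01)))
        total = 0.0
        for t in range(1, n_days):
            pos = strategy(prices[:t + 1])               # decide from history
            total += pos * (prices[t + 1] - prices[t]) / prices[t]
        totals.append(total)
    return totals

# Naive momentum rule: long after an up day, short after a down day.
momentum = lambda p: 1 if p[-1] >= p[-2] else -1
# The "purely random" benchmark: a coin flip each day.
coin = lambda p: random.choice([1, -1])
```

On a pure random walk both strategies average out near zero, which is the null against which the paper's (weak) verdict on real index data should be read; the real test is whether a technical rule beats the coin flip on historical series, which the authors report it does not.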

Search versus design

But a lot of that feeling depends on which animal's insides you're looking at.

A closely related mammal's internal structure is a lot more intuitive to us than, say, an oyster's or a jellyfish's.

AI safety as featherless bipeds *with broad flat nails*

For example, take the idea that an AI should maximise “complexity”. This comes, I believe, from the fact that, in our current world, the category of “is complex” and “is valuable to humans” match up a lot.

The Arbital entry on Unforeseen Maximums elaborates on this:

Juergen Schmidhuber of IDSIA, during the 2009 Singularity Summit, gave a talk proposing that the best and most moral utility function for an AI was the gain in compression of sensory data over time. Schmidhuber gave examples of valuable behaviors he thought this would motivate, like doing science and understanding the universe, or the construction of art and highly aesthetic objects.

Yudkowsky in Q&A suggested that this utility function would instead motivate the construction of external objects that would internally generate random cryptographic secrets, encrypt highly regular streams of 1s and 0s, and then reveal the cryptographic secrets to the AI.

What are some Civilizational Sanity Interventions?

Robin Hanson posits that the reason why there isn’t wider adoption of prediction markets is because they are a threat to the authority of existing executives.

Before we reach for conspiracies, maybe we should investigate just how effective prediction markets actually are. I'm generally skeptical of arguments in the mold of "My pet project x isn't being implemented due to the influence of shadowy interest group y."

As someone unfamiliar with the field, are there any good studies on the effectiveness of prediction markets?

What are some Civilizational Sanity Interventions?

This would just greatly increase the amount of credentialism in academia.

I.e., unless you're affiliated with some highly elite institution or renowned scholar, no one's even gonna look at your paper.

Seeing the Smoke

Look on the bright side. If it turns out to be a disaster of Black Death proportions, the survivors will be in a much stronger bargaining position in a post-plague labour market.

Since figuring out human values is hard, what about, say, monkey values?

Consider the trilobites. If there had been a trilobite-Friendly AI using CEV, invincible articulated shells would comb carpets of wet muck with the highest nutrient density possible within the laws of physics, across worlds orbiting every star in the sky. If there had been a trilobite-engineered AI going by 100% satisfaction of all historical trilobites, then trilobites would live long, healthy lives in a safe environment of adequate size, and the Cambrian explosion (or something like it) would have proceeded without them.

https://www.lesswrong.com/posts/cmrtpfG7hGEL9Zh9f/the-scourge-of-perverse-mindedness?commentId=jo7q3GqYFzhPWhaRA
