DanielFilan

Comments

DanielFilan's Shortform Feed

Homework: come up with a model of this.

What Does "Signalling" Mean?

Indeed, to me 'signalling' is doing some action which is differentially costly depending on whether some fact is or isn't true - so mere assertion doesn't count, even if it conveys information.
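
A minimal way to write that definition down (the symbols a, F, b, and c are introduced here only for illustration, along the lines of the standard Spence-style separating condition; they are not from the comment itself):

```latex
% Action a signals fact F (relative to a benefit b from being believed) when it
% is cheap enough for agents for whom F holds and too expensive for the rest:
\[
  c(a \mid F) \;\le\; b \;<\; c(a \mid \lnot F).
\]
% Mere assertion has c(a | F) \approx c(a | \lnot F), so no benefit b separates
% the two cases, and the assertion is not a signal in this sense, even though
% it may still convey information.
```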

DanielFilan's Shortform Feed

Models and considerations.

There are two typical ways of deciding whether something is worth doing on net. The first is to come up with a model of the relevant part of the world, look at all the consequences of doing the thing within the model, and determine whether those consequences are net positive. When this is done right, the consequences should be easy to evaluate and weigh against each other. The second way is to think of a bunch of considerations in favour of and against doing something, and decide whether the balance of considerations supports doing the thing or not.

I prefer model-building to consideration-listing, for the following reasons:

  • By building a model, you're forcing yourself to explicitly think about how important various consequences are, which is often elided in consideration-listing. Or rather, I don't know how to quantitatively compare importances of considerations without doing something very close to model-building.
  • Building a model lets you check which possible consequences are actually likely. This is an improvement on considerations, which are often of the form "such-and-such consequence might occur".
  • Building a model lets you notice consequences which you might not have immediately thought of. This can either cause you to believe that those consequences are likely, or prompt you to look for a faulty modelling assumption that is producing those consequences within the model.
  • Building a model helps you integrate your knowledge of the world, and explicitly enforces consistency in your beliefs about different questions.

However, there are also upsides to consideration-listing:

  • The process of constructing a model is pretty similar to consideration-listing: specifically, the part where one uses one's judgement to determine which aspects of reality are important enough to include.
  • Consideration-listing is much easier to do, which is why it's the form that this hastily-written shortform post takes.
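
Going back to the first approach: as a toy illustration of model-building, here is a minimal sketch in Python. Every consequence, probability, and value in it is invented purely for the sake of the example.

```python
# Toy sketch of the model-building approach: list the consequences of doing
# the thing, estimate how likely each one is, assign each a value, and sum.
# Every number below is made up for illustration.

consequences = {
    # name: (probability the consequence occurs if we do the thing, value if it does)
    "project succeeds": (0.3, 10.0),
    "we learn something useful even if it fails": (0.6, 1.0),
    "it eats a month of time": (0.7, -3.0),
}

def expected_value(consequences):
    """Probability-weighted sum of the modelled consequences' values."""
    return sum(p * v for p, v in consequences.values())

ev = expected_value(consequences)
print(f"Expected value of doing the thing: {ev:+.2f}")  # +1.50 with these numbers
print("Do it" if ev > 0 else "Don't do it")
```

Even a model this crude forces you to answer the quantitative questions ("how likely is failure, and how bad would it be?") that a bare list of considerations lets you leave implicit.
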
The Best Virtual Worlds for "Hanging Out"

Nah, Online Town still has the URL theonline.town; gather.town is just their upscale version.

Introduction To The Infra-Bayesianism Sequence

Therefore, there is a real sense in which its hypothesis class includes things as difficult to compute as it is. That being said, my guess is that halting oracles would indeed let you compute more than just the lower semi-computable functions, and it's also true that being able to run Solomonoff induction would also let you build a halting oracle.

I guess the way to reconcile this is to think that there's a difference between what you can lower semi-compute, and what you could compute if you could compute lower semi-computable things? But it's been a while since I had a good understanding of this type of thing.

Introduction To The Infra-Bayesianism Sequence

much like how halting oracles (which you need to run Solomonoff Induction) are nowhere in the hypotheses which Solomonoff considers

The Solomonoff prior is a mixture over semi-measures[*] that are lower semi-computable: that is, you can compute increasingly good approximations of the semi-measure from below that eventually converge to the actual semi-measure, but at any finite time you don't know how close you are to the right answer. The Solomonoff prior itself is also a lower semi-computable semi-measure. Therefore, there is a real sense in which its hypothesis class includes things as difficult to compute as it is. That being said, my guess is that halting oracles would indeed let you compute more than just the lower semi-computable functions, and it's also true that being able to run Solomonoff induction would also let you build a halting oracle.

[*] semi-measures are probability distributions that have 'missing density', where the probability of a 0 and then a 0, plus the probability of a 0 and then a 1, is less than or equal to the probability of a 0, even though there aren't any other options in the space for what happens next.
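
Spelled out, the standard definitions in play here are as follows (the notation is introduced only for reference; the content is the usual algorithmic-information-theory formulation of what the comment says informally):

```latex
% Semi-measure on binary strings: probability mass may go "missing" at each step.
\[
  \mu(\epsilon) \le 1, \qquad \mu(x0) + \mu(x1) \le \mu(x) \quad \text{for every finite string } x.
\]

% Lower semi-computability: approximable from below, with no bound on how long
% convergence takes.
\[
  \exists\, \text{computable } \phi : \{0,1\}^* \times \mathbb{N} \to \mathbb{Q}
  \ \text{ such that } \ \phi(x,t) \le \phi(x,t+1) \ \text{ and } \ \lim_{t \to \infty} \phi(x,t) = \mu(x).
\]

% The Solomonoff prior M is a weighted mixture over an enumeration (nu_i) of all
% lower semi-computable semi-measures, and is itself one of them.
\[
  M(x) = \sum_i w_i\, \nu_i(x), \qquad w_i > 0, \quad \sum_i w_i \le 1.
\]
```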

Superintelligence 18: Life in an algorithmic economy

The Caplanian story about education is that a lot of what people are signalling is conformism, and it's inherently hard to move to a new, better way of signalling conformism.

Matt Botvinick on the spontaneous emergence of learning algorithms

it's not clear that the handyman would have remembered to give the advice "turn clockwise to loosen, and counterclockwise to tighten"

It's the other way around, right?

ricraz's Shortform

If you get an 'external' randomness oracle, then you could define the utility function pretty simply in terms of the outputs of the oracle.

If the agent has a pseudo-random number generator (PRNG) inside it, then I suppose I agree that you aren't going to be able to give it a utility function that has the standard set of convergent instrumental goals, and PRNGs can be pretty short. (Well, some search algorithms are probably shorter, but I bet they have higher Kt complexity, which is probably a better measure for agents)
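
To make "PRNGs can be pretty short" concrete, here is a minimal linear congruential generator in Python; the constants are the common Numerical Recipes parameters, and the example is only meant to show how little code a PRNG needs.

```python
# Minimal linear congruential generator: the whole generator is the single
# update line below. Constants are the standard Numerical Recipes choices
# (modulus 2**32).

def lcg(seed: int):
    """Yield an endless stream of pseudo-random 32-bit integers."""
    state = seed & 0xFFFFFFFF
    while True:
        state = (1664525 * state + 1013904223) & 0xFFFFFFFF
        yield state

gen = lcg(42)
print([next(gen) for _ in range(3)])  # three pseudo-random 32-bit integers
```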
