Dach

Complete amateur.

Comments

What are good election betting opportunities?

I can confirm that this still works. Sum of the price of all Nos is $14.77, payoff is $15.
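For concreteness, here's a minimal sketch of that arbitrage check, assuming a winner-take-all market with 16 candidates where each "No" contract pays $1 if its candidate loses. The individual prices below are made-up placeholders; only the $14.77 total cost and $15 payoff come from the comment above.

```python
# Hypothetical "No" prices, one per candidate. Only the totals
# ($14.77 cost, $15 payoff) come from the comment; the individual
# prices are placeholders.
no_prices = [0.97, 0.95, 0.99, 0.92, 0.98, 0.96, 0.90, 0.94,
             0.93, 0.91, 0.89, 0.99, 0.88, 0.86, 0.85, 0.85]

cost = sum(no_prices)        # cost of buying one "No" contract on every candidate
payoff = len(no_prices) - 1  # exactly one candidate wins, so all but one "No" pays $1
profit = payoff - cost

print(f"cost = ${cost:.2f}, guaranteed payoff = ${payoff}, profit = ${profit:.2f}")
```

The guaranteed profit is the payoff minus the cost (about $0.23 per full set), ignoring fees and the opportunity cost of the locked-up capital.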

As a Washed Up Former Data Scientist and Machine Learning Researcher What Direction Should I Go In Now?
Answer by Dach, Oct 20, 2020

So, I guess the question boils down to, how seriously should I consider switching into the field of AI Alignment, and if not, what else should I do instead?

I think you should at least take the question seriously. You should consider becoming involved in AI Alignment to the extent that you think doing so will be the highest-value strategy, accounting for opportunity costs. An estimate for this could be derived using the interplay between your answers to the following basic considerations:

  • What are your goals?
  • What are the most promising methods for pursuing your various goals?
    • What resources do you have, and how effective would investing those resources be, on a method by method and goal by goal basis?

An example set of (short and incomplete) answers which would lead you to conclude "I should switch to the field of AI Alignment" is:

Like should I avoid working on AI at all and just do something fun like game design, or is it still a good idea to push forward ML despite the risks?

If you're not doing bleeding edge research (and no one doing bleeding edge research is reading your papers), your personal negative impact on AI Alignment efforts can be more effectively offset by making more money and then donating e.g. $500 to MIRI (or related) than by changing careers.

And if switching to AI Alignment should be done, can it be a career or will I need to find something else to pay the bills with as well?

AI Alignment is considered by many to be literally the most important problem in the world. If you can significantly contribute to AI Alignment, you will be able to find someone to give you money.

If you can't significantly personally contribute to AI Alignment but still think the problem is important, I would advise advancing some other career and donating money to alignment efforts, starting a YouTube channel and spreading awareness of the problem, etc.

I am neither familiar with you nor an alignment researcher, so I will eschew giving specific career advice.

Industrial literacy

You were welcome to write an actual response, and I definitely would have read it. I was merely announcing my advance intent not to respond in detail to any following comments, and explaining why in brief, conservative terms. This is seemingly strictly better- it gives you new information which you can use to decide whether or not you want to respond. If I were being intentionally mean, I would have allowed you to write a detailed comment and never responded, potentially wasting your time.

If your idea of rudeness is constructed in this (admittedly inconvenient) way, I apologize.

Industrial literacy

Why, exactly, is this our only job (or, indeed, our job at all)? Surely it’s possible to value present-day things, people, etc.?

The space that you can affect is your light cone, and your goals can be "simplified" to "applying your values over the space that you can affect"; therefore, your goal is to apply your values over your light cone. It's your "only job".

There is, of course, a specific notion that I intended to evoke by using this rephrasing: the idea that your values apply strongly over humanity's vast future. It's possible to value present-day things, people, and so on- and I do. However... whenever I hear that fact in response to my suggestion that the future is large and matters more than today, I interpret it as the person playing defense for their preexisting strategies. Everyone was aware of this before the person said it, and it doesn't address the central point- it's...

"There are 4 * 10^20 stars out there. You're in a prime position to make sure they're used for something valuable to you- as in, you're currently experiencing the top 10^-30% most influential hours of human experience because of your early position in human history, etc. Are you going to change your plans and leverage your unique position?" 

"No, I think I'll spend most of my effort doing the things I was already going to do."

Really- Is that your final answer? What position would you need to be in to decide that planning for the long term future is worth most of your effort?

Seeing as how future humanity (with capital letters or otherwise) does not, in fact, currently exist, it makes very little sense to say that ensuring their existence is something that we would be doing “for” them.

"Seeing as how a couple's baby does not yet exist, it makes very little sense to say that saving money for their clothes and crib is something that they would be doing 'for' them." No, wait, that's ridiculous- It does make sense to say that you're doing things "for" people who don't exist.

We could rephrase these things in terms of doing them for yourself- "you're only saving for their clothes and crib because you want them to get what they want". But, what are we gaining from this rephrasing? The thing you want is for them to get what they want/need. It seems fair to say that you're doing it for them.

There's some more complicated discussion to be had on the specific merits of making sure that people exist, but I'm not (currently) interested in having that discussion. My point isn't really related to that- it's that we should be spending most of our effort on planning for the long term future.

Also, in the context of artificial intelligence research, it's an open question as to what the border of "Future Humanity" is. "Existing humans" and "Future Humanity" probably have significant overlap, or so the people at MIRI, DeepMind, OpenAI, FHI, etc. tend to argue- and I agree.

Not Even Evidence

This doesn't require faster-than-light signaling. If you and the copy are sent away with identical letters that you open after crossing each other's event horizons, you learn what was packed with your clone when you open your letter, which lets you predict what your clone will find.

Nothing here would require the event of your clone seeing the letter to affect you. You are affected by the initial set up.

Another example would be if you learn a star that has crossed your cosmic event horizon was 100 solar masses, it's fair to infer that it will become a black hole and not a white dwarf.

If you can send a probe to a location, radiation, gravitational waves, etc. from that location will also (in normal conditions) be intercepting you, allowing you to theoretically make pretty solid inferences about certain future phenomena at that location. However, we let the probe fall out of our cosmological horizon- information is reaching it that couldn't/can't have reached the other probes, or even the starting position of that probe.

In this setup, you're gaining information about arbitrary phenomena. If you send a probe out beyond your cosmological horizon, there's no way to infer the results of, for example, non-entangled quantum experiments.

I think we may eventually determine the complete list of rules and starting conditions for the universe/multiverse/etc. Using our theory of everything and (likely) unobtainable amounts of computing power, we could (perhaps) uniquely locate our branch of the universal wave function (or similar) and draw conclusions about the outcomes of distant quantum experiments (and similar). That's a serious maybe- I expect that a complete theory of everything would predict infinitely many different instances of us in a way that doesn't allow for uniquely locating ourselves.

However... this type of reasoning doesn't look anything like that. If SSA/SSSA require us to have a complete working theory of everything in order to be usable, that's still invalidating for my current purposes.

For the record, I ran into a more complicated problem which turns out to be incoherent for similar reasons- namely, information can only propagate in specific ways, and it turns out that SSA/SSSA allows you to draw conclusions about what your reference class looks like in ways that defy the ways in which information can propagate.

You are affected by the initial set up. If the clone counterfactually saw something else, this wouldn't affect you according to SIA.

This specific hypothetical doesn't directly apply to the SIA- it relies on adjusting the relative frequencies of different types of observers in your reference class, which isn't possible using SIA. SIA still suffers from the similar problem of allowing you to draw conclusions about what the space of all possible observers looks like.

Not Even Evidence

I don't understand why you're calling a prior "inference". Priors come prior to inferences, that's the point.

SIA is not isomorphic to "Assign priors based on Kolmogorov Complexity". If what you mean by SIA is something more along the lines of "Constantly update on all computable hypotheses ranked by Kolmogorov Complexity", then our definitions have desynced.

Also, remember: you need to select your priors based on inferences in real life. You're a neural network that developed from scattered particles- your priors need to have actually entered into your brain at some point.

Regardless of whether your probabilities entered through your brain under the name of a "prior" or an "update", the presence of that information still needs to work within our physical models and their conclusions about the ways in which information can propagate.

SIA has you reason as if you were randomly selected from the set of all possible observers. This is what I mean by SIA, and is a distinct idea. If you're using SIA to gesture to the types of conclusions that you'd draw using Solomonoff Induction, I claim definition mismatch.

It clearly is unnecessary - nothing in your examples requires there to be tiling, you should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating. 

I specifically listed the point of the tiling in the paragraph that mentions tiling:

for you to agree that the fact you don't see a pink pop-up appear provides strong justified evidence that none of the probes saw <event x>

The point of the tiling is, as I have said (including in the post), to manipulate the relative frequencies of actually existent observers strongly enough to invalidate SSA/SSSA in detail.

I don't see any such implications. You need to simplify and more fully specify your model and example. 

There are phenomena which your brain could not yet have been impacted by, based on the physical ways in which information propagates. If you think you're randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like, which is problematic.

I don't see any such implications. You need to simplify and more fully specify your model and example. 

Just to reiterate, my post isn't particularly about SIA. I showed the problem with SSA/SSSA- the example was specified for doing something else.

Industrial literacy

That's surprisingly close, but I don't think that counts. That page explains that the current dynamics behind phosphate recycling are bad as a result of phosphate being cheap- if phosphate was scarce, recycling (and potentially the location of new phosphate reserves, etc.) would become more economical.

Not Even Evidence

My formulation of those assumptions, as I've said, is entirely a prior claim. 

You can't gain non-local information using any method, regardless of the words or models you want to use to contain that information. 

If you agree with those priors and Bayes, you get those assumptions. 

You cannot reason as if you were selected randomly from the set of all possible observers. Doing so would allow you to infer information about what the set of all possible observers looks like, despite provably not having access to that information. There are practical implications of this, the consequences of which were shown in the above post with SSA.

You can't say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you're just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones. 

It's not a specific case of Sleeping Beauty. Sleeping Beauty has meaningfully distinct characteristics.

This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.

I'm asking for your prior in the specific scenario I gave. 

My estimate is 2/3rds for the 2-Observer scenario. Your claim that "priors come before time" makes me want to use different terminology for what we're talking about here. Your brain is a physical system and is subject to the laws governing other physical systems- whatever you mean by "priors coming before time" isn't clearly relevant to the physical configuration of the particles in your brain.
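To make that 2/3rds concrete, here is a small sketch of the frequency argument behind it, under my assumed restatement of the 2-Observer scenario (a fair coin decides whether one observer or two observers are created; the exact scenario being referenced isn't restated here):

```python
import random

# Assumed restatement: a fair coin creates either 1 observer or 2 observers.
# We count what fraction of observer-experiences occur in the 2-observer world.
trials = 100_000
experiences_total = 0
experiences_in_two_observer_world = 0

for _ in range(trials):
    two_observer_world = random.random() < 0.5   # fair coin
    observers = 2 if two_observer_world else 1
    experiences_total += observers
    if two_observer_world:
        experiences_in_two_observer_world += observers

# Converges to ~2/3: guessing "2-Observer scenario" with confidence 2/3 is
# the least inaccurate estimate, in this frequency sense.
print(experiences_in_two_observer_world / experiences_total)
```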

The fact that I execute the same Bayesian update with the same prior in this situation does not mean that I "get" SIA- SIA has additional physically incoherent implications.

Not Even Evidence

The version of the post I responded to said that all probes eventually turn on simulations. 

The probes which run the simulations of you without the pop-up each run exactly one simulation. The simulation is run "on the probe."

Let me know when you have an SIA version, please.

I'm not going to write a new post for SIA specifically- I already demonstrated a generalized problem with these assumptions.

The up until now part of this is nonsense - priors come before time. Other than that, I see no reason to place such a limitation on priors, and if you formalize this I can probably find a simple counterexample. What does it even mean for a prior to correspond to a phenomena?

Your entire brain is a physical system; it must abide by the laws of physics. You are limited in what your priors can be by this very fact- there is some stuff that the position of the particles in your brain could not yet have been affected by (by the very laws of physics).

The fact that you use some set of priors is a physical phenomenon. If human brains acquire information in ways that do not respect locality, you can break all of the rules, acquire infinite power, etc.

"Up until now" refers to the fact that the phenomena have, up until now, been unable to affect your brain.

I wrote a whole post trying to get people to look at the ideas behind this problem; see above. If you don't see the implication, I'm not going to further elaborate on it, sorry.

All SIA is doing is asserting events A, B, and C are equal prior probability. (A is living in universe 1 which has 1 observer, B and C are living in universe 2 with 2 observers and being the first and second observer respectively. B and C can be non-local.)

SIA asserts more than that events A, B, and C have equal prior probability.
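Spelled out, the equal-prior assignment the quoted comment describes (in its hypothetical A/B/C setup) is just:

$$P(A) = P(B) = P(C) = \tfrac{1}{3}, \qquad P(\text{universe 2}) = P(B) + P(C) = \tfrac{2}{3},$$

which is where a 2/3rds estimate for the two-observer universe comes from; my point is that SIA commits you to more than this assignment.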

Sleeping Beauty and these hypotheticals here are different- these hypotheticals make you observe something that is unreasonably unlikely in one hypothesis but very likely in another, and then show that you can't update your confidences in these hypotheses in the dramatic way demonstrated in the first hypothetical.

You can't change the number of possible observers, so you can't turn SIA into an FTL telephone. SIA still makes the same mistake that allows you to turn SSA/SSSA into FTL telephones, though. 

If you knew for a fact that something couldn't have had an impact, this might be valid. But in your scenarios, these could have had an impact, yet didn't. It's a perfectly valid update.

There really couldn't have been an impact. The versions of you that wake up and don't see pop-ups (and their brains) could not have been affected by what's going on with the other probes- they are outside of one another's cosmological horizon. You could design similar situations where your brain eventually could be affected by them, but you're still updating prematurely.

I told you the specific types of updates that you'd be allowed to make. Those are the only ones you can justifiably say are corresponding to anything- as in, are as the result of any observations you've made. If you don't see a pop-up, not all of the probes saw <event x>, your probe didn't see <event x>, you're a person who didn't see a pop-up, etc. If you see a pop-up, your assigned probe saw <event x>, and thus at least one probe saw <event x>, and you are a pop-up person, etc.

However, you can't do anything remotely looking like the update mentioned in the first hypothetical. You're only learning information about your specific probe's fate, and what type of copy you ended up being.

You should simplify to having exactly one clone created. In fact, I suspect you can state your "paradox" in terms of Sleeping Beauty - this seems similar to some arguments people give against SIA there, claiming one does not acquire new evidence upon waking. I think this is incorrect - one learns that one has woken in the SB scenario, which on SIA's priors leads one to update to the thirder position.

You can't simplify to having exactly one clone created. 

There is a different problem going on here than in the SB scenario. I mostly agree with the 1/3rds position- you're least inaccurate when your estimate for the 2-Observer scenario is 2/3rds. I don't agree with the generalized principle behind that position, though. It requires adjustments in order to be clearer about what it is you're doing, and why you're doing it.
