LESSWRONG

IlyaShpitser

Comments

Histograms are to CDFs as calibration plots are to...
IlyaShpitser · 1mo

∀ p ∈ [0, 1]:  E[ Loss(p̂(Y | X), p*(Y | X)) | p*(Y | X) = p ]

p̂ is your predictor's output probability, and p* is the true conditional distribution. The quantity above is the expected loss between the predicted and true probabilities, taken over all X whose true class probability p*(Y | X) equals p, plotted against p. The expected loss can be anything reasonable (absolute difference, squared loss, whatever is appropriate for the end goal).
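For concreteness, a minimal Python sketch of this curve (all functions and numbers below are made up for illustration, not from the comment): simulate X with a known true probability p*(Y | X), pair it with a deliberately miscalibrated predictor p̂, bin draws by the value of p*, and average |p̂ − p*| within each bin.

```python
# Hypothetical sketch: estimate E[ |p_hat - p_star| | p_star(Y|X) = p ]
# on a coarse grid of p values, for a simulated problem where the true
# conditional probability is known.
import random

random.seed(0)

def p_star(x):
    # True class probability: a known function of x (assumed for the demo).
    return 0.2 + 0.6 * x

def p_hat(x):
    # A deliberately miscalibrated predictor (assumed for the demo).
    return min(1.0, 0.1 + 0.8 * x)

# Bin draws of X by their true probability p_star(Y|X), then average the
# loss |p_hat - p_star| within each bin; this is the y-value at each p.
bins = {}
for _ in range(10000):
    x = random.random()
    p = round(p_star(x), 1)          # coarse grid of p values
    bins.setdefault(p, []).append(abs(p_hat(x) - p_star(x)))

curve = {p: sum(v) / len(v) for p, v in sorted(bins.items())}
for p, loss in curve.items():
    print(f"p = {p:.1f}: expected |p_hat - p_star| = {loss:.3f}")
```

Plotting `curve` against p gives the loss-valued analogue of a calibration plot: zero everywhere iff the predictor matches the true conditional probability on every bin.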

If influence functions are not approximating leave-one-out, how are they supposed to help?
IlyaShpitser · 2y

Influence functions are for problems where you have a mismatch between the loss of the target parameter you care about and the loss of the nuisance function you must fit to get the target parameter.

"Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities)
IlyaShpitser · 2y

It's important to internalize that the intellectual world lives in the attention economy, like everything else.

Just like "content creators" on social platforms think hard about capturing and keeping attention, so do intellectuals and academics. Clarity and rigor are part of that.


No one has time, energy (or crayons, as the saying goes) for half-baked ramblings on a blog or forum somewhere.

On Investigating Conspiracy Theories
IlyaShpitser · 2y

If you think you can beat the American __ Association over a long-run average, that's great news for you! That means free money!

Being right is super valuable, and you should monetize it immediately.

---

Anything else is just hot air.

The role of Bayesian ML in AI safety - an overview
IlyaShpitser · 2y

Lots of Bayes fans, but can't seem to define what Bayes is.

Since Bayes' theorem is a reformulation of the chain rule, anything probabilistic "uses Bayes' theorem" somewhere, including all frequentist methods.

Frequentists quantify uncertainty also, via confidence sets, and other ways.

Continuous updating has to do with "online learning algorithms," not Bayes.

---

Bayes is when the target of inference is a posterior distribution.  Bonus Bayes points: you don't care about frequentist properties like consistency of the estimator.
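A minimal sketch of that definition (my illustration, with made-up data, not from the comment): in the Beta-Bernoulli model the inferential target really is a posterior distribution, obtained in closed form by conjugacy.

```python
# Sketch: "Bayes" in the sense above means the target of inference is a
# posterior distribution. Beta-Bernoulli is the simplest case: a
# Beta(a, b) prior over a coin's bias theta, updated on observed flips.

def posterior(a, b, flips):
    # Beta(a, b) prior + Bernoulli likelihood -> Beta(a + heads, b + tails).
    heads = sum(flips)
    tails = len(flips) - heads
    return a + heads, b + tails

a, b = posterior(1, 1, [1, 1, 0, 1])   # uniform prior, 3 heads / 1 tail
post_mean = a / (a + b)                # mean of the Beta(4, 2) posterior
print(a, b, post_mean)
```

The contrast with frequentism is visible here: a frequentist would report the point estimate 3/4 (the MLE) and quantify uncertainty around it with a confidence set; the Bayesian's answer is the whole Beta(4, 2) distribution.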

Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence?
IlyaShpitser · 3y

Does your argument fail for https://en.wikipedia.org/wiki/Goldbach%27s_weak_conjecture?

If so, can you explain why?  If not, it seems your argument is no good, as a good proof of this (weaker) claim exists.

Not that you asked my advice, but I would stay away from number theory unless you get a lot of training.

What is causality to an evidential decision theorist?
IlyaShpitser · 3y

For the benefit of other readers: this post is confused.

Specifically on this (although possibly also on other stuff): (a) causal and statistical DAGs are fundamentally not the same kind of object, and (b) no practical decision theory used by anyone includes the agent inside the DAG in the way this post describes.

---

"So if the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure."

A -> B -> C and A <- B <- C reflect the same statistical beliefs about the world.
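This Markov equivalence can be checked directly by enumeration (a sketch with made-up numbers): both chains encode exactly the constraint A ⊥ C | B and nothing more, so a joint that factors one way also factors the other way.

```python
# Sketch: A -> B -> C and A <- B <- C are Markov equivalent; both encode
# exactly A ⊥ C | B. Build a joint over binary A, B, C that factors as
# P(a) P(b|a) P(c|b) and verify the conditional independence.
from itertools import product

pa = {0: 0.6, 1: 0.4}                               # P(A), made-up numbers
pb_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P(B|A)
pc_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}   # P(C|B)

joint = {(a, b, c): pa[a] * pb_a[a][b] * pc_b[b][c]
         for a, b, c in product([0, 1], repeat=3)}

def marg(fixed):
    # Marginal probability with the given coordinates (0=A, 1=B, 2=C) fixed.
    return sum(p for abc, p in joint.items()
               if all(abc[i] == v for i, v in fixed.items()))

ok = True
for a, b, c in product([0, 1], repeat=3):
    # Check P(a, c | b) == P(a | b) * P(c | b).
    pb = marg({1: b})
    lhs = marg({0: a, 1: b, 2: c}) / pb
    rhs = (marg({0: a, 1: b}) / pb) * (marg({1: b, 2: c}) / pb)
    ok = ok and abs(lhs - rhs) < 1e-12
print("A ⊥ C | B holds:", ok)
```

By the chain rule the same joint rewrites as P(c) P(b|c) P(a|b), the factorization for A <- B <- C, so no statistical test can distinguish the two DAGs.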

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.
IlyaShpitser · 3y

If you think it's a hard bet to win, you are saying you agree that nothing bad will happen.  So why worry?
