I've heard a rule of thumb that if you avoid buying power on the California grid's spot market during the most expensive 1% of hours, you get roughly a 50% discount on power.
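To see how that rule of thumb could hold, here's a toy sketch with made-up numbers (not actual CAISO data): if spot prices are heavy-tailed, with ~1% of hours spiking to 50-150x the typical price, those few hours carry roughly half the total cost, so sitting them out roughly halves your average price.

```python
import numpy as np

# Toy price model, NOT real CAISO data: lognormal body plus rare scarcity spikes.
rng = np.random.default_rng(0)
hours = 8760
prices = rng.lognormal(mean=3.5, sigma=0.5, size=hours)    # ~$35/MWh typical
spikes = rng.random(hours) < 0.01                          # ~1% scarcity hours
prices[spikes] *= rng.uniform(50, 150, size=spikes.sum())  # invented spike sizes

avg_all = prices.mean()
cutoff = np.quantile(prices, 0.99)
avg_cheap99 = prices[prices <= cutoff].mean()

print(f"average over all hours:        ${avg_all:,.0f}/MWh")
print(f"average skipping top 1% hours: ${avg_cheap99:,.0f}/MWh")
print(f"discount: {1 - avg_cheap99 / avg_all:.0%}")  # comes out near 50% here
```

With these assumed spike magnitudes the discount lands around 50%; the real figure depends entirely on how fat the actual price tail is.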
Making no claim about the actual value of each, but couldn't I counter your specific argument by saying that marriage is a socially enforced cartel for sex, and that if they could do so without being punished, a lot more men would rather get sex without getting married?
When you 1-bit quantize all your probability estimates
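(For the literal-minded: a minimal sketch of what that does to log-loss. The probabilities are invented for illustration.)

```python
import math

def quantize_1bit(p: float) -> float:
    """Round a probability to the nearest representable value: 0 or 1."""
    return 1.0 if p >= 0.5 else 0.0

def log_loss(p: float, outcome: int, eps: float = 1e-12) -> float:
    """Negative log-likelihood (in nats) of the outcome under estimate p."""
    p = min(max(p, eps), 1 - eps)  # clamp so log(0) doesn't blow up
    return -math.log(p if outcome == 1 else 1 - p)

# A 60% forecast that turns out wrong costs ~0.92 nats; the same forecast
# 1-bit quantized to 1.0 costs ~27.6 nats (i.e., as much as the clamp allows).
print(log_loss(0.6, 0))                 # ≈ 0.916
print(log_loss(quantize_1bit(0.6), 0))  # ≈ 27.6
```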
This thread might be fun for you, where Reddit talks about some papers that draw connections between NNs and decision trees. https://www.reddit.com/r/MachineLearning/comments/y2pi2a/r_neural_networks_are_decision_trees/
In particular, look for the comment that goes:

> I think your work in this paper is pretty much entirely subsumed by the following work showing that neural networks with piecewise linear activations are equivalent to max-affine spline operators: https://arxiv.org/abs/1805.06576
> They seem to cover everything you do and more, although they don't take a specifically tree-oriented viewpoint. Unfortunately, like many of the others in this thread, I don't find results like this particularly compelling.
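The underlying observation is easy to verify directly: a ReLU network is exactly affine on each region carved out by its hidden-unit activation pattern, and the pattern itself reads like a root-to-leaf path through a tree of halfspace splits. A minimal sketch with a hypothetical toy network (numpy only):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)  # tiny ReLU net: 2 -> 4 -> 1
w2, b2 = rng.normal(size=4), rng.normal()

def forward(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def activation_pattern(x):
    # Sign pattern of the hidden pre-activations: each entry is one
    # halfspace split W1[i] @ x + b1[i] >= 0, like a path in a tree.
    return tuple(W1 @ x + b1 >= 0)

def local_affine(x):
    # Within one activation region the net is exactly affine: f(x) = a @ x + c.
    mask = (W1 @ x + b1 >= 0).astype(float)
    a = (w2 * mask) @ W1
    c = (w2 * mask) @ b1 + b2
    return a, c

x = np.array([0.3, -0.7])
a, c = local_affine(x)
print(activation_pattern(x))              # which "leaf" this input falls in
print(np.allclose(forward(x), a @ x + c)) # True: the affine form matches the net
```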
The phrase "was predictable" sets off alarm bells for post facto wiggling.
If it was predicted, I would expect you to say "was predicted". If it wasn't predicted because somebody applied the model wrong, I would expect you to say "should have been predicted".
Lun (the account reposting this to LW) is also a very new account with no other activity.
LessWrong scrape dataset on Hugging Face, by NousResearch
https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct
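If anyone wants to poke at it, the standard `datasets` loader should work with the repo id from that URL. I haven't checked the schema, and the `"train"` split name is an assumption (it's just the usual default):

```python
# pip install datasets
from datasets import load_dataset

# Repo id taken from the URL above; split name is assumed, not verified.
ds = load_dataset("LDJnr/LessWrong-Amplify-Instruct", split="train")

print(ds)     # row count and column names
print(ds[0])  # inspect the first record's actual fields
```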
I personally would like to see less talk about / with Gary Marcus, and more betting with Gary Marcus, like [here](https://garymarcus.substack.com/p/where-will-ai-be-at-the-end-of-2027).
But I understand why people don't wanna do it: it's a pretty bad bet, because you only win money in futures where money is likely to become worthless.
I am wary of recommending extreme interventions to people without having a lot more context on their situation.
Fwiw I think your recommendation just puts steroids onto my radar, and I still feel fully responsible for researching the associated risks. I think it's fine to push risky interventions, even to push them really hard, framing them as a way for people to get something they really care about.
I think you can reasonably recommend testosterone and expect somebody to encounter the appropriate warnings on their way to acquiring it.
https://www.alignmentforum.org/posts/wnzkjSmrgWZaBa2aC/self-preservation-or-instruction-ambiguity-examining-the
Yeah. Here's a case where the self-preservation behavior comes from task accomplishment rather than an intrinsic self-preservation drive.
I'd imagine an AI given a task would similarly resist being vaulted because it would interfere with completing the task.