Today, I estimate a 30–50% chance of significantly reshaping education for nearly 700,000 students and 50,000 staff.
I get really worried when people seize this much power this easily, especially in education. Education is rife with people reshaping it for hundreds of thousands or millions of students in ways they believe will be positive but that turn out to be massively detrimental.
The very fact that you can have this much impact after only a few years, with no track record or proof of concept, points to the system being seriously unmeritocratic. And people who gain power in unmeritocratic systems are unlikely to do a good job with that power.
Does this mean you, in particular, should drop your work? Well, I don't know you. I have no reason to trust you, but I also have no reason to trust the person who would replace you. What I would recommend is finding ways to make your system more meritocratic. Perhaps you can get your schools to participate in the AI Olympiad, and have the coaches for the best teams in the state give talks on what went well and what didn't. Perhaps you can ask professors at UToronto's AI department to give a PD session on teaching AI. But, judging by the lineup from the 2024 NOAI conference, there seems to be no correlation between what gets platformed and what actually works.
Cycling in GANs/self-play?
I think having all of this in mind as you train is actually pretty important. That way, when something doesn't work, you know where to look:
Weight-initialization isn't too helpful to think about yet (other than avoiding explosions at the very beginning of training, and maybe a little for transfer learning), but we'll probably get hyper neural networks within a few years.
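For concreteness, here's a toy sketch (mine, not from the post) of the "avoiding explosions at the very beginning" point: variance-preserving (Kaiming-style) scaling keeps activations roughly O(1) through depth, while a naive scale blows up before training even starts.

```python
# Toy illustration (my own, not from the parent post): compare activation scale
# after many ReLU layers under naive vs. Kaiming-style initialization.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 256
x = rng.normal(size=width)

for scale_name, scale in [("naive std=1", 1.0), ("Kaiming", np.sqrt(2.0 / width))]:
    h = x.copy()
    for _ in range(depth):
        W = rng.normal(scale=scale, size=(width, width))
        h = np.maximum(W @ h, 0.0)  # ReLU layer
    print(f"{scale_name:12s} -> activation std after {depth} layers: {np.std(h):.3g}")
```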
I like this take, especially its precision, though I disagree in a few places.
conductance-corrected Wasserstein metric
This is the wrong metric, but I won't help you find the right one.
the step-size effective loss potential critical batch size regime
You can lower the step-size and increase the batch-size as you train to keep the perturbation bounded. Like, sure, you could claim an ODE solver doesn't give you the exact solution, but adaptive methods let you get within any desired tolerance.
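Roughly what I mean, as a toy sketch with made-up names and tolerances (not anything from the post):

```python
# Rough sketch of the "adaptive solver" analogy: shrink the step size and grow
# the batch size whenever an update's perturbation exceeds a tolerance, so each
# step stays within the desired bound. Names and thresholds are made up.
def adapt(step_size, batch_size, perturbation, tol=1e-3,
          shrink=0.5, grow=2, max_batch=4096):
    """Halve the step size and double the batch size while the measured
    perturbation (e.g. ||delta_w|| or a gradient-noise estimate) exceeds tol."""
    while perturbation > tol and batch_size < max_batch:
        step_size *= shrink          # smaller steps -> smaller per-update change
        batch_size = min(grow * batch_size, max_batch)
        perturbation *= shrink       # crude model: perturbation scales with step size
    return step_size, batch_size

print(adapt(step_size=0.1, batch_size=64, perturbation=0.05))
```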
for the weight-initialization distribution
This is another "hyper"parameter to feed into the model. I agree that, at some point, the turtles have to stop, and we can call that the initial weight distribution, though I'd prefer the term 'interpreter'.
up to solenoidal flux corrections
Hmm... you sure you're using the right flux? Not all boundaries of boundaries are zero, and GANs (and self-play) probably use a 6-complex.
If you "want to stop smoking" or "want to donate more" but do not, you are either deluding yourself, lacking intelligence, or preferring ignorance. Deluding yourself can make you feel happier about yourself. "I'm the kind of person who wants to help out other people! Just not the kind who actually does [but let's not think about that]." Arguably, this is what you really prefer: to be happy, whether or not your thoughts are conistent with your behavior. If you are smart enough, and really want to get to the bottom of any inconsistencies you find yourself exhibiting, you will, and will no longer be inconsistent. You'll either bite the bullet and say you actually do prefer the lung cancer over the shakes, or actually quit smoking.
Are the majority of rationalists deluded or dishonest? Absolutely. As I said in my post, utilitarianism is not well-defined, but most rationalists prefer running with the delusion.
There are also people who genuinely prefer others' well-being over a marginal increase in their own—mostly wealthy or ascetic folks—and I think this is the target audience of EA evangelism. However, a lot of people don't genuinely prefer others' well-being over a marginal increase in their own (or at least, the margin is pretty small), but these people still get caught up in Singer's thought experiment, not realizing that the conclusions it leads them to (e.g. that they should donate to GiveWell) are inconsistent with their more fundamental values.
The ellipsis is, "genuinely prefer others' well-being over a marginal increase in their own," from the previous sentence.
They have to be smarter to recognize their actual beliefs and investigate what is consistent with them. They have to be more honest, because there is social pressure to think things like, "oh of course I care about others," and hide how much or little they care.
I think the title is fine. The post mostly reads, "if you want a quantum analogue, here's the path to take".
Yeah, that was about the only sentence I read in the paper. I was wondering if you'd seen a theoretical justification (logos) rather than just an ethical appeal (ethos), but didn't want to comb through the maths myself. By the way, fidelity won't give the same posterior. I haven't worked through the maths whatsoever, but I'd still put >95% probability on this claim.
What if you wait to buy the same mask until the pandemic starts? Maybe the cost doubles, but rather than having to buy ten masks over a 100-year period, you only have to buy one.
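Back-of-the-envelope, with made-up prices since the comment doesn't give any:

```python
# Back-of-the-envelope with assumed numbers (none are given in the comment):
# buying a mask roughly every decade vs. one mask, at double the price, at pandemic onset.
mask_price = 5.00                      # assumed pre-pandemic price per mask
buy_ahead = 10 * mask_price            # replace a decaying mask ~10x over 100 years
buy_at_onset = 1 * (2 * mask_price)    # one purchase at a doubled pandemic price
print(buy_ahead, buy_at_onset)         # 50.0 vs 10.0 -> waiting is ~5x cheaper here
```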