Dalcy

Let the wonder never fade!

Aspiring alignment researcher with a keen interest in agent foundations. Studying math, physics, theoretical CS (Harvard 2027). Contact me via Discord: dalcy_me, email: dalcy.mail@gmail.com. They / Them, He / Him.

Comments

I don't see any feasible way that gene editing or 'mind uploading' could work within the next few decades. Gene editing for intelligence seems unfeasible because human intelligence is a massively polygenic trait, influenced by thousands to tens of thousands of quantitative trait loci.

I think the authors in the post referenced above agree with this premise and still consider human intelligence augmentation via polygenic editing to be feasible within the next few decades! I think their technical claims hold up, so personally I'd be very excited to see MIRI pivot towards supporting their general direction. I'd be interested to hear your opinions on their post.

I am curious how often asymptotic results, proven using features of the problem that seem basically practically irrelevant, become relevant in practice.

Like, I understand that there are many asymptotic results (e.g., the free energy principle in SLT) that are useful in practice, but I feel like there's something sus about similar results from information theory or complexity theory, where the way certain bounds (or inclusion relationships, for complexity theory) are proven seems totally detached from practicality?

  • the joint source-channel coding theorem is often cited as the reason we can treat compression and redundancy (error correction) as separate problems, but when you actually look at the proof, it only establishes possibility (proven via insanely long block codes), so it's not at all trivial that this equivalence holds in the context of practical code engineering
  • complexity theory talks about stuff like quantifying some property over all possible boolean circuits of a given size, which seems to me to rest on a feature of the problem so utterly irrelevant to real programs that I'm suspicious it can say meaningful things about what we see in practice
    • as an aside, does the P vs NP distinction even matter in practice? We just ... seem to have very good approximations to NP problems via algorithms that exploit the structures specific to the problem and to the domains where we want things to be fast; and as long as complexity-theoretic methods don't take those fine problem-specific structures into account, I don't see how they would characterize such well-approximated problems using complexity classes.
    • Wigderson's book had a short section on average-case complexity, which I hoped would be this kind of result, and I'm unimpressed (the problem doesn't sound any easier: now how do you specify the natural distribution??)
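To make the first bullet concrete, here's a toy sketch of my own (assuming a Bernoulli(0.1) source and standard Huffman coding; the specific numbers are illustrative, not from the theorem's proof) showing that the entropy bound of the source coding theorem is only approached by jointly coding longer and longer blocks:

```python
import heapq
import itertools
import math

def huffman_expected_length(probs):
    """Expected codeword length (bits) of an optimal binary Huffman code
    for a distribution given as a list of probabilities."""
    counter = itertools.count()  # tie-breaker so equal floats never collide
    heap = [(p, next(counter)) for p in probs]
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        total += p1 + p2  # each merge adds one bit to every leaf below it
        heapq.heappush(heap, (p1 + p2, next(counter)))
    return total

p = 0.1  # Bernoulli(0.1) source
H = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # entropy ~ 0.469 bits/symbol

for k in (1, 2, 4, 8):  # encode blocks of k source symbols jointly
    block_probs = [
        p ** sum(bits) * (1 - p) ** (k - sum(bits))
        for bits in itertools.product((0, 1), repeat=k)
    ]
    rate = huffman_expected_length(block_probs) / k
    print(f"block size {k}: {rate:.3f} bits/symbol (entropy bound {H:.3f})")
```

At block size 1 you're stuck at 1 bit/symbol; only as the block length grows does the rate creep toward the ~0.469-bit entropy, which is exactly the "possibility via long codes" flavor the bullet complains about.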

Found an example in the wild with mutual information! These equivalent definitions of mutual information undergo concept splintering as you go beyond just 2 variables:

    • interpretation: relative entropy between the joint and the product of marginals
    • interpretation: joint entropy minus all unshared info
      • ... become bound information

... each with different properties (e.g., co-information is a bit too sensitive, since just a single pair being independent reduces the whole thing to 0, while total correlation seems to overcount a bit, etc.) and so with different uses (e.g., bound information is interesting for time series).
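A quick sanity check of the splintering (my own toy computation, using the standard definitions of total correlation, dual total correlation, and co-information) on the XOR distribution, where the three generalizations all disagree:

```python
import itertools
import math
from collections import defaultdict

def H(dist):
    """Shannon entropy in bits of a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, axes):
    """Marginalize a joint dict {tuple: prob} onto the given axis indices."""
    out = defaultdict(float)
    for outcome, p in joint.items():
        out[tuple(outcome[i] for i in axes)] += p
    return dict(out)

# XOR distribution: X, Y independent uniform bits, Z = X xor Y.
joint = {(x, y, x ^ y): 0.25 for x, y in itertools.product((0, 1), repeat=2)}
n = 3

Hj = H(joint)                                                    # H(X,Y,Z) = 2
Hsingle = [H(marginal(joint, (i,))) for i in range(n)]           # each H(Xi) = 1
Hpair = [H(marginal(joint, c)) for c in itertools.combinations(range(n), 2)]

# Three inequivalent generalizations of 2-variable mutual information:
total_correlation = sum(Hsingle) - Hj               # sum H(Xi) - H(joint)
dual_total_corr = sum(Hpair) - (n - 1) * Hj         # = H(joint) - sum H(Xi | rest)
co_information = sum(Hsingle) - sum(Hpair) + Hj     # inclusion-exclusion

print(total_correlation, dual_total_corr, co_information)  # 1.0 2.0 -1.0
```

All three coincide with I(X;Y) for two variables, yet here they give 1, 2, and -1 bits respectively, so they're genuinely measuring different things.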

'Symmetry' implies 'redundant coordinate' implies 'cyclic coordinates in your Lagrangian / Hamiltonian' implies 'conservation of conjugate momentum'

And because the action principle (where the true system trajectory extremizes your action, i.e., the integral of the Lagrangian) holds in a wide variety of dynamical systems, the above argument carries over to non-physical dynamical systems.

Thus conserved quantities usually exist in a given dynamical system.
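A minimal concrete instance of the chain above, for a particle in a central potential: the angle is a cyclic coordinate (the Lagrangian doesn't depend on it), so the Euler-Lagrange equation immediately conserves its conjugate momentum, the angular momentum:

```latex
% Central potential: L does not depend on \theta (a cyclic coordinate),
% so the Euler-Lagrange equation conserves the conjugate momentum.
L = \tfrac{1}{2} m \bigl( \dot r^{2} + r^{2} \dot\theta^{2} \bigr) - V(r),
\qquad
\frac{\partial L}{\partial \theta} = 0
\;\Longrightarrow\;
\frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial L}{\partial \dot\theta}
  = \frac{\mathrm{d}}{\mathrm{d}t} \bigl( m r^{2} \dot\theta \bigr) = 0 .
```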

mmm, but why does the action principle hold in such a wide variety of systems though? (like how you get entropy by postulating something to be maximized in an equilibrium setting)

Bella is meeting a psychotherapist, but they treat her fear as something irrational. This doesn't help, and only makes Bella more anxious. She feels like even her therapist doesn't understand her.

How would one find a therapist in their local area who's aware of what's going on in EA/rationalist circles, such that they wouldn't read statements about, say, x-risks as schizophrenic/paranoid?

I am very interested in this, especially in the context of alignment research and solving not-yet-understood problems in general. Since I have no strong commitments this month (and was going to do something similar to this anyways), I will try this every day for the next two weeks and report back on how it goes (writing this comment as a commitment mechanism!).

Have a large group of people attempt to practice problems from each domain, randomizing the order that they each tackle the problems in. (The ideal version of this takes a few months)

...

As part of each problem, they do meta-reflection on "how to think better", aiming specifically to extract general insights and intuitions. They check what processes seemed to actually lead to the answer, even when they switch to a new domain they haven't studied before.

Within this upper-level feedback loop (at the scale of whole problems, taking hours or days), I'm guessing a lower-level loop would involve something like cognitive strategy tuning to get real-time feedback as you're solving the problems?

I had something like locality in mind when writing this shortform, the context being: [I'm in my room -> I notice itch -> I realize there's a mosquito somewhere in my room -> I deliberately pursue and kill the mosquito that I wouldn't have known existed without the itch]

But, again, this probably wouldn't amount to much selection pressure, partly because the vast majority of the mosquito population exists in places where such locality doesn't hold, i.e., in open environments.

Makes sense. I think we're using the terms with different scopes. By "DL paradigm" I meant to encompass the kind of stuff you mentioned (RL-directing-SS-target (active learning), online learning, different architectures, etc.), because they really seem like "engineering challenges" to me (despite covering a broad space of algorithms), in the sense that capabilities researchers already seem to be working on and scaling them without facing any apparent blockers to further progress, i.e., without needing "fundamental breakthroughs", by which I was pointing at paradigm shifts away from DL, like, idk, symbolic learning.

But the evolutionary timescale at which mosquitos can adapt to avoid detection must be faster than that of humans adapting to find mosquitos itchy! Or so I thought. My current boring guess is that (1) the mechanisms by which the human body detects foreign particles are fairly "broad", (2) the adaptations mosquitos would need to evade them are not way too simple, and (3) we just haven't applied enough selection pressure to make such a change happen.
