Dalcy Bremin

Let the wonder never fade!

Having lived ~19 years, I can distinctly remember around 5-6 times when I explicitly noticed myself experiencing totally new qualia, with my inner monologue going "oh wow! I didn't know this dimension of qualia was a thing." Examples:

  • a hard-to-explain sense that my mind was expanding horizontally, with fractal cube-like structures (think bismuth) forming around it and my subjective experience gliding along their surface, which lasted ~5 minutes after taking zolpidem for the first time to sleep (2 days ago)
  • getting drunk for the first time (half a year ago)
  • feeling absolutely euphoric after having a cool math insight (a year ago)
  • ...

Reminds me of myself around a decade ago, completely incapable of understanding why my uncle smoked, thinking "huh? The smoke isn't even sweet, why would you want to do that?" Now that I have [addiction-to-X] as a clear dimension of qualia/experience solidified in myself, I can better model the subjective experiences of smokers even though I've never smoked myself. Reminds me of the SSC classic.

Also, one observation: it feels like the rate at which I acquire these is increasing, probably because of greater self-awareness plus a larger option space as I reach adulthood (like being able to drink).

Anyways, I think it’s really cool, and can’t wait for more.

I absolutely hate bureaucracy, dumb forms, stupid websites, etc. Like, I almost had a literal breakdown trying to install Minecraft recently (and eventually failed). God.

This shortform just reminded me to buy a CO2 sensor and, holy shit, turns out my room is at ~1500 ppm.

While it's too soon to say for sure, this may actually be the underlying cause of a bunch of problems I've noticed myself having primarily in my room (insomnia, inability to focus or read, high irritability, etc.).

Although I always suspected bad air quality, it really is something to actually see the number with your own eyes, wow. Thank you so, so much for posting about this!!

One of the rare insightful lessons from high school: don't set your AC to the minimum temperature even if it's really hot; just set it to the temperature you actually want.

It's not like the air released gets colder with a lower target temperature: most ACs (according to my teacher; I haven't checked, lol) are just simple control systems that switch on/off around the target temperature, meaning the time it takes to reach a certain temperature X is independent of the target temperature (as long as the target is below X).

... which is embarrassingly obvious in hindsight.
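
A minimal toy simulation of that claim (my own sketch, assuming an idealized on/off controller with a fixed cooling rate; none of this is from the class):

```python
# Toy model: a "bang-bang" AC cools at a fixed rate whenever the room is
# above the setpoint. Consequence: the time to first reach a temperature X
# is identical for every setpoint below X.

def minutes_to_reach(start, x, setpoint, cool_rate=0.5, dt=0.1):
    """Minutes until the room first hits temperature x (temps in Celsius)."""
    temp, t = start, 0.0
    while temp > x:
        if temp > setpoint:          # AC switches on above the setpoint
            temp -= cool_rate * dt   # fixed cooling power while running
        t += dt
    return round(t, 1)

for setpoint in (16, 20, 24):
    print(setpoint, minutes_to_reach(start=30, x=26, setpoint=setpoint))
# Prints the same time (8.0 minutes) for all three setpoints: the AC runs
# flat-out the whole way down, so the target only matters once you get there.
```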

God, I wish real analysis were at least half as elegant as any other math subject; way too many pathological examples that I couldn't care less about. I've heard some good things about constructivism, though; hopefully analysis is done better there.

I think the point of having an explicit, human-legible world model / simulation is to make desiderata formally verifiable, which I don't think would be possible with a black-box system (like an LLM with wrappers).

Also important to note:

The phenomenon you call by names like "goals" or "agency" is one possible shadow of the deep structure of optimization - roughly, preimaging outcomes onto choices by reversing a complicated transformation.

 - @esyudkowsky

i.e. if we were to pin down something we actually care about, that'd be "a system exhibiting consequentialism", because those are the kinds of systems that will end up shaping our lightcone and more. Consequentialism is convergent in an optimization process, i.e. the "deep structure of optimization". Terms like "goals" or "agency" are shadows of consequentialism, finite approximations of this deep structure.

And by virtue of being finite approximations (e.g. they're embedded), these "agents" have a bunch of convergent properties that make it easy for us to reason about the "deep structure" itself, like modularity, having a world-model, etc. (check johnswentworth's comment).

Edit: Also the following quote:

it is relatively unimportant to understand agency for its own sake or intelligence for its own sake or optimization for its own sake. Instead we should remember that these are frames for understanding these patterns that exert influence over the future

re: reducing magic and putting bounds, I'm reminded of Cleo Nardo's Hodge Podge Alignment proposal.

moments of microscopic fun encountered while studying/researching:

  • Quantum mechanics calls vectors and their duals "kets" and "bras" because ... bra-c-ket. What can I say? I like it. But where did the letter 'c' go, Dirac?
  • Defining Cauchy sequences and limits in real analysis: it's really cool how you "bootstrap" the definition of Cauchy sequences / limits on the reals using the definition of Cauchy sequences / limits on the rationals. Basically (a sketch in standard notation follows the list):
    • (1) define Cauchy sequence on rationals
    • (2) use it to define limit (on rationals) using rational-Cauchy
    • (3) use it to define reals
    • (4) use it to define Cauchy sequence on reals
    • (5) show it's consistent with Cauchy sequence on rationals in both directions
      • a. rationals are embedded in the reals, hence the real-Cauchy definition subsumes the rational-Cauchy definition
      • b. for any positive real ε you can find a positive rational smaller than it, hence a sequence being rational-Cauchy means it is also real-Cauchy
    • (6) define limit (on reals)
    • (7) show it's consistent with limit on rationals
    • (8) ... and that they're equivalent to real-Cauchy
    • (9) proceed to ignore the distinction b/w real-Cauchy/limit and their rational counterpart. Slick!
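
For concreteness, here's a sketch of steps (1)-(3) in standard notation (my own rendering of the usual construction, not verbatim from any particular textbook):

```latex
% Step (1): rational-Cauchy, with epsilon quantified over the rationals
\[
(a_n)\ \text{is rational-Cauchy} \iff
\forall \varepsilon \in \mathbb{Q}_{>0}\ \exists N\ \forall m,n \ge N:\ |a_m - a_n| < \varepsilon
\]
% Step (3): reals as equivalence classes of rational-Cauchy sequences
\[
(a_n) \sim (b_n) \iff
\forall \varepsilon \in \mathbb{Q}_{>0}\ \exists N\ \forall n \ge N:\ |a_n - b_n| < \varepsilon,
\qquad
\mathbb{R} := \{\text{rational-Cauchy sequences}\} / \sim
\]
% Step (5b) hinges on: for every real eps > 0 there is a rational q with
% 0 < q < eps, so quantifying eps over Q or R picks out the same sequences.
```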

(will probably keep updating this in the replies)

That means the problem is inherently unsolvable by iteration. "See what goes wrong and fix it" auto-fails if The Client cannot tell that anything is wrong.

Not at all meant as a general solution to this problem, but I think one specific case where we could turn this into something iterable is using historical examples of scientific breakthroughs: take a past breakthrough on a problem where the solution is (in hindsight) overdetermined, train the AI on data filtered by date, and have The Client evaluate the AI solely on how closely it approaches that overdetermined answer.

As a specific example: imagine feeding the AI the historical context that led up to the development of information theory, and checking whether the AI converges onto something isomorphic to what Shannon found (training with an information cutoff, of course). Information theory surely seems like The Overdetermined Solution to the sorts of problems that motivated it, so the job of the client/evaluator is much easier.
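
A hypothetical sketch of what that evaluation loop could look like; the corpus schema, `train`, `similarity`, and `shannon_case` are all made-up placeholders for illustration, not references to anything real:

```python
from datetime import date

def historical_benchmark(corpus, cutoff, case, train, similarity):
    """Train only on pre-cutoff documents, then score the model's proposal
    against the (hindsight-overdetermined) historical solution."""
    pre_cutoff = [doc for doc in corpus if doc["date"] < cutoff]  # info cutoff
    model = train(pre_cutoff)
    proposal = model.solve(case["problem_statement"])
    return similarity(proposal, case["known_answer"])

# e.g., information theory as the target breakthrough (cutoff illustrative):
# score = historical_benchmark(corpus, cutoff=date(1948, 1, 1),
#                              case=shannon_case, train=train,
#                              similarity=similarity)
```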

Of course, this is probably still too difficult in practice (e.g. not enough high-quality historical data on breakthroughs, evaluation & data curation still demanding great expertise, the hope of "... and now our AI should generalize to genuinely novel problems!" not cashing out, the scope of this specific example being too limited, etc.).

But the situation for this specific example sounds somewhat better than the one laid out in this post, i.e. The Client themselves needing the expertise to evaluate non-hindsight-based supposed alignment breakthroughs and having to operate in completely novel intellectual territory.
