Any advice on reducing neck and shoulder pain while studying? For me that's my biggest blocker to being able to focus longer (especially for math, where I have to look down at my notes/book for a long period of time). I'm considering stuff like getting a standing desk or doing regular back/shoulder exercises. Would like to hear what everyone else's setups are.
I've used Pain Science in the past as a resource and highly, highly endorse it. Here is an article they have on neck pain.
(Quality: low; only read when you have nothing better to do. It also doesn't cite much.)
30-minute high-LLM-temp stream-of-consciousness on "How do we make mechanistic interpretability work for non-transformers, or just any architectures?"
I used to try out near-random search on ideaspace, where I made a quick app that spat out 3~5 random words from a dictionary of interesting words/concepts that I curated, and I spent 5 minutes every day thinking very hard on whether anything interesting came out of those combinations.
Of course I knew random search on exponential space was futile, but I got a couple cool invention ideas (most of which turned out to already exist), like:
You can probably do this smarter now if you wanted, with eg better GPT models.
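For reference, a minimal sketch of the original word-spitting app in Python (the word list here is a made-up stand-in for the curated dictionary):

```python
import random

# Stand-in word list; the original was a personally curated dictionary of
# interesting words/concepts.
CONCEPTS = [
    "entropy", "mycelium", "auction", "gradient", "origami",
    "ratchet", "swarm", "prism", "ledger", "enzyme",
]

def daily_prompt(rng=random):
    """Draw 3~5 random concepts to stare at for five minutes."""
    k = rng.randint(3, 5)
    return rng.sample(CONCEPTS, k)

if __name__ == "__main__":
    print(daily_prompt())
```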
Having lived ~19 years, I can distinctly remember around 5~6 times when I explicitly noticed myself experiencing totally new qualia, with my inner monologue going “oh wow! I didn't know this dimension of qualia was a thing.” Examples:
Reminds me of myself around a decade ago, completely incapable of understanding why my uncle smoked, thinking "huh? The smoke isn't even sweet, why would you want to do that?" Now that I have [addiction-to-X] as a clear dimension of qualia/experience solidified in myself, I can better model smokers' subjective experiences even though I've never smoked myself. Reminds me of the SSC classic.
Also, one observation is that it feels like the rate at which I acquire these is getting faster, probably because of an increase in self-awareness + an increased option space as I reach adulthood (like being able to drink).
Anyways, I think it’s really cool, and can’t wait for more.
Sunlight scattered by the atmosphere on cloudless mornings during the hour before sunrise inspires a subtle feeling ("this is cool, maybe even exciting") that I never noticed till I started intentionally exposing myself to it for health reasons (specifically, making it easier to fall asleep 18 hours later).
More precisely, I might or might not have noticed the feeling, but if I did notice it, I quickly forgot about it because I had no idea how to reproduce it.
I have to get away from artificial light (streetlamps) (and from direct (yellow) sunlight) for the (blue) indirect sunlight to have this effect. Also, it is no good looking at a small patch of sky, e.g., through a window in a building: most or all of the upper half of my field of vision must be receiving this indirect sunlight. (The intrinsically-photosensitive retinal ganglion cells are all over the bottom half of the retina, but absent from the top half.)
Complaint with Pugh's real analysis textbook: He doesn't even define the limit of a function properly?!
It's implicitly defined together with the definition of continuity, where the condition is $|x - x_0| < \delta$, but in Chapter 3, when defining differentiability, he implicitly switches the condition to $0 < |x - x_0| < \delta$ without even mentioning it (nor the requirement that $x_0$ now needs to be an accumulation point!). While Pugh's book has its own benefits, coming from a Terry Tao analysis textbook background, this is absurd!
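For concreteness, here's my reconstruction of the two definitions being conflated (not Pugh's exact wording):

```latex
\begin{align*}
\text{continuity at } x_0:&\quad \forall \varepsilon > 0\ \exists \delta > 0:\ |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon,\\
\text{limit at } x_0 \text{ (an accumulation point)}:&\quad \forall \varepsilon > 0\ \exists \delta > 0:\ 0 < |x - x_0| < \delta \implies |f(x) - L| < \varepsilon.
\end{align*}
```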
(though to be fair, Terry Tao has the exact same issue in Book 2, where his definition of function continuity via limits in metric spaces precedes the definition of limits in general ... the only redeeming factor is that limits are defined rigorously in Book 1, in the limited context of $\mathbb{R}$)
*sigh* I guess we're still pretty far from reaching the Pareto Frontier of textbook quality, at least in real analysis.
... Speaking of Pareto Frontiers, would anyone say there is such a textbook that is close to that frontier, at least in a different subject? Would love to read one of those.
Maybe you should email Pugh with the feedback? (I audited his honors analysis course in fall 2017; he seemed nice.)
As far as the frontier of analysis textbooks goes, I really like how Schröder's Mathematical Analysis manages to be both rigorous and friendly: the early chapters patiently explain standard proof techniques (like the add-and-subtract triangle inequality gambit) to the novice who hasn't seen them before, but the punishing details of the subject are in no way simplified. (One wonders if the subtitle "A Concise Introduction" was intended ironically.)
What's a good technical introduction to Decision Theory and Game Theory for alignment researchers? I'm guessing standard undergrad textbooks don't include, say, content about logical decision theory. I've mostly been reading posts on LW but as with most stuff here they feel more like self-contained blog posts (rather than textbooks that build on top of a common context) so I was wondering if there was anything like a canonical resource providing a unified technical / math-y perspective on the whole subject.
The MIRI Research Guide recommends An Introduction to Decision Theory and Game Theory: An Introduction. I have read neither and am simply relaying the recommendation.
Yeah, real analysis sucks. But you have to go through it to get to delightful stuff— I particularly love harmonic and functional analysis. Real analysis is just a bunch of pathological cases and technical persnicketiness that you need to have to keep you from steering over a cliff when you get to the more advanced stuff. I’ve encountered some other subjects that have the same feeling to them. For example, measure-theoretic probability is a dry technical subject that you need to get through before you get the fun of stochastic differential equations. Same with commutative algebra and algebraic geometry, or point-set topology and differential geometry.
Constructivism, in my experience, makes real analysis more mind blowing, but also harder to reason about. My brain uses non-constructive methods subconsciously, so it’s hard for me to notice when I’ve transgressed the rules of constructivism.
Update: huh, nonstandard analysis is really cool. Not only are things much more intuitive (using infinitesimals from the hyperreals instead of the epsilon-delta formulation for everything), but by the transfer principle all first-order statements are equivalent between standard and nonstandard analysis!
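As a small illustration (standard facts of nonstandard analysis, nothing specific to any one textbook), continuity and the transfer principle read as:

```latex
\begin{align*}
f \text{ is continuous at } x \in \mathbb{R} &\iff f^*(x + \varepsilon) \approx f(x) \text{ for every infinitesimal } \varepsilon,\\
\text{(transfer)}\qquad \mathbb{R} \models \varphi &\iff {}^*\mathbb{R} \models \varphi^* \quad \text{for every first-order sentence } \varphi.
\end{align*}
```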
There were various notions/frames of optimization floating around, and I tried my best to distill them:
I have several thoughts on these framings, but one trouble is the excessive usage of words to represent "clusters," i.e. terms that group a bunch of correlated variables. Selection vs Control, for example, doesn't have a clear definition/criterion but rather points at a number of correlated things, like internal structure, search, maps, control-like things, etc.
Sure, deconfusing and pointing out clusters is useful because clusters imply correlations, and correlations perhaps imply hidden structure + relationships. But I think the costs of cluster-representing words doing hidden inference are much greater than the benefits, and it would be better to explicitly lay out the features of the cluster one is referring to instead of just using the name of the cluster.
This is similar to the trouble I had with "wrapper-minds," which is yet another example of a cluster pointing at a bunch of correlated variables, and people using the same term to mean different things.
Anyways, I still feel totally confused about optimization, and while these clusters/frames are useful, I think thinking in terms of them would create even more confusion for me. It's probably better to take the useful individual parts within each cluster and start deconfusing from the ground up, using those as the building blocks.
Why haven't mosquitos evolved to be less itchy? Is there just not enough selection pressure posed by humans yet? (yes, probably) Or are they evolving in that direction? (they have of course already evolved towards being less itchy while biting, but not enough to make that lack of itch permanent)
this is a request for help i've been trying and failing to catch this one for god knows how long plz halp
tbh would be somewhat content coexisting with them (at the level of houseflies) as long as they evolved the itch and high-pitch noise away, modulo disease risk considerations.
The reason mosquito bites itch is that the mosquito injects saliva into your skin. Saliva contains mosquito antigens, foreign particles that your body has evolved to attack with an inflammatory immune response that causes itching. The compound histamine is a key signaling molecule your body uses to drive this reaction.
In order for mosquitoes to avoid provoking this reaction, they would either have to avoid leaving compounds inside your body, or mutate those compounds so that they do not provoke an immune response. The human immune system is an adversarial opponent designed to recognize foreign particles in general. If it were tractable for organisms to reliably evolve to avoid provoking this response, that would represent a fundamental vulnerability in the human immune system.
Mosquito saliva does in fact contain anti-inflammatory, antihemostatic, and immunomodulatory compounds. So they're trying! But this also means that mosquitos have evolved to put saliva inside you when they feed, which means they're inevitably going to expose the foreign particles they produce to your immune system.
There's also a facet of selection bias making mosquitos appear unsuccessful at making their bites less itchy. If a mosquito did evolve to provoke less of an immune response, and therefore less itching, redness, and swelling, you probably wouldn't notice it had bitten you. People often perceive that some people are prone to getting bitten and others aren't. It may be that part of this is that some people don't have as serious an immune response to mosquito bites, so they think they get bitten less often.
I'm sure there are several PhDs worth of research questions to investigate here - I'm a biomedical engineer with a good basic understanding of the immune system, but I don't study mosquitos.
Because they get no reproductive advantage from being less itchy. You can kill them while they're feeding, which is why they put lots of evolutionary effort into not being noticed. (They have an anesthetic in their saliva so you are unlikely to notice the bite.) By the time you develop the itchy bump, they've flown away and you can't kill them.
There’s also positive selection for itchiness. Mosquito spit contains dozens of carefully evolved proteins. We don’t know what they all are, but some of them are anticoagulants and anesthetics. Presumably they wouldn’t be there if they didn’t have a purpose. And your body, when it detects these foreign proteins, mounts a protective reaction, causing redness, swelling, and itching. IIRC, that reaction does a good job of killing any viruses that came in with the mosquito saliva. We’ve evolved to have that reaction. The itchiness is probably good for killing any bloodsuckers that don’t flee quickly. It certainly works against ticks.
Evolution is not our friend. It doesn’t give us what we want, just what we need.
People mean different things when they say "values" (object vs meta values)
I noticed that people often mean different things when they say "values," and they end up talking past each other (or convergence only happens after a long discussion). One of the differences is whether "values" include meta-level values.
Man, deviation arguments are so cool:
It's easy to get confused and think the circularity is a problem ("how can you define thermodynamics in terms of equilibria, when equilibria are defined using thermodynamics?"), but it's all about carving nature at the right joints. A sign that you made the right carving is that the corrections that need to be applied aren't too numerous, and that they all seem "natural" (and of course, all of this while letting you make nontrivial predictions; that's what matters at the end of the day).
Then, it's often the case that those corrections also turn out to be meaningful and natural quantities of interest.
I wonder if something like the following would make it possible to study textbooks more efficiently using LLMs:
When I study textbooks, I spend a significant amount of time improving my mental autocompletion: familiarizing myself with the terminology, learning which words or proof styles usually come up in which contexts, etc. Doing this seems to significantly improve my ability to read e.g. long proofs, since I can ignore all the pesky details (which I can trust my mental autocompletion to fill in later if needed) and allocate my effort to getting a high-level view of the proof.
Textbooks don't really admit this style of learning, because the students don't have prior knowledge of all the concept-dependencies of a new subject they're learning, and thus are forced to start at the lowest-level and make their way up to the high-level perspective.
Perhaps LLMs will let us reverse this direction, instead going from the highest to the lowest.
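A minimal sketch of what I have in mind, assuming the OpenAI Python client (the model name, prompts, and file are placeholders; any capable LLM would do): ask for the high-level skeleton of a proof first, then drill down only where your mental autocompletion fails.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Hypothetical proof excerpt pulled from a textbook.
proof = open("riesz_representation_proof.txt").read()

# Top-down pass: get the skeleton before the pesky details.
skeleton = ask(
    "Summarize this proof as 4-6 high-level steps, naming the key lemmas "
    "and standard moves used, without epsilon-level detail:\n\n" + proof
)
print(skeleton)

# Drill down only where my mental autocompletion fails.
detail = ask("Expand step 3 of the following skeleton in full detail:\n\n" + skeleton)
print(detail)
```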
One of the rare insightful lessons from high school: Don't set your AC to the minimum temperature even if it's really hot, just set it to where you want it to be.
It's not like the air released gets colder with a lower target temperature, because most ACs (according to my teacher, I haven't checked lol) are just simple control systems that turn themselves on/off around the target temperature, meaning the time it takes to reach a certain temperature X is independent of the target temperature (as long as it's lower than X).
... which is embarrassingly obvious in hindsight.
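A toy simulation of the teacher's claim (made-up numbers, simple on/off control, no inverter): the time to get the room down to some temperature X is the same for any setpoint below X, because the compressor is simply on the whole way down.

```python
def time_to_reach(target, setpoint, start=30.0, outdoor=35.0,
                  cool_rate=0.5, leak_rate=0.05, dt=0.01):
    """Toy on/off AC: the compressor runs at full power whenever the room
    temperature is above the setpoint, and is off otherwise."""
    t, temp = 0.0, start
    while temp > target:
        compressor_on = temp > setpoint
        dtemp = leak_rate * (outdoor - temp) - (cool_rate if compressor_on else 0.0)
        temp += dtemp * dt
        t += dt
    return t

# Same time to hit 26 °C whether you ask the AC for 26 or for 16:
print(time_to_reach(26.0, setpoint=26.0))
print(time_to_reach(26.0, setpoint=16.0))
```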
Well, he is right about some ACs being simple on/off units.
But there also exist units that can change cycle speed; it's basically the same thing, except the motor driving the compression cycle can vary in speed.
In case you were wondering, they are called inverters. And when buying new today, you really should get an inverter (for efficiency).
moments of microscopic fun encountered while studying/researching:
(will probably keep updating this in the replies)
Just noticing that the negation of a statement exists is enough to make meaningful updates.
e.g. I used to (implicitly) think "Chatbot Romance is weird" without having evaluated anything in-depth about the subject (and consequently didn't have any strong opinions about it)—probably as a result of some underlying cached belief.
But after seeing this post, just reading the title was enough to make me go (1) "Oh! I just realized it is perfectly possible to argue in favor of Chatbot Romance ... my belief on this subject must be a cached belief!" (2) hence is probably by-default biased towards something like the consensus opinion, and (3) so I should update away from my current direction, even without reading the post.
(Note: This was a post, but in retrospect was probably better to be posted as a shortform)
(Epistemic Status: 20 minutes' worth of thinking, haven't done any builder/breaker on this yet although I plan to, and would welcome any attempts in the comments)
(This probably won't work, or at least isn't as simple as described above. Again, welcome any builder/breaker attempts!)
Quick thoughts on my plans:
Dunno, sounds pretty reasonable!
Useful perspective when thinking of mechanistic pictures of agent/value development is to take the "perspective" of different optimizers, consider their relative "power," and how they interact with each other.
E.g., early on SGD is the dominant optimizer, which has the properties of having direct access to feedback from U and being greedy. Later on, early proto-GPS (general-purpose search) forms, which is less greedy but can still largely be swayed by SGD (such as having its problem-specification input tweaked, having the overall GPS implementation modified, etc.). Much later, GPS becomes the dominant optimizing force "at run-time," which shortens the relevant timescale so that we can ignore SGD's effect. This effect becomes much more pronounced after reflectivity + gradient hacking, when the GPS's optimization target becomes fixed.
(very much inspired by reading Thane Ruthenis's value formation post)
This is a very useful approximation at the late stage when the GPS self-modifies the agent in pursuit of its objective! Rather than having to meticulously think about local SGD gradient incentives and such, since GPS is non-greedy, we can directly model it as doing what's obviously rational from a bird's-eye perspective.
(kinda similar to e.g., separation of timescale when analyzing dynamical systems)
It seems like retrieval-based transformers like RETRO are "obviously" the way to go: (1) there's just no need to store all the factual information as fixed weights, and (2) they use far fewer parameters / much less memory. Maybe mechanistic interpretability should start paying more attention to these types of architectures, especially since they're probably going to be a more relevant form of architecture.
They might also be easier to interpret thanks to specialization!
I've noticed during my alignment study that just the sheer amount of relevant posts out there is giving me a pretty bad habit of (1) passively engaging with the material and (2) not doing much independent thinking. Just keeping up to date & distilling the stuff in my todo read list takes up most of my time.
Okay, then, don't do that! Some directions to move towards:
I recently learned about metauni, and it looks amazing. TL;DR, a bunch of researchers give out lectures or seminars on Roblox - Topics include AI alignment/policy, Natural Abstractions, Topos Theory, Singular Learning Theory, etc.
I haven't actually participated in any of their live events yet and only watched their videos, but they all look really interesting. I'm somewhat surprised that there hasn't been much discussion about this on LW!
Is there a case for AI gain-of-function research?
(Epistemic Status: I don't endorse this yet, just thinking aloud. Please let me know if you want to act/research based on this idea)
It seems like it should be possible to materialize certain forms of AI alignment failure modes with today's deep learning algorithms, if we directly optimize for their discovery. For example, training a Gradient Hacker Enzyme.
A possible benefit of this would be that it gives us bits of evidence wrt how such hypothesized risks would actually manifest in real training environments. While the similarities would be limited because the training setups would be optimizing for their discovery, it should at least serve as a good lower bound for the scenarios in which these risks could manifest.
Perhaps having a concrete bound for when dangerous capabilities appear (eg a X parameter model trained in Y modality has Z chance of forming a gradient hacker) would make it easier for policy folks to push for regulations.
Is AI gain-of-function research as dangerous as biotech gain-of-function research? Some arguments in favor (of the former being dangerous):
Some arguments against:
Random alignment-related idea: train and investigate a "Gradient Hacker Enzyme"
TL;DR: Use meta-learning methods like MAML to train a network submodule (i.e. a circuit) that would resist gradient updates in a wide variety of contexts (various architectures, hyperparameters, modalities, etc.), and use mechanistic interpretability to see how it works.
It should be possible to have a training setup for goals other than "resist gradient updates," such as restricting the meta-objective to a specific sub-sub-circuit. In that case, the outer circuit might (1) instrumentally resist updates, or (2) somehow get modified while keeping its original behavioral objective intact.
This setup doesn't have to be restricted to circuits, of course; there was previous work that did this at the level of activations, although IIUC the model found a trivial solution by exploiting ReLU. It would be interesting to extend this to more diverse setups.
Anyways, varying this "sub-sub-circuit/activation-to-be-preserved" over different meta-learning episodes would incentivize the training process to find "general" Gradient Hacker designs that aren't specific to a particular circuit/activation—a potential precursor for various forms of advanced Gradient Hackers (and some loose analogies to how enzymes accelerate reactions).
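To make the idea concrete, here is a rough toy sketch in PyTorch (my own construction, not an existing method, and only the "resist gradient updates" piece rather than full MAML): each episode, simulate a differentiable inner SGD step and penalize how much it would move the designated "enzyme" submodule. A real version would also need a behavior-preservation term and would vary architectures/tasks across episodes; as written it can fall into trivial solutions (e.g. the rest of the network simply learning to ignore the enzyme).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network; treat the first Linear layer as the would-be "enzyme" circuit.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
enzyme_ids = {id(p) for p in model[0].parameters()}
all_params = list(model.parameters())

meta_opt = torch.optim.Adam(all_params, lr=1e-2)
inner_lr = 0.1

for episode in range(500):
    # Fresh random regression task, standing in for "a wide variety of contexts".
    x, y = torch.randn(32, 4), torch.randn(32, 1)
    task_loss = ((model(x) - y) ** 2).mean()

    # Differentiable simulated inner SGD step (create_graph so the meta-gradient sees it).
    grads = torch.autograd.grad(task_loss, all_params, create_graph=True)

    # Meta-objective: the simulated inner update should barely move the enzyme's weights.
    drift = sum((inner_lr * g).pow(2).sum()
                for p, g in zip(all_params, grads) if id(p) in enzyme_ids)

    meta_opt.zero_grad()
    drift.backward()
    meta_opt.step()

print("final per-episode enzyme drift:", float(drift))
```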
What is the Theory of Impact for training a "Gradient Hacker Enzyme"?
(note: while I think these are valid, they're generated post-hoc and don't reflect the actual process for me coming up with this idea)
This is just a random idea and I'm probably not going to work on it; but if you're interested, let me know. While I don't think this is capabilities-relevant, this probably falls under AI gain-of-function research and should be done with caution.
Update: I'm trying to upskill in mechanistic interpretability, and training a Gradient Hacker Enzyme seems like a fairly good project just to get myself started.
I don't think this project would be highly valuable in and of itself (although I would definitely learn a lot!), so one failure mode I need to avoid is ending up investing too much of my time in this idea. I'll probably spend a total of ~1 week working on it.
I think what's so crushing about it is that it reminds me that the wrong people are designing things, and that they won't allow them to be fixed, and I can only find solace in thinking that the inefficiency of their designs is also a sign that they can be defeated.