
Jemist's Shortform

by J Bostock
31st May 2021

This is a special post for quick takes by J Bostock. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
57 comments, sorted by top scoring
[-]J Bostock5mo*3817

From Rethink Priorities:

  1. We used Monte Carlo simulations to estimate, for various sentience models and across eighteen organisms, the distribution of plausible probabilities of sentience.
  2. We used a similar simulation procedure to estimate the distribution of welfare ranges for eleven of these eighteen organisms, taking into account uncertainty in model choice, the presence of proxies relevant to welfare capacity, and the organisms’ probabilities of sentience (equating this probability with the probability of moral patienthood)

Now with the disclaimer that I do think that RP are doing good and important work and are one of the few organizations seriously thinking about animal welfare priorities...

Their epistemics led them to run a Monte Carlo simulation to determine whether organisms are capable of suffering (and if so, how much), arrive at a value of 5 shrimp = 1 human, and then not bat an eye at this number.

Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information processing one, this implies that shrimp neurons do 4 orders of magnitude more of this process per second than human neurons. The authors get around this by refusing to stake themselves on any theory of consciousness.
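A minimal back-of-the-envelope version of that arithmetic (my own sketch, using only the two ratios quoted above rather than RP's own numbers):

    # Back-of-the-envelope check of the implied per-neuron multiplier.
    # Both ratios are the ones quoted above; nothing here is from the RP report.
    import math

    neuron_ratio = 1e-5     # shrimp have ~5 orders of magnitude fewer neurons
    welfare_ratio = 1 / 5   # the headline result: 5 shrimp ~ 1 human

    # If suffering scaled with neuron count alone, the welfare ratio would equal
    # the neuron ratio. The gap is the factor each shrimp neuron must make up:
    per_neuron_factor = welfare_ratio / neuron_ratio
    print(f"{per_neuron_factor:.1e}")                                    # 2.0e+04
    print(f"~{math.log10(per_neuron_factor):.1f} orders of magnitude")   # ~4.3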

The overall structure of the RP welfare range report does not cut to the truth. Instead, the core mental motion seems to be to engage with as many existing pieces of work as possible; credence is doled out to different schools of thought and pieces of evidence in a way which seems more like appeasement, lip service, or a "well, these guys have done some work, who are we to disrespect them by ignoring it?" attitude. Removal of noise is one of the most important functions of meta-analysis, and it is largely absent here.

The result of this is an epistemology in which the accuracy of a piece of work is a monotonically increasing function of the number of sources, theories, and lines of argument. That is fine if your desired output is a very long Google doc and a reassurance to yourself (and, more cynically, your funders) that "No, no, we did everything right, we reviewed all the evidence and took it all into account," but it's pretty bad if you want to actually be correct.

I grow increasingly convinced that the epistemics of EA are not especially good, are worsening, and are already insufficient for working on the relatively low-stakes and easy issue of animal welfare (as compared to AI x-risk).

Reply11
[-]niplav5mo2312

Their epistemics led them to run a Monte Carlo simulation to determine whether organisms are capable of suffering (and if so, how much), arrive at a value of 5 shrimp = 1 human, and then not bat an eye at this number.

Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information processing one, this implies that shrimp neurons do 4 orders of magnitude more of this process per second than human neurons.

epistemic status: Disagreeing on object-level topic, not the topic of EA epistemics.

I disagree; functionalism especially can justify a number like this. Here's an example of reasoning along those lines:

  1. Suffering is the structure of some computation, and different levels of suffering correspond to different variants of that computation.
  2. What matters is whether that computation is happening.
  3. The structure of suffering is simple enough to be represented in the neurons of a shrimp.

Under that view, shrimp can absolutely suffer in the same range as humans, and the amount of suffering depends on crossing some threshold number of neurons. One might argue that higher levels of suffering require computations of higher complexity, but intuitively I don't buy this—more/purer suffering appears less complicated to me, on introspection (just as higher/purer pleasure appears less complicated as well).

I think I put a bunch of probability mass on a view like above.

(One might argue that it's about the number of times the suffering computation is executed, not whether it's present or not, but I find that view intuitively less plausible.)

You didn't link the report and I'm not able to make it out from all of the Rethink Priorities moral weight research, so I can't agree/disagree on the state of EA epistemics shown in there.

Reply
[-]J Bostock5mo60

I have added a link to the report now.

As to your point: this is one of the better arguments I've heard that welfare ranges might be similar between animals. Still, I don't think it squares well with the actual nature of the brain. Saying there's a single suffering computation would make sense if the brain were like a CPU, where one core does the thinking, but actually all of the neurons in the brain are firing at once and doing computations at the same time. So it makes much more sense to me to think that the more neurons are computing some sort of suffering, the greater the intensity of suffering.

Reply11
[-]Kaj_Sotala5mo30

Can you elaborate how

all of the neurons in the brain are firing at once and doing computations at the same time

leads to

the more neurons are computing some sort of suffering, the greater the intensity of suffering

?

Reply
[-]nielsrolf5mo10

One intuition against this comes from drawing an analogy to LLMs: the residual stream represents many features, and all neurons participate in the representation of each feature. But the difference between a larger and a smaller model is mostly that the larger model can represent more features, not that it represents features with greater magnitude.

In humans, it seems to be the case that consciousness is most strongly connected to processes in the brain stem rather than the neocortex. Here is a great talk about the topic; the main points are (writing from memory, might not be entirely accurate):

  • humans can lose consciousness or produce intense emotions (good and bad) through interventions on a very small area of the brain stem. When other much larger parts of the brain are damaged or missing, humans continue to behave in such a way that one would ascribe emotions to them from interacting with them; for example, they show affection.
  • dopamine, serotonin, and other chemicals that alter consciousness act in the brain stem

If we consider the question from an evolutionary angle, I'd also argue that emotions are more important when an organism has fewer alternatives (like a large brain that does fancy computations). Once better reasoning skills become available, it makes sense to reduce the impact that emotions have on behavior and instead trust abstract reasoning. In my own experience, the intensity with which I feel an emotion is strongly correlated with how action-guiding it is, and I think as a child I felt emotions more intensely than I do now, which also fits the hypothesis that a greater ability to think abstractly reduces the intensity of emotions.

Reply
[-]kairos_5mo32

I agree with you that the "structure of suffering" is likely to be represented in the neurons of shrimp. I think it's clear that shrimps may "suffer" in the sense that they react to pain, move away from sources of pain, would prefer to be in a painless state rather than a painful state, etc.

But where I diverge from the conclusions drawn by Rethink Priorities is that I believe shrimp are less "conscious" (for lack of a better word) than humans, and so their suffering matters less. Though shrimp show outward signs of pain, I sincerely doubt that with just 100,000 neurons there's much of a subjective experience going on there. This is purely intuitive, and I'm not sure of the specific neuroscience of shrimp brains or of Rethink Priorities' arguments against this. But it seems to me that the "level of consciousness" animals have sits on an axis that's roughly correlated with neuron count, with humans and elephants at the top and C. elegans at the bottom.

Another analogy I'll throw out is that humans can react to pain unconsciously. If you put your hand on a hot stove, you will reactively pull your hand away before the feeling of pain enters your conscious perception. I'd guess shrimp pain responses work in a similar way: largely unconscious processing, due to their very low neuron count.

Reply
[-]Jeremy Gillen5mo51

Can you link to where RP says that?

Reply
[-]J Bostock5mo40

Good point, edited a link to the Google Doc into the post.

Reply
[-]CB5mo10

Your disagreement, from what I understand, seems mostly to stem from the fact that shrimp have fewer neurons than humans.

Did you check RP's piece on that topic, "Why Neuron Counts Shouldn't Be Used as Proxies for Moral Weight?"

https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral

They say this:

"In regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight; 

Many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and

There is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.

Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely"

Reply
[-]Garrett Baker5mo10

This hardly seems an argument against the one in the shortform, namely

Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information processing one, this implies that shrimp neurons do 4 orders of magnitude more of this process per second than human neurons. The authors get around this by refusing to stake themselves on any theory of consciousness.

If the original authors never thought of this that seems on them.

Reply
[-]J Bostock1mo3720

Are there any high p(doom) orgs who are focused on the following:

  1. Pick an alignment "plan" from a frontier lab (or org like AISI)
  2. Demonstrate how the plan breaks or doesn't work
  3. Present this clearly and legibly for policymakers

Seems like this is a good way for people to deploy technical talent in a way which is tractable. There are a lot of people who are smart but not alignment-solving levels of smart who are currently not really able to help.

Reply1
[-]Buck1mo223

I'd say that work like our Alignment Faking in Large Language Models paper (and the model organisms/alignment stress-testing field more generally) is pretty similar to this (including the "present this clearly to policymakers" part).

A few issues:

  • AI companies don't actually have specific plans; they mostly just hope that they'll be able to iterate. (See Sam Bowman's bumper post for an articulation of a plan like this.) I think this is a reasonable approach in principle: this is how progress happens in a lot of fields. For example, the AI companies don't have plans for all kinds of problems that will arise with their capabilities research in the next few years; they just hope to figure it out as they get there. But the lack of specific proposals makes it harder to demonstrate particular flaws.
  • A lot of my concerns about alignment proposals are that when AIs are sufficiently smart, the plan won't work anymore. But in many cases, the plan does actually work fine right now at ensuring particular alignment properties. (Most obviously, right now, AIs are so bad at reasoning about training processes that scheming isn't that much of an active concern.) So you can't directly demonstrate that current plans will fail later without making analogies to future systems; and observers (reasonably enough) are less persuaded by evidence that requires you to assess the analogousness of a setup.
Reply
[-]Oliver Sourbut1mo50

(Psst: a lot of AISI's work is this, and they have sufficient independence and expertise credentials to be quite credible; this doesn't go for all of their work, some of which is indeed 'try for a better plan')

Reply
[-]Thane Ruthenis1mo20

That seems like a pretty good idea!

(There are projects that stress-test the assumptions behind AGI labs' plans, of course, but I don't think anyone is (1) deliberately picking at the plans AGI labs claim, in a basically adversarial manner, (2) optimizing experimental setups and results for legibility to policymakers, rather than for convincingness to other AI researchers. Explicitly setting those priorities might be useful.)

Reply
[-]Buck1mo60

optimizing experimental setups and results for legibility to policymakers, rather than for convincingness to other AI researchers.

People who do research like this are definitely optimizing for legibility to policymakers (always at least a bit, and usually a lot).

One problem is that if AI researchers think your work is misleading/scientifically suspect, they get annoyed at you and tell people that your research sucks and you're a dishonest ideologue. This is IMO often a healthy immune response, though it's a bummer when you think that the researchers are wrong and your work is fine. So I think it's pretty costly to give up on convincingness to AI researchers.

Reply
[-]Thane Ruthenis1mo12

"Not optimized to be convincing to AI researchers" ≠ "looks like fraud". "Optimized to be convincing to policymakers" might involve research that clearly demonstrates some property of AIs/ML models which is basic knowledge for capability researchers (and for which they already came up with rationalizations why it's totally fine) but isn't well-known outside specialist circles.

E.g., the basic example is the fact that ML models are black boxes trained by an autonomous process which we don't understand, instead of manually coded symbolic programs. This isn't as well-known outside ML communities as one might think, and non-specialists are frequently shocked when they properly understand that fact.

Reply1
[-]Guive1mo32

What kind of "research" would demonstrate that ML models are not the same as manually coded programs? Why not just link to the Wikipedia article for "machine learning"? 

Reply
[-]Kabir Kumar1mo30

AI Plans does this

Reply
[-]Kabir Kumar1mo10

yes. AI Plans

Reply
[-]J Bostock1mo262

My impression is that the current Real Actual Alignment Plan For Real This Time amongst medium p(Doom) people looks something like this:

  1. Advance AI control, evals, and monitoring as much as possible now
  2. Try and catch an AI doing a maximally-incriminating thing at roughly human level
  3. This causes [something something better governance to buy time]
  4. Use the almost-world-ending AI to "automate alignment research"

(Ignoring the possibility of a pivotal act to shut down AI research. Most people I talk to don't think this is reasonable.)

I'll ignore the practicality of 3. What do people expect 4 to look like? What does an AI assisted value alignment solution look like?

My rough guess of what it could be, i.e. the highest p(solution is this|AI gives us a real alignment solution) is something like the following. This tries to straddle the line between the helper AI being obviously powerful enough to kill us and obviously too dumb to solve alignment:

  1. Formalize the concept of "empowerment of an agent" as a property of causal networks with the help of theorem-proving AI.
  2. Modify today's autoregressive reasoning models into something more isomorphic to a symbolic causal network. Use some sort of minimal circuit system (mixture of depths?) and prove isomorphism between the reasoning traces and the external environment.
  3. Identify "humans" in the symbolic world-model, using automated mech interp.
  4. Target a search of outcomes towards the empowerment of humans, as defined in 1.

Is this what people are hoping plops out of an automated alignment researcher? I sometimes get the impression that most people have no idea whatsoever how the plan works, which means they're imagining the alignment-AI to be essentially magic. The problem with this is that magic-level AI is definitely powerful enough to just kill everyone.

Reply1
[-]Raemon1mo20

Is your question more about "what's the actual structure of the 'solve alignment' part", or "how are you supposed to use powerful AIs to help with this?"

Reply
[-]Raemon1mo42

I think there's one structure-of-plan that is sort of like your outline (I think it is similar to John Wentworth's plan but sort of skipping ahead past some steps and being more-specific-about-the-final-solution which means more wrong)

(I don't think John self-identifies as particularly oriented around your "4 steps from AI control to automate alignment research". I haven't heard the people who say 'let's automate alignment research' say anything that sounded very coherent. I think many people are thinking something like "what if we had a LOT of interpretability?" but IMO not really thinking through the next steps needed for that interpretability to be useful in the endgame.)

STEM AI -> Pivotal Act

I haven't heard anyone talk about this for awhile, but a few years back I heard a cluster of plans that were something like "build STEM AI with very narrow ability to think, which you could be confident couldn't model humans at all, which would only think about resources inside a 10' by 10' cube, and then use that to invent the pre-requisites for uploading or biological intelligence enhancement, and then ??? -> very smart humans running at fast speeds figure out how to invent a pivotal technology." 

I don't think the LLM-centric era lends itself well to this plan. But, I could see a route where you get a less-robust-and-thus-necessarily-weaker STEM AI trained on a careful STEM corpus with careful control and asking it carefully scoped questions, which could maybe be more powerful than you could get away with for more generically competent LLMs. 

Reply
[-]J Bostock1mo20

Yes, a human-uploading or human-enhancing pivotal act might actually be something people are thinking about. Yudkowsky gives his nanotech-GPU-melting pivotal act example, which---while he has stipulated that it's not his real plan---still anchors "pivotal act" on "build the most advanced weapon system of all time and carry out a first-strike". This is not something that governments (and especially companies) can or should really talk about as a plan, since threatening a first-strike on your geopolitical opponents does not a cooperative atmosphere make.

(though I suppose a series of targeted, conventional strikes on data centers and chip factories across the world might be on the pareto-frontier of "good" vs "likely" outcomes)

Reply
[-]J Bostock1mo20

My question was an attempt to trigger a specific mental motion in a certain kind of individual. Specifically, I was hoping for someone who endorses that overall plan to envisage how it would work end-to-end, using their inner sim.

My example was basically what I get when I query my inner sim, conditional on that plan going well. 

Reply
[-]J Bostock4mo244

Too Early does not preclude Too Late

Thoughts on efforts to shift public (or elite, or political) opinion on AI doom.

Currently, it seems like we're in a state of being Too Early. AI is not yet scary enough to overcome peoples' biases against AI doom being real. The arguments are too abstract and the conclusions too unpleasant.

Currently, it seems like we're in a state of being Too Late. The incumbent players are already massively powerful and capable of driving opinion through power, politics, and money. Their products are already too useful and ubiquitous to be hated.

Unfortunately, these can both be true at the same time! This means that there will be no "good" time to play our cards. Superintelligence (2014) was Too Early but not Too Late. There may be opportunities which are Too Late but not Too Early, but (tautologically) these have not yet arrived. As it is, current efforts must fight on both fronts.

Reply
[-]Seth Herd4mo53

I like this framing; we're both too early and too late. But it might transition quite rapidly from too early to right on time.

One idea is to prepare strategies and arguments and perhaps prepare the soil of public discourse in preparation for the time when it is no longer too early. Job loss and actually harmful AI shenanigans are very likely before takeover-capable AGI. Preparing for the likely AI scares and negative press might help public opinion shift very rapidly as it sometimes does (e.g., COVID opinions went from no concern to shutting down half the economy very quickly).

The average American and probably the average global citizen already dislikes AI. It's just the people benefitting from it that currently like it, and that's a minority.

Whether that's enough is questionable, but it makes sense to try and hope that the likely backlash is at least useful in slowing progress or proliferation somewhat.

Reply
[-]J Bostock9mo242

So Sonnet 3.6 can almost certainly speed up some quite obscure areas of biotech research. Over the past hour I've got it to:

  1. Estimate a rate, correct itself (although I did have to clock that its result was likely off by some OOMs, which turned out to be 7-8), request the right info, and then get a more reasonable answer.
  2. Come up with a better approach to a particular thing than I was able to, which I suspect has a meaningfully higher chance of working than what I was going to come up with.

Perhaps more importantly, it required almost no mental effort on my part to do this. Barely more than scrolling twitter or watching youtube videos. Actually solving the problems would have had to wait until tomorrow.

I will update in 3 months as to whether Sonnet's idea actually worked.

(in case anyone was wondering, it's not anything relating to protein design lol: Sonnet came up with a high-level strategy for approaching the problem)

Reply1
[-]Qumeric9mo10

I think you might find this paper relevant/interesting: https://aidantr.github.io/files/AI_innovation.pdf

TL;DR: research on LLM productivity impacts in materials discovery.

Main takeaways:

  • Significant productivity improvement overall
  • Mostly at idea generation phase
  • Top performers benefit much more (because they can evaluate AI's ideas well)
  • Mild decrease in job satisfaction (AI automates most interesting parts, impact partly counterbalanced by improved productivity)
[This comment is no longer endorsed by its author]Reply
[-]J Bostock3mo*201

The latest recruitment ad from Aiden McLaughlin says a lot about OpenAI's internal views on model training:

[Image: screenshot of the recruitment ad]

My interpretation of OpenAI's worldview, as implied by this, is:

  1. Inner alignment is not really an issue. Training objectives (evals) relate to behaviour in a straightforward and predictable way.
  2. Outer alignment kinda matters, but it's not that hard. Deciding the parameters of desired behaviour is something that can be done without serious philosophical difficulties.
  3. Designing the right evals is hard: you need lots of technical skill and high taste to make good enough evals to get the right behaviour.
  4. Oversight is important; in fact, oversight is the primary method for ensuring that the AIs are doing what we want. Oversight is tractable and doable.

None of this dramatically conflicts with what I already thought OpenAI believed, but it's interesting to get another angle on it.

It's quite possible that 1 is predicated on technical alignment work being done in other parts of the company (though their superalignment team no longer exists) and it's just not seen as the purview of the evals team. If so it's still very optimistic. If there isn't such a team then it's suicidally optimistic.

For point 2, I think the above ad does imply that the evals/RL team is handling all of the questions of "how should a model behave", and that they're mostly not looking at it from the perspective of moral philosophy a la Amanda Askell at Anthropic. If questions of how models behave are entirely being handled by people selected only for artistic talent + autistic talent, then I'm concerned these won't be done well either.

Point 3 seems correct in that well-designed evals are hard to make and you need skills beyond technical skills. Nice, but it's telling that they're doing well on the issue which brings in immediate revenue, and badly on the issues that get us killed at some point in the future.

Point 4 is kinda contentious. Some very smart people take oversight very seriously, but it also seems kinda doomed as an agenda when it comes to ASI. Seems like OpenAI are thinking about at least one not-kill-everyoneism plan, but a marginally promising one at best. Still, if we somehow make miraculous progress on oversight, perhaps OpenAI will take up those plans.

Finally, I find the mention of "The Gravity of AGI" to be quite odd since I've never got the sense that Aiden feels the gravity of AGI particularly strongly. As an aside I think that "feeling the AGI" is like enlightenment, where everyone behind you on the journey is a naive fool and everyone ahead of you is a crazy doomsayer.

EDIT: a fifth implication: little to no capabilities generalization. Seems like they expect each individual capability to be produced by a specific high-quality eval, rather than for their models to generalize broadly to a wide range of tasks.

Reply
[-]evhub3mo80

Link is here, if anyone else was wondering too.

Reply
[-]sjadler3mo10

Re: 1, during my time at OpenAI I also strongly got the impression that inner alignment was way underinvested. The Alignment team’s agenda seemed basically about better values/behavior specification IMO, not making the model want those things on the inside (though this is now 7 months out of date). (Also, there are at least a few folks within OAI I’m sure know and care about these issues)

Reply
[-]J Bostock2mo70

As awful as the amount of fraud (and its lesser cousins) in science is for a scientist, it must be so much worse for a layperson. For example, here is a paper I found today suggesting that cleaner wrasse, a type of finger-sized fish, can not only pass the mirror test but are also able to remember their own face and later respond to a photograph of themselves the same way they do to a mirror.

https://www.pnas.org/doi/10.1073/pnas.2208420120

Ok, but it was published in PNAS. As a researcher I happen to know that PNAS allows for special-track submissions from members of the National Academy of Sciences (the NAS in PNAS) which are almost always accepted. The two main authors are Japanese, and have zero papers other than this, which is a bit suspicious in and of itself but it does mean that they're not members of the NAS. But PNAS is generally quite hard to publish in, so how did some no-names do that?

Aha! I see that the paper was edited by Frans de Waal! Frans de Waal is a smart guy but he also generally leans in favour of animal sentience/abilities, and crucially he's a member of the NAS so it seems entirely plausible that some Japanese researchers with very little knowledge managed to "massage" the data into a state where Frans de Waal was convinced by it.

Or not! There's literally no way of knowing at this point, since "true" fraud (i.e. just making shit up) is basically undetectable, as is cherry-picking data!

This is all insanely conspiratorial of course, but this is the approach you have to take when there's so much lying going on. If I was a layperson there's basically no way I could have figured all this out, so the correct course of action would be to unboundedly distrust everything regardless.

[This comment is no longer endorsed by its author]Reply
[-]J Bostock2mo20

So I still don't know what's going on, but the above probably mischaracterizes the situation. The original notification that Frans de Waal "edited" the paper actually means that he was the individual who coordinated the reviews of the paper at the journal's end, which was not made particularly clear. The lead authors do have other publications (mostly in the same field); it's just that the particular website I was using didn't show them. There's also a strongly skeptical response to the paper that's been written by ... Frans de Waal, so I don't know what's going on there!

The thing about PNAS having a secret submission track is true as far as I know though.

Reply
[-]idly2mo10

The editor of an article is the person who decides whether to desk-reject or seek reviewers, find and coordinate the reviewers, communicate with the authors during the process and so on. That's standard at all journals afaik. The editor decides on publication according to the journal's criteria. PNAS does have this special track but one of the authors must be in NAS, and as that author you can't just submit a bunch of papers in that track, you can use it once a year or something. And most readers of PNAS know this and are suitably sceptical of those papers (and it's written on the paper if it used that track). The journal started out only accepting papers from NAS members and opened to everyone in the 90s so it's partly a historical quirk.

Reply
[-]J Bostock2mo73

https://threadreaderapp.com/thread/1925593359374328272.html

Reading between the lines here, Opus 4 was RLed by repeated iteration and testing. Seems like they had to hit it fairly hard (for Anthropic) with the "identify specific bad behaviors and stop them" technique.

Relatedly: Opus 4 doesn't seem to have the "good vibes" that Opus 3 had.

Furthermore, this (to me) indicates that Anthropic's techniques for model "alignment" are getting less elegant and sophisticated over time, since the models are getting smarter---and thus harder to "align"---faster than Anthropic is getting better at it. This is a really bad trend, and is something that people at Anthropic should be noticing as a sign that Business As Usual does not lead to a good future.

Reply
[-]J Bostock4y70

There's a court at my university accommodation that people who aren't Fellows of the college aren't allowed on, it's a pretty medium-sized square of mown grass. One of my friends said she was "morally opposed" to this (on biodiversity grounds, if the space wasn't being used for people it should be used for nature).

And I couldn't help but think how tiring it would be to have a moral-feeling-detector this strong. How could one possibly cope with hearing about burglaries, or North Korea, or astronomical waste?

I've been aware of scope insensitivity for a long time now but, this just really put things in perspective in a visceral way for me.

Reply
[-]Dagon4y60

For many who talk about "moral opposition", talk is cheap, and the cause of such a statement may be in-group or virtue signaling rather than an indicator of intensity of moral-feeling-detector.

Reply
[-]mako yass4y20

You haven't really stated that she's putting all that much energy into this (implied, I guess), but I'd see nothing wrong with having a moral stance about literally everything but still prioritizing your activity in healthy ways, judging this, maybe even arguing vociferously for it, for about 10 minutes, before getting back to work and never thinking about it again.

Reply
[-]JBlack4y10

To me it seems more likely that this person is misreporting their motive than that they really oppose this allocation of a patch of grass on biodiversity grounds. I would expect grounds like "I want to use it myself" or slightly more general "it should be available for a wider group" to be very much more common, for example if I had to rank likelihood of motives after hearing that someone objects, but before hearing their reasons. I'd end up with more weight on "playing social games" than on "earnestly believes this".

On the other hand it would not surprise me very much that at least one person somewhere might truly hold this position. Just my weight for any particular person would be very low.

Reply
[-]J Bostock10mo61

Seems like if you're working with neural networks there's not a simple map from an efficient (in terms of program size, working memory, and speed) optimizer which maximizes X to an equivalent optimizer which maximizes -X. If we consider that an efficient optimizer does something like tree search, then it would be easy to flip the sign of the node-evaluating "prune" module. But the "babble" module is likely to select promising actions based on a big bag of heuristics which aren't easily flipped. Moreover, flipping a heuristic which upweights a small subset of outputs which lead to X doesn't lead to a new heuristic which upweights a small subset of outputs which lead to -X. Generalizing, this means that if you have access to maximizers for X, Y, Z, you can easily construct a maximizer for e.g. 0.3X+0.6Y+0.1Z but it would be non-trivial to construct a maximizer for 0.2X-0.5Y-0.3Z. This might mean that a certain class of mesa-optimizers (those which arise spontaneously as a result of training an AI to predict the behaviour of other optimizers) are likely to lie within a fairly narrow range of utility functions.

Reply1
[-]habryka10mo52

True if you don't count the training process as part of the optimizer (which is a choice that sometimes makes sense and sometimes doesn't). If you count the training process as part of the optimizer, then you can of course just flip your loss function or RL signal most of the time.

Reply
[-]JBlack10mo20

How do you construct a maximizer for 0.3X+0.6Y+0.1Z from three maximizers for X, Y, and Z? It certainly isn't true in general for black box optimizers, so presumably this is something specific to a certain class of neural networks.

Reply
[-]J Bostock10mo20

My model: suppose we have a DeepDreamer-style architecture, where (given a history of sensory inputs) the babbler module produces a distribution over actions, a world model predicts subsequent sensory inputs, and an evaluator predicts expected future X. If we run a tree-search over some weighted combination of the X, Y, and Z maximizers' predicted actions, then run each of the X, Y, and Z maximizers' evaluators, we'd get a reasonable approximation of a weighted maximizer.

This wouldn't be true if we gave negative weights to the maximizers, because while the evaluator module would still make sense, the action distributions we'd get would probably be incoherent, e.g. the model just running into walls or jumping off cliffs.
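A minimal sketch of the kind of combination I have in mind (all interfaces here are hypothetical: each maximizer is assumed to expose a babble() proposal module and an evaluate() module, plus a shared world model):

    # Sketch: combining X, Y, Z maximizers with positive weights via tree search.
    # The evaluators combine linearly with any signs, but the babblers only make
    # sense un-negated; there is no way to "anti-sample" a proposal distribution.
    import random

    def weighted_value(state, maximizers, weights):
        # Prune/evaluate step: a weighted sum of the individual evaluators.
        return sum(w * m.evaluate(state) for w, m in zip(weights, maximizers))

    def combined_search(state, maximizers, weights, world_model, depth=3, samples=8):
        """Greedy tree search over actions proposed by the maximizers' babblers,
        scored by the weighted sum of their evaluators."""
        if depth == 0:
            return None, weighted_value(state, maximizers, weights)
        best_action, best_value = None, float("-inf")
        for _ in range(samples):
            # Babble step: sample an action from one maximizer's proposal
            # distribution, chosen in proportion to its (non-negative) weight.
            m = random.choices(maximizers, weights=[max(w, 0.0) for w in weights])[0]
            action = m.babble(state)
            next_state = world_model.predict(state, action)
            _, value = combined_search(next_state, maximizers, weights,
                                       world_model, depth - 1, samples)
            if value > best_value:
                best_action, best_value = action, value
        return best_action, best_value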

My conjecture is that, if a large black box model is doing something like modelling X, Y, and Z maximizers acting in the world, that large black box model might be close in model-space to itself being a maximizer which maximizes 0.3X + 0.6Y + 0.1Z, but it's far in model-space from being a maximizer which maximizes 0.3X - 0.6Y - 0.1Z due to the above problem.

Reply
[-]J Bostock8mo50

Thinking back to the various rationalist attempts to make a vaccine (https://www.lesswrong.com/posts/niQ3heWwF6SydhS7R/making-vaccine) for bird-flu-related reasons. Since then, we've seen mRNA vaccines arise as a new vaccination method. mRNA vaccines have been used intranasally for COVID with success in hamsters. If one can order mRNA for a flu protein, it would only take mixing it with some sort of delivery mechanism (such as Lipofectamine, which is commercially available) and snorting it to get what could actually be a pretty good vaccine. Has RaDVac or similar looked at this?

Reply
[-]J Bostock3y40

Seems like there's a potential solution to ELK-like problems: force the information to move from the AI's ontology to (its model of) a human's ontology, and then force it to move back again.

This gets around "basic" deception since we can always compare the AI's ontology before and after the translation.

The question is how do we force the knowledge to go through the (modeled) human's ontology, and how do we know the forward and backward translators aren't behaving badly in some way.
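To illustrate the shape of the idea (a very abstract sketch; both translators and the distance function are hypothetical learned components, not a concrete proposal):

    # Round-trip check: AI ontology -> (modelled) human ontology -> AI ontology.
    # If the reconstruction differs from the original, something was lost or
    # smuggled in during translation, which is the "basic" deception we can catch.
    def round_trip_consistent(ai_state, ai_to_human, human_to_ai, distance, tol=1e-3):
        human_view = ai_to_human(ai_state)       # force the human-ontology bottleneck
        reconstructed = human_to_ai(human_view)  # translate back
        return distance(ai_state, reconstructed) < tol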

Reply
[-]J Bostock23d30

Rather than using Bayesian reasoning to estimate P(A|B=b), it seems like most people use the following heuristic:

  • Condition on A=a and B=b for different values of a
  • For each a, estimate the remaining uncertainty given A=a and B=b
  • Choose the a with the lowest remaining uncertainty from step 2

This is how you get "Saint Austacious could levitate, therefore God", since given [levitating saint] AND [God exists] there is very little uncertainty over what happened, whereas given [levitating saint] AND [no God] there's still a lot left to wonder about regarding who made up the story, and at what point.
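A toy numerical contrast between the Bayesian answer and this heuristic for the levitating-saint case (all numbers are made up for illustration):

    # Bayes vs. the "least residual uncertainty" heuristic, with made-up numbers.
    import math

    p_god = 0.01                 # prior on [God exists] (illustrative)
    p_story_given_god = 0.9      # a real miracle reliably produces the story
    p_story_given_nogod = 0.3    # but such stories also arise without miracles

    # Bayesian answer: P(God | levitating-saint story)
    posterior = (p_story_given_god * p_god) / (
        p_story_given_god * p_god + p_story_given_nogod * (1 - p_god))
    print(f"Bayes: P(God | story) = {posterior:.3f}")   # ~0.029, dominated by the prior

    # The heuristic instead asks: given (hypothesis, story), how much is left to explain?
    def entropy_bits(ps):
        return -sum(p * math.log2(p) for p in ps if p > 0)

    residual_god = entropy_bits([1.0])           # one explanation: the saint levitated
    residual_nogod = entropy_bits([1/20] * 20)   # twenty equally plausible mundane origins
    print(f"Residual uncertainty: God = {residual_god:.1f} bits, "
          f"no God = {residual_nogod:.1f} bits")
    # The heuristic picks the hypothesis with the *lower* residual uncertainty,
    # i.e. "God", even though the Bayesian posterior stays low.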

Reply
[-]ProgramCrafter22d10

If so, they must be committing a 'disjunction fallacy', grading the second option as less likely than the first, disregarding that it could be true in more ways!

Reply
[-]J Bostock4y30

Getting rid of guilt and shame as motivators of people is definitely admirable, but still leaves a moral/social question. Goodness or Badness of a person isn't just an internal concept for people to judge themselves by, it's also a handle for social reward or punishment to be doled out. 

I wouldn't want to be friends with Saddam Hussein, or even a deadbeat parent who neglects the things they "should" do for their family. This also seems to be true regardless of whether my social punishment or reward has the ability to change these people's behaviour. But what about being friends with someone who has a billion dollars but refuses to give any of that to charity? What if they only have a million dollars? What if they have a reasonably comfortable life but not much spare income?

Clearly the current levels of social reward/punishment are off (billionaire philanthropy etc.) so there seems an obvious direction to push social norms in if possible. But this leaves the question of where the norms should end up.

Reply
[-]Pattern4y30

I think there's a bit of a jump from 'social norm' to 'how our government deals with murders'. Referring to the latter as 'social' doesn't make a lot of sense.

Reply
[-]J Bostock4y10

I think I've explained myself poorly; I meant to use the phrase social reward/punishment to refer exclusively to things like forming friendships and giving people status, which are doled out differently from "physical government punishment". Saddam Hussein was probably a bad example, as he is someone who would clearly also receive the latter.

Reply
[-]J Bostock1mo20

The constant hazard rate model probably predicts exponential training-inference compute requirements (i.e. for the inference done during guess-and-check RL) for agentic RL with a given model, because as the hazard rate decreases exponentially, we'll need to sample exponentially more tokens to see an error, and we need to see an error to get any signal.
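A rough version of the argument (my own framing, with $h$ the per-token hazard rate): under a constant hazard rate the first error is geometrically distributed, so

    $\Pr[\text{first error at token } k] = h(1-h)^{k-1} \implies \mathbb{E}[k] = 1/h$

and if capability gains drive $h \to h_0 e^{-\lambda t}$, the expected number of tokens sampled per observed error grows as $e^{\lambda t}/h_0$.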

Reply2
[-]J Bostock3mo20

Hypothesis: one type of valenced experience---specifically valenced experience as opposed to conscious experience in general, which I make no claims about here---is likely to only exist in organisms with the capability for planning. We can analogize with deep reinforcement learning: it seems like humans have a rapid action-taking system 1 which is kind of like Q-learning, in that it just selects actions; we also have a slower planning-based system 2, which is more like value learning. There's no reason to assign valence to a particular mental state if you're not able to imagine your own future mental states. There is of course moment-to-moment reward-like information coming in, but that seems to me to be a distinct thing.
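A toy contrast between the two systems in that analogy (illustrative function shapes only; no claim that real brains factor this cleanly):

    # System 1 vs. system 2 in the deep-RL analogy above (illustrative only).
    def system1_act(state, q_values):
        # Q-learning-like: pick the cached best action; no imagined futures involved.
        return max(q_values[state], key=q_values[state].get)

    def system2_act(state, actions, world_model, value_fn):
        # Planning-like: imagine each successor state and evaluate how good it would
        # be to end up there. This is the step where a valenced representation of
        # one's own future mental states has work to do.
        return max(actions, key=lambda a: value_fn(world_model(state, a)))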

Reply
[-]J Bostock4mo20

Heuristic explanation for why MoE gets better at higher model size:

The input/output dimension of a feedforward layer is equal to the model width, but the total size of its weights grows as the model width squared. Superposition helps explain how a model component can make the most use of its input/output space (and presumably its parameters) using sparse overcomplete features, but in the limit, the amount of information accessed by the feedforward call scales with the number of active parameters. Therefore at some point, more active parameters won't scale so well, since you're "accessing" too much "memory" in the form of weights and overwhelming your input/output channels.
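A quick illustration of that scaling (toy numbers; the 4x feedforward expansion is an assumption, not a claim about any particular model):

    # Dense feedforward block: weights grow as width^2, input/output only as width.
    for d_model in [1_024, 8_192, 65_536]:
        d_ff = 4 * d_model                  # assumed feedforward expansion factor
        ff_params = 2 * d_model * d_ff      # up-projection + down-projection
        io_dims = 2 * d_model               # input plus output of the block
        print(f"width {d_model:>6}: weight-to-I/O ratio = {ff_params / io_dims:,.0f}")
    # The ratio grows linearly with width, so each call touches ever more weight
    # "memory" per unit of I/O bandwidth. MoE caps the parameters that are *active*
    # per token while letting the total parameter count keep growing.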

Reply
[-]J Bostock5mo20

If we approximate an MLP layer with a bilinear layer, then the effect of residual stream features on the MLP output can be expressed as a second-order polynomial over the feature coefficients $f_i$. This will contain, for each feature, an $f_i^2 v_i + f_i w_i$ term, which is "baked into" the residual stream after the MLP acts. Just looking at the linear term, this could be the source of Anthropic's observations of features growing, shrinking, and rotating in their original crosscoder paper. https://transformer-circuits.pub/2024/crosscoders/index.html
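A small numerical check of that decomposition (toy dimensions, a single feature, biases included so the linear $f_i w_i$ term appears):

    # For a bilinear layer out = W_out((W1 x + b1) * (W2 x + b2)), a single feature
    # direction d_i contributes exactly f_i^2 * v_i + f_i * w_i (plus a constant).
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_hidden = 16, 64
    W1 = rng.normal(size=(d_hidden, d_model))
    W2 = rng.normal(size=(d_hidden, d_model))
    b1, b2 = rng.normal(size=d_hidden), rng.normal(size=d_hidden)
    W_out = rng.normal(size=(d_model, d_hidden))

    def bilinear(x):
        return W_out @ ((W1 @ x + b1) * (W2 @ x + b2))

    d_i = rng.normal(size=d_model)      # one residual-stream feature direction

    # Coefficients of the quadratic in f_i, computed once from the weights:
    v_i = W_out @ ((W1 @ d_i) * (W2 @ d_i))               # the f_i^2 term
    w_i = W_out @ ((W1 @ d_i) * b2 + b1 * (W2 @ d_i))     # the f_i term
    const = W_out @ (b1 * b2)                             # feature-independent term

    for f_i in [0.5, -1.0, 3.0]:
        assert np.allclose(bilinear(f_i * d_i), f_i**2 * v_i + f_i * w_i + const)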

Reply
[-]J Bostock5mo10

I think you should pay in Counterfactual Mugging, and this is one of the newcomblike problem classes that is most common in real life.

Example: you find a wallet on the ground. You can, from least to most pro social:

  1. Take it and steal the money from it
  2. Leave it where it is
  3. Take it and make an effort to return it to its owner

Let's ignore the first option (suppose we're not THAT evil). The universe has randomly selected you today to be in the position where your only options are to spend some resources for no personal gain, or not. In a parallel universe, perhaps your pocket had a hole in it, and a random person has come across your wallet.

Firstly, what they might be thinking is "Would this person do the same for me?"

Secondly, in a society which wins, people return each others' wallets.

You might object that this is different from the Mugging, because you're directly helping someone else in this case. But I would counter that the Mugging is the true version of this problem, one where you have no crutch of empathy to help you, so your decision theory alone is tested.

Reply1
[-]J Bostock4y10

The UK has just switched their available rapid Covid tests from a moderately unpleasant one to an almost unbearable one. Lots of places require them for entry. I think the cost/benefit makes sense even with the new kind, but I'm becoming concerned we'll eventually reach the "imagine a society where everyone hits themselves on the head every day with a baseball bat" situation if cases approach zero.

Reply
[-]J Bostock4y10

Just realized I'm probably feeling much worse than I ought to on days when I fast because I've not been taking sodium. I really should have checked this sooner. If you're planning to do long (I do a day, which definitely feels long) fasts, take sodium! 

Reply