J Bostock

Sequences: Dead Ends, Statistical Mechanics, Independent AI Research, Rationality in Research

Comments (sorted by newest)

johnswentworth's Shortform
J Bostock · 2d

Huh, interesting. I might get my full genome sequenced at some point. I already got myself genotyped through 23andMe, downloaded the raw data, and put it into Promethease a while ago. I did find out I'm AG at rs53576, which is slightly linked to lower empathy, but is also extremely common. I don't think this is enough to explain a large proportion of my personality, the way your OXTR deletion might be.

(There was something quite amusing about checking my SNPs to decide whether to start early anti-balding interventions, and having result number 1 be "Low Empathy". As a further datapoint, I mentioned this to my mum and she basically said "Yeah, but what did you expect with me and [dad] as parents?")

Seeing this:

A few details/examples:

  • As a child, I had a very noticeable lack of interest in other people (especially those my own age), to the point where a school psychologist thought it was notable.
  • I remember being unusually eager to go off to overnight summer camp (without my parents), at an age where nobody bothered to provide overnight summer camp because kids that young were almost all too anxious to be away from their parents that long.
  • When family members or pets died, I've generally been noticeably less emotionally impacted than the rest of the family.
  • When out and about with the family, I've always tended to wander around relatively independently of the rest of the group.

Made me think I should take a deeper look. This all sounds pretty familiar, and I don't think the AG in rs53576 is strong enough to shift me off-distribution to the degree that I am.

Reply
Does My Appearance Primarily Matter for a Romantic Partner?
Answer by J Bostock · Sep 14, 2025

Short answer: yes.

Long answer: being romantically desirable via being hot is not the be-all and end-all. If someone is mildly attracted to you, you'll have more pleasant interactions with them. I have felt this in both directions for myself.

Then those effects percolate outwards as "status" (in the generic rationalist sense of the word). If third parties see you having positive interactions with other people, they'll think more highly of you.

And also, non-autistic individuals (I mean in the actual, clinical sense of autism; the effect I'm describing is actually one of the most interesting and unique features of autism, as opposed to just social awkwardness) preferentially ally themselves with people they expect to be high-status. So if you're attractive, people will (on some level which is basically instinctual) expect you to have more social cachet with others, and they'll be nicer to you even if they're not attracted to you and haven't seen you have a series of positive interactions with people who are.

P.S. I'm unclear from the information you've given whether you actually do have high variance in attractiveness over time. E.g. a "fresh" haircut is only good if the haircut looks good on you. For men it is (famously!) often the case that their freshly-cut hair is unattractively short. I usually find 2-10 weeks post-haircut to be optimal. For clothes, do they fit you well? Do the outfits go together and complement your overall style? The variance here comes from "5-year-old T-shirt and jeans" vs "well-fitting shirt and jacket" not vs "new T-shirt and jeans".

Generally from what you've said I'd guess that you're not getting much natural variance in attractiveness.

Reply
Lessons from Studying Two-Hop Latent Reasoning
J Bostock · 5d

RE part 6:

I think there's a more intuitive/abstract framing here. If a model has only seen e_2 with respect to two different facts, it probably won't have generated an abstraction for e_2 in its world model at all. An abstraction is mostly useful as a hub of different inferences, like in the old blegg/rube diagram.

Something which has come up in pretraining will already be an abstraction with an easy-to-reach-for handle that the model can pull.

Might be testable by fine-tuning on only some of (or some pairs of) the spokes of a blegg/rube diagram, to see whether the final spoke-pairs fill in.

I.e.

"This object is round, so it's a blegg, so it's blue"

"This object is smooth, so it's a blegg, so it's round"

"This object is smooth, so it's a blegg, so it's bouncy"

"This object is round, is it bouncy?"

Something like that might cause "blegg" to be bound up and assembled into an abstraction in the AI, with a single representation.
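
To make the proposed test concrete, here's a minimal sketch of how the fine-tuning and evaluation data could be generated, assuming a plain text-completion fine-tuning setup. The attribute names, prompt phrasing, and the choice of held-out pair are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the blegg spoke-pair fine-tuning test described above.
# Attribute names, phrasing, and the held-out pair are illustrative assumptions.
import itertools
import json
import random

# "Spokes" of the blegg concept: attributes that should all route through the hub.
BLEGG_ATTRIBUTES = ["round", "smooth", "blue", "bouncy"]

def two_hop_statement(premise: str, conclusion: str) -> str:
    """One training sentence: premise attribute -> 'blegg' hub -> conclusion attribute."""
    return f"This object is {premise}, so it's a blegg, so it's {conclusion}."

def build_datasets(held_out_pair=("round", "bouncy"), seed=0):
    """Fine-tune on every ordered spoke pair except the held-out one, then
    check whether the model fills in the held-out inference."""
    random.seed(seed)
    pairs = list(itertools.permutations(BLEGG_ATTRIBUTES, 2))
    train = [two_hop_statement(a, b) for a, b in pairs if (a, b) != held_out_pair]
    eval_prompt = f"This object is {held_out_pair[0]}, is it {held_out_pair[1]}?"
    random.shuffle(train)
    return train, eval_prompt

if __name__ == "__main__":
    train, eval_prompt = build_datasets()
    print(json.dumps({"train": train, "eval": eval_prompt}, indent=2))
```

If fine-tuning on the remaining pairs is enough for the model to answer the held-out question, that would suggest "blegg" has been bound up into a single hub-like representation; if not, that's more evidence the spokes are being stored as separate facts.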

Overall I consider this work to be weak evidence in favour of multi-step reasoning being an issue, since the latter parts show that it definitely can occur (just not if both facts are fine-tuned separately).

Reply
chanamessinger's Shortform
J Bostock · 6d

Is anyone using the book as a funnel to LessWrong? I don't think MIRI are (afaik). The only event going on in the UK (again, afaik) is being jointly hosted by Pause AI and Control AI, with some other local community members helping out, which is not going to be a funnel for LW at all. I assume Lighthaven is doing something (haven't checked), but are they going to say "If you like this book, you'll love our online forum"?

Moreover, is using LessWrong as the default funnel a good idea in the first place? I'd guess not. I know lots of people (notably Oliver Habryka) don't approve of Pause AI or Control AI, but I assume there must be other directions for suddenly-invigorated normies to be pointed in (though I've not actually looked for them).

Reply
Mikhail Samin's Shortform
J Bostock · 7d

That is true. "People are on hunger strikes and the CEOs haven't even commented" is (some) public evidence of "AI CEOs are unempathetic".

I misunderstood your point, I thought you were arguing against painting individuals as evil in general.

Reply
Mikhail Samin's Shortform
J Bostock · 7d

I don't really see the problem with painting people as evil in principle, given that some people are evil. You can argue against it in specific cases, but I think the case for AI CEOs being evil is strong enough that it can't be dismissed out of hand.

The case in question is: "AI CEOs are optimising for their short-term status/profits, and for believing things about the world which maximise their comfort, rather than doing the due diligence required of someone in their position, which is to seriously check whether their company is building something which kills everyone."

Whether this is a useful frame for one's own thinking---or a good frame to deploy onto the public---I'm not fully sure, but I think it does need addressing. Of course it might also differ between CEOs. I think Demis and Dario are two of the CEOs who it's relatively less likely to apply to, but also I don't think it applies weakly enough for them to be dismissed out of hand even in their cases.

Reply
Mikhail Samin's Shortform
J Bostock · 7d

Fair enough. I think these actions are +ev under a coarse-grained model where some version of "Attention on AI risk" is the main currency (or a slight refinement to "Not-totally-hostile attention on AI risk"). For a domain like public opinion and comms, I think that deploying a set of simple heuristics like "Am I getting attention?", "Is that attention generally positive?", and "Am I lying or doing something illegal?" can be pretty useful.

Michaël said on twitter here that he's had conversations with two sympathetic DeepMind employees, plus David Silver, who was also vaguely sympathetic. This itself is more +ev than I expected already, so I'm updating in favour of Michaël here.

It's also occurred to me that if any of the CEOs cracks and at least publicly responds to the hunger strikers, then the CEOs who don't do so will look villainous, so you actually only need to have one of them respond to get a wedge in.

Reply
Mikhail Samin's Shortform
J Bostock · 8d

I spoke to Michaël in person before he started. I told him I didn't think the game theory worked out (if he's not willing to die, GDM should ignore him; if he does die, then he's worsening the world, since he can definitely contribute better by being alive, and GDM should still ignore him). I don't think he's going to starve himself to death or serious harm, but that does make the threat empty. I don't really think that matters too much from a game-theoretic reputation standpoint, since nobody seems to be expecting him to do that.

His theory of change was basically "If I do this, other people might" which seems to be true: he did get another person involved. That other person has said they'll do it for "1-3 weeks" which I would say is unambiguously not a threat to starve oneself to death.

As a publicity stunt it has kinda worked in the basic sense of getting publicity. I think it might change the texture and vibe of the AI protest movement in a direction I would prefer it to not go in. It certainly moves the salience-weighted average of public AI advocacy towards Stop AI-ish things.

Reply1
My AI Vibes are Shifting
J Bostock · 9d

I think that AI companies being governed (in general) is marginally better than them not being governed at all. But I also expect that the AI governance which occurs will look more like "AI companies have to pay X tax and heed Y planning system", which still leads to AI(s) eating ~100% of the economy while not being aligned to human values. Then the first coalition capable of killing off the rest and advancing its own aims (which might be a singleton AI, or might not be) will just do that, regulations be damned. I don't expect that humans will be part of the winning coalition that gets a stake in the future.

Reply
My AI Vibes are Shifting
J Bostock · 12d

SpaceX doesn't run a country because rockets + rocket-building engineers + money cannot perform all the functions of labour, capital, and government, and because there's no smooth pathway to them expanding that far. Increasing company scale is costly and often decreases efficiency; since they don't have a monopoly on force, they have to maintain cost efficiency and can't expand into all the functions of government.

An AGI has the important properties of labour, capital, and government (i.e. there's no "Lump of Labour", so it doesn't devalue the more of it there is; it can be produced at scale by more labour; and it can organize itself without external coordination or limitations). I expect any AGI which has these properties to very rapidly outscale all humans, regardless of starting conditions, since the AGI won't suffer from the same inefficiencies of scale or shortages of staff.

I don't expect AGIs to respect human laws and tax codes once they have the capability to just kill us.

Reply
Posts
  • Jemist's Shortform (2 points, 4y, 57 comments)
  • Will Any Crap Cause Emergent Misalignment? (183 points, 20d, 37 comments)
  • Steelmanning Conscious AI Default Friendliness (8 points, 23d, 0 comments)
  • Red-Thing-Ism (100 points, 2mo, 9 comments)
  • Demons, Simulators and Gremlins (10 points, 2mo, 1 comment)
  • You Can't Objectively Compare Seven Bees to One Human (58 points, 2mo, 26 comments)
  • Lurking in the Noise (37 points, 3mo, 2 comments)
  • We Need a Baseline for LLM-Aided Experiments (11 points, 4mo, 1 comment)
  • Everything I Know About Semantics I Learned From Music Notation (34 points, 6mo, 2 comments)
  • Turning up the Heat on Deceptively-Misaligned AI (19 points, 8mo, 16 comments)
  • Intranasal mRNA Vaccines? (26 points, 8mo, 2 comments)