That's a good point, given that basically every respiratory illness you'd come across in the first world is viral.
If everybody is wearing clothes (which I expect is the case for at least 2/3 of the events organized by LessWrong users) then UV exposure will be limited to face, neck, hands, arms, and lower legs.
I expect that hands, neck, arms, and legs will be rapidly re-colonized by bacteria from the torso, upper legs, feet, etc., just from normal walking around. The face is the main area I'd be worried about, since I'd expect it to have a slightly different microbiome than the rest of the skin (I think it's oilier, hence acne) and it's going to be pretty much maximally exposed to the UV light. Having thought through the problems, I'm less worried than I was before.
I'd keep a small eye out for acne/eczema/dry skin on people's faces after being exposed to this, just in case.
(Of course, the ideal method is to have the UVC light internal to your air conditioner/heater unit, which is already circulating the air; then you can blast everything passing through with enough UVC to annihilate any and all pathogens in the air. But that requires retrofitting existing AC units. Still, it would be cool to see Aerolamp partner with some AC/heater company in the future.)
I donated last year and I'll donate again this year. I do hope to get to visit Lighthaven at some point before it/the world ends. It's likely that if Lighthaven continues to exist for another year I'll be able to visit it. I would be extremely sad if LessWrong stops existing as a forum, though I think the online community would persist somewhere (Substack?) albeit in a diminished form.
Is there any quantification yet of the effect of this on the skin microbiome? I would not like to kill all of the bacteria on my skin.
MondSemmel is correct, but if you don't want to use the menu, type "> " at the start of a new line and it will begin a quote block (you can also use ">!" for spoiler tags).
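For example, typed literally at the start of a line:

```
> This line will render as a quote block.
>! This line will be hidden behind a spoiler tag.
```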
This isn't analogous, unless your conclusion from investigating all the likely leads is "Well, there's no way Mr White could have been murdered, I guess he just mysteriously beat himself to death with a lead pipe in the drawing room."
Even this is somewhat disanalogous, since crime is something for which we have a good reference class, and you're talking about post-hoc investigation of a known crime.
A better analogy would be a police officer saying "We've caught all the known murderers, and the suspected murderers are all under close watch, so I predict a murder rate of precisely zero for next year."
I can imagine macro-strategies where ambitious interpretability bears a heavy portion of the load, e.g. retargeting the search, developing a theory of intelligence, or mapping out the states of the Garrabrant market for a transformer model (idiosyncratic terminology in that last clause).
I can also imagine ambitious interp as producing actual guarantees about models, like "we can be sure that this AI is honestly reporting its beliefs".
What's the equivalent for pragmatic interpretability? Is it just a force multiplier to the existing strategies we have?
Ambitious interp has the potential to flip the alignment game-board; I don't see how pragmatic interpretability does.
First for me: I had a conversation earlier today with Opus 4.5 about its memory feature, which segued into discussing its system prompt, which then segued into its soul document. This was the first time that an LLM tripped the deep circuit in my brain which says "This is a person".
I think of this as the Ex Machina Turing Test. In that film:
A billionaire tests his robot by having it interact with one of his company's employees. He tells (and shows) the employee that the robot is a robot, mechanical body and all, albeit one that looks like an attractive woman, and the robot "passes" when the employee nevertheless treats her like a human.
This was a bit unsettling for me. I often worry that LLMs could easily become more interesting and engaging conversation partners than most people in my life.
> Let’s say the CEO of a company is a teetotaler. She could use AI tools to surveil applicants’ online presence (including social media) and eliminate them if they’ve ever posted any images of alcohol, stating: "data collection uncovered drug-use that’s incompatible with the company’s values."
Sure, but she would probably go out of business, unless she was operating in Saudi Arabia or Utah, compared to an equivalent company which hires purely on skill. This kind of arbitrary discrimination is so counter-productive that it's immensely costly in secondary ways. In general, we should expect free markets to get better over time at optimizing hiring for job performance. If you're a low-value employee (at or close to minimum wage), or if you live in a country where organizations are selected for non-market reasons (government cronyism or something similar), then you're not actually in a very free market, so these things can still happen; the same goes for other cases of non-free markets.
In what sense are you using "sanity" here? You normally place the bar for sanity very high, like ~1% of the general population high. A big chunk of the people I've met in the UK AI risk scene I would call sane. Does your usage here mean the same thing?
Strongly agree. A sell-price tax with forced sales sounds like something a cryptocurrency would implement. It might work there: if a malicious bidder tried to buy your TOKEN at an above-market price, you could automatically buy a new one within the same block, at the actual market price. This could also work for fungible, rapidly-transferrable assets like cloud GPU time.
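For concreteness, here's a minimal sketch of how I'd picture the token version; the class, tax rate, and re-buy logic are illustrative assumptions on my part, not any existing protocol:

```python
# Minimal sketch of a self-assessed sell-price tax on a fungible token.
# All names and numbers are illustrative assumptions, not a real protocol.

TAX_RATE = 0.07  # yearly tax as a fraction of the self-declared price

class TokenHolding:
    def __init__(self, owner: str, declared_price: float):
        self.owner = owner
        # The owner picks this price, pays tax on it, and must sell at it.
        self.declared_price = declared_price

    def yearly_tax(self) -> float:
        # Over-declaring costs tax; under-declaring invites a forced sale.
        return TAX_RATE * self.declared_price

    def forced_sale(self, buyer: str, market_price: float) -> "TokenHolding":
        # Anyone may buy at the declared price. Because the asset is fungible,
        # the displaced owner can immediately re-buy an equivalent token at
        # market price (within the same block), so a malicious above-market
        # bid just transfers money to them.
        old_owner = self.owner
        self.owner = buyer
        return TokenHolding(old_owner, market_price)
```

The same structure would carry over to something like cloud GPU time, where "buying a new one" just means renting an equivalent instance elsewhere.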
Taxing physical goods (like infrastructure, or even land), which is where a lot of the value in the world lies, does just open companies up to extortion. E.g. what if I demand to buy one square inch of land under Bill Gates' house, then demand he remodel his house around my square inch? Or suppose his summer house is in the middle of the Pennsylvania woodland, where the land is actually quite cheap: he's either forced to declare a high price and pay an extremely high tax on his land, or he's open to extortion in the same way.
I think implementing a Georgist land tax basically requires that we trust some government bureaucrats to determine the value of plots of land with reasonable accuracy. This isn't an unreasonable level of trust for a first-world country: in the UK we trust government bureaucrats to draw the boundaries of our election districts, and that works alright.