Would be good to hear more about this
Many excellent examples and much good analysis. Obviously very long, but no doubt others will find it useful source material.
Cancer is an interesting example I haven’t seen before, with suitably alarming connotations.
I don’t know; I had assumed so, but maybe not.
Re ‘AI is being rapidly adopted, and people are already believing the AIs’ - two recent cases from the UK of national importance:
In a landmark employment case (re trans rights), the judge’s ruling turned out to have been partly written by AI which had made up case law:
https://www.rollonfriday.com/news-content/sandie-peggie-judge-accused-packing-verdict-dodgy-quotes
And in a controversy in which police banned Israeli fans from attending a soccer match, their decision cited evidence which had also been made up by AI (eg an entirely fictional previous UK visit by the Israeli team). The local police chief has just resigned over it:
Also, with dog territory for example, the boundary markers aren’t arbitrary - presumably the reason dogs piss on trees & lampposts, which are not physical thresholds, is (a) they offer the scent some protection against being removed, eg by rain; and (b) they are hence standard locations for rival dogs to check for scent, rather than having to sniff vast areas of ground; ie they are (evolved) Schelling points for boundary markers.
(Walls are different as they are both potential boundary markers and physical thresholds.)
According to the Wikipedia article above, the Frisch–Peierls memorandum included those two scientists’ suggestion that the best way to deal with their concern that the Germans would develop an atomic bomb was to build one first. But what they thought about the moral issues I don’t know.
When scientists first realised an atomic bomb might be feasible (in the UK in 1939), and how important it would be, the UK defence science adviser reckoned there was only a 1 in 100,000 chance of successfully making one. Nonetheless the government thought that high enough to instigate secret experiments into it.
(Obliquely relevant to AI risk.)
Reminds me of when I was 8 and our history teacher told us about some king of England being deposed by the common people. We were shocked and confused as to how this could happen - he was the king! If he commanded them to stop, they’d have to obey! How could they not do that?? (Our teacher found this hilarious.)
Great post. Three comments:
First, if it were the case that events in the future mattered less than events now (as is the case with money, because money received sooner can earn interest), one could discount far-future events almost completely and thereby make the long-term effects of one’s actions more tractable. However, I understand time discounting doesn’t apply to ethics (though maybe this is disputed by some).
That said, I suspect discounting the future instead on the grounds of uncertainty (the further out you go, the harder it is to predict anything) - using, say, a discount rate per year (as with money) to model this - may be a useful heuristic. No doubt this is a topic discussed in the field.
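As a very rough illustration of that heuristic (my own sketch, nothing from the post - the 3% rate and the `discounted_weight` helper are just assumptions for the example), an exponential per-year discount would look like this:

```python
# Rough sketch (my own assumption, not from the post): treat uncertainty
# about the far future like a financial discount rate applied per year.

def discounted_weight(years_ahead: float, annual_rate: float = 0.03) -> float:
    """Weight given to an effect `years_ahead` years in the future,
    using an assumed discount rate of `annual_rate` per year."""
    return 1.0 / (1.0 + annual_rate) ** years_ahead

# At 3% per year, effects ~100 years out keep only about 5% of their
# weight, and effects several centuries out are discounted almost
# completely - which is what makes long-term planning more tractable.
for t in (10, 100, 500):
    print(t, round(discounted_weight(t), 4))
```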
Secondly, no doubt there is much to be said about what the natural social and temporal boundaries of people’s moral (and other) influence and plans are - eg family, friends, work, retirement, death (and the contents of their will) - and about how these can change, eg if you gain or exercise power/influence, say by getting an important job, having children, or doing things with wider influence (eg donating to charity), which can be for better or worse.
Thirdly, a minor observation: chess has an equivalent to the Go thing about a local sequence of moves ending in a stop sign, viz. an exchange of pieces - eg capturing a pawn in exchange for a pawn, or a much longer & more complicated sequence involving multiple pieces, but either way ending in a ‘quiet position’ where not very much is happening. Before AlphaZero, chess programs considering an exchange would look at all plausible ways it might play out, stopping each move sequence only when a quiet position was reached. And in the absence of an exchange or other instability, they would stop a sequence after a ‘horizon’ of, say, 10 moves, and evaluate the resulting situation on the basis of the board position (eg what pieces there are and their mobility).
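To make that concrete, here is a heavily simplified sketch (my own, not an actual engine - the `position` methods such as `legal_moves`, `capture_moves` and `material_balance` are hypothetical, and real programs add alpha-beta pruning and much else): search to a fixed horizon, but at the horizon keep extending capture sequences until a quiet position is reached, and only then apply the static evaluation.

```python
# Minimal sketch (my simplification, not an actual engine) of the
# pre-AlphaZero approach: search to a fixed horizon, but at the horizon
# keep extending capture ("exchange") sequences until the position is
# quiet, then evaluate the board statically.

def search(position, depth):
    """Negamax search to a fixed horizon, then quiescence."""
    if depth == 0:
        return quiescence(position)          # don't stop mid-exchange
    best = -float("inf")
    for move in position.legal_moves():      # hypothetical API
        best = max(best, -search(position.play(move), depth - 1))
    return best

def quiescence(position):
    """Only consider captures; stop once the position is 'quiet'."""
    stand_pat = evaluate(position)           # value if we stop here
    best = stand_pat
    for move in position.capture_moves():    # extend only forcing moves
        best = max(best, -quiescence(position.play(move)))
    return best

def evaluate(position):
    # Placeholder static evaluation: material, mobility, etc.
    return position.material_balance()
```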
Having read a few studies myself, I got a CO2 monitor (from AirThings; it also monitors VOCs, temperature, humidity etc), from which I can confirm that CO2 builds to quite high levels in an unventilated room within an hour or two. But even leaving a window only slightly ajar helps a lot.
Apparently fan heating and air conditioning systems may or may not mix in air from outside - many just recirculate the same air - so switching these on may or may not help with ventilation.
Some studies suggest high CO2 also harms sleep - though again the research is inadequate. If so, sleeping with the window slightly open should help; if cold/noise makes this impractical, sleep with the bedroom door ajar (if there aren’t other people around) and a window open in another room. Or even if no window is open at all, having your bedroom door ajar seems to help by letting the CO2 out. I’ve done this for the last year, though I can’t be sure whether it’s helped my sleep.
A confounding factor is that it’s best to sleep in a cool room, which opening a window also achieves. Either way this is an argument for opening a window while you sleep.