Having read a few studies myself, I got a CO2 monitor (from AirThings; it also monitors VOCs, temperature, humidity etc), from which I can confirm that CO2 builds to quite high levels in an unventilated room within an hour or two. But even leaving a window only slightly ajar helps a lot.
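As a rough back-of-envelope sketch of why the build-up is so fast (the figures are my own assumptions, not from the monitor: roughly 0.02 m³ of exhaled CO2 per hour for one resting adult, a ~30 m³ bedroom, and ~420 ppm outdoor air), a simple well-mixed-room model gives numbers in the range the studies worry about:

```python
# Toy single-room CO2 model (assumed figures, purely illustrative):
# dC/dt = (G + Q*(C_out - C)) / V, with G = CO2 generation (m^3/h),
# Q = ventilation flow (m^3/h), V = room volume (m^3).
import math

def co2_ppm(t_hours, volume_m3=30.0, gen_m3_per_h=0.02,
            vent_m3_per_h=0.0, outdoor_ppm=420.0, start_ppm=420.0):
    """CO2 concentration (ppm) after t_hours in a single well-mixed room."""
    gen_ppm_per_h = gen_m3_per_h / volume_m3 * 1e6   # source term in ppm/h
    if vent_m3_per_h == 0:
        return start_ppm + gen_ppm_per_h * t_hours   # sealed room: linear rise
    k = vent_m3_per_h / volume_m3                    # air changes per hour
    steady = outdoor_ppm + gen_ppm_per_h / k         # long-run equilibrium
    return steady + (start_ppm - steady) * math.exp(-k * t_hours)

print(co2_ppm(2))                     # sealed room, 2 h: ~1750 ppm
print(co2_ppm(2, vent_m3_per_h=30))   # ~1 air change/h (window ajar): ~1000 ppm
```

On these assumptions a sealed bedroom passes ~1700 ppm within two hours, while even about one air change per hour keeps it near 1000 ppm, which matches what the monitor shows.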
Apparently fan heating and air conditioning systems may or may not mix in air from outside - many just recirculate the same air - so switching them on won’t necessarily help with ventilation.
Some studies suggest high CO2 also harms sleep - though again the research is inadequate. If so, sleeping with the window slightly open should help; if cold/noise makes this impractical, sleep with the bedroom door ajar (if there aren’t other people around) and a window open in another room. Even if no window is open at all, having your bedroom door ajar seems to help by letting the CO2 out. I’ve done this for the last year, though can’t be sure whether it’s helped my sleep.
A confounding factor is that it’s best to sleep in a cool room, which opening a window also achieves. Either way, this is an argument for opening a window while you sleep.
Would be good to hear more about this.
Many excellent examples, and much analysis. Obviously very long, but no doubt others will find it useful source material.
Cancer is an interesting example I haven’t seen before, with suitably alarming connotations.
I don’t know; I had assumed so but maybe not
Re ‘AI is being rapidly adopted, and people are already believing the AIs’ - two recent UK cases of national importance:
In a landmark employment case (re trans rights), the judge’s ruling turned out to have been partly written by AI, which had made up case law:
https://www.rollonfriday.com/news-content/sandie-peggie-judge-accused-packing-verdict-dodgy-quotes
And in a controversy in which police banned Israeli fans from attending a soccer match, their decision cited evidence that had also been made up by AI (eg an entirely fictional previous UK visit by the Israeli team). The local police chief has just resigned over it.
Also, with eg dog territory, the boundary markers aren’t arbitrary - presumably the reason dogs piss on trees & lampposts, which are not physical thresholds, is that (a) they provide some protection for the scent against being removed, eg by rain; and (b) they are hence standard locations for rival dogs to check for scent, rather than having to sniff vast areas of ground; ie they are (evolved) Schelling points for potential boundary markers.
(Walls are different as they are both potential boundary markers and physical thresholds.)
According to the Wikipedia article above, the Frisch–Peierls memorandum included those two scientists’ suggestion that the best way to deal with their concern that the Germans would develop an atomic bomb was to build one first. But what they thought about the moral issues I don’t know.
When scientists first realised an atomic bomb might be feasible (in the UK in 1939), and how important it would be, the UK defence science adviser reckoned there was only a 1 in 100,000 chance of successfully making one. Nonetheless the government thought that high enough to instigate secret experiments into it.
(Obliquely relevant to AI risk.)
Reminds me of when I was 8 and our history teacher told us about some king of England being deposed by the common people. We were shocked and confused as to how this could happen - he was the king! If he commanded them to stop, they’d have to obey! How could they not do that?? (Our teacher found this hilarious.)
I’ve only just realised that a key part of the AI alignment problem is essentially Wittgenstein’s rule-following argument. (Maybe obvious, but I’ve never seen this stated before.)
His rule-following argument claims that it’s impossible to define a term unambiguously, whether by examples or rules or using other terms; indeed any definition is so ambiguous as to be consistent with any future application of the term. So you can’t even teach someone ‘+’ in such a way that, when following your definition/rule/algorithm, they will give your desired answer to a sum they haven’t seen before, eg 1000 + 1000 = 2000. They could just as ‘correctly’ give 3000 or -45.7 or pi. (I won’t explain why here.)
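To make the ‘+’ point concrete, here’s a toy sketch of my own (in the spirit of Kripke’s ‘quus’ example, purely illustrative): two functions that agree on every sum a learner has been shown, yet give different answers to 1000 + 1000.

```python
# Illustrative only: two rules consistent with the same finite training data.
def plus(a, b):
    return a + b

def quus(a, b):
    # Deviant rule: behaves exactly like '+' below a threshold,
    # then does something else entirely.
    return a + b if a < 1000 and b < 1000 else 5

training_examples = [(2, 3), (10, 7), (68, 57)]
# Both rules fit every example the learner has ever seen...
assert all(plus(a, b) == quus(a, b) for a, b in training_examples)

print(plus(1000, 1000))   # 2000
print(quus(1000, 1000))   # 5 - equally consistent with the training data
```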
Cf no amount of training an AI to be ‘good’ etc will ensure that it remains so in novel situations.
I’m not convinced Wittgenstein was right (and argued against the rule-following argument for my philosophy master’s, FWIW); maybe a real philosopher more familiar with the topic could apply it usefully to AI alignment.