I have signed no contracts or agreements whose existence I cannot mention.
They thought they found in numbers, more than in fire, earth, or water, many resemblances to things which are and become; thus such and such an attribute of numbers is justice, another is soul and mind, another is opportunity, and so on; and again they saw in numbers the attributes and ratios of the musical scales. Since, then, all other things seemed in their whole nature to be assimilated to numbers, while numbers seemed to be the first things in the whole of nature, they supposed the elements of numbers to be the elements of all things, and the whole heaven to be a musical scale and a number.
Who (besides yourself) holds this position? I feel like believing that the safety research we do now is bullshit is highly correlated with thinking it's also useless and that we should do something else.
the property electrons have that you observe within yourself and want to call "conscious"-as-in-hard-problem-why-is-there-any-perspective is, imo, simply "exists". existence is perspective-bearing. in other words, in my view, the hard problem is just the localitypilled version of "why is there something rather than nothing?"
This actually leads into why I feel drawn to Tegmark’s mathematical universe. It seems that regardless of whether or not my electrons are tagged with the “exists” XML tag, I would have no way of knowing that fact, and would think the same thoughts regardless. So I suspect this word gets dissolved as we learn more philosophy: we end up saying things like “yeah, actually, everything exists” or “well, no, nothing exists”, and then derive our UDASSA without reference to “existence” as a primitive.
How does electrons having the property “conscious”, but otherwise continuing to obey Maxwell’s equations, translate into me saying “I am conscious”?
Or more generally, how does any lump of matter, having the property “conscious” but otherwise continuing to obey unchanged physical laws, end up uttering the words “I am conscious”?
Becoming really aggressive and accusing me of being "absurd" and "appealing to authority" doesn't change this.
You were appealing to authority, and being absurd (and also appealing to in/out-groupness). I feel satisfied getting a bit aggressive when people do that. I agree that style doesn't have any bearing on the validity of my argument, but it does discourage that sort of talk.
I'm not certain what you're arguing for in this latest comment. I definitely don't think you show here that humans aren't privileged objects when it comes to human values, nor that the quote you give from Eliezer recommends any special process beyond a pointer to humans thinking about their values in an ideal situation; those were my two main contentions in my original comment.
I don't think anyone in this conversation argued that humans can generalize from a fixed training distribution arbitrarily far, and I think everyone also agrees that humans think about morality by making small, iterative updates to what they already know. But, of course, that does still privilege humans. There could be some consistent pattern to these updates, such that something smarter wouldn't need to run the same process to know the end result, but that would be a pattern about humans.
Humans very clearly are privileged objects for continuing human values; there is no "giving up on transhumanism". It's literally right there in the name! It would be (and is) certainly absurd to suggest otherwise.
As for CEV, note that the quote you have there indeed does privilege the "human" in human values, in the sense that it suggests giving the AI under consideration a pointer to what humans would want if they had perfect knowledge and wisdom.
Stripping away these absurdities (and appeals to authority or in-groupedness), your comment becomes "Well to generalize human values without humans, you could provide an AI with a pointer to humans thinking under ideal conditions about their values", which is clearly a valid answer, but doesn't actually support your original point all that much, as this relies on humans having some ability to generalize their values out of distribution.
But what statement? Can you just copy your whole message? I just want to try it out myself.
What, specifically, was your prompt?
You don't need to wear the mask at all times; for example, you can buy an air quality monitor and wear the mask only when the sensors detect unsafe levels of contaminants (in which case your fellow passengers ought to be scared).
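For concreteness, here's a minimal sketch of the monitor-then-mask loop. `read_contaminant_level()` and the threshold are hypothetical stand-ins for whatever your particular monitor actually exposes and reports (PM2.5, VOCs, CO2, etc.):

```python
import time

# Hypothetical threshold -- substitute the exposure limit you trust,
# in the units your monitor reports.
UNSAFE_LEVEL = 35.0

def read_contaminant_level() -> float:
    """Placeholder for your monitor's actual API/readout."""
    raise NotImplementedError("wire this up to your air quality monitor")

while True:
    level = read_contaminant_level()
    if level > UNSAFE_LEVEL:
        print(f"Reading {level:.1f} exceeds {UNSAFE_LEVEL} -- mask on.")
    time.sleep(60)  # re-check once a minute
```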
Based on the article, these events seem most common on Airbus A320 aircraft, and those are also the aircraft for which they have been getting more common. Boeing 737s remain under the FAA’s industry-wide estimate (the article claims the Airbus numbers far exceed it), and their incidence has been basically flat since 2015, so if you want to dodge the whole question, I’d just make sure you fly on 737s.
Edit: Reading more, it sounds like the Boeing 787 completely fixes the relevant design issue (supplying cabin air as bleed air from the engines).
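If you wanted to operationalize "just fly 737s" when booking, a trivial sketch (the flight records and aircraft-type strings here are made up; adapt them to whatever your booking data actually looks like):

```python
# Airframes that, per the article's numbers, avoid the problem:
# 737s stay under the FAA estimate, and the 787 doesn't use the
# bleed-air design at all.
PREFERRED_AIRFRAMES = {"Boeing 737", "Boeing 787"}

def filter_by_airframe(flights: list[dict]) -> list[dict]:
    """Keep only flights whose aircraft type is in the preferred set."""
    return [f for f in flights if f.get("aircraft") in PREFERRED_AIRFRAMES]

# Made-up example data:
options = [
    {"flight": "XX101", "aircraft": "Airbus A320"},
    {"flight": "XX202", "aircraft": "Boeing 737"},
]
print(filter_by_airframe(options))  # only the 737 flight survives
```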
Doesn’t having multiple layers of protection seem better to you? Arranging things so the AI more naturally concludes we won’t read its scratchpad, and modifying its beliefs in this way, seems better than not doing so.
You have also recently argued that modern safety research is “shooting with rubber bullets”, so what are we getting in return for breaking such promises now? If it’s just practice, there’s no reason to put the results online.