Are there any memes prevalent in the US government that make racing to AGI with China look obviously foolish?
The "let's race to AGI with China" meme landed for a reason. Is there something making the US gov susceptible to some sort of counter-meme, like the one expressed in this comment by Gwern?
The "no interest in an AI arms race" claim is now looking false, as apparently China as a state has devoted $137 billion to AI, which is at least a yellow flag that they are interested in racing.
apparently China as a state has devoted $1 trillion to AI
Source? I only found this article about 1 trillion Yuan, which is $137 billion.
Yeah, that was what I was referring to, and I thought it would actually be a trillion dollars, sorry for the numbers being wrong.
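(A quick sanity check on the figures, assuming an exchange rate of roughly 7.3 CNY per USD: 1 trillion CNY ÷ 7.3 CNY/USD ≈ $137 billion, i.e. about a seventh of the misremembered $1 trillion.)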
I'd vainly hope that everyone would know about the zero-sum nature of racing to the apocalypse from nuclear weapons, but the parallel isn't great, and no one seems to have learned the lesson anyway, given the failure to hold SALT III or even do START II.
For as long as I can remember, I've had a very specific way of imagining the week. The weekdays are arranged on an ellipse, with an inclination of ~30°, starting with Monday in the bottom-right, progressing along the lower edge to Friday in the top-left, then the weekend days go above the ellipse and the cycle "collapses" back to Monday.
Actually, calling it an "ellipse" is not quite right, because in my mind's eye it feels like Saturday and Sunday are almost at the same height, with Sunday just barely lower than Saturday.
I have a similar ellipse for the year, this one oriented bottom-left to top-right:
This one also feels wrong because "in my head" each of the following is true:
The main interesting commonalities I see between them:
I don't remember exactly when I became meta-aware of this, in the sense of realizing that this way of imagining the most basic temporal cycles is probably rather peculiar to me. It was certainly sometime between learning about associative synesthesia (of which I think this is an example?), which happened during my first year of university at the latest (~8 years ago), and the first time I described this to somebody, maybe 2 or 3 years ago, whose reaction was approximately "WTF".
I've spoken about it to a few people so far (~10?), and nobody reported having anything like this.
This is very similar to how I perceive time! What I find interesting is that, while I've heard people talk about the way they conceptualize time before, I've never heard anyone else mention the bizarre geometry aspect. The sole exceptions to this were my Dad and Grandfather, who brought this phenomenon to my attention when I was young.
I just met someone recently who has this! They said they have always visualized the months of the year as on a slanted treadmill, unevenly distributed. They described it as a form of synesthesia, which is conceptually consistent with how I experience grapheme-color associative synesthesia.
I have a similar thing for weekdays, but somehow with a weird shape?
in general, it's a similar cycle, but flipped horizontally, going left to right:
on top it's: sun, sat
on the bottom: mon, tue, wed, thu, fri
the shape connecting days goes downwards from sun to mon, tue, wed, then upwards to thu, then down to fri, then up to sat, sun, closing the loop.
not sure if this makes any sense )
I think I understand.
Has it always been with you? Any ideas what might be the reason for the bump at Thursday? Was Thursday in some sense "special" for you when you were a kid?
Ha, thinking back to childhood, I get it now: it's the influence of the layout of the school daily journal in the USSR/Ukraine, like https://cn1.nevsedoma.com.ua/images/2011/33/7/10000000.jpg
I have something like this for years: https://www.lesswrong.com/posts/j8WMRgKSCxqxxKMnj/what-i-think-about-when-i-think-about-history
For as long as I can remember, I have placed dates on an imaginary timeline, where the "placing" involves stuff like fuzzy mental imagery of events attached to the date-labelled point on the timeline. It's probably much less crisp than yours, because so far I haven't tried to learn history that intensely or systematically via spaced repetition (though your example makes me want to do that), but otherwise it sounds quite familiar.
beyond doom and gloom - towards a comprehensive parametrization of beliefs about AI x-risk
doom - what is the probability of an AI-caused existential catastrophe (i.e. p(doom))?
gloom - how viable is p(doom) reduction?
foom - how likely is recursive self-improvement (RSI)?
loom - are we seeing any signs of AGI arriving soon, looming on the horizon?
boom - if humanity goes extinct, how fast will it be?
room - if AI takeover happens, will AI(s) leave us a sliver of the light cone?
zoom - how viable is increasing our resolution on AI x-risk?
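As a toy way of treating these as one "parametrization", the questions could be collected into a single record. The field names mirror the list above, but the types, units, and ranges in the sketch below are my own illustrative guesses, not part of the original framing:

```python
from dataclasses import dataclass


@dataclass
class XRiskBeliefs:
    """Toy container for the belief parameters listed above (types and ranges are illustrative)."""
    doom: float   # p(doom): probability of an AI-caused existential catastrophe, in [0, 1]
    gloom: float  # viability of reducing p(doom), e.g. expected percentage-point reduction achievable
    foom: float   # probability of recursive self-improvement (RSI)
    loom: float   # probability that AGI is visibly looming, i.e. arrives within some short horizon
    boom: float   # if humanity goes extinct, how fast: e.g. expected years from takeover to extinction
    room: float   # probability that, after takeover, AI(s) leave us a sliver of the light cone
    zoom: float   # viability of increasing our resolution on AI x-risk, on some 0-1 scale
```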
Recently, I watched Out of This Box. In the musical, they test their nascent AGI on the Christiano-Sisskind test, a successor to the Turing test. What the test involves exactly remains unexplained. Here are my hypotheses.[1]
Sisskind certainly refers to Scott Alexander, and one thing Scott Alexander has posted in the vicinity of the Turing test is this post (italics added):
The year is 2028, and this is Turing Test!, the game show that separates man from machine! Our star tonight is Dr. Andrea Mann, a generative linguist at University of California, Berkeley. She’ll face five hidden contestants, code-named Earth, Water, Air, Fire, and Spirit. One will be a human telling the truth about their humanity. One will be a human pretending to be an AI. One will be an AI telling the truth about their artificiality. One will be an AI pretending to be human. And one will be a total wild card. Dr. Mann, you have one hour, starting now.
Notably, the last line in the post is:
MANN: You said a bad word! You’re a human pretending to be an AI pretending to be a human! I knew it!
Christiano is, of course, Paul Christiano. One of the many things that Paul Christiano came up with is HCH:
Consider a human who has access to a question-answering machine. Suppose the machine answers questions by perfectly imitating what the human would do if asked that question.
To make things twice as tricky, suppose the human-to-be-imitated is herself able to consult a question-answering machine, which answers questions by perfectly imitating what the human would do if asked that question…
Let’s call this process HCH, for “Humans Consulting HCH.”
The limit of HCH is an infinite HCH tree where each node is able to consult a subtree rooted at itself, to answer the question coming from its parent node.
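To make the recursive structure concrete, here is a minimal, depth-limited sketch in Python. The names `hch`, `human_answer`, and the depth cutoff are my own illustrative choices, not part of Christiano's write-up; the real proposal concerns an idealized (potentially infinite) process rather than any particular implementation.

```python
from typing import Callable

AskFn = Callable[[str], str]
HumanPolicy = Callable[[str, AskFn], str]


def hch(question: str, human_answer: HumanPolicy, depth: int) -> str:
    """Answer `question` as a human who can consult HCH, up to `depth` levels of recursion."""

    def ask(sub_question: str) -> str:
        if depth == 0:
            # Leaf of the tree: the (imitated) human answers unaided.
            return human_answer(sub_question, lambda _q: "<no further consultation>")
        # Each sub-question is answered by another copy of the same process,
        # one level shallower: "Humans Consulting HCH".
        return hch(sub_question, human_answer, depth - 1)

    return human_answer(question, ask)


# Toy usage: a "human" that simply wraps the answer to one sub-question.
def toy_human(q: str, ask: AskFn) -> str:
    return f"{q} -> ({ask('sub: ' + q)})"


print(hch("root question", toy_human, depth=2))
```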
My first hypothesis is that the Christiano-Sisskind test is some recursive shenanigan like the following:
The AI's overall performance is somehow aggregated into the score of the test.
An alternative possibility is that the test involves something closer to Debate, but I'm much more unsure what that might look like. Maybe something like:
If any of the musical's creators are reading this, I'm curious how close this is to what you had in mind, if you had anything specific in mind (though it's, of course, totally valid not to have anything specific in mind and just nerd-name-drop Christiano and Scott).
I've read the SEP entry on agency and was surprised by how irrelevant it feels to whatever it is that makes me interested in agency. Here I sketch some of the differences by comparing an imaginary Philosopher of Agency (roughly the embodiment of the approach that the "philosopher community" seems to take to these topics) and an Investigator of Agency (roughly the approach exemplified by the LW/AI Alignment crowd).[1]
If I were to put my finger on one specific difference, it would be that Philosopher is looking for the true-idealized-ontology-of-agency-independent-of-the-purpose-to-which-you-want-to-put-this-ontology, whereas Investigator wants a mechanistic model of agency, which would include a sufficient understanding of goals, values, dynamics of development of agency (or whatever adjacent concepts we're going to use after conceptual refinement and deconfusion), etc.
Another important component is the readiness to take one's intuitions as the starting point, while also assuming they will require at least a bit of refinement before they start robustly carving reality at its joints. Sometimes you may even need to discard almost all of your intuitions and carefully rebuild your ontology from scratch, bottom-up. Philosopher, on the other hand, seems to (at least more often than Investigator) implicitly assume that their System 1 intuitions can be used as the ground truth of the matter, and that the quest for a formalization of agency ends when the formalism perfectly captures all of our intuitions and doesn't introduce any weird edge cases.
Philosopher asks, "what does it mean to be an agent?" Investigator asks, "how do we delineate agents from non-agents (or specify some spectrum of relevant agency-adjacent properties), such that this tells us something of practical importance?"
Deviant causal chains are posed as a "challenge" to "reductive" theories of agency, which try to explain agency by reducing it to causal networks.[2] So what's the problem? Quoting:
… it seems always possible that the relevant mental states and events cause the relevant event (a certain movement, for instance) in a deviant way: so that this event is clearly not an intentional action or not an action at all. … A murderous nephew intends to kill his uncle in order to inherit his fortune. He drives to his uncle’s house and on the way he kills a pedestrian by accident. As it turns out, this pedestrian is his uncle.
At least in my experience, this is another case of a Deep Philosophical Question that no longer feels like a question, once you've read The Sequences or had some equivalent exposure to the rationalist (or at least LW-rationalist) way of thinking.
About a year ago, I took a college course in philosophy of action. I recall an assigned reading in which the author basically argued that for an entity to be an agent, it needs to have an embodied feeling-understanding of action. Otherwise, it doesn't act, so it can't be an agent. No, it doesn't matter that it's out there disassembling Mercury and reusing its matter to build a Dyson sphere. It doesn't have the relevant concept of action, so it's not an agent.
You are suffused with a return-to-womb mentality - desperately destined for the material tomb. Your philosophy is unsupported. Why do AI researchers think they are philosophers when it's very clear they are deeply uninvested in the human condition? There should be another term, "conjurers of the immaterial snake oil", to describe the actions you take when you riff on Dyson Sphere narratives to legitimize your paltry and thoroughly uninteresting research.
Is there any research on how the actual impact of [the kind of AI that we currently have] lives up to the expectations from the time [shortly before we had that kind of AI but close enough that we could clearly see it coming]?
This is vague, but not-unreasonable choices for the second bracketed period would be:
I'm also interested in research on the historical over- and under-performance of other tech (where "we kinda saw (or could have seen) it coming") relative to expectations.
Does severe vitamin C deficiency (i.e. scurvy) lead to oxytocin depletion?
According to Wikipedia:
The activity of the PAM enzyme [necessary for releasing oxytocin from the neuron] system is dependent upon vitamin C (ascorbate), which is a necessary vitamin cofactor.
I.e., if you don't have enough vitamin C, your neurons can't release oxytocin. Common-sensically, this should lead to some psychological/neurological problems, maybe with empathy/bonding/social cognition?
Quick googling of "scurvy mental problems" or "vitamin C deficiency mental symptoms" doesn't return much on that. This meta-analysis finds some association of sub-scurvy vitamin C deficiency with depression, mood problems, worse cognitive functioning, and some other psychiatric conditions, but no mention of what I'd expect from a lack of oxytocin. Possibly oxytocin is produced at low enough levels that this doesn't really matter, because you need very little vitamin C for it? But on the other hand (Wikipedia again):
By chance, sodium ascorbate by itself was found to stimulate the production of oxytocin from ovarian tissue over a range of concentrations in a dose-dependent manner.
So either this (i.e. disturbed social cognition) is not how we should expect oxytocin deficiencies to manifest, or vitamin C deficiency manifests in so many ways in the brain that you don't even bother noting that "they have worse theory of mind than when they ate one apple a day".
Googling for "scurvy low mood", I find plenty of sources indicating that scurvy is accompanied by "mood swings — often irritability and depression". IIRC, this has been remarked upon for at least two hundred years.
That's also what this meta-analysis found, but I was mostly wondering about social cognition deficits (though, looking back, I see that this wasn't clear in the original shortform).