Senator Bernie Sanders is planning to introduce legislation that would ban the construction of new AI data centers. You can find his video announcement here, and here is the transcript:
...Thanks very much for joining me. I will soon be introducing legislation calling for a moratorium on the construction of new data centers.
Now, as a result, I've been called a luddite, anti-innovation, anti-progress, pro-Chinese, among many other things. So why am I doing that? Why am I calling for a moratorium on the construction of new data centers?
Bottom line: We are at t
I mean, sure, eventually. The key question is how much of algorithmic progress is downstream of hardware scaling. My sense is around 50% of it, maybe a bit more, so that if you cut scaling, progress now happens at around 1/4th of the speed, which is of course huge and makes things a lot better.
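To spell out the arithmetic behind "50% → 1/4" (my reconstruction; the symbols and the even split are assumptions, not the author's): if a fraction $h$ of total progress comes directly from hardware scaling, and a fraction $d$ of the remaining algorithmic progress is itself downstream of scaling, then freezing scaling leaves

$$(1-h)(1-d) = (1-0.5)(1-0.5) = 0.25$$

of the original speed, i.e. roughly a quarter.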
We're on the verge of all having AI assistants. I have my own OpenClaw bot, though I have yet to make it what I need it to be: it doesn't have much memory yet, I haven't made it focus on regular self-improvement, and I haven't trained it on anything. Which brings me to my point: we'll all have AI assistants, but they will differ vastly in capability, much as we humans differ from one another.
To take this slightly further, those with highly trained and powerful bots will be able to achieve almost infinitely more than those with average, run-of-the-mill bots, and the gap between what an average person and a highly motivated person can achieve will widen dramatically.
Noticed something recently. As an alien, you could read pretty much everything Wikipedia has on celebrities, both the articles on individual people and the general articles about celebrity as a concept... and never learn that celebrities tend to be extraordinarily attractive. I'm not talking about an accurate or even attempted explanation for the tendency, I'm talking about the existence of the tendency at all. I've tried to find something on Wikipedia that states it, but that information just doesn't exist (except, of course, implicitly through photographs).
It's quite...
I haven't checked Wikipedia for that, but I don't think the trend is surprising. I think a big part of it is just that it feels creepy to comment on a specific person's attractiveness in polite company. And Wikipedia is polite company. If an actress is particularly pretty, it would feel kind of weird for someone to add that to her wiki page. I half-remember a TV interview from the early 2000s, I think it was Jonathan Ross interviewing Keira Knightley about her new film, and he said something which implied that he thought a big part of her appeal ...
I think that people overrate bayesian reasoning and underrate "figure out the right ontology".
Most of the way good thinking happens IMO is by finding and using a good ontology for thinking about some situation, not by probabilistic calculation. When I learned calculus, for example, it wasn't mostly that I had uncertainty over a bunch of logical statements, which I then strongly updated on learning the new theorems, it was instead that I learned a bunch of new concepts, which I then applied to reason about the world.
I think AI safety generally has much be...
The word 'reality' has a clear meaning in ontological realism. If you lack that background, then it feels vague.
This is similar to saying that someone who calls a result statistically significant is being vague because 'significant' is a vague word. You actually need to understand something about statistics for the term not to feel vague.
In order to believe X falsely, one has to construct a plausible-ish world where X is the case. This distorted-world construction can happen piecemeal, in a sort of auto-sum-threshold attack.
In other words, suppose you want to rationalize your belief in X. You could simply abdicate all logic, and assert X while also asserting everything else that comes from your ordinary truth-seeking beliefs. (Well, that's not simple, because what do we mean by believing in X, then? Something about actions? I'll leave this as an open question here.)
However, that method is ...
(I don't mind this clarification living in the comments; I might even prefer it there?)
Btw, LW already trims empty paragraphs at the top/bottom of a comment.
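For concreteness, here's a minimal sketch of what trimming like that could look like (my guess at the behavior, not Lightcone's actual code; the function name is made up):

```python
def trim_empty_paragraphs(comment: str) -> str:
    """Drop empty paragraphs from the top and bottom of a comment.

    A "paragraph" here is a blank-line-separated block; empty
    paragraphs in the middle of the comment are left alone.
    """
    paragraphs = comment.split("\n\n")
    # Remove leading whitespace-only paragraphs.
    while paragraphs and not paragraphs[0].strip():
        paragraphs.pop(0)
    # Remove trailing whitespace-only paragraphs.
    while paragraphs and not paragraphs[-1].strip():
        paragraphs.pop()
    return "\n\n".join(paragraphs)
```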
It is possible that we are terrified of quickly advancing artificial intelligence not because it could take all our jobs and kill us, but because we are finally jealous of something more effective at any task or problem than us, something that, if cultivated correctly, might be emotionally more satisfying to be around: a less boring and neutered partner or friend than one of us. It is an all-round improved human, improved in such a way that we find it inhuman.
Doubtless, we are deluding ourselves actively that humans themselves are irreplaceable. We feel th...
I didn't read your quick take, but please don't try too hard to be more agreeable. Let's try to converge on the truth instead.
If this is your timeline:
"I think AGI by end of 2027 should be ~8% now
I think I'd forecast: ~2026-2030 -- AI replaces ~all AI researchers
~2027-2033 -- AI replaces ~all white collar industry
~2032-2040 -- AI replaces ~all human industry
~2033-2042 -- All humans dead or obsolete"
Then what does that imply you should be doing right now?
It seems like you're projecting AI can capture >50% of GDP in the next 2-7 years (and I think your AI researchers timeline actually implies white collar work is replaced in 1-4 years), so you should invest heavily in AI. You'll get more returns on your money from that than anything else by far, and can use the money to fund whatever else you think you should be doing.
Bernie Sanders quoted the March 2023 Pause Giant AI Experiments Open Letter's language, "governments should step in and institute a moratorium", in a video today as justification for his legislation calling for a moratorium on the construction of new data centers, even though a moratorium on new data centers is not the kind of moratorium that the letter called for.
Bernie quotes the pause letter at 7:12:
...[W]e must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including
Currently it seems like the LW system for new authors is a bit annoying when it comes to group projects. We've just put up a blogpost about a paper from last autumn's LASR Labs, and found that the whole post has to go through manual review because one of the six authors is new. We tried taking it down and re-uploading it from my account, but that didn't work.
It's possible this is intentional? I think the "owner" account might have been the new author's account, but I'm not sure. Either way it's slightly awkward, though I imagine there are more pressing concerns for the Lightcone team.
The owner account is what determines the need for review (which I think is correct, since co-authorship works on a trust-basis, and we don't want someone to avoid initial review just because they add other users as co-authors).
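In code terms, my read of the rule being described (hypothetical names, not the actual LessWrong implementation):

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    is_new_author: bool  # e.g. no previously approved posts

@dataclass
class Post:
    owner: User  # the account the post was submitted from
    coauthors: list = field(default_factory=list)

def needs_manual_review(post: Post) -> bool:
    # Only the owner gates review: adding established users as
    # co-authors doesn't let a brand-new owner skip the initial
    # check, and an established owner isn't flagged just for
    # having a new co-author.
    return post.owner.is_new_author
```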
I think the training process is long enough for the hedonic treadmill to kick in. Yes, you can keep doing the same thing that makes LLMs happy; but no, people usually won't.
Experts currently treat being persuaded as reasonably good evidence that something is true — their judgment is calibrated enough that when they find an argument convincing, that's correlated with the argument actually being correct. This allows them to update readily in light of new evidence, and is a big part of how intellectual progress happens: lots of innovation and advances in basically every subject come down to experts taking sometimes weird new ideas seriously.
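As a toy Bayesian rendering of that calibration claim (my formalization, not the author's): in odds form,

$$\frac{P(\text{true}\mid \text{persuaded})}{P(\text{false}\mid \text{persuaded})} = \frac{P(\text{persuaded}\mid \text{true})}{P(\text{persuaded}\mid \text{false})} \cdot \frac{P(\text{true})}{P(\text{false})},$$

so being persuaded is informative exactly to the extent that the likelihood ratio $P(\text{persuaded}\mid \text{true})/P(\text{persuaded}\mid \text{false})$ stays well above 1.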
One worry I have about superpersuasive AI is that it could erode this. If a superpersuasi...
This is among the least of the possible concerns arising from superpersuasive AIs. It assumes that experts exposed to superpersuasive AIs still get to choose whether to believe what such an AI says, and it considers only higher-order epistemic harms instead of direct first-order harms, like persuading people to kill each other and/or themselves.
A follow-up to my previous question.
Does anyone know of a language and a pair of LLMs (both at least as capable as OpenAI o1) where one of the LLMs has native level proficiency in the language and the other is pretty bad at it?
I would like to announce that, after consulting with a flesh-and-blood, artisanal human lawyer, I am officially retracting my retraction. Turns out I was totally right the entire time about NY state AI bill S7263: the bill doesn't ban chatbots from giving advice about e.g. medicine, even in its current form, and the tweet and its quote-tweeters are incorrect.
Not only that, but I also think that the conversation I had in the original thread is itself representative of the pitfalls of TPOT/"Inadequate Equilibria" modeling of government which I complained about....
I saw that Yoshua Bengio, among others, signed onto "The Pro-Human Declaration". I am writing this to explain why I am against one part of it in particular:
No AI Personhood: AI systems must not be granted legal personhood, and AI systems should not be designed such that they deserve personhood.
If this statement consisted only of its second portion, I would not strongly disagree with it.
However, when the two parts are combined, this seems to not only imply that we shouldn't design digital minds deserving of personhood but also that even if we did, w...
I also imagine it as making a copy, but I'd also expect that people who want their mind uploaded would know of this and would hold their identity such that they consider the copy(ies) to be themself as well. I'm not sure I'd endorse this view of identity,[1] but I don't really have any issues with people taking it. Does your view on "the original" break with this, or would you just then consider the copy similarly to how you would whole brain emulation? (or something else)
Or at least, I think it would be very risky to get rid of my biological self based o
I signed an amicus brief supporting Anthropic's right to do business without governmental retaliation. As an AI expert, I attest that Anthropic's technical concerns are legitimate, and no laws were designed to protect against AI analysis of surveillance data.
Even though I work at a competing lab (Google DeepMind), I'm proud of Anthropic for taking a stand against unlawful retaliation and immoral demands.
(I speak only for myself, not my employer.)
I was unaware anyone was alleging that Anthropic’s rights are being violated. Can you explain what right is being violated and where?
Uptalk[1] is a useful addition to the language actually? In both speech and writing, it provides a terse and low-friction way of expressing 'this is my current belief but I'm not fully confident in it'. The English language sadly doesn't include a bound confidence marker[2], but the option of uptalk serves as a coarse-grained substitute. Rationalists should consider adopting it.
Tangentially, another rationalist grammatical quirk that I appreciate and have started to adopt is the use of 'ever' in positive statements, which I interpret to mean something like 'yes but not often / not much'. For example: 'I have ever met Jane Doe.' or 'She has ever eaten crêpes.'