I think it probably meets the definition, but, caveat, it isn't actually out in the relevant sense, so there's some risk that it has a caveat that wasn't on my radar.
P(We invent algorithms for transformative AGI | No derailment from regulation, AI, wars, pandemics, or severe depressions): .8
P(We invent a way for AGIs to learn faster than humans | We invent algorithms for transformative AGI): 1. This row is already incorporated into the previous row.
P(AGI inference costs drop below $25/hr (per human equivalent)): 1. This is also already incorporated into "we invent algorithms for transformative AGI"; an algorithm with such extreme inference costs wouldn't count (and, I think, would be unlikely to be developed in the firs...
This appears to be someone else's shortform, which was edited so that the shortform container doesn't look like a shortform container anymore.
This is the multiple stages fallacy. Not only is each of the probabilities in your list too low, but if you actually consider them as conditional probabilities, they're double- and triple-counting the same uncertainties. And since they're all multiplied together, and all err in the same direction, the error compounds.
I don't recall the source, but I do recall having seen a public source saying: The US air force had a problem with pilots getting buzzed by foreign drones, and not reporting the incidents because of stigma around UFOs. An executive decision was made to solve the problem by removing the stigma.
I can reproduce loss-of-selection on mouseover some of the time on up-to-date Chrome, so, I think probably not browser specific.
Wait, you're running Firefox 88, on Xenial?! Why? What's wrong with you? You're a terrible person.
The main reason WaPo would have delayed is if they wanted additional confirmation/due diligence and didn't have it.
based on the vehicle morphologies and material science testing and the possession of unique atomic arrangements and radiological signatures
That only sounds impressive if you don't think too hard about what it means. It's saying that the fragments are made of a fancy alloy that they can't identify. But every military contractor has materials scientists, and being made of fancy new alloys is completely expected for cutting edge military aircraft.
The thing that has to be explained is why serious intelligence professionals speak of likely non-human origin.
It's either:
(1) The US military/intelligence community is made up of people who are really crazy.
(2) There are actual aliens
(3) It's a strange disinformation campaign that seems to go counter to the core interests of the military/intelligence community. It's damaging to public and congressional trust in the military and invites oversight.
(4) There are some really strange turf wars going on in the military/intelligence community.
(5) Ther...
Reusing a response I made to a previous UFO story, on a mailing list, lightly edited because the same logic still applies.
There's one core truth that you need to understand, and then all the talk of UFOs, videos, and the reactions to them make sense.
The US military has secret aircraft. Other militaries also have secret aircraft. These are kept in reserve for high-stakes operations. For example, in 2011, a previously-unseen model of stealth helicopter crashed in the middle of the raid on Osama bin Laden's compound. Rumor is that the Chinese military go...
There are a few videos taken from fighter-jet sensor packages floating around
The military released some videos, but there's plenty of reporting that it didn't release its best-quality videos, precisely for the reasons you listed. The best-quality videos taken by fighter jets would reveal information about the jets that the US military does not want out in the open, so those high-quality videos are classified in a way the videos we have aren't.
The kind of people who are in the know and who are qualified to interpret those vide...
There are a few significant things to say about this post. The first is that you ought to read the Metaethics sequence (long version) or Value Theory (abridged version). Knowing how our current values arose is reason for pessimism about whether AIs we create will share our values, about humanity's values in unsteered futures, and about what the values of aliens might look like. Knowing how our current values arose is not something that should move us away from them, or confuse us about which things are good and bad.
The impression I get from this post is th...
You mix up your tenses in a sneaky way, projecting bad aspects of the past onto the future:
I think this would normally be an astute observation but in my case mixing up tenses has been a persistent source of frustration for my editors. Despite the scolding I get and my efforts to watch out for it, I screw this up constantly (I don't know if this explains it but English is my third language and I mostly learned it in an ad-hoc manner). All I can say now is that the tense mix-up was an inadvertent error on my part. When I talk about which cultural values "will...
When new users pop up in the moderation dashboard (which happens when they make their first post/comment), we see the HTTP referrer they had the first time they landed on the site (at domain granularity, not page-level granularity, and only for users who clicked a link, not users who typed the URL into the address bar). So, if we get a bunch of users coming from YouTube leaving bad comments that would make the site worse, we do have the ability to notice that's what's happening.
That said, I do think that there's a real risk here. Among all the places on the ...
I think there is some value in exploring the philosophical foundations of ethics, and LessWrong culture is often up for that sort of thing. But, it's worth saying explicitly: the taboo against violence is correct, and has strong arguments for it from a wide variety of angles. People who think their case is an exception are nearly always wrong, and nearly always make things worse.
(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)
They should be live now. They were live, then temporarily rolled back because a change that deployed in the same operation (not related to reacts) broke something, now they're live again.
First round of changes based on feedback from this thread. There are a bunch of new reactions, some UI changes, and some small bugfixes. Not being in this changeset does not mean that a suggestion has been rejected; this is just a first pass.
You aren't meant to be able to anti-react to a reaction that no one else has applied (but there are some minor bugs that make this not fully enforced). The tooltip should probably be on the right rather than below; will fix.
Currently they aren't sorted at all (so the order is some arbitrary emergent property, which I haven't reverse-engineered but which might be "sort by least recently applied"). I agree that sorting by descending count makes sense and will change it to that.
It's probably a thing with regular vote-score, but I think it's worth the tradeoff of having scores at the top because when there are too many comments to read everything, the score feeds into the decision of what to read vs what to skim.
Currently only comments, not posts. This is because it's still experimental, and "change the voting system for comments on just one post" turns out to be a pretty good mechanism for experimenting. If we do make it the default for comments everywhere, then extending it to posts too will be a pretty natural thing to do.
I agree this is a concern. At an earlier stage of this prototype, the reactions were at the top with the rest of the voting buttons; moving them to the bottom was done partially to reduce the chance that you see the reactions before you've read it.
I was thinking the latter (but agree that the description left ambiguity there, and will rewrite it).
Subthread for proposing new reactions. Icons for the existing reactions come from The Noun Project, so this is a good place for finding more icons that match the existing ones.
If this feature is in part meant to address the problems of 1) threads often ending without people knowing why and 2) people feeling bad about receiving certain kinds of criticism or about certain critics because it's costly to both respond and not respond, I would suggest adding the following reactions:
a week long picket around OpenAI's headquarters.
This is an unexpectedly creative way to screw this up. Planning a protest to be a week long means that most would-be attendees don't know when they should be there, and will show up at a random time in a week-long interval, see that there's no one else there, and leave.
If you want this to be at all successful, you need to pick a specific date and time. It's fine if you're there more often than just then, but please, for crying out loud, don't position yourself as an organizer and then create ambiguity about basic logistical details.
It saddens me a bit to see so little LW commenter engagement (both with this post, and with Orthogonal's research agenda in general). I think this is because they're the sort of posts that feel like you're supposed to be engaging with the object-level content in a sophisticated way, and the barrier to entry for that is quite high. Without having much to add about the object-level content of the research agenda, I would like to say: This flavor of research is something the world desperately needs more of. Carado's earlier posts and a few conversations I've h...
This is a known bug that occurs when all of the first page of comments are on posts that you can't see (because the posts were deleted or moved to drafts).
Rationality isn't the sort of thing that can take positions on things. But many prominent rationalist writers have discussed the subject, and in general, they take a very dim view of lying, in the usual meaning of the term. The relevant aphorism, originally from Steven Kaas and quoted in the sequences here:
Promoting less than maximally accurate beliefs is an act of sabotage. Don't do it to anyone unless you'd also slash their tires.
There are corner cases; the classic thought experiment in philosophy is, if you were hiding Jews in your attic during WW2...
This is a pretty simple litmus test for whether the US government is awake, in any meaningful sense: does the author of ChaosGPT get a check-in from DHS or any similar agency? The AI used in this video is clearly too stupid to actually destroy humanity, but its author is doing something that is, in practice, equivalent to calling up industrial suppliers and asking to buy enriched uranium and timing chips.
All of the additive modifiers that apply (eg +25 karma, for each tag the post has) are added together and applied first, then all of the multiplicative modifiers (ie the "reduced" option) are multiplied together and applied, then time decay (which is multiplicative) is applied last. The function name is filterSettingsToParams.
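As a sketch of the ordering described above (the real logic lives in `filterSettingsToParams`; the modifier values, names, and types here are illustrative assumptions, not the actual implementation):

```typescript
// Illustrative sketch only: additive modifiers first, then multiplicative
// modifiers, then time decay last. Names and numbers are assumptions.
interface Modifiers {
  additive: number[];       // e.g. +25 karma per boosted tag
  multiplicative: number[]; // e.g. 0.5 for each "reduced" filter option
}

function adjustedScore(baseScore: number, mods: Modifiers, timeDecay: number): number {
  // 1. Sum and apply all additive modifiers first.
  const afterAdditive = baseScore + mods.additive.reduce((a, b) => a + b, 0);
  // 2. Then multiply in all multiplicative modifiers (the "reduced" options).
  const afterMultiplicative = mods.multiplicative.reduce((a, b) => a * b, afterAdditive);
  // 3. Time decay (also multiplicative) is applied last.
  return afterMultiplicative * timeDecay;
}

// Example: base 100, one +25 tag boost, one 0.5 "reduced" filter, decay 0.8:
// (100 + 25) * 0.5 * 0.8 = 50
console.log(adjustedScore(100, { additive: [25], multiplicative: [0.5] }, 0.8));
```

The point of the ordering is that a flat tag bonus can't be wiped out by a "reduced" setting before it's counted, while "reduced" and time decay scale the whole adjusted score.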
Assuming "lizardman" here is referring to this post, the usage of terminology seems wrong. In that post, "lizardman" is used specifically to mean rare outliers, so under that definition, it's quite impossible for 99.9%+ of the world to be that. It also portrays a particular archetype of unreasonable person, which I think is what you're intending to refer to; but as far as I can tell that archetype is in fact rare.
(Note: This was first posted April 1, but due to a moderation-queue backlog, was delayed until now. Our apologies if that spoils the humor.)
This is not something he said, and not something he thinks. If you read what he wrote carefully, through a pedantic decoupling lens, or alternatively with the context of some of his previous writing about deterrence, this should be pretty clear. He says that AI is bad enough to put a red line on; nuclear states put red lines on lots of things, most of which are nowhere near as bad as nuclear war is.
One of the big differences between decoupling norms and contextualizing norms is that, in practice, it doesn't seem possible to make statements with too many moving parts without contextualizers misinterpreting. Under contextualizing norms, saying "X would imply Y" will be interpreted as meaning both X and Y. Under decoupling norms, a statement like that usually means a complicated inference is being set up, and you are supposed to hold X, Y, and this relation in working memory for a moment while that inference is explained. There's a communication-culture...
The alternative would have been to embed a small lecture about international relations into the article.
I don't think that's correct, there are cheap ways of making sentences like this one more effective as communication. (E.g. less passive/vague phrasing than "run some risk", which could mean many different things.) And I further claim that most smart people, if they actually spent 5 minutes by the clock thinking of the places where there's the most expected disvalue from being misinterpreted, would have identified that the sentences about nuclear exchang...
It does seem an important and useful difference, that the sort of person who complains about Rainbowland is probably prone to starting and escalating fights in general, while the person who has misconceptions about AI is probably about as reasonable as the average person. In most of these cases (with some exceptions), LW is finding itself, not in the role of a superintendent fielding paranoid complaints, but something more like the role of a professor who's struggling to focus on research because there are too many undergraduates.
I think there's an important meta-level point to notice about this article.
This is the discussion that the AI research and AI alignment communities have been having for years. Some agree, some disagree, but the 'agree' camp is not exactly small. Until this week, all of this was unknown to most of the general public, and unknown to anyone who could plausibly claim to be a world leader.
When I say it was unknown, I don't mean that they disagreed. To disagree with something, at the very least you have to know that there is something out there to disagree...
Until this week, all of this was [...] unknown to anyone who could plausibly claim to be a world leader.
I don't think this is known to be true.
In fact they had no idea this debate existed.
That seems too strong. Some data points:
1. There's been lots of AI risk press over the last decade. (E.g., Musk and Bostrom in 2014, Gates in 2015, Kissinger in 2018.)
2. Obama had a conversation with WIRED regarding Bostrom's Superintelligence in 2016, and his administration cited papers by MIRI and FHI in a report on AI the same year. Quoting that report:
...General AI (some
Other comments here have made the case that freedom and transparency, interpreted straightforwardly, probably just make AGI happen sooner and be less safe. Sadly, I agree. The imprint of open-source values in my mind wants to apply itself to this scenario, and finds it appealing to be in a world where that application would work. But I don't think that's the world we're currently living in.
In a better world, there would be a strategy that looks more like representative democracy: large GPU clusters are tightly controlled, not by corporations or by countrie...
My understanding is that they used to have a lot more special-purpose modules than they do now, but their "occupancy network" architecture has replaced a bunch of them. So they have one big end-to-end network doing most of the vision, which hands a volumetric representation over to a collection of smaller special-purpose modules for path planning. But path planning is the easier part (easier to generate synthetic data for, and easier to detect beforehand that something is going wrong and send a take-over alarm).
One question I sometimes see people asking is, if AGI is so close, where are the self-driving cars? I think the answer is much simpler, and much stupider, than you'd think.
Waymo is operating self-driving robotaxis in SF and a few other select cities, without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving but your eyes are laser rangefinders".
Tesla also has self-driving, but it isn't reliable enough to work without close human oversight. Until less than a ...
I don't know how most articles get into that section, but I know, from direct communication with a Time staff writer, that Time reached out and asked for Eliezer to write something for them.
I believe the high-profile names at the top are individually verified, at least, and it looks like there's someone behind the form deleting fake entries as they're noticed. (Eg Yann LeCun was on the list briefly, but has since been deleted from the list.)
When we did Scott's petition, names were not automatically added to the list, but each name was read by me-or-Jacob, and if we were uncertain about one we didn't add it without checking in with others or thinking it over. This meant that added names were staggered throughout the day because we only checked every hour or two, but overall prevented a number of fake names from getting on there.
(I write this to contrast it with automatically adding names then removing them as you notice issues.)
LeCun heard he was incorrectly added to the list, so the reputational damage still mostly occurred.
This is covered by the Value Theory sequence. If I understand correctly, a "fundamental ought" (as you use the phrase) would be a universally compelling argument.
Also, while Amazon AWS is arguably the biggest player in cloud computing generally, I have heard (though not independently vetted) that AWS is rarely used for training cutting-edge LLMs, supposedly because, compared to some other compute providers, Amazon's compute is too geographically distributed and not centralized enough for the purpose of training very large models.
I don't think this is the reason. Rare is the training run that's so big it doesn't fit comfortably in what you can buy in a single Amazon datacenter. I think the real reason is that AWS has significantly larger margins than most cloud providers, since their offering is partially a SaaS offering.
As others have said, if an AI is truly superintelligent, there are many paths to world takeover. That doesn't mean that it isn't worth fortifying the world against takeover; rather, it means that defenses only help if they're targeted at the world's weakest link, for some axis of weakness. In particular that means finding the civilizational weakness with the low bar for how smart the AI system needs to be, and raising the bar there. This buys time in the race between AI capability and AI alignment, and buys the extra time at the endgame when time is most v...
Thanks for the report; that should be fixed now.
(A bunch of custom code related to syncing from fanfiction.net got dropped in the process of migrating from a bespoke server to WP-Engine, and go.php got lost in the shuffle.)
(I wrote this comment for the HN announcement, but missed the time window to be able to get a visible comment on that thread. I think a lot more people should be writing comments like this and trying to get the top comment spots on key announcements, to shift the social incentive away from continuing the arms race.)
On one hand, GPT-4 is impressive, and probably useful. If someone made a tool like this in almost any other domain, I'd have nothing but praise. But unfortunately, I think this release, and OpenAI's overall trajectory, is net bad for the world.
R...
Update: It should be fixed now. If you see any problems with the new server, let us know (replying to this comment will work). If you're still getting a redirect to archive.org, it should fix itself in a few hours when your ISP's DNS cache expires.
"Hippy" is an aesthetic, not a specific idea, and an aesthetic can apply to both true ideas and false ideas. This post names the aesthetic and stops there; it doesn't name any specific hippy idea. This creates distance between evaluating the aesthetic, and any specific idea that could be true or false. And if there's nothing to judge as false, then everything is okay, right? The post then provides a supposed reason for objecting to (unspecified things within) the aesthetic: blind demand for a specific sort of rigor, applied inappropriately.
It feels unfair ...