All of jimrandomh's Comments + Replies

"Hippy" is an aesthetic, not a specific idea, and an aesthetic can apply to both true ideas and false ideas. This post names the aesthetic and stops there; it doesn't name any specific hippy idea. This creates distance between evaluating the aesthetic, and any specific idea that could be true or false. And if there's nothing to judge as false, then everything is okay, right? The post then provides a supposed reason for objecting to (unspecified things within) the aesthetic: blind demand for a specific sort of rigor, applied inappropriately.

It feels unfair ... (read more)

I think it probably meets the definition, but, caveat, it isn't actually out in the relevant sense, so there's some risk that it has a caveat that wasn't on my radar.

P(We invent algorithms for transformative AGI | No derailment from regulation, AI, wars, pandemics, or severe depressions): .8

P(We invent a way for AGIs to learn faster than humans | We invent algorithms for transformative AGI): 1. This row is already incorporated into the previous row.

P(AGI inference costs drop below $25/hr (per human equivalent)): 1. This is also already incorporated into "we invent algorithms for transformative AGI"; an algorithm with such extreme inference costs wouldn't count (and, I think, would be unlikely to be developed in the firs... (read more)

2Ted Sanders4d
Interested in betting thousands of dollars on this prediction? I'm game.

This appears to be someone else's shortform, which was edited so that the shortform container doesn't look like a shortform container anymore.

This is the multiple stages fallacy. Not only is each of the probabilities in your list too low, if you actually consider them as conditional probabilities they're double- and triple-counting the same uncertainties. And since they're all multiplied together, and all err in the same direction, the error compounds.
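A toy numeric sketch of that compounding (the factor values here are hypothetical, chosen only to illustrate the mechanism, not taken from either side of the argument):

```python
# Multiplying several conditional probabilities compounds any shared bias.
# Hypothetical numbers: suppose the "true" value of each of 6 conditional
# factors is 0.9, but each is estimated at 0.7 -- a modest, same-direction error.
def product(xs):
    result = 1.0
    for x in xs:
        result *= x
    return result

true_p = product([0.9] * 6)       # 0.9**6, about 0.531
estimated_p = product([0.7] * 6)  # 0.7**6, about 0.118

print(round(true_p, 3), round(estimated_p, 3))
# Each factor is off by only 0.2, but the final estimate is off by ~4.5x.
```

Each additional conjunctive factor that errs low shrinks the product further, which is why same-direction errors across many multiplied stages bias the total so strongly downward.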

5harfe8d
There is an additional problem where one of the two key principles behind their estimates leads you to pick probability estimates that have some distance from 1 (e.g. by picking at most 0.95). If you build a fully conjunctive model, and you are not that great at extreme probabilities, then you will have a strong bias towards low overall estimates. And you can make your probability estimates even lower by introducing more (conjunctive) factors.
6Ted Sanders8d
What conditional probabilities would you assign, if you think ours are too low?

I don't recall the source, but I do recall having seen a public source saying: The US air force had a problem with pilots getting buzzed by foreign drones, and not reporting the incidents because of stigma around UFOs. An executive decision was made to solve the problem by removing the stigma.

2ChristianKl9d
That seems to be a public justification for why the AATIP was started in 2007. It's worth noting that the US air force didn't want AATIP; Harry Reid forced them to do it because Robert Bigelow encouraged him to do so. AATIP seems to have come up with the term UAP to remove the stigma associated with the term UFO. That does not explain why AATIP reported having found the strange incidents that it did. It certainly does not explain the reporting of a 90-year coverup of programs to retrieve UFOs. That's not the kind of news you would want to produce if you want to reduce stigma.

I can reproduce loss-of-selection on mouseover some of the time on up-to-date Chrome, so, I think probably not browser specific.

Wait, you're running Firefox 88, on Xenial?! Why? What's wrong with you? You're a terrible person.

The main reason WaPo would have delayed is if they wanted additional confirmation/due diligence and didn't have it.

based on the vehicle morphologies and material science testing and the possession of unique atomic arrangements and radiological signatures

That only sounds impressive if you don't think too hard about what it means. It's saying that the fragments are made of a fancy alloy that they can't identify. But every military contractor has materials scientists, and being made of fancy new alloys is completely expected for cutting edge military aircraft.

The thing that has to be explained is why serious intelligence professionals speak of likely non-human origin. 

It's either:

(1) The US military/intelligence community is made up of people who are really crazy.

(2) There are actual aliens

(3) It's a strange disinformation campaign that seems to go counter to the core interests of the military/intelligence community. It's damaging to public and congressional trust in the military and invites oversight. 

(4) There are some really strange turf wars going on in the military/intelligence community.

(5) Ther... (read more)

Reusing a response I made to a previous UFO story, on a mailing list, lightly edited because the same logic still applies.

There's one core truth that you need to understand, and then all the talk of UFOs, videos, and the reactions to them make sense.

The US military has secret aircraft. Other militaries also have secret aircraft. These are kept in reserve for high-stakes operations. For example, in 2011, a previously-unseen model of stealth helicopter crashed in the middle of the raid on Osama bin Laden's compound. Rumor is that the Chinese military go... (read more)

2mako yass11d
I'm wondering whether people within or on the peripheries of these recovery and reverse engineering programs have decided that convincing people that they're UFO recovery programs is beneficial on net. I've seen some people dismiss this possibility, but it seems like they're presuming to know a lot about the strategic landscape that they're not players in.
5lc11d
According to the article, the evidence comes not in the form of video or sensor data, but in recovered portions of or whole aircraft.

There are a few videos taken from fighter-jet sensor packages floating around

The military released some videos, but there's plenty of reporting that it didn't release its best-quality videos, precisely for the reasons you listed. The best-quality videos, taken by fighter jets, would reveal information about the fighter jets that the US military does not want out in the open, so those high-quality videos are classified in a way the videos we have aren't.

The kind of people who are in the know and who are qualified to interpret those vide... (read more)

Ack, I misread that, sorry. Will edit the grandparent comment to remove that part.

There are a few significant things to say about this post. The first is that you ought to read the Metaethics sequence (long version) or Value Theory (abridged version). Knowing how our current values arose is reason for pessimism about whether AIs we create will share our values, about humanity's values in unsteered futures, and about what the values of aliens might look like. Knowing how our current values arose is not something that should move us away from them, or confuse us about which things are good and bad.

The impression I get from this post is th... (read more)

2TAG17d
Why? Does it solve everything? Does it make any good points? I am very unconvinced that we humans have a single coherent set of values, and reading the metaethics sequence did not change my mind -- the claim is assumed, not proven. (A commentator notices the problem: https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right?commentId=pgSokbnCJDWPbCRDC -- no one solves it.) (But you responded by editing out the link to Meaning of Right.) Would that have been true if stated by a Roman slave owner? If his values were wrong then, yours could be now. (A commentator notes the problem -- well, he uses Washington, not a Roman: https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right?commentId=eR5f6SZS3iJydHQsH)

You mix up your tenses in a sneaky way, projecting bad aspects of the past onto the future:

I think this would normally be an astute observation but in my case mixing up tenses has been a persistent source of frustration for my editors. Despite the scolding I get and my efforts to watch out for it, I screw this up constantly (I don't know if this explains it but English is my third language and I mostly learned it in an ad-hoc manner). All I can say now is that the tense mix-up was an inadvertent error on my part. When I talk about which cultural values "will... (read more)

9Kaj_Sotala19d
Do you mean this quote? That's someone criticizing Cartwright's practice of coming up with such excuses, so having the quote is already an argument against Cartwright (and thus slavery). Arguing against the quotation would be arguing for slavery and oppression.

When new users pop up in the moderation dashboard (which happens when they make their first post/comment), we see the HTTP referrer they had the first time they landed on the site. (At domain granularity, not page-level granularity, and only for users who clicked a link, not users who typed the URL into the address bar.) So, if we get a bunch of users coming from YouTube leaving bad comments that would make the site worse, we do have the ability to notice that's what is happening.

That said, I do think that there's a real risk here. Among all the places on the ... (read more)

I think there is some value in exploring the philosophical foundations of ethics, and LessWrong culture is often up for that sort of thing. But, it's worth saying explicitly: the taboo against violence is correct, and has strong arguments for it from a wide variety of angles. People who think their case is an exception are nearly always wrong, and nearly always make things worse.

(This does not include things that could be construed as violence but only if you stretch the definition, like supporting regulation through normal legal channels, or aggressive criticism, or lawsuits. I think those things are not taboo and would support some of them.)

1ArisC21d
Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence.)
1M. Y. Zuo21d
Can you clarify this? I think most people would agree that lawsuits do count as explicitly sanctioned violence beyond some low threshold, especially in a loser-pays jurisdiction. That is the intended purpose of the idea: to let the victor rely on the state's monopoly on violence instead of their private means.

They should be live now. They were live, then temporarily rolled back because a change that deployed in the same operation (not related to reacts) broke something, now they're live again.

1simon22d
Now it seems reverted again, plus I'm seeing a red "Error: TypeError: n is undefined" at the bottom of some top-level comments.

First round of changes based on feedback from this thread. There are a bunch of new reactions, some UI changes, and some small bugfixes. Not being in this changeset does not mean that a suggestion has been rejected; this is just a first pass.

  • New reactions: Agree, Disagree, Obtuse, I'll Reply Later, Not Planning to Respond, I Don't Understand, Non Sequitur, Shaky Premise, Too Many Assumptions, Misrepresentation, Continue, Not Worth the Time
  • The add-reaction button is moved to the bottom-right
  • The reactions display highlights reactions that you yourself made or antireacted to
  • Replaced the icons for Support and Concrete
  • Recapitalized some reaction titles to be in title-case
1Dweomite22d
The new Support icon looks less like a trash can than the one pictured in the OP, but still looks kinda like a trash can to me. Making it taller/narrower might help, or making the top/bottom pieces look more different from the body. Or maybe a 3D view that lets you see that the top is solid rather than hollow. (Disclaimer: I am not an artist.)
4Ben Pace22d
I find it amusing that I can now both agree and disagree with a comment.
1simon23d
When will these changes be live? Or are they already live in some version I am not using? 

You aren't meant to be able to anti-react to a reaction that no one else has applied (but there are some minor bugs that make this not be fully enforced). The tooltip should probably be on the right rather than below; will fix.

2Said Achmiz23d
But this seems bad, then, given the current stable of reactions! I understand it from the standpoint of interaction design, of course—but then it seems like you should add opposite-valence reactions for those reactions which currently make sense as standalone anti-reacts (see my other comments in this thread for some examples).

Currently they aren't sorted at all (so the order is some arbitrary emergent property, which I haven't reverse-engineered but which might be "sort by least recently applied"). I agree that sorting by descending count makes sense and will change it to that.

The intent is that you can post comments on any subject here.

2Gunnar_Zarncke23d
Yes, that was it! Thank you. Open Thread - Jan 2022 [Vote Experiment!] (https://www.lesswrong.com/posts/ywpWMnJmqAkeaDtne/open-thread-jan-2022-vote-experiment)

It's probably a thing with regular vote-score, but I think it's worth the tradeoff of having scores at the top because when there are too many comments to read everything, the score feeds into the decision of what to read vs what to skim.

2mako yass23d
Reactions seem like they're going to be far more likely to be useful for that purpose.

Currently only comments, not posts. This is because it's still experimental, and "change the voting system for comments on just one post" turns out to be a pretty good mechanism for experimenting. If we do make it the default for comments everywhere, then extending it to posts too will be a pretty natural thing to do.

I agree this is a concern. At an earlier stage of this prototype, the reactions were at the top with the rest of the voting buttons; moving them to the bottom was done partially to reduce the chance that you see the reactions before you've read it.

4mako yass24d
So, are you having similar thoughts about votes, too?
3Amarko24d
Perhaps they could be next to the "Reply" button, and fully contained in the comment's container?

I was thinking the latter (but agree that the description left ambiguity there, and will rewrite it).

Subthread for proposing new reactions. Icons for the existing reactions come from The Noun Project, so this is a good place for finding more icons that match the existing ones.

7simon23d
Since I suggested them in another comment (https://www.lesswrong.com/posts/SzdevMqBusoqbvWgt/open-thread-with-experimental-feature-reactions?commentId=3pSoDqnvBvpojrqtG), some potential replacements for "muddled" and "wrong". Here I'll add some possible icons:

  • TL;DR
  • Unnecessarily wordy [and make "Overcomplicated" specific to the content, not the form]
  • Unable to parse
  • Non sequitur
  • Ambiguous (i.e. multiple potential meanings)
  • Disagree with premise(s)
  • Unclear point
  • Misrepresentation [could potentially replace "Strawman" but be more general] (note: I don't want to imply "lie" here, only that the reader thinks there is a mismatch with what is actually going on; maybe this one should be clarified. I drew my own icon since I couldn't find one on The Noun Project whose relevance I liked.)
2DanielFilan24d
Maybe too much semantic content to be a react, but I wanted to reach for "That's a good thing" in response to this comment: https://www.lesswrong.com/posts/SzdevMqBusoqbvWgt/open-thread-with-experimental-feature-reactions?commentId=o5ug4AQCHNB6tL8YG

If this feature is in part meant to address the problems of 1) threads often ending without people knowing why and 2) people feeling bad about receiving certain kinds of criticism or about certain critics because it's costly to both respond and not respond, I would suggest adding the following reactions:

  • I plan to respond later.
  • I'm not planning to respond. (On second thought this could be left out, as it would be implied if someone gave a reaction without also giving "I plan to respond later.")
  • I don't understand.
  • I disagree. (Similar to "wrong" but I th
... (read more)

a week long picket around OpenAI's headquarters.

This is an unexpectedly creative way to screw this up. Planning a protest to be a week long means that most would-be attendees don't know when they should be there, and will show up at a random time in a week-long interval, see that there's no one else there, and leave.

If you want this to be at all successful, you need to pick a specific date and time. It's fine if you're there more often than just then, but please, for crying out loud, don't position yourself as an organizer and then create ambiguity about basic logistical details.

It saddens me a bit to see so little LW commenter engagement (both with this post and with Orthogonal's research agenda in general). I think this is because they're the sort of posts that feel like you're supposed to be engaging with the object-level content in a sophisticated way, and the barrier to entry for that is quite high. Without having much to add about the object-level content of the research agenda, I would like to say: this flavor of research is something the world desperately needs more of. Carado's earlier posts and a few conversations I've h... (read more)

This is a known bug that occurs when all of the first page of comments are on posts that you can't see (because the posts were deleted or moved to drafts).

Rationality isn't the sort of thing that can take positions on things. But many prominent rationalist writers have discussed the subject, and in general, they take a very dim view of lying, in the usual meaning of the term. The relevant aphorism, originally from Steven Kaas and quoted in the sequences here:

Promoting less than maximally accurate beliefs is an act of sabotage. Don't do it to anyone unless you'd also slash their tires.

There are corner cases; the classic thought experiment in philosophy is, if you were hiding Jews in your attic during WW2... (read more)

This is a pretty simple litmus test for whether the US government is awake, in any meaningful sense: does the author of ChaosGPT get a check-in from the DHS or any similar agencies? The AI used in this video is clearly too stupid to actually destroy humanity, but its author is doing something that is, in practice, equivalent to calling up industrial suppliers and asking to buy enriched uranium and timing chips.

All of the additive modifiers that apply (eg +25 karma, for each tag the post has) are added together and applied first, then all of the multiplicative modifiers (ie the "reduced" option) are multiplied together and applied, then time decay (which is multiplicative) is applied last. The function name is filterSettingsToParams.
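A minimal sketch of that ordering (the function and variable names below are illustrative assumptions, not the site's actual code; only the additive-then-multiplicative-then-decay ordering comes from the description above):

```python
def apply_filter_settings(base_score, additive_bonuses, multipliers, time_decay):
    """Illustrative ordering only: additive modifiers are summed and applied
    first, then multiplicative modifiers, then time decay last."""
    score = base_score + sum(additive_bonuses)  # e.g. +25 karma per matching tag
    for m in multipliers:                       # e.g. the "reduced" option
        score *= m
    return score * time_decay                   # time decay is also multiplicative

# Hypothetical example: base 100, one +25 tag bonus, a 0.5 "reduced"
# multiplier, and a 0.8 time-decay factor.
print(apply_filter_settings(100, [25], [0.5], 0.8))  # -> 50.0
```

Because addition happens before multiplication, a tag bonus is scaled down by any "reduced" filter rather than surviving it intact.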

Assuming "lizardman" here is referring to this post, the usage of terminology seems wrong. In that post, "lizardman" is used specifically to mean rare outliers, so under that definition, it's quite impossible for 99.9%+ of the world to be that. It also portrays a particular archetype of unreasonable person, which I think is what you're intending to refer to; but as far as I can tell that archetype is in fact rare.

1at_the_zoo2mo
That post made me write this post, but I'm not sure that I'm referring to the same thing. Basically I mean something like "people whose beliefs or actions are so unreasonable, even on things that they should have thought long and hard about, that they seem to belong to a different species from myself." Like Robin Hanson in this tweet (https://twitter.com/robinhanson/status/1642719316465397760) or Eliezer Yudkowsky when he thought he would singlehandedly solve all the philosophical problems associated with building a Friendly AI (looks like I can't avoid giving examples after all). I'm pretty sure these two belong in the top 0.1 percentile of all humans as far as being reasonable, hence the title.

(Note: This was first posted April 1, but due to a moderation-queue backlog, was delayed until now. Our apologies if that spoils the humor.)

1Adam Zerner2mo
If this was intended as an April Fool's Day joke I think it'd be good to mark it with the April Fool's tag (https://www.lesswrong.com/tag/april-fool-s).

This is not something he said, and not something he thinks. If you read what he wrote carefully, through a pedantic decoupling lens, or alternatively with the context of some of his previous writing about deterrence, this should be pretty clear. He says that AI is bad enough to put a red line on; nuclear states put red lines of lots of things, most of which are nowhere near as bad as nuclear war is.

8dsj2mo
In response to the question, “[Y]ou’ve gestured at nuclear risk. … How many people are allowed to die to prevent AGI?”, he wrote: “There should be enough survivors on Earth in close contact to form a viable reproductive population, with room to spare, and they should have a sustainable food supply. So long as that's true, there's still a chance of reaching the stars someday.” He later deleted that tweet because he worried it would be interpreted by some as advocating a nuclear first strike. I’ve seen no evidence that he is advocating a nuclear first strike, but it does seem to me to be a fair reading of that tweet that he would trade nuclear devastation for preventing AGI.

One of the big differences between decoupling norms and contextualizing norms is that, in practice, it doesn't seem possible to make statements with too many moving parts without contextualizers misinterpreting. Under contextualizing norms, saying "X would imply Y" will be interpreted as meaning both X and Y. Under decoupling norms, a statement like that usually means a complicated inference is being set up, and you are supposed to hold X, Y, and this relation in working memory for a moment while that inference is explained. There's a communication-culture... (read more)

3TekhneMakre2mo
I'm unsure but I suspect that in many cases you're thinking of, this is incorrect. I think people can track implication when it's something that "makes sense" to them, that they care about. I suspect that at least in a certain subset of these cases, what's really happening is this: They believe not-X (in some fashion). You start trying to say "X implies Y". They get confused and are resistant to your statement. When pressed, it comes to light that they are refusing to think about possible worlds in which X is the case. They're refusing because they believe not-X (in some fashion), and it's pointless to think about worlds that are impossible--it won't affect anything because it's unreal, and it's impossible to reason about because it's contradictory. (They wouldn't be able to say all of this explicitly.)
8Sam FM2mo
I think it’s a mistake to qualify this interpretation as an example of following decoupling norms. Deterrence and red lines aren’t mentioned in Eliezer’s comment at all; they’re just extra context that you’ve decided to fill in. That’s generally what people do when they read things under contextualizing norms. Interpreting this comment as a suggestion to consider initiating a nuclear exchange is also a contextualized reading, just with a different context filled in. A highly-decoupled reading, by contrast, would simply interpret “some risk of nuclear exchange” as, well, some unquantified/unspecified risk.

The alternative would have been to embed a small lecture about international relations into the article.

I don't think that's correct, there are cheap ways of making sentences like this one more effective as communication. (E.g. less passive/vague phrasing than "run some risk", which could mean many different things.) And I further claim that most smart people, if they actually spent 5 minutes by the clock thinking of the places where there's the most expected disvalue from being misinterpreted, would have identified that the sentences about nuclear exchang... (read more)

4the gears to ascension2mo
extremely insightful point, and for some reason it seems I deeply disagree with the aggregate point, but I can't figure out why at the moment. strong upvoted though.

Manifold rules permit insider trading, so I'll collect the information bounty on that one.

It does seem an important and useful difference, that the sort of person who complains about Rainbowland is probably prone to starting and escalating fights in general, while the person who has misconceptions about AI is probably about as reasonable as the average person. In most of these cases (with some exceptions), LW is finding itself, not in the role of a superintendent fielding paranoid complaints, but something more like the role of a professor who's struggling to focus on research because there are too many undergraduates.

I think there's an important meta-level point to notice about this article.

This is the discussion that the AI research and AI alignment communities have been having for years. Some agree, some disagree, but the 'agree' camp is not exactly small.  Until this week, all of this was unknown to most of the general public, and unknown to anyone who could plausibly claim to be a world leader.

When I say it was unknown, I don't mean that they disagreed. To disagree with something, at the very least you have to know that there is something out there to disagree... (read more)

Until this week, all of this was [...] unknown to anyone who could plausibly claim to be a world leader.

I don't think this is known to be true.

In fact they had no idea this debate existed.

That seems too strong. Some data points:

1. There's been lots of AI risk press over the last decade. (E.g., Musk and Bostrom in 2014, Gates in 2015, Kissinger in 2018.)

2. Obama had a conversation with WIRED regarding Bostrom's Superintelligence in 2016, and his administration cited papers by MIRI and FHI in a report on AI the same year. Quoting that report:

General AI (some

... (read more)
2Roman Leventov3mo
I don't think that the lack of wide public outreach before was a cold calculation. Such outreach would simply not go through. It wouldn't be published in Time, NYT, or aired on broadcast TV channels. The Overton window has started to open only after ChatGPT and especially after GPT-4. I also don't agree that the FLI letter is a continuation of some deceptive plan. It's toned down deliberately for the purpose of marshalling many diverse signatories who would otherwise probably not sign, such as Bengio, Yang, Mostaque, DeepMind folks, etc. So it's not deception, it's an attempt to find the common ground.

Other comments here have made the case that freedom and transparency, interpreted straightforwardly, probably just make AGI happen sooner and be less safe. Sadly, I agree. The imprint of open-source values in my mind wants to apply itself to this scenario, and finds it appealing to be in a world where that application would work. But I don't think that's the world we're currently living in.

In a better world, there would be a strategy that looks more like representative democracy: large GPU clusters are tightly controlled, not by corporations or by countrie... (read more)

My understanding is that they used to have a lot more special-purpose modules than they do now, but their "occupancy network" architecture has replaced a bunch of them. So they have one big end-to-end network doing most of the vision, which hands a volumetric representation over to the collection of smaller special-purpose modules for path planning. But path planning is the easier part (easier to generate synthetic data for, and easier to detect if something is going wrong beforehand and send a take-over alarm).

One question I sometimes see people asking is, if AGI is so close, where are the self-driving cars? I think the answer is much simpler, and much stupider, than you'd think.

Waymo is operating self-driving robotaxis in SF and a few other select cities, without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task "driving but your eyes are laser rangefinders".

Tesla also has self-driving, but it isn't reliable enough to work without close human oversight. Until less than a ... (read more)

5habryka3mo
My answer to this is quite different. The paradigm that is currently getting very close to AGI is basically having a single end-to-end trained system with tons of supervised learning.  Self-driving car AI is not actually operating in this current paradigm as far as I can tell, but is operating much more in the previous paradigm of "build lots of special-purpose AI modules that you combine with the use of lots of special-case heuristics". My sense is a lot of this is historical momentum, but also a lot of it is that you just really want your self-driving AI to be extremely reliable, so training it end-to-end is very scary.  I have outstanding bets that human self-driving performance will be achieved when people switch towards a more end-to-end trained approach without tons of custom heuristics and code.
2lc3mo
That... Would be hilarious, if true. Do you think we will see self driving cars soon, then?

I don't know how most articles get into that section, but I know, from direct communication with a Time staff writer, that Time reached out and asked for Eliezer to write something for them.

I believe the high-profile names at the top are individually verified, at least, and it looks like there's someone behind the form deleting fake entries as they're noticed. (Eg Yann LeCun was on the list briefly, but has since been deleted from the list.)

When we did Scott's petition, names were not automatically added to the list, but each name was read by me-or-Jacob, and if we were uncertain about one we didn't add it without checking in with others or thinking it over. This meant that added names were staggered throughout the day because we only checked every hour or two, but overall prevented a number of fake names from getting on there.

(I write this to contrast it with automatically adding names then removing them as you notice issues.)

Lecun heard he was incorrectly added to the list, so the reputational damage still mostly occurred.

This is covered by the Value Theory sequence. If I understand correctly, a "fundamental ought" (as you use the phrase) would be a universally compelling argument.

-5Donatas Lučiūnas3mo
-4Tor Økland Barstad3mo
Agreed (more or less). I have pointed him to this post earlier. He has given no signs so far of comprehending it, or even of reading it and trying to understand what is being communicated to him. I'm saying this more directly than I usually would, @Donatas, since you seem insistent on clarifying a disagreement/misunderstanding you think is important for the world, while it seems (as far as I can see) that you're not comprehending all that is communicated to you (maybe due to being so confident that we are the ones who "don't get it" that it doesn't seem worth it to more carefully read the posts that are linked to you, more carefully notice what we point to as cruxes (https://www.lesswrong.com/tag/double-crux), etc.). Edit: I was unnecessarily hostile/negative here.

Also, while Amazon AWS is arguably the biggest player in cloud computing generally, I have heard (though not independently vetted) that AWS is rarely used for training cutting-edge LLMs. Because compared to some other compute providers, Amazon's compute is so geographically distributed and not centralized enough for the purpose of training very large models.

I don't think this is the reason. Rare is the training run that's so big it doesn't fit comfortably in what you can buy in a single Amazon datacenter. I think the real reason is that AWS has significantly larger margins than most cloud providers, since their offering is partially a SaaS offering.

As others have said, if an AI is truly superintelligent, there are many paths to world takeover. That doesn't mean that it isn't worth fortifying the world against takeover; rather, it means that defenses only help if they're targeted at the world's weakest link, for some axis of weakness. In particular that means finding the civilizational weakness with the low bar for how smart the AI system needs to be, and raising the bar there. This buys time in the race between AI capability and AI alignment, and buys the extra time at the endgame when time is most v... (read more)

Thanks for the report; that should be fixed now.

(A bunch of custom code related to syncing from fanfiction.net got dropped in the progress of migrating from a bespoke server to WP-Engine, and go.php got lost in the shuffle.)

(I wrote this comment for the HN announcement, but missed the time window to be able to get a visible comment on that thread. I think a lot more people should be writing comments like this and trying to get the top comment spots on key announcements, to shift the social incentive away from continuing the arms race.)

On one hand, GPT-4 is impressive, and probably useful. If someone made a tool like this in almost any other domain, I'd have nothing but praise. But unfortunately, I think this release, and OpenAI's overall trajectory, is net bad for the world.

R... (read more)

1Noosphere893mo
Going to write this now, but I disagree right now due to differing models of AI risk.
1JNS3mo
When I look at the recent Stanford paper, where they retrained a LLaMA model using training data generated by GPT-3, and some of the recent papers utilizing memory, I get that tingling feeling and my mind goes "combining that and doing .... I could ..." I have not updated for faster timelines yet, but I think I might have to.

Update: It should be fixed now. If you see any problems with the new server, let us know (replying to this comment will work). If you're still getting a redirect to archive.org, it should fix itself in a few hours when your ISP's DNS cache expires.

4localdeity3mo
Problem: If I go to a chapter, e.g. https://hpmor.com/chapter/63, and then I use the dropdown menu from the top to select another chapter, it takes me to e.g. https://hpmor.com/go.php?chapter=36, which is a "Page not found" page.
2Ben Pace3mo
Hurrah!