Do you have any evidence for this?
My prior is that other things are less effective, and you need evidence to show they are more effective, not vice versa.
Not all EAs are longtermists.
Of course. I'm saying it doesn't even get to make that argument, which can sometimes muddy the waters enough to make some odd-seeming causes look at least plausibly effective.
I'm impressed how modern EAs manage to spin any cause into being supposedly EA.
There's just no way that things like this are remotely as effective as, say, GiveWell causes (it wouldn't even meet a much lower bar), and it barely even has longtermist points in its favor that could make me see why there's at least a chance it could be worth it.
EA's whole brand is massively diluted by all these causes, and I don't think they are remotely as effective as other places where your money can go, nor that they help the general message.
It's like people get into EA, reali...
While NVDA is naively the most obvious play (the vast majority of GPU-based AI systems use their chips), I fail to see why you'd expect it to outperform the market, at least in the medium term. Even if you don't believe in the EMH, I assume you acknowledge things can be more or less priced in? Well, NVDA is such an obvious choice that it does seem like all the main arguments for it are priced in, which has helped get it to a PE ratio of 55.
I also don't see OpenAI making a huge dent in MSFT's numbers anytime soon. Almost all of MSFT's price is going to be de...
No, it's the blockchain Terra (with Luna being its main token).
https://en.wikipedia.org/wiki/Terra_(blockchain)
There is little reason to think that's a big issue. A lot of data is semi-tagged, and some of the ML-generated data can be removed either that way or by being detected by newer models. And in general, as long as the 'good' type of data is also increasing, model quality will keep increasing even if you have some extra noise.
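To illustrate that filtering step, here's a minimal sketch in Python; the `source_tag` field and the `detector` callable are hypothetical stand-ins for whatever tagging scheme and newer detector model you'd actually have, not any specific tool:

```python
def filter_training_data(documents, detector, threshold=0.9):
    """Drop documents that are tagged as ML-generated, or that a
    newer detector model flags as likely generated."""
    kept = []
    for doc in documents:
        # Semi-tagged data: drop anything already marked as generated.
        if doc.get("source_tag") == "machine_generated":
            continue
        # Detector-based removal: drop anything a newer model flags.
        if detector(doc["text"]) >= threshold:
            continue
        kept.append(doc)
    return kept
```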
What's the GiveWell/AMF of AI Safety? I'd like to occasionally donate. In the past I've only done so for MIRI a few times. A quick googling fails to return anything useful in the top results, which is odd given how much seems to be written on LW/EA and other forums on the subject every week.
In Bulgaria (where Cyrillic was invented) writing in the Latin alphabet is common (especially before Cyrillic support was good) but frowned upon, as it is considered uneducated and ugly. The way we do it is to just replace each letter with the equivalent Latin letter one to one and do whatever with the few that don't fit (e.g. just use y for ъ, though some might use a; ч is just ch, etc.). So молоко is just moloko. Водка is vodka. Стол is stol, etc. This is also exactly how it works on my keyboard with the phonetic layout.
In my experience, everyone else who uses Cyrillic online gets it when you write like that, though nowadays it's rarer.
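For what it's worth, the one-to-one scheme described above is simple enough to sketch in a few lines of Python; the exact mapping below is an illustrative guess (only y for ъ and ch for ч come from the examples given), not an official standard:

```python
# Illustrative Bulgarian Cyrillic -> Latin mapping; the ъ -> y and
# ч -> ch choices come from the comment above, the rest are assumptions.
TRANSLIT = {
    'а': 'a', 'б': 'b', 'в': 'v', 'г': 'g', 'д': 'd', 'е': 'e',
    'ж': 'j', 'з': 'z', 'и': 'i', 'й': 'i', 'к': 'k', 'л': 'l',
    'м': 'm', 'н': 'n', 'о': 'o', 'п': 'p', 'р': 'r', 'с': 's',
    'т': 't', 'у': 'u', 'ф': 'f', 'х': 'h', 'ц': 'c', 'ч': 'ch',
    'ш': 'sh', 'щ': 'sht', 'ъ': 'y', 'ь': 'y', 'ю': 'yu', 'я': 'ya',
}

def to_latin(text: str) -> str:
    # Characters without a mapping (spaces, punctuation) pass through.
    return ''.join(TRANSLIT.get(ch, ch) for ch in text.lower())

print(to_latin('молоко'), to_latin('водка'), to_latin('стол'))
# -> moloko vodka stol
```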
I've been considering for years that I should write more, and save more of my messages and activities, purely so I can constrain the mindspace for a future AI to recreate a version of me as close to my current self as the me of years ago is. As far as I can tell, this is fairly low effort, and the more information you have, the closer you can get.
I just don't see an obvious refutation: why would an advanced AI, optimizing to create a person who would with the highest probability write/do/etc. all the things I have, be that different from me?
A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those who start off fairly stable.
Why are so many resources being sunk into this specifically? I just don't understand how it makes sense, what the motivation is and how they arrived at the idea. Maybe there is a great explanation and thought process which I am missing.
From my point of view, there is little demand for it and the main motivation might plausibly have been "we want to say we've published a book" rather than something that people want or need.
Having said that, I'd rather get an answer to my initial comment (why it makes sense to you/them) than have to give reasons why I don't see how it makes sense.
They've sold 1000 copies. I'm surprised, too!
We have written some things about our motivation on this, though I don't think we've been fully comprehensive by any means (since that itself would have increased the cost of the vote a good amount). Here are the posts that we've written on the review and the motivation behind it:
The first post includes more of our big-picture motivation for this. Here are some of the key quotes:
In his LW 2.0 Strategic Overview, ha
...Thanks for the reply. That seems like a sensible position.
It sounds like maybe you were less involved in this than some of the 7 (is that right?) other employees/admins, so I'm very curious to hear their take, too.
Printing costs are hardly the only, or even the main, issue, and I hadn't even mentioned them. You're right, though: those costs make the insistence on publishing a book make even less sense.
I’m confused by this. Why would only voters be interested in the books?
Because I doubt there are all that many more people interested in these than the number of voters. Even at 1000, it doesn't seem like a book makes all that much sense. In fact, I still don't get why turning them into a book is even being considered.
Print-on-demand books aren't necessarily very expensive: I've made board books for my friend's son in print runs of one or two for like thirty bucks per copy. If the team has some spare cash and someone wants to do the typesetting, a tiny print run of 100 copies could make sense as "cool in-group souvenir", even if it wouldn't make sense as commercial publishing.
I hope they are buying 50+ books each; otherwise I don't see how the book part is remotely worth it.
As a data point, I did not vote, but if there is a book, I will almost certainly buy a copy if it is reasonably priced, i.e. a similar price to the first two volumes of R:A-Z ($6-8).
So, my question is - do the organizers think it was worth it? And if yes, do you think it is worth it enough for publishing in a book? And if yes to both - what would failure have looked like?
These are really excellent questions. The OP mentions the intention to "review the review" in coming weeks; there will be posts about this, so hang tight. Obviously the whole project had very high costs, so we have to think carefully through whether the benefits justify them and whether we should continue the Review process in future years. Speaking for myself, it's n
...It seems like very few people voted overall if the average is "10-20" voters per post. I hope they are buying 50+ books each; otherwise I don't see how the book part is remotely worth it.
I'm confused by this. Why would only voters be interested in the books? Also, this statement assumes that you have to sell 500-1000 books for it to be worth it; what's the calculation for the value of a book sold vs. the cost of making the books?
...The voting was broken in multiple ways - you could spend as many points as possible, but instead of a cut-off, your vote was just c
He has been trying to do it for years and failed. The first time I read his attempts at doing that, years ago, I also assigned a high probability of success. Then 2 years passed and he hadn't done it, then another 2 years...
You have to adjust your estimates based on your observations.
I have a bunch of comments on this:
I've always thought that his failed attempts at researching weight loss and applying what he learned were a counterexample to how applicable LW/EY rationality is. Glad to see he solved it when it became more important.
several people’s reaction was, “Why i
I think of CFAR as having "forked the LW epistemological codebase", and then going on to do a bunch of development in a private branch. I think a lot of issues from the past few years have come from disconnects between people who have been using 'the private beta branch' and people using the classic 'LessWrong 1.0 epistemological framework.'
This rings true, and I like the metaphor. However, you seem to imply that the Open Source original branch is not as good as the private fork, pushed by a handful of people with a high turnover rate, which could be true but is harder to agree with.
So, the thing I actually said in the other thread was:
...Naively attempting to merge the latest dev branch back into "Sequences Era LessWrong" results in merge conflicts, and it's unclear when this is because:
- "oh, we just haven't written up the right explanations to make sure this was backwards compatible", vs
- "oh, these were just some ideas we were experimenting with that didn't pan out" vs
- "oh, this integration-test-failure is actually an indicator that something was wrong with the idea."
- "oh, actually, it's original LessWrong sequences that are wrong here, no
not a real error, comment, post or karma.
I assume that means you print them? Because I find PDFs to be the worst medium compared to MOBI, EPUB, or HTML, mainly because I usually read from my phone.
All you were saying was "That’s not the question that was asked, so … no.", so I'm sorry if I had to guess and ask. Not sure what I've missed by 'not focusing'.
I see you've added both an edit after my comment and then this response as well, which is a bit odd.
Meditations on Moloch is top of the list by a factor of perhaps four
Is that post really that much more relevant than everything else for TEACHING rationality? How come?
"Current system < OP's system"
I think Tenoke thinks that we are talking about the usual post and comment vote system.
Isn't that what you were going to use initially or at least the most relevant system here to compare to?
Seems better than the current system, which as far as I can tell is just 10 if-statements that someone chose without much reason to think it makes sense.
I know you do follow-ups with most/all CFAR attendees. Do you have any aggregate data from the questionnaires? How much do they improve on the outcomes you measure, and on which ones?
Are there that many social skills mentors who take on students for that to be a more realistic course of action than finding books? Wouldn't you need solid social skills to convince one to mentor you in the first place?
I mean, he uses the exact same phrase I do here, but yes, I see your point.
Vaniver is saying that the personal stuff wasn't taken into account when banning him and that the epistemic concerns were enough. From the OP:
We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.
but then the epistemic concerns seem to be purely based on stuff from the "other allegations" part.
And honestly, the quality of ...
What I am asking is, why pretend it is the content and that the "other allegations" have no part?
As mentioned in a sibling comment, I am trying to establish the principle that 'promoting reasoning styles in a way we think is predatory' can be a bannable offense, independent of whether or not predation has obviously happened, in part because I think that's part of having a well-kept garden and in part so that the next person in ialdabaoth's reference class can be prevented from doing significant harm. Simply waiting until someone has been exiled doesn't do
...it's not like there's anything I can do about it anyway.
It's sad it's gotten that bad with the current iteration of LW. Users here used to think they had a chance at influencing how things are done, and plenty of things were heavily community-influenced despite there being a benevolent dictator for life.
He is using this comment to show the 'epistemic concerns' side specifically, and claiming the personal stuff was separate.
This is the specific claim.
We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.
Maybe I'm confused about what you mean by "the personal stuff". My impression is that what I would consider "the personal stuff" is central to why ialdabaoth is considered to pose an epistemic threat: he has (allegedly) a history of manipulation, which makes it more likely that any given thing he writes is intended to deceive or manipulate. Which is why jimrandomh said:
The problem is, I think this post may contain a subtle trap, and that understanding its author, and what he was trying to do with this post, might actually be key to...
jimrandomh's comment, linked in the OP, is the current best explanation of the epistemic concerns.
Excluding the personal stuff, this comment is just a somewhat standard LW critique of a LW post (which has less karma than the original post, fwiw). If this is the criterion for an 'epistemic concerns' ban, then you must've banned hundreds of people. If you haven't, you are clearly banning him for the other reasons; I don't know why you insist on being dishonest about it.
I read this post where you keep claiming you are banning him for 'epistemic concerns', but then link to 0 examples and mostly talk about some unrelated real-life thing, which you also give 0 real explanation for.
The comments here mention a sex crime, but the OP doesn't. If that's what happened, why vaguebook, stay silent for a year, and claim the ban is for 'epistemic concerns'? Who else have you banned for 'epistemic concerns' - nobody?
Honestly, after reading everything here I do have major concerns about ialdabaoth's character, but the main epistemic concerns I have are about OP presenting this dishonestly after a year of silence.
Is there any explanation of the current Karma System? The main thing I can find is this. (You need to scroll to 'The karma system'; for some reason you can click on subsections to go to them, but you can't link to them.)
Also why do I see a massive message that says 'habryka's commenting guidelines' when I am writing this comment, but there are no guidelines or link? Is this just a weird extra ad for your own name?
Even if they only work in modern society, one of the millions of modern people who wanted financial, social, and romantic success before you would have come up with them.
Nobody is claiming that everything around rationalist circles is completely new or invented by them. It has often looked to me more like separating the more and less useful stuff, with various combinations of bottom-up and top-down approaches.
Additionally, I'd like to also identify as someone who is definitely in a much, much better place now because they discovered LW almost a decad...
There are definitely some decent posts, but calling a couple of good posts an official LessWrong Sequence still seems to cheapen what that used to mean.
Not to mention that I read this on Facebook, so I barely associate it with here.
Note also that you can view this on GreaterWrong.
Thanks, GreaterWrong seems to still be an improvement over the redesign for me. I'm back to using it.
I got an email about this, so I decided to check if the quality of content here has really increased enough to claim to have material for a new Sequence (I stopped coming here after the, in my opinion, botched execution of LW 2.0).
I checked the posts, and I don't see anywhere near enough quality content to publish something called a Sequence, without cheapening the previous essays and what 'The Sequences' means in a LessWrong context.
Does the Quantum Physics Sequence hold up?
It's been the better part of a decade since I read it (and I knew a lot less back then), and recently I've been curious about getting a refresher. I am not going to pick up a textbook or spend too much time on this, but if it doesn't hold up what alternative/supplementary resources would you recommend (the less math-heavy the better, although obviously some of the math is inescapable)?
I actually learnt quantum physics from that sequence, and I'm now a mathematician working in Quantum Computing. So it can't be too bad!
The explanation of quantum physics is the best I've seen anywhere. But this might be because it explained it in a style that was particularly suited to me. I really like the way it explains the underlying reality first and only afterwards explains how this corresponds with what we perceive. A lot of other introductions follow the historical discovery of the subject, looking at each of the famous experiments i...
I also want to know this.
(This is part of a more general question: how much of the science cited in the Sequences holds up? Certainly nearly all the psychology has to be either discarded outright or tagged with “[replication needed]”, but what about the other stuff? The mockery of “neural networks” as the standard “revolutionary AI thing” reads differently today; was the fact that NNs weren’t yet the solution to (seemingly) everything essential to Eliezer’s actual points, or peripheral? How many of the conclusions drawn in the Sequences are based on facts
...I haven't gotten the voting link (I've now emailed to ask), but I am sadly already pretty negatively surprised at how lesserwrong.com has turned out (props to the maker of greaterwrong, though) and very much hope that it doesn't completely replace LessWrong.com. Even if LessWrong.com is just killed and made read-only (since after all the efforts to migrate people here, it is even more unlikely that the original lesswrong will get any new use), that's a better outcome for me.
I wouldn't even post this, but I hear a lot more people sharing the same opinion (selection effects apply), though (selection effects again) few of them are here to actually say it.
Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else had). I am glad you realized it too, I guess. Even things like starting with a bunch of questions about the new LessWrong-inspired site and the spacing between words were off, let alone the things you mention.
I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if Yvain gave up on doing it), because half-hearted attempts like this probably ...
So there's no way for us to login with our regular accounts before the launch? Is it scheduled for anytime soon?
I'd hate to keep seeing all the constant promotion for your site without being able to check it out (since I am not really up for using a temporary account).
The fact that you engage with the article and share it might suggest to the author that he did everything right.
True, but this is one of the less bad articles that have Terminator references (as it makes a bit more sense in this specific context), so I mind less that I am sharing it. It's mostly significant insofar as being the one I saw today that prompted me to make a template email.
The idea that your email will discourage the author from writing similar articles might be mistaken.
I can see it having no influence on some journalist, but again
...I am n
After reading yet another article which mentions the phrase 'killer robots' 5 times and has a photo of the Terminator (and RoboCop as a bonus), I've drafted a short email asking the author to stop using this vivid but highly misleading metaphor.
I'm going to start sending this same email to other journalists who do the same from now on. I am not sure how big the impact will be, but now that the email is already drafted, sending it to new people is pretty low effort, and there's the potential that some journalists will think twice before referencing Terminator in...
At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine learning experts being involved.
At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.
Not in itself, sure, but yeah, there was the bit about the progress made so you won't need an ML engineer to develop the right net to solve a problem. However, there was also the bit where they have nets doing novel research ...
Karpathy mentions offhand in this video that he thinks he has the correct approach to AGI but doesn't say what it is. Before that he lists a few common approaches, so I assume it's not one of those. What do you think he is suggesting?
P.S. If this makes you worry that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.
More quality content (either in terms of discussions or actual posts).
P.S. I do see how that might not be especially helpful.
What is the latest time that I can sign up and realistically expect that there'll be spaces left? I am interested, but I can't really commit 10 months in advance.
Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.
If someone is going to turn away at the first sight of an unknown term, then they have no chance of lasting here (I mean, imagine what'll happen when they see the Sequences).
It seems quite different. The main argument in that article is that climate change wouldn't make the lives of readers' children much worse or shorter, and that's not the case for AI.