All of Tenoke's Comments + Replies

It seems quite different. The main argument in that article is that Climate Change wouldn't make the lives of readers' children much worse or shorter, and that's not the case for AI.

Do you have any evidence for this?

My prior is that other things are less effective, and you need evidence to show they are more effective, not vice versa.

Not all EAs are longtermists.

Of course. I'm saying it doesn't even get to make that argument, which can sometimes muddy the waters enough to make some odd-seeming causes look at least plausibly effective.

-1 · B Jacobs · 2mo
Appeal to presuppositions always feels weird to me. A socialist could just as easily say 'my priors say the opposite'. In any case, you made a claim of comparison, not me; why is the burden of proof suddenly on me? I'm trying to explain the scientific literature on co-ops, not persuade you of some scam.

I'm impressed by how modern EAs manage to spin any cause into being supposedly EA.

There's just no way that things like this are remotely as effective as, say, GiveWell causes (it wouldn't even meet a much lower bar), and it barely even has longtermist points in its favour that could make me see why there's at least a chance it could be worth it.

EA's whole brand is massively diluted by all these causes, and I don't think they are remotely as effective as other places where your money can go, nor that they help the general message.

It's like people get into EA, reali... (read more)

7 · gjm · 2mo
The article mentions EA exactly twice. One is to quote the "80,000 hours" figure. The other is at the end, suggesting that EA organizations should consider adopting the sort of structure the article argues for. Neither of these things claims, or implies, or even suggests, that shifting firms to a worker-cooperative model is an "EA" cause in the sense of being a more effective thing to do with money than, say, feeding starving poor people or preventing cheaply-preventable disease or (for those who favour such things) trying to increase the probability that some time in the future there are a billion trillion gazillion happy satisfied productive people. (I don't know whether Ben is correct about worker cooperatives being a better organizational structure in general. I don't know whether EA organizations are similar enough to other businesses that this would indicate it's a good structure for them. But if it is a good structure for them, they should consider using it even if persuading others to adopt it isn't an efficient use of money.)
2 · B Jacobs · 2mo
Do you have any evidence for this? Not all EAs are longtermists.

While NVDA is naively the most obvious play (the vast majority of GPU-based AI systems use their chips), I fail to see why you'd expect it to outperform the market, at least in the medium term. Even if you don't believe in the EMH, I assume you acknowledge things can be more or less priced in? Well, NVDA is such an obvious choice that it does seem like all the main arguments for it are priced in, which has helped push it to a P/E ratio of 55.

 

I also don't see OpenAI making a huge dent in MSFT's numbers anytime soon. Almost all of MSFT's price is going to be de... (read more)

1 · 25Hour · 2mo
I actually think you can get an acceptable picture of whether something is priced in by reading stock analysts on the topic, since one useful thing you can get from them is a holistic perspective of what is on/off the radar of finance types, and what they perceive as important. Having done this for various stocks, I actually do not think LLM-based advances are on anyone's radar, and I do not believe they are priced in meaningfully.

No, it's the blockchain Terra (with Luna being its main token).

 

https://en.wikipedia.org/wiki/Terra_(blockchain)

2 · philh · 3mo
Thanks!

There is little reason to think that's a big issue. A lot of data is semi-tagged, and some of the ML-generated data can be removed either that way or by being detected by newer models. And in general, as long as the 'good' type of data is also increasing, model quality will keep increasing even if you have some extra noise.

What's the GiveWell/AMF of AI Safety? I'd like to occasionally donate. In the past I've only done so for MIRI a few times. A quick Google search fails to return anything useful in the top results, which is odd given how much seems to be written on LW/EA and other forums on the subject every week.

2 · Raemon · 3mo
Larks' end-of-year reports. (Looking at their user profile [https://www.lesswrong.com/users/larks] is probably your best bet.)

In Bulgaria (where Cyrillic was invented) writing in the Latin alphabet is common (especially before Cyrillic support was good) but frowned upon, as it is considered uneducated and ugly. The way we do it is just to replace each letter with the equivalent Latin letter one to one and do whatever with the few which don't fit (e.g. just use y for ъ but some might use a, ч is just ch, etc.). So молоко is just moloko. Водка is vodka. Стол is stol, etc. This is also exactly how it works on my keyboard with the phonetic layout.
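(A minimal sketch of that one-to-one replacement, for illustration only: the mapping table below is my own assumption about the usual informal scheme, anchored to the examples above, and real usage varies from person to person.)

```python
# A minimal sketch of the informal one-to-one Cyrillic-to-Latin replacement
# described above. The exact table is an assumption (usage varies); ъ -> y
# and ч -> ch follow the comment, digraphs like ж -> zh are a common choice.
CYR_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e", "ж": "zh",
    "з": "z", "и": "i", "й": "y", "к": "k", "л": "l", "м": "m", "н": "n",
    "о": "o", "п": "p", "р": "r", "с": "s", "т": "t", "у": "u", "ф": "f",
    "х": "h", "ц": "ts", "ч": "ch", "ш": "sh", "щ": "sht", "ъ": "y",
    "ь": "y", "ю": "yu", "я": "ya",
}

def transliterate(text: str) -> str:
    # Replace each Cyrillic letter with its Latin equivalent, one to one,
    # leaving anything not in the table (spaces, punctuation) unchanged.
    return "".join(CYR_TO_LAT.get(ch.lower(), ch) for ch in text)

print(transliterate("молоко"), transliterate("водка"), transliterate("стол"))
# -> moloko vodka stol
```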

Everyone else who uses Cyrillic online seems to get it when you write like that, in my experience, though nowadays it's rarer.

4 · Viliam · 3mo
I agree that this is the simplest way: algorithmically, but ultimately also for humans. That's where the dozen different standards come from. My first impression is that the Bulgarian language uses fewer characters than Russian and Ukrainian. Not fewer sounds, though; it just doesn't have characters like "ё" or "ї", which represent pairs of other existing characters anyway. (Though you still have "ю" and "я", which work the same way.)

I've been considering for years that I should write more, and save more of my messages and activities, purely so I can constrain the mindspace for a future AI to recreate a version of me that approximates my current self as closely as the me of years ago does. As far as I can tell, this is fairly low effort, and the more information you have, the closer you can get.

I just don't see an obvious argument for why a person created by an advanced AI, optimizing so that they would with the highest probability write/do/etc. all the things I have, would be that different from me.

A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those that start off fairly stable.

Why are so many resources being sunk into this specifically? I just don't understand how it makes sense, what the motivation is and how they arrived at the idea. Maybe there is a great explanation and thought process which I am missing.

From my point of view, there is little demand for it and the main motivation might plausibly have been "we want to say we've published a book" rather than something that people want or need.

Having said that, I'd rather get an answer to my initial comment (why it makes sense to you/them) than have to give reasons why I don't see how it makes sense.

They've sold 1000 copies. I'm surprised, too!

8 · Zack_M_Davis · 3y
Sure. I explained my personal enthusiasm for the Review in a November comment [https://www.lesswrong.com/posts/qXwmMkEBLL59NkvYR/the-lesswrong-2018-review#d4RrEizzH85BdCPhE].

We have written some things about our motivation on this, though I don't think we've been fully comprehensive by any means (since that itself would have increased the cost of the vote a good amount). Here are the posts that we've written on the review and the motivation behind it: 

The first post includes more of our big-picture motivation for this. Here are some of the key quotes: 

Quotes

In his LW 2.0 Strategic Overview, ha

... (read more)

Thanks for the reply. That seems like a sensible position.

It sounds like maybe you were less involved in this than some of the 7 (is that right?) other employees/admins, so I'm very curious to hear their take, too.

2 · Ruby · 3y
There are five people on the team. I wasn't the most involved, but I was still very involved. But you'll hear from all of us soon, don't you worry.

Printing costs are hardly the only or even main issue, and I hadn't even mentioned them. You are right though, those costs make the insistence on publishing a book make even less sense.

6 · Zack_M_Davis · 3y
What, in your view, is the main issue? Other than printing/distribution costs, the only other problem that springs to mind is the opportunity cost of the labor of whoever does the design/typesetting, but I don't think either of us is in a good position to assess that. What bad thing happens if the people who run a website also want to print a few paper books?

I’m confused by this. Why would only voters be interested in the books?

Because I doubt there are all that many more people interested in these than the number of voters. Even at 1000 it doesn't seem like a book makes all that much sense. In fact, I still don't get why turning them into a book is even considered.

Print-on-demand books aren't necessarily very expensive: I've made board books for my friend's son in print runs of one or two for like thirty bucks per copy. If the team has some spare cash and someone wants to do the typesetting, a tiny print run of 100 copies could make sense as "cool in-group souvenir", even if it wouldn't make sense as commercial publishing.

  1. It seems like very few people voted overall if the average is "10-20" voters per post. I hope they are buying 50+ books each; otherwise I don't see how the book part is remotely worth it.
  2. The voting was broken in multiple ways - you could spend as many points as possible, but instead of a cut-off, your vote was just cast out due to the organizers' mistake in allowing it.
  3. The voting was broken in the way described in the post, too.
  4. People didn't understand how the voting worked (look at the few comments here), so they didn't really even manage to vote in the wa
... (read more)

I hope they are buying 50+ books each; otherwise I don’t see how the book part is remotely worth it.

As a data point, I did not vote, but if there is a book, I will almost certainly be buying a copy of it if it is reasonably priced, i.e. a similar price to the first two volumes of R:A-Z ($6-8).

So, my question is - do the organizers think it was worth it? And if yes, do you think it is worth it enough to publish in a book? And if yes to both - what would failure have looked like?

These are really excellent questions. The OP mentions the intention to "review the review" in coming weeks; there will be posts about this, so hang tight. Obviously the whole project had very high costs, so we have to think carefully through whether the benefits justify them and whether we should continue the Review process in future years. Speaking for myself, it's n

... (read more)

It seems like very few people voted overall if the average is "10-20" voters per post. I hope they are buying 50+ books each; otherwise I don't see how the book part is remotely worth it.

I'm confused by this. Why would only voters be interested in the books? Also, this statement assumes that you have to sell 500-1000 books for it to be worth it. What's the calculation for the value of a book sold vs. the cost of making the books?

The voting was broken in multiple ways - you could spend as many points as possible, but instead of a cut-off, your vote was just c

... (read more)

He has been trying to do it for years and failed. The first time I read his attempts at doing that, years ago, I also assigned a high probability of success. Then 2 years passed and he hadn't done it, then another 2 years...

You have to adjust your estimates based on your observations.
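(As a hypothetical illustration of that kind of updating, not something from the original exchange: a simple Beta-Binomial sketch of how an optimistic estimate should fall as failed two-year windows accumulate. The prior and the window counts are made up.)

```python
# Hypothetical illustration of adjusting an estimate after repeated failures:
# a Beta-Binomial update on P(goal achieved in a given 2-year window).
from fractions import Fraction

alpha, beta = Fraction(4), Fraction(1)   # optimistic prior: mean 0.8
for window in range(1, 4):               # three 2-year windows, all failures
    beta += 1                            # each failure counts against success
    print(f"after {2 * window} years: "
          f"P(success in next window) ~ {float(alpha / (alpha + beta)):.2f}")
# Prints 0.67, 0.57, 0.50 -- the prior of 0.80 gets pulled down by observation.
```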

I have a bunch of comments on this:

  1. I really liked the bit. Possibly because I've been lowkey following his efforts.
  2. He looks quite good, and I like the beard on him.
  3. ..

I've always thought that his failed attempts at researching weight loss and applying what he learned were a counterexample to how applicable LW/EY rationality is. Glad to see he solved it when it became more important.

  1. Eliezer clearly gets too much flak in general, and especially in this case. It's not like I haven't criticised him, but come on.
  2. several people’s reaction was, “Why i

... (read more)
I think of CFAR as having "forked the LW epistemological codebase", and then going on to do a bunch of development in a private branch. I think a lot of issues from the past few years have come from disconnects between people who have been using 'the private beta branch' and people using the classic 'LessWrong 1.0 epistemological framework.'

This rings true, and I like the metaphor. However, you seem to imply that the Open Source original branch is not as good as the private fork, pushed by a handful of people with a high turnover rate, which could be true but is harder to agree with.

So, the thing I actually said in the other thread was:

Naively attempting to merge the latest dev branch back into "Sequences Era LessWrong" results in merge conflicts, and it's unclear when this is because:

  • "oh, we just haven't written up the right explanations to make sure this was backwards compatible", vs
  • "oh, these were just some ideas we were experimenting with that didn't pan out" vs
  • "oh, this integration-test-failure is actually an indicator that something was wrong with the idea."
  • "oh, actually, it's original LessWrong sequences that are wrong here, no
... (read more)

not a real error, comment, post or karma.

I assume that means you print them? Because I find PDFs to be the worst medium, compared to MOBI, EPUB or HTML - mainly because I usually read from my phone.

2 · ozziegooen · 3y
I personally kind of like PDFs. PDF Expert on the iPad is pretty great; it crops things if you want, and I find PDFs good for annotation. My impression is that a lot of academics like PDFs for similar reasons (there are at least some valid reasons why they are popular). There are also other programs that read PDFs aloud, which are kinda nice, though I'm sure similar exists for EPUB/MOBI.

All you were saying was "That’s not the question that was asked, so … no.", so I'm sorry if I had to guess and ask. Not sure what I've missed by 'not focusing'.

I see you've added both an edit after my comment and then this response as well, which is a bit odd.

4 · Duncan_Sabien · 3y
In general, if you don't understand what someone is saying, it's better to ask "what do you mean?" than to say "are you saying [unrelated thing that does not at all emerge from what they said]??" with double punctuation.
1 · Duncan_Sabien · 3y
I'm not saying that, either. I request that you stop jumping to wild conclusions and putting words in people's mouths, and focus on what they are actually saying.
9 · Ben Pace · 3y
In other threads on this post, Brienne and others describe themselves as doing research, so CFAR seems to be doing both. Math research and teaching math are a bit different. Although I am also interested to know of SSC posts that were helpful for developing curriculum.

Meditations on Moloch is top of the list by a factor of perhaps four

Is that post really that much more relevant than everything else for TEACHING rationality? How come?

5 · Duncan_Sabien · 3y
That's not the question that was asked, so ... no. Edit: more helpfully, I found it valuable for thinking about rationality and thinking about CFAR from a strategic perspective—what it was, what it should be, what problems it was up against, how it interfaced with the rest of society.
I think Tenoke thinks that we are talking about the usual post and comment vote system.

Isn't that what you were going to use initially, or at least the most relevant system here to compare to?

2 · habryka · 3y
No, we never really planned on using the existing voting system for deciding on the 2018 review sequence and book. I agree it's a reasonable thing to compare the system to, and we considered it for a bit, but I don't think it's very well-suited to the task (mostly because it doesn't allow individual users to express enough variance in their assessments).

Seems better than the current system, which as far as I can tell is just 10 if-statements that someone chose without much reason to think they make sense.

4 · Raemon · 3y
Not sure I parse this sentence. Could you explain in different words?

I know you do follow-ups with most/all CFAR attendees. Do you have any aggregate data from the questionnaires? How much do they improve on the outcomes you measure, and on which ones?

Are there that many social skills mentors who take on students for that to be a more realistic course of action than finding books? Wouldn't you need solid social skills to convince one to mentor you in the first place?

2 · ChristianKl · 3y
There are different kinds of social skills. If your social skill situation is such that you appear to everyone like an asshole, it will be hard to find someone to mentor you. If, however, you seem like a nice guy who wants the best but has no charisma and is willing to learn, there's no barrier to finding someone to help you.

I mean, he uses the exact same phrase I do here but yes, I see your point.

Vaniver is saying that the personal stuff wasn't taken into account when banning him and that epistemic concerns were enough. From the OP:

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

but then the epistemic concerns seem to be purely based on stuff from the "other allegations" part.

And honestly, the quality of ... (read more)

What I am asking is, why pretend it is the content and that the "other allegations" have no part?

As mentioned in a sibling comment, I am trying to establish the principle that 'promoting reasoning styles in a way we think is predatory' can be a bannable offense, independent of whether or not predation has obviously happened, in part because I think that's part of having a well-kept garden and in part so that the next person in ialdabaoth's reference class can be prevented from doing significant harm. Simply waiting until someone has been exiled doesn't do

... (read more)
it's not like there's anything I can do about it anyway.

It's sad it's gotten that bad with the current iteration of LW. Users here used to think they had a chance at influencing how things are done, and plenty of things were heavily community-influenced despite having a benevolent dictator for life.

7 · Vaniver · 3y
That is... not how I would characterize the days when Eliezer was the primary moderator [https://www.lesswrong.com/posts/bBckx3K4SxwfeDsTm/who-owns-lesswrong#eQs4BHNLdGWWAms5v].
2 · habryka · 3y
I think you are misunderstanding the comment above. As moridinamael says, this is about the counterfactual in which the moderation team goes crazy for some reason, which I think mostly bottoms out in where the actual power lies. If Eliezer decides to ban everyone tomorrow, he was always able to do that, and I don't think anyone would really have the ability to stop him now (since MIRI still owns the URL and a lot of the data). This has always been the case, and if anything is less the case now, but in either case is a counterfactual I don't think we should optimize too much for. 
7 · moridinamael · 3y
I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.

He is using this comment to show the 'epistemic concerns' side specifically, and claiming the personal stuff was separate.

This is the specific claim:

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

Maybe I'm confused about what you mean by "the personal stuff". My impression is that what I would consider "the personal stuff" is central to why ialdabaoth is considered to pose an epistemic threat: he has (allegedly) a history of manipulation which makes it more likely that any given thing he writes is intended to deceive or manipulate. Which is why jimrandomh said:

The problem is, I think this post may contain a subtle trap, and that understanding its author, and what he was trying to do with this post, might actually be key to
... (read more)
jimrandomh's comment, linked in the OP, is the current best explanation of the epistemic concerns.

Excluding the personal stuff, this comment is just a somewhat standard LW critique of a LW post (which has less karma than the original post, fwiw). If this is the criterion for an 'epistemic concerns' ban, then you must've banned hundreds of people. If you haven't, you are clearly banning him for the other reasons; I don't know why you insist on being dishonest about it.

4 · gjm · 3y
Why would it make sense to "exclude the personal stuff"? Isn't the personal stuff the point here?

I read this post where you keep claiming you are banning him for 'epistemic concerns', but then you link to zero examples and mostly talk about some unrelated real-life thing, which you also give no real explanation for.

The comments here mention a sex crime, but the OP doesn't. If that's what happened, why vaguebook, stay silent for a year, and lie that the ban's for 'epistemic concerns'? Who else have you banned for 'epistemic concerns' - nobody?

Honestly, after reading everything here I do have major concerns about ialdabaoth's character, but the main epistemic concerns I have are about OP presenting this dishonestly after a year of silence.

9 · clone of saturn · 3y
My understanding is that the epistemic concern is "after writing the affordance widths post, he would tell young women he needed to do BDSM stuff they weren't comfortable with in order to stay within his affordance width." And similar things for some of his other posts. I'm not sure why the OP was so vague about this. Edit: and he also managed to fool members of CFAR [https://pastebin.com/qPeamHQQ] with a similar line.
8 · Vaniver · 3y
jimrandomh's comment [https://www.lesswrong.com/posts/5zSbwSDgefTvmWzHZ/affordance-widths#ZetQf9JDdxGygTZJZ], linked in the OP, is the current best explanation of the epistemic concerns. Most of my impression comes from in-person discussions with ialdabaoth, which I obviously can't link to, and Facebook posts, of which relatively few individual posts would be damning in isolation but the whole of them adds up to a particular impression. I also want to stress that this is not just my opinion; I currently know of no person who would describe ialdabaoth as a good influence, merely people who think he is redeemable. [I'd be happy to edit this if anyone wants to defend him privately; I have tried to counteract the effect where people might be unwilling to defend him publicly because of the unpopularity of defending a man accused of sexual assault.] I currently don't have an interest in writing a post with this level of detail [https://forum.effectivealtruism.org/posts/fn7bo8sYEHS3RPKQG/concerns-with-intentional-insights] explaining what I'm concerned about, because it has massive costs and unclear benefits. One of ialdabaoth's virtues is that he was cooperative about leaving; rather than us having to figure out whether or not we wanted to ban him, he stopped commenting and posting to not force the issue, which is why this is happening now instead of before. If I were going to invest that level of effort into something, it would be people who I think are actively making things worse now.

Is there any explanation of the current Karma System? The main thing I can find is this. (You need to scroll to 'The karma system'; for some reason you can click on subsections to go to them, but you can't link to them.)


Also why do I see a massive message that says 'habryka's commenting guidelines' when I am writing this comment, but there are no guidelines or link? Is this just a weird extra ad for your own name?

2 · habryka · 3y
Yeah, my current commenting guidelines are empty. Other users have non-empty commenting guidelines. The FAQ covers almost all the site functionality, including karma. Here is the relevant section: https://www.lesswrong.com/faq#Karma___Voting You can also link to subsections, if you just right-click on the relevant section in the ToC and select "Copy Link Address".
4 · ryan_b · 3y
The commenting guidelines allow users to set their own norms of communication for their own private posts. This lets us experiment with different norms to see which work better, and also allows the LessWrong community to diversify into different subcommunities should there be interest. It says habryka's guidelines because that's who posted this post; if you go back through the other open threads, you will see other people posted many of them, with different commenting guidelines here and there. I think the posts that speak to this the most are: [Meta] New moderation tools and moderation guidelines [https://www.lesswrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines] (by habryka) and Meta-tations on Moderation: Towards Public Archipelago [https://www.lesswrong.com/posts/5Ym7DN6h877eyaCnT/meta-tations-on-moderation-towards-public-archipelago] (by Raemon).
Even if they only work in modern society, one of the millions of modern people who wanted financial, social, and romantic success before you would have come up with them.

Nobody is claiming that everything around rationalist circles is completely new or invented by them. It has often looked to me more like separating the more and less useful stuff with various combinations of bottom-up and top-down approaches.

Additionally, I'd like to also identify as someone who is definitely in a much much better place now because they discovered LW almost a decad... (read more)

There are definitely some decent posts, but calling a couple of good posts an official LessWrong Sequence still seems to cheapen what that used to mean.

Not to mention that I read this on facebook, so I barely associate it with here.

Note also that you can view this on GreaterWrong.

Thanks, GreaterWrong seems to still be an improvement over the redesign for me. I'm back to using it.

I got an email about this, so I decided to check if the quality of content here has really increased enough to claim to have material for a new Sequence (I stopped coming here after what was, in my opinion, the botched execution of LW2).

I checked the posts, and I don't see anywhere near enough quality content to publish something called a Sequence, without cheapening the previous essays and what 'The Sequences' means in a LessWrong context.

4 · Raemon · 3y
(first, noting that if the site content isn't exciting, no worries. Thanks for at least checking it out and giving it another look – I appreciate it) I'd add to Habryka's comment that my longterm plan here is something like:
  • This year, we review the best posts of 2018. This turns into a fairly simple sequence that clusters relevant posts around each other, and helps people get a sense of the overall major conversation threads that happened in 2018. This sequence is meant to be "highly curated", but not meant to be thought of in the same terms as "The Sequences™". Sequence is just a generic term meaning "a collection of posts."
  • In the coming years, there's an additional step where some older posts are considered for something more like canonization, where they are actually added to a Major Updates sequence that's more in the genre of "The Sequences™", i.e. that everyone participating on the site is supposed to have read. This process is something I'd want to put a lot of care into, and my expectation is that there'd typically be 1-5 posts in any given year that I wanted to add to the site's common-knowledge pool, and that I'd want multiple years to reflect on it.
4 · Vaniver · 3y
Not even Local Validity [https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-validity-as-a-key-to-sanity-and-civilization]? Note also that you can view this on GreaterWrong, with 2018 posts [https://www.greaterwrong.com/archive/2018] and nominated posts [https://www.greaterwrong.com/index?view=nominations].
3 · habryka · 3y
Huh, there must be some confusion going on. The goal is not to add another sequence to Rationality: A-Z; the goal is just to compile a sequence of the type of which we already have many (like Luke's sequence on the neuroscience of happiness, or Kaj's multiagent sequence, or Anna's game theory sequence, etc.).

Does the Quantum Physics Sequence hold up?

It's been the better part of a decade since I read it (and I knew a lot less back then), and recently I've been curious about getting a refresher. I am not going to pick up a textbook or spend too much time on this, but if it doesn't hold up, what alternative/supplementary resources would you recommend (the less math-heavy the better, although obviously some of the math is inescapable)?

1 · Paperclip Minimizer · 4y
This [https://www.lesswrong.com/posts/5vZD32EynD9n94dhr/configurations-and-amplitude#iEb8Yjfqha2LY64WD] actually made me not read the whole sequence.

I actually learnt quantum physics from that sequence, and I'm now a mathematician working in Quantum Computing. So it can't be too bad!

The explanation of quantum physics is the best I've seen anywhere. But this might be because it explained it in a style that was particularly suited to me. I really like the way it explains the underlying reality first and only afterwards explains how this corresponds with what we perceive. A lot of other introductions follow the historical discovery of the subject, looking at each of the famous experiments i... (read more)

I also want to know this.

(This is part of a more general question: how much of the science cited in the Sequences holds up? Certainly nearly all the psychology has to be either discarded outright or tagged with “[replication needed]”, but what about the other stuff? The mockery of “neural networks” as the standard “revolutionary AI thing” reads differently today; was the fact that NNs weren’t yet the solution to (seemingly) everything essential to Eliezer’s actual points, or peripheral? How many of the conclusions drawn in the Sequences are based on facts

... (read more)

I haven't gotten the voting link (I've now emailed to ask), but I am sadly already pretty negatively surprised at how lesserwrong.com has turned out (props to the maker of greaterwrong, though) and very much hope that it doesn't completely replace LessWrong.com. Even if LessWrong.com is just killed and made read-only (since after all the efforts to migrate people here, it is even more unlikely that the original lesswrong will get any new use), that's a better outcome for me.

I wouldn't even post this, but I hear a lot more people sharing the same opinion (selection effects apply), though (selection effects again) few of them are here to actually say it.

4 · habryka · 5y
I would be interested in chatting to anyone who is disappointed by the new site. Also note that greaterwrong.com will continue to be available when we move, so if you like that site more, then you will be able to continue using that.
6 · gjm · 5y
Would you care to explain what harm you find LesserWrong to be doing, that would make replacing LW with a static archive preferable to letting LW2 continue? (That's a genuine question, not an attempt at snark or whatever.)

Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else had). I am glad you realized it too, I guess. Even things like starting with a bunch of questions about the new LessWrong-inspired site, and the spacing between words, were off, let alone the things you mention.

I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if yvain gave up on doing it), because half-hearted attempts like this probably ... (read more)

4 · gwern · 5y
I did intend to take over the survey if Yvain stopped, although I didn't tell him in the hopes he would keep doing it rather than turn it over immediately. I'm not sure I would take it over now: the results seem increasingly irrelevant, as I'm not sure the people taking the survey overlap much anymore with those from the original LW surveys in 2009.
0 · c0rw1n · 5y
Note, last year's survey was also run by /u/ingres

So there's no way for us to log in with our regular accounts before the launch? Is it scheduled for anytime soon?

I'd hate to keep seeing all the constant promotion for your site without being able to check it out (since I am not really up for using a temporary account).

0 · Habryka · 5y
Yeah, I apologize. Getting a complete copy of the database requires us to coordinate with Trike Apps and is a lot of work for everyone, so we are limiting the number of complete ports to 2 (i.e. one three months ago, and one on launch). I am working on a merge-accounts feature, so while I can't absolutely guarantee one, it's very likely you will have the ability to merge your old LW account with any temporary account you create now.

The fact that you engage with the article and share it might suggest to the author that he did everything right.

True, but this is one of the less bad articles that have Terminator references (as it makes a bit more sense in this specific context), so I mind less that I am sharing it. It's mostly significant insofar as being one I saw today that prompted me to make a template email.

The idea that your email will discourage the author from writing similar articles might be mistaken.

I can see it having no influence on some journalist, but again

I am n

... (read more)
3 · ChristianKl · 5y
My point wasn't that it creates no impact but that you show the journalist by emailing him that his article is engaging. This could encourage him to write more articles like this.

After reading yet another article which mentions the phrase 'killer robots' 5 times and has a photo of the Terminator (and RoboCop for a bonus), I've drafted a short email asking the author to stop using this vivid but highly misleading metaphor.

I'm going to start sending this same email to other journalists that do the same from now on. I am not sure how big the impact will be, but after the email is already drafted sending it to new people is pretty low effort and there's the potential that some journalists will think twice before referencing Terminator in... (read more)

7 · WalterL · 5y
I've always liked the phrase "The problem isn't Terminator, it is King Midas. It isn't that AI will suddenly 'decide' to kill us, it is that we will tell it to without realizing it." I forget where I saw that first, but it usually gets the conversation going in the right direction.
7 · ChristianKl · 5y
The fact that you engage with the article and share it might suggest to the author that he did everything right. The idea that your email will discourage the author from writing similar articles might be mistaken. Secondly, calling autonomous weapons killer robots isn't far off the mark. The policy question of whether or not to allow autonomous weapons is distinct from AGI.

At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine learning experts being involved.

At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Not in itself, sure, but yeah, there was the bit about the progress made so you won't need an ML engineer for developing the right net to solve a problem. However, there was also the bit where they have nets doing novel research ... (read more)

0 · ChristianKl · 5y
To me the question isn't whether new DL findings are applicable but whether they are sufficient. I don't think they are sufficient to be able to solve problems where there isn't a big dataset available.

Karpathy mentions offhand in this video that he thinks he has the correct approach to AGI but doesn't say what it is. Before that he lists a few common approaches, so I assume it's not one of those. What do you think he suggests?

P.S. If this worries you that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.

0 · ChristianKl · 5y
The overview lecture doesn't really get me worried. It basically means that we are at the point where we can use machine learning to solve well-defined problems with plenty of training data. At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine learning experts being involved. At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.
0 · Manfred · 5y
I think I don't know the solution, and if so it's impossible for me to guess what he thinks if he's right :) But maybe he's thinking of something vague like CIRL, or hierarchical self-supervised learning with generation, etc. But I think he's thinking of some kind of recurrent network. So maybe he has some clever idea for unsupervised credit assignment?

More quality content (either in terms of discussions or actual posts).

P.S. I do see how that might not be especially helpful.

What is the latest time that I can sign up and realistically expect that there'll be spaces left? I am interested, but I can't really commit 10 months in advance.

5 · blob · 7y
There's a good chance you'll be able to participate even if you sign up late. We plan for more participants this time (100 instead of 80) and will also stagger signups in a way that keeps some tickets available until a month in advance. If you want to be sure, committing now remains the best option though.
2 · nino · 7y
I can't give you an exact time, but if you need a few months until you can sign up, your chances will be improved by providing content (e.g. lightning talks). If I remember correctly, last time it took about 6 months until all spots were filled, but it may well be sooner this time.

Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.

If someone is going to turn away at the first sight of an unknown term, then they have no chance of lasting here (I mean, imagine what'll happen when they see the Sequences).
