Thanks for the kind comment! I figured the game might work well as a cooperative experience, so I'm glad to hear that that was indeed the case :).
I wasn't disputing that Zvi mentioned the blood clot story, I was disputing your characterisation of it. Quoting from literally the first two paragraphs of your link:
And even if all the observed clots were extra, all were caused by the vaccine, all were fatal, and that represented the overall base rate, and we ignore all population-level benefits and economic issues, the vaccine would still be worth using purely for personal health and safety by multiple orders of magnitude.
The WHO and EMA said there was no evidence there was an issue.
This is not consistent...
because he did not believe the blood clots were real
I'd need a source on that. From what I recall, the numbers were small and preliminary but plausibly real, but orders of magnitude below the danger of Covid (which IIRC incidentally also causes blood clots). So one could call the suspension penny-wise but pound-foolish, or some such. Not to mention that IIRC the suspension resulted in a dip in Covid vaccinations, so it's not clear that it was even the right call in retrospect. I also recall hearing the suspension justified as necessary to preserve trust in...
I would appreciate this kind of reply, except I can't update on it if the critique isn't public.
For now, I don't think basic notions like "all governments were incompetent on Covid" are particularly easy to dispute?
To provide two examples:
Nope, I don't buy it. Having read Zvi's Covid posts and having a sense of how much better policy was possible (and was already being advocated at the time), I just don't buy a framing where government Covid policy can be considered even partly competent. I'm also dubious of treating "the military" as a monolithic black-box entity with coherent behavior and goals, rather than as consisting of individual decision-makers who follow the same dysfunctional incentives as everyone else.
If you have sources that e.g. corroborate sane military policy during the early Covid months, feel free to provide them, but for now I'm unconvinced.
This might benefit from being cross-posted to the Community Events page, though I don't know if there's a policy against posting paid workshops there.
Have you searched the EA forum on this topic? Seems like a potentially better resource for questions like this.
I figured, which is why I moderated my statement as only "somewhat" confused :).
I am somewhat confused that you provide that comment thread as an example of charity having negative effects, when the thing that spawned that entire thread, or so it seems to me, was insufficient charity / civility / agreeableness (as e.g. evidenced by several negative-karma comments).
I appreciated this post and found its arguments persuasive. Thanks for writing this!
The one thing I wish had been different was that the essay extensively argues against "argumentative charity", but I never got a clear sense of what exactly was being argued against.
Steelmanning and the Ideological Turing Test get extensive descriptions, while argumentative charity is described as "a complete mess of a concept". Which, fair enough, but if the concept defies definition, I'd instead appreciate a couple examples to understand what not to do.
I figure...
I've found a new website bug: copying & pasting bullet points from LW essays into the comment field fails with weird behavior. I've created a corresponding GitHub issue.
Incidentally, you might get more (reddit) comments if you crosspost this essay on the r/slatestarcodex subreddit. The interests of LW and SSC have some decent overlap, and it's sometimes easier to get comments on reddit than on LW.
You're welcome :). Anyway, feel free to delete my typo comments once you've read them; it's not like they serve any further purpose in the comment threads once they're fixed.
Phil held his face inn his hands. -> in his hands
More typos:
Note that Duncan just posted the relevant chapter from the CFAR Handbook as a standalone LW essay.
What an intense story! Thanks for writing about it.
If you like stories about huge accidents, you might also enjoy this video episode about a plane accident in 1990 where the captain got sucked out of the cockpit and the co-pilot had to land the plane alone.
Typos:
Also certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.
By which mechanism do you expect to improve discussion by introducing censorship?
Question on acronyms: what do SOTA and PaLM mean?
I have built many PCs over the years (there are at this moment six machines that I built, sitting in this room), and I can tell you that this is not correct.
I've also built a smaller number of PCs over the years, maybe 5-ish. I always set a rough budget, then chose the best components in terms of price/performance (and e.g. silence), relative to that budget. I don't understand how you're ever supposed to get worse performance by using that algorithm while tripling your budget.
To be clear, I'm not suggesting that a random $3000 PC necessarily has higher per...
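To spell out the selection heuristic I described above, here's a toy sketch in Python. All component names, prices, and performance scores are made up, and real builds have compatibility constraints this ignores:

```python
parts = {
    "cpu": [("budget-cpu", 150, 60), ("mid-cpu", 300, 100), ("high-cpu", 600, 130)],
    "gpu": [("budget-gpu", 200, 50), ("mid-gpu", 450, 110), ("high-gpu", 900, 150)],
}  # each entry: (name, price in $, performance score) -- all hypothetical

def pick_build(parts, budget):
    # Split the budget evenly across categories (a real build would weight
    # this), then take the best-performing part that fits each share.
    share = budget / len(parts)
    build, total_perf, total_cost = {}, 0, 0
    for category, options in parts.items():
        affordable = [p for p in options if p[1] <= share]
        if not affordable:
            continue
        name, price, perf = max(affordable, key=lambda p: p[2])
        build[category] = name
        total_perf += perf
        total_cost += price
    return build, total_perf, total_cost

print(pick_build(parts, 1000))  # picks mid-range parts
print(pick_build(parts, 3000))  # picks high-end parts
```

Under a rule like this, a larger budget can only widen the set of affordable parts per category, so the chosen performance can't decrease. That's why I don't see how tripling the budget is supposed to make the build worse.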
Re: the PC build example:
First, it would be foolish to suggest that, in any given category, any more expensive thing is always worse than any less expensive thing, and indeed that is not what I claimed. (Note, again, what I said: “It is a fundamental mistake to think that spending more money necessarily gets you more of anything that you value.”—I did not emphasize ‘necessarily’ in my initial comment, but it’s there for a reason!)
Second, an obvious point, but one whose importance is easy to overlook, is that the PC builds you link to, are not single products...
My point in that quote was that while these products may be made of e.g. ostensibly better materials, they're inferior relative to your requirements. In your framing (of products that are "just worse", irrespective of requirements), it seems to me like one should be able to buy the $10 toothbrush, sell it for $100, outsell the originally more expensive item, and make $90 profit. As I presume that this doesn't actually happen, I conclude that some customers prefer the product that originally costs $100, and ergo it cannot be considered "just worse".
When you...
Many things are like this. It is a fundamental mistake to think that spending more money necessarily gets you more of anything that you value. It is, in fact, quite common for spending more money to get you less quality and less aesthetics and less ease of use and less durability and less reliability and … etc.
When deciding what to buy—which thing, what kind of thing, how many things, etc.—you should not start with consideration of prices of products and then ask how much you’re willing to spend and so on. To do so is to head off in the wrong direction. You...
Note that this is the best toothbrush available on the market (given my needs); there are many models that are more expensive, but they are all worse than the $10 model I bought. Let me be very clear about this: if I had spent more money, I would have gotten an inferior product—not in “value per dollar” terms, but in absolute terms.
Here's how I would put this: I like the ISO 9000 definition of "quality" as the "degree to which a set of inherent characteristics fulfils requirements".
When you want to find the best product for yourself, you have some requirements...
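To illustrate how I read that definition, here's a toy sketch; the products, characteristics, and weights are all hypothetical:

```python
products = {
    "toothbrush-$10":  {"bristle_softness": 9, "grip": 8, "looks": 4},
    "toothbrush-$100": {"bristle_softness": 5, "grip": 6, "looks": 10},
}

# My (hypothetical) requirements: I care about softness and grip, not looks.
my_requirements = {"bristle_softness": 0.6, "grip": 0.4}

def quality(characteristics, requirements):
    # "Degree to which a set of inherent characteristics fulfils requirements":
    # here, a simple weighted sum over the characteristics I care about.
    return sum(characteristics.get(k, 0) * w for k, w in requirements.items())

for name, chars in products.items():
    print(name, quality(chars, my_requirements))
# The $10 brush scores 8.6 vs. 5.4 for the $100 one, i.e. it's "better" for
# these requirements; a buyer who weights "looks" heavily could see the
# ranking flip.
```

Which is to say: on this framing, "quality" only makes sense relative to a set of requirements, never in the abstract.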
There's definitely a bug / inconsistency here: the linked comments are in a different order when viewed as a permalink vs. when viewed in the thread itself. But yeah, I was way too quick to assume, based on a single data point, that this was a) a new problem and b) caused by or influenced by agreement karma or the related recent website update. Oops. I thought these things were likely related because, as stated in this thread, only karma (but not agreement) is supposed to dictate how things are ordered; so when I saw a wrong ordering with differing agreement...
Bug: When comment threads are loaded as a permalink, comment sorting is wrong or at least influenced by agreement karma.
Example: This comment thread. In this screenshot, the comment with 2 karma and 1 agreement is sorted above the comment with 8 karma and 0 agreement.
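For reference, here's the ordering I'd expect; this is a sketch only, with hypothetical data mirroring the screenshot, since I obviously don't know the site's actual sorting code:

```python
# Expected behavior: sort by karma alone; agreement should not affect position.
comments = [
    {"id": "A", "karma": 2, "agreement": 1},
    {"id": "B", "karma": 8, "agreement": 0},
]
expected = sorted(comments, key=lambda c: c["karma"], reverse=True)
print([c["id"] for c in expected])  # ['B', 'A'], i.e. the 8-karma comment first
```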
Having skimmed their further reading section for this video, I'm happy to see how seriously this channel takes its research. And as became apparent from the video itself, it was supported by Open Philanthropy and FHI.
Other random things I learned about the channel:
In Germany, the tap water is known to be very hard, so essentially no one drinks tap water.
Our local tap water (in a town close to Munich) is roughly as soft as tap water can be, and I drink nothing else.
But if you've found statistics on how countries differ in how much tap water their citizens drink, I'd be interested to see them. Unfortunately, searching for "tap water consumption" also turns up all the other uses, like showering etc.
I could swear it was frontpaged when I wrote that, but now I'm only 80% sure that it was[1]. Anyway, I figured that auto-crossposted posts by high-karma LW posters might automatically get posted as Frontpage posts rather than as Personal Blog posts.
I welcome evidence both for and against the hypothesis that I hallucinated that.
Pardon the confusion. It was frontpaged; I saw your comment, then moved it back to personal blog. It didn't occur to me that you would then be mildly gaslit about your comment!
And no, everything, including crossposts, gets manually processed and frontpaged-or-not. Occasional simple errors make it through. Thx MondSemmel for the comment that pointed this one out.
Having politics posts on LW is fine, but they mostly shouldn't be frontpaged and instead remain personal blog posts.
If there's going to be an agreement-disagreement axis, maybe reconsider how and whether voting strength interacts with it. I saw a comment in this thread which got to -10 agreement from one vote. Which is, if not laughably absurd, certainly utterly unintuitive. What is that even supposed to mean?
I'm also struggling to interpret cases where karma & agreement diverge, and would also prefer a system that lets me understand how individuals have voted. E.g. Duncan's comment above currently has positive karma but negative agreement, with different numbers of upvotes and agreement votes. There are many potential voting patterns that can have such a result, so it's unclear how to interpret it.
Whereas in Duncan's suggestion, a) all votes contain two bits of information and hence take a stand on something like agreement (so there's never a divergence between...
Whereas in Duncan's suggestion, a) all votes contain two bits of information and hence take a stand on something like agreement
I didn't notice that! I don't want to have to decide on whether to reward or punish someone every time I figure out whether they said a true or false thing. Seems like it would also severely exacerbate the problem of "people who say things that most people believe get lots of karma".
Another option would be heading-based voting, i.e. if you use headings in your comments, each one of those could become votable, or be treated internally as separate comments to vote on and reply to.
However, one problem with all such approaches (besides the big issue of increased UI complexity, of course) is that they're kind of incompatible with the ability to edit one's own comments - what if someone votes on a block quote or heading in your comment, and then you edit that part, or remove it altogether?
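One conceivable workaround, sketched below purely to illustrate the tradeoff (not a claim about how LW does or should implement voting): key each vote to a content hash of the block it targets, so votes on edited or removed blocks are simply orphaned:

```python
import hashlib

def block_key(block_text: str) -> str:
    # Identify a heading/quote block purely by a hash of its content.
    return hashlib.sha256(block_text.strip().encode()).hexdigest()[:12]

votes: dict[str, int] = {}  # block hash -> net score

def vote_on_block(block_text: str, delta: int) -> None:
    key = block_key(block_text)
    votes[key] = votes.get(key, 0) + delta

def live_scores(comment_blocks: list[str]) -> dict[str, int]:
    # Only blocks that still exist verbatim keep their votes; votes on
    # edited or removed blocks are orphaned by construction.
    return {b: votes.get(block_key(b), 0) for b in comment_blocks}

vote_on_block("Some votable heading", +1)
print(live_scores(["Some votable heading"]))  # vote still attached
print(live_scores(["Some edited heading"]))   # edit orphaned the vote (score 0)
```

Of course, orphaning votes on every small edit creates its own problems, which is part of why I'm skeptical of these approaches.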
And while I'm already in my noticing-tiny-things perfectionist mode: The line spacings between paragraphs and bulleted lists of various indentation levels seem inconsistent. Though maybe that's good typographical practice?
See this screenshot from desktop Firefox: there seem to be 3+ different line spacings with little consistency. For example:
Also, also - it's a bit confusing that karma defaults to a normal upvote by the poster, whereas agreement defaults to none (though it can be added by the poster if they actually agree with themselves).
On this point, I suggest making it so that people cannot vote agree/disagree on their own comments. It's one thing to say "I find my own comment here so valuable that I use a strong upvote on it so more people see it" - that's weird and somewhat discouraged by the community, but at least carries some information.
But what's the equivalent supposed to be for agreement? "I find my own comment so correct that I strongly agree with it"? Just disallow that in the software.
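The check itself would be trivial; something like this sketch (field names hypothetical):

```python
def may_cast_agreement_vote(voter_id: str, author_id: str) -> bool:
    # Karma self-votes stay allowed (the default self-upvote), but
    # agree/disagree votes on one's own comment are rejected outright.
    return voter_id != author_id
```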
Even when it comes to comments, I often wish people would break up their long comments more so I could vote separately on different claims.
Aesthetically speaking, this current implementation still looks rather ugly to me. Specific things I find ugly:
I appreciate this voting system in controversial threads, but find it a bit overkill otherwise.
Maybe you could make this option "enabled by default", so if a thread creator doesn't think it's a good fit for a post, they can opt out of it by unchecking a box?
The two images in this post don't load for me. As I understand it, they're embedded here from Twitter's content delivery network (CDN), and such embedded images don't always load. To avert problems like this, it's better to copy images in such a way that they're hosted on LW's own CDN instead.
Huh, you're right.
Crossposted LW posts list their original source next to the author's username. See this screenshot.
From Algorithms to Live By, I vaguely recall the multi-armed bandit problem. Maybe that's what you're looking for? Or is that still too closely tied to the explore-exploit paradigm?
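In case a concrete toy version helps, here's a minimal epsilon-greedy sketch of the multi-armed bandit setup (the payout probabilities are made up):

```python
import random

# Epsilon-greedy bandit: with probability eps, explore a random arm;
# otherwise exploit the arm with the best observed average payout.
true_payout = [0.3, 0.5, 0.7]  # hidden per-arm reward probabilities
counts = [0] * len(true_payout)
totals = [0.0] * len(true_payout)
eps = 0.1

for _ in range(10_000):
    if random.random() < eps or not all(counts):
        arm = random.randrange(len(true_payout))        # explore
    else:
        arm = max(range(len(true_payout)),
                  key=lambda a: totals[a] / counts[a])  # exploit
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    totals[arm] += reward

print(counts)  # the best arm (index 2) should end up pulled most often
```

Though as you note, this is still squarely within the explore-exploit paradigm.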
dramatically lower my bar for posting on LessWrong. The votes will sort it out. If it's not interesting, people will just ignore it, but sometimes a post that seems not worth writing is actually something people are really excited to read, and people are also happy to read a quickly written post rather than no post at all
Something I just learned is that the audiences and interests of even nominally adjacent communities like LW and e.g. SSC are pretty different. This question post of mine saw no interest on LW, but a (to me) frankly surprising amount of interest...
Note that there's also a discussion thread on the SSC subreddit.
And keep in mind that you need to do that without being discovered and in a super short amount of time.
While I expect that this would be the case, I don't consider it a crux. As long as the AGI can keep itself safe, it doesn't particularly matter if it's discovered, as long as it has become powerful enough, and/or distributed enough, that our civilization can no longer stop it. And given our civilization's level of competence, those are low bars to clear.
What about plans like "hack cryptocurrency for coins worth hundreds of millions of dollars" or "make ransomware attacks" is not trivial? Cybercrimes like these are regularly committed by humans, and so a superintelligence will naturally have a much easier time with them.
If we postulate a superintelligence with nothing but Internet access, it should be many orders of magnitude better at making money in the pure Internet economy (e.g. cybercrime, cryptocurrency, lots of investment stuff, online gambling, prediction markets) than humans are, and some humans already make a lot of money there.
Maybe it reduces the population enough for an AGI to target the rest of us or prevent us from rebuilding, though.
Yeah, I'm familiar with the arguments that neither pandemics nor nuclear war seem likely to be existential risks, i.e. ones that could cause human extinction; but I'd nonetheless expect such events to be damaging enough from the perspective of a nefarious actor trying to prevent resistance.
Ultimately this whole line of reasoning seems superfluous to me - it just seems so obvious that with sufficient cognitive power one can do ridiculous things -...
What current defenses do you think we have against nukes or pandemics?
For instance, the lesson from Covid seems to be that a small group of humans is already enough to trigger a pandemic. If one intended to develop an especially lethal pandemic via gain-of-function research, the task already doesn't seem particularly hard for researchers with time and resources, so we'd expect a superintelligence to have a much easier job.
If getting access to nukes via hacking seems too implausible, then maybe it's easier to imagine triggering nuclear war by tricking one n...
Such a policy invites moral hazard, though. If many people followed it, you could farm karma by simply beginning each post with the trite "this is going to get downvoted" thing.
Board games are always great! I also used to be part of the group that wanted to play games with long play times, but nowadays I prefer shorter games.
Anyway, if anyone is looking for more game recommendations, there was a recent LW discussion here.