All of wunan's Comments + Replies

Covid 12/30: Infinity War

And would a doctor's note make any difference in whether they'd allow you to wear something like a Versaflo or other PAPR?

Covid 12/30: Infinity War

Does anybody know the best mask that they'll allow you to wear on an airplane? Has anyone worn a P100 with the exhale valve covered, or do they not allow that?

4Dorikka1moI have worn this on an airplane with a surgical mask over the exhale valve: https://www.amazon.com/GVS-SPR457-Elipse-Respirator-Medium/dp/B013SIIBFQ
1wunan1moAnd would a doctor's note make any difference in whether they'd allow you to wear something like a Versaflo or other PAPR?
Jimrandomh's Shortform

hopefully realize it's a bad idea or have a morality that allows this

 

To expand on this: https://www.nickbostrom.com/papers/unilateralist.pdf

Omicron Post #4

What do you mean by "immune erosion"? Is this different from "immune evasion" and "immune escape"? I can't find any explanation on Google -- is this a standard term?

1cistrane1moExisting mRNA vaccines theoretically provide antibodies to about 42 epitopes of the original spike protein. So, they are very effective against the original strain as any one of these antibodies would be neutralizing. Let's say for example that only 10 out of 42 are neutralizing for omicron. That means that many people will be unlucky and not produce enough of the right antibodies to stop the omicron at first, thus their protection has eroded. But some will be lucky enough to produce the necessary amount of the right antibodies. So the protection was not completely bypassed.
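A toy way to see the "eroded, not bypassed" point (illustrative only; the response rate and independence assumption below are made up, not from the comment above):

```python
# Toy model: each of the 42 epitope-specific antibody responses is elicited
# independently with probability p (assumed), but only 10 of those epitopes
# remain neutralizing against Omicron.
p = 0.1  # made-up per-epitope response rate
p_protected_original = 1 - (1 - p) ** 42  # ~0.99: nearly everyone protected
p_protected_omicron = 1 - (1 - p) ** 10   # ~0.65: protection eroded, not gone
print(f"P(protected, original strain): {p_protected_original:.2f}")
print(f"P(protected, Omicron):         {p_protected_omicron:.2f}")
```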
2Caridorc Tergilti2moImmune erosion is used to make people understand that immune escape is only partial and not total.
1bww2moPer my understanding "immune evasion" and "immune escape" imply a binary, i.e. previous immunity has no effect. "Immune erosion" is a more intuitive way of describing how this variant likely interacts with protection from previous infection or vaccination, with protection against infection, hospitalization, death "eroding" to a degree but not being completely negated. I believe I first saw it in this twitter thread: https://twitter.com/kakape/status/1428657737190322184?s=20
Omicron Post #3

What is meant by "immune erosive"? Is this different from "immune evasive"? I can't find any explanation on Google -- is this a standard term?

-4mwacksen2moIt's the terminology you use to signal that you believe the vaccine is safe and effective™ and therefore cannot be fully "evaded".
Biology-Inspired AGI Timelines: The Trick That Never Works

If it's a normal distribution, what's the standard deviation?

Is it better to fix a problem directly, or start again so the problem never happens?

For software development, rewriting the code from scratch is typically a bad idea. It may be helpful to see how well the arguments in that article apply to your domain.

4shminux2moRewriting is hard, refactoring is easy and gets you 80% toward the goal that pushes one to rewrite. Also can be done incrementally.
Discussion with Eliezer Yudkowsky on AGI interventions

Context for anyone who's not aware:

Nerd sniping is a slang term that describes a particularly interesting problem that is presented to a nerd, often a physicist, tech geek or mathematician. The nerd stops all activity to devote attention to solving the problem, often at his or her own peril.

Here's the xkcd comic which coined the term.

2adamShimi2moThanks, I sometimes forget not everyone knows the term. :)
Discussion with Eliezer Yudkowsky on AGI interventions

If MIRI hasn't already, it seems to me like it'd be a good idea to try reaching out. It also seems worth being at least a little bit strategic about it as opposed to, say, a cold email. 

 

+1 especially to this -- surely MIRI or a similar x-risk org could get a warm introduction to potential top researchers through their network, from someone who is willing to vouch for them.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

On one hand, meditation -- when done without all the baggage, hypothetically -- seems like a useful tool. On the other hand, it simply invites all that baggage, because that is in the books, in the practicing communities, etc.

 

I think meditation should be treated similarly to psychedelics -- even for meditators who don't think of it in terms of anything supernatural, it can still have very large and unpredictable effects on the mind. The more extreme the style of meditation (e.g. silent retreats), the more likely this sort of thing is.

Any subgroups he... (read more)

4Gunnar_Zarncke3moI have pointed out the risks of meditation and meditation-like practices before. The last time was on the Shoulder Advisors [https://www.lesswrong.com/posts/X79Rc5cA5mSWBexnd/shoulder-advisors-101?commentId=BTS2s6JiiKqEkYju2] which does seem to fall on the boundary. I have experience with meditation and have been to extended silent meditation retreats with only positive results. Nonetheless, bad trips are possible - esp. without a supportive teacher and/or community. But I wouldn't make a norm against groups fostering meditation. Meditation depends on groups for support (though the same might be said about psychedelics). Meditation is also a known way to gain high levels of introspective awareness and to have many mental health benefits (many posts about that on LW I'm too lazy to find). The group norm about these things should be to require oversight by a Living Tradition of Knowledge [https://www.lesswrong.com/posts/nnNdz7XQrd5bWTgoP/on-the-loss-and-preservation-of-knowledge] in the relevant area (for meditation e.g. an established - maybe even Buddhist - meditation school).
2Kenny 3moPsychedelics, woo, and meditation are very separate things. They are often used in conjunction with each other because of their popularity and the contexts in which they get discussed together. Buddhism has incorporated meditation into its woo, while other religions have mostly focused on group-based services when talking about their woo. I like how some commenters have grouped psychedelics and meditation separately from the woo stuff, but it was a bit surprising to me to see Eliezer dismissing psychedelics along with woo in the same statements. He probably hasn't taken psychedelics before. Meditation is quite different in that it's more of a state of mind than an altered mentality. With psychedelics there is a clear distinction between when you are tripping and when you aren't; with meditation, it's not so clear when you are meditating and when you aren't. Woo is just putting certain ideas into words, which has nothing to do with different mindsets/mentalities.
Steelman arguments against the idea that AGI is inevitable and will arrive soon

Possible small correction: GPT-2 to GPT-3 was 16 months, not 6. The GPT-2 paper was published in February 2019 and the GPT-3 paper was published in June 2020.

3Quintin Pope3moWhoops! Corrected.
Effective Altruism Virtual Programs Nov-Dec 2021

I can't tell from the descriptions, but it seems like these programs have been run before -- is that right? Are there any reviews or other writeups about participants' experiences anywhere?

1yiyang2moHi wunan, yes we run the programs monthly. Unfortunately, we don't have any reviews/testimonies written up for the public, but here's a broad feel of our program [https://forum.effectivealtruism.org/posts/BuR7RLQAtJtC5jK5d/there-will-now-be-ea-virtual-programs-every-month] .
Raj Thimmiah's Shortform

That would make a good monthly open thread.

wunan's Shortform

If compute is the main bottleneck to AI progress, then one goalpost to watch for is when AI is able to significantly increase the pace of chip design and manufacturing. After writing the above, I searched for work being done in this area and found this article. If these approaches can actually speed up certain steps in this process from taking weeks to just taking a few days, will that increase the pace of Moore's law? Or is Moore's law mainly bottlenecked by problems that will be particularly hard to apply AI to?

4steven04614moThere was some previous discussion here [https://www.lesswrong.com/posts/9JTS67ChzK6HTyH9a/ml-is-now-automating-parts-of-chip-r-and-d-how-big-a-deal-is] .
alenglander's Shortform

Do you have some examples? I've noticed that rationalists tend to ascribe good faith to outside criticisms too often, to the extent that obviously bad-faith criticisms are treated as invitations for discussions. For example, there was an article about SSC in the New Yorker that came out after Scott deleted SSC but before the NYT article. Many rationalists failed to recognize the New Yorker article as a hit piece which I believe it clearly was, even more clearly now that the NYT article has come out.

1Aryeh Englander5moI am reluctant to mention specific examples, partly because maybe I've misunderstood and partly because I hate being at all confrontational. But regardless, I have definitely seen this outside the rationalist community, and I have definitely noticed myself doing this. Usually I only do it in my head though, where I feel upset when it's coming from outside my group but if someone inside the group says it then I'll mentally nod along.

Yeah, my main takeaway from that question was that a change in the slope of the abilities graph is what would convince him of an imminent fast takeoff. Presumably the x-axis of the graph is either time (i.e. the date) or compute, but I'm not sure what he'd put on the y-axis and there wasn't enough time to ask a followup question.

Why the technological singularity by AGI may never happen

Even without having a higher IQ than a peak human, an AGI that merely ran 1000x faster would be transformative.

How to turn money into AI safety?

Of the bottlenecks I listed above, I am going to mostly ignore talent. IMO, talented people aren't the bottleneck right now, and the other problems we have are more interesting.

 

Can you clarify what you mean by this? I see two main possibilities for what you might mean:

  • There are many talented people who want to work on AI alignment, but are doing something else instead.
  • There are many talented people working on AI alignment, but they're not very productive.

If you mean the first one, I think it would be worth it to survey people who are interested in AI... (read more)

6WayZ5moI think that "There are many talented people who want to work on AI alignment, but are doing something else instead." is likely to be true. I met at least 2 talented people who tried to get into AI Safety but weren't able to because open positions / internships were too scarce. One of them at least tried hard (i.e. applied for many positions and couldn't find one (scarcity), despite the fact that he was one of the top French students in ML). If there were money / positions, I think there's a good chance he would work on AI alignment independently. Connor Leahy mentions something similar in one of his podcasts as well. That's the impression I have.
2Charlie Steiner5moYeah, I mean the first. Good survey question ideas :)
1Maxwell Peterson6moThanks for that! I'd seen it mentioned in a tweet but we wondered about the veracity. Seeing another source is helpful.
2Dokler6moI saw someone tweet a successful report of using LSD for this purpose. Edit: went back to search for it and there seem to be many people discussing this and quite a few have found it helpful (much moreso for smell than taste, but of course the two are closely related and I believe that loss of smell was a more common experience).
A low-probability strategy to eliminate suffering via hostile takeover of a publicly traded corporation

What's the advantage of taking over an existing corporation rather than creating a new organization?

1jpgeiger6moTechnically, there's no difference in the long-term outcome, and if you had reason that creating a new company would be more likely to succeed, that would be a fine idea. I assume it would be easier to take over an existing company, because you'd gain access to their infrastructure, intellectual property, experienced employees, and existing customer base.
Sam Altman and Ezra Klein on the AI Revolution

What are some examples of makers who gained wealth/influence/status by having a huge negative impact on the world?

9artifex07moThe marketing company Salesforce was founded in Silicon Valley in '99, and has been hugely successful. It's often ranked as one of the best companies in the U.S. to work for. I went to one of their conferences recently, and the whole thing was a massive status display: they'd built an arcade with Salesforce-themed video games just for that one conference, and had a live performance by Gwen Stefani, among other things. ...But the marketing industry is one massive collective action problem. It consumes a vast amount of labor and resources, distorts the market in a way that harms healthy competition, creates incentives for social media to optimize for engagement rather than quality, and develops dangerous tools for propagandists, all while producing nothing of value in aggregate. Without our massive marketing industry, we'd have to pay a subscription fee or a tax for services like Google and Facebook, but everything else would be cheaper in a way that would necessarily dwarf that cost (since the vast majority of the cost of marketing doesn't go to useful services), and we'd probably have a much less sensationalist media on top of that. People in Silicon Valley are absolutely willing to grant status to people who gained wealth purely through collective action problems.
Willa's Shortform

What I mean is that they haven't really considered it. As I'm sure you're aware, your mind does not work like most people's. When most people consider the question of whether they'd like to die someday, they're not really thinking about it as if it were a real option. Even if they give detailed, logically coherent explanations for why they'd like to die someday, they haven't considered it in near mode.

 

I am very confident of this -- once they have the option, they will not choose to die. Right now they see it as just an abstract philosophical conve... (read more)

1Willa7moI wonder what the research area for "finding out whether people with strong beliefs in a religious afterlife of some kind change their minds near death regarding wanting to die vs wanting to go to that afterlife" is called? I do think you underestimate the strength of religious individual's convictions and the impact of that on their decisionmaking especially near death if they have significant faith in their religion's conception of an afterlife (assuming it has one). Still...staring imminent death in the face does spur many changes whenever an individual experiences that, so maybe that does shake things up...but, I'm not sure how much hope I place in that idea without seeing some research around the topic. I make this claim: Individuals with sufficiently strong religious beliefs in a religion that has an afterlife will more often than not prefer to die on the "mortal plane" and go to their faith's afterlife than continue living (even in good health and restored to youth, etc.) for [100 more years] [300 more years] [lots more years] [the rest of time]. I make that claim because of my experiences interacting with individuals who have very strong faith in their religious beliefs, including interacting with such individuals when they are near death. I need to figure out what kind of research people have already done that points in the direction of that claim and see what others have found, then I can pursue this inquiry further. I don't want to make a claim and forever rely on anecdote to support it!
Willa's Shortform

They'll almost definitely change their minds once we have good treatments for aging.

1Willa7moI know some will, but that's too optimistic and ignores the preferences / experiences of a huge number of people, because there are categories of people who prefer death over immortality, for whom the aging process doesn't factor into their choice on that matter. Especially people with strong faith in their religion's afterlife.
Habryka's Shortform Feed

Those graphs all show the percentage share of the different variants, but more important would be the actual growth rate. Is the delta variant growing, or is it just shrinking less quickly than the others?

2CellBioGuy7moBecause there is no evidence that they mean any of that.
1Pablo Repetto9moIt is a direct response to a quotation from the article, so not really.
ozziegooen's Shortform

Precommitment for removal and optionality for adding.

2ozziegooen9moThanks! I think precommitment is too narrow (I don't see dying as a precommitment). Optionality seems like a solid choice for adding. "Options" are a financial term, so something a bit more generic seems appropriate.
hereisonehand's Shortform

There's a discord for Crypto+Rationalists you may be interested in if you're not already aware: https://discord.gg/3ZCxUt8qYw

[Letter] Advice for High School #1

To any high schoolers reading this: If I could send just one of the items from the above list back to myself in high school, it would be "lift weights." Starting Strength is a good intro.

9lsusr9moThe recommendation to lift weights applies to boys and girls. If you are a woman, lifting weights won't make you bulky. It will make you hot.
wunan's Shortform

I have a potential category of questions that could fit on Metaculus and work as an "AGI fire alarm." The questions are of the format "After an AI system achieves task x, how many years will it take for world output to double?"

supposedlyfun's Shortform

Yes, the value of minimizing response time is a well-studied area of human-computer interfaces: https://www.nngroup.com/articles/response-times-3-important-limits/

1supposedlyfun10moThis is great. Thank you. I'm fascinated by the fact that this problem was studied as far back as the 1960s.
MikkW's Shortform

I'm curious what cards people have paid to put in your deck so far. Can you share, if the buyers don't mind?

2MikkW10moI currently have three cards entered, and the other seven are being held in reserve by the buyer (and have already been paid for). They are: "Jeff's Friendly Snek", "Book: The Mathematical Theory of Communication by Claude Shannon", and "Maximize Cooperative Information Transfer for {{Learning New Optimization}}", where {{brackets}} indicate cloze deletions; these were all sponsored by jackinthenet, he described his intention as wanting to use me as a vector for propagating memes and maximizing cooperative information transfer (which prompted the card).
Voting-like mechanisms which address size of preferences?
Answer by wunan, Mar 18, 2021

Ralph Merkle's Dao Democracy addresses size of preferences because constituents only "vote" by reporting their own overall happiness level. Everything else is handled by conditional prediction markets (like in futarchy) to maximize future happiness of the constituents. This means that if some issue is very important to a voter, it will have a greater impact on their reported happiness, which will have a greater impact on what proposals get passed.
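As a toy illustration of that selection rule (the proposal names and market prices below are made up; real conditional markets also void trades on proposals that aren't adopted):

```python
# Hypothetical conditional prediction markets: each price is the market's
# forecast of mean reported happiness if that proposal were adopted.
markets = {
    "status quo": 7.1,
    "proposal A": 7.4,  # an issue voters care intensely about moves this price
    "proposal B": 6.8,
}
# Adopt whichever proposal the markets predict maximizes happiness.
adopted = max(markets, key=markets.get)
print(f"Adopt: {adopted}")
```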

5abramdemski10moUsing Futarchy is just cheating ;3 But you're right, this does negate all my issues. I was just looking for something closer to existing governments.
Why Productivity Systems Don't Stick

Has this new congruency-based approach led to less, the same, or more productivity than what you were doing before and how long have you been doing it?

It has led to more consistent productivity and more productivity overall.  It has been consistent for about two and a half years now.

Matt Goldenberg's Short Form Feed

Is losing weight one of your goals with this?

 

Like you said, since it hasn't been studied you're not going to find anything conclusive about it, but it may be a good idea to skip the fast once a month (i.e. 3 weeks where you do 88-hour fasts, then 1 week where you don't fast at all).

2Matt Goldenberg1yYes, it's definitely one of the goals here, although it's equally about longevity, helping my acid reflux, and other immune system benefits.
I object (in theory)

I object to the demonstration because it's based on the false assumption that there's a fixed amount of value (candy, money) to be distributed and that by participating in capitalism, you're playing a zero-sum game. Most games played in capitalism are positive-sum -- you can make more candy.

1Sunny from QAD1yGood point! I admit that although I've thought about this incident many times, this has never occurred to me.
Tweet markets for impersonal truth tracking?

Do you have a source for the 80% figure?

3ike1yhttps://knightfoundation.org/wp-content/uploads/2020/06/KnightFoundation_Panel6-Techlash2_rprt_061220-v2_es-1.pdf Depends on the topic, but look at e.g. Figure 2 on page 6. 81% say never allow election related misinformation, 85% say never allow health misinformation.
Seek Upside Risk

I agree that this is a really important concept. Two related ideas are asymmetric risk and Barbell strategies, both of which are things that Nassim Nicholas Taleb writes about a lot.

Where is human level on text prediction? (GPTs task)

What is that formula based on? Can't find anything from googling. I thought it may be from the OpenAI paper Scaling Laws for Neural Language Models, but can't find it with ctrl+f.

7gwern1yIt's in the figure.
Where is human level on text prediction? (GPTs task)
Answer by wunan, Sep 20, 2020

In Steve Omohundro's presentation on GPT-3, he compares the perplexity of some different approaches. GPT-2 scores 35.8, GPT-3 scores 20.5, and humans score 12. Sources are linked on slide 12.
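For intuition, word-level perplexity converts to cross-entropy via bits = log2(perplexity), so the gaps look smaller in bits (a rough conversion; as the reply below notes, these scores come from different datasets, so the comparison is loose):

```python
import math

# Word-level perplexity -> cross-entropy in bits per word; lower is better.
for name, ppl in [("GPT-2", 35.8), ("GPT-3", 20.5), ("human", 12.0)]:
    print(f"{name}: perplexity {ppl} = {math.log2(ppl):.2f} bits/word")
# GPT-2: 5.16, GPT-3: 4.36, human: 3.58
```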

I think Omohundro is wrong here. His GPT-3 perplexity of 20.5 must be for Penn Treebank. However, his 'human' perplexity of 12 is for a completely different dataset! Tracing his citations from his video leads to Shen et al 2017, which uses the 1 Billion Word Benchmark. 1BW was not reported in the GPT-3 paper because it was one of the datasets affected by contamination and dropped from evaluation.

I've never read Penn Treebank or 1BW so I can't compare. At best, I'd guess that if 1BW is collected from "English newspapers", that's less diverse than the Brown Cor... (read more)

2Veedrac1ySources: https://web.stanford.edu/~jurafsky/slp3/ https://www.isca-speech.org/archive/Interspeech_2017/abstracts/0729.html The latter is the source for human perplexity being 12. I should note that it tested on the 1 Billion Words benchmark, where GPT-2 scored 42.2 (35.8 was for Penn Treebank), so the results are not exactly 1:1.
2Daniel Kokotajlo1yThanks! So... Gwern is wrong? I'm confused.
Escalation Outside the System

People are literally looting businesses and NPR is publishing interviews supporting it. They're not just interviewing people who support it -- the interviewer also supports it. What makes you think these aren't actual policy proposals?

They may only propose it for deep social-signalling reasons as you say, but that doesn't mean it's not actually a proposal. Historically, we've seen that people are willing to go through with mass murders.

9jefftk1yI was curious what tone changes NPR made between the archived version you linked and the current version. I ran a quick diff:

* hand-wringing about looting. -> condemnation of looting
* bemoaned the property damage -> denounced the property damage
* "" -> Osterweil is a self-described writer, editor and agitator who has been writing about and participating in protests for years. And her book arrives as the continued protests have emerged as a bitter dividing point in the presidential race.
* I spoke with Osterweil about this summer's riots, the common narratives surrounding looting, and why "nonviolence" can be a misleading term. -> I spoke with Osterweil
* Now, as protests and riots continue to grip cities, she argues that looting is a powerful tool -> Now, as protests and riots continue to grip cities, she stakes out a provocative position: that looting is a powerful tool
* The rioters who smash windows and take items from stores, she says, are engaging in a powerful tactic -> The rioters who smash windows and take items from stores, she claims, are engaging in a powerful tactic
* new Black and Brown nations -> new Black and brown nations
* the Civil Rights bill -> the civil rights bill
* You know, one of the causes of the L.A. riots was a Korean small-business owner murdering 15-year-old Latasha Harlins -> You know, one of the causes of the L.A. riots was a Korean small-business owner [killing] 15-year-old Latasha Harlins
Are we in an AI overhang?

In the Gwern quote, what does "Even the dates are more or less correct!" refer to? Which dates were predicted for what?

gwern1y

Look at, for example, Moravec. His extrapolation assumes that supercomputers will not be made available for AI work until AI work has already been proven successful (correct) and that AI will have to wait for hardware to become so powerful that even a grad student can afford it with $1k (also correct, see AlexNet), and, extrapolating from ~1998, estimates:

At the present rate, computers suitable for humanlike robots will appear in the 2020s.

Guess what year today is.
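A rough reconstruction of that extrapolation (assuming Moravec's ~10^14 ops/sec estimate for human-like performance, ~10^9 ops/sec for a $1,000 PC in 1998, and an 18-month doubling time; all three numbers are approximations):

```python
import math

# Back-of-the-envelope version of Moravec's extrapolation.
human_equiv_ops = 1e14  # Moravec's ~100 million MIPS estimate for a humanlike robot
pc_1998_ops = 1e9       # rough ops/sec for a $1k PC in 1998
doubling_years = 1.5    # assumed Moore's-law doubling time
doublings = math.log2(human_equiv_ops / pc_1998_ops)
print(f"~{doublings:.0f} doublings -> roughly {1998 + doublings * doubling_years:.0f}")
# ~17 doublings -> roughly 2023, i.e. "the 2020s"
```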

Are we in an AI overhang?

This was mentioned in the "Other Constraints" section of the original post:

Inference costs. The GPT-3 paper (§6.3), gives .4kWh/100 pages of output, which works out to 500 pages/dollar from eyeballing hardware cost as 5x electricity. Scaling up 1000x and you're at $2/page, which is cheap compared to humans but no longer quite as easy to experiment with
I'm skeptical of this being a binding constraint too. $2/page is still very cheap.
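For anyone checking the arithmetic behind those figures (assuming ~$0.10/kWh electricity; the 5x hardware multiplier is from the quote):

```python
# Back-of-the-envelope check of the quoted inference costs.
kwh_per_page = 0.4 / 100                        # GPT-3 paper §6.3: 0.4 kWh per 100 pages
electricity_usd_per_page = kwh_per_page * 0.10  # assumed $0.10/kWh
total_usd_per_page = electricity_usd_per_page * 5  # hardware ~5x electricity
print(f"GPT-3: ~{1 / total_usd_per_page:.0f} pages per dollar")       # ~500
print(f"1000x scale-up: ~${total_usd_per_page * 1000:.2f} per page")  # ~$2.00
```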
My experience with the "rationalist uncanny valley"

It might help if you try to think less in terms of making rationality and EA part of your identity and instead just look at them as some things you're interested in. You could pursue the things you're interested in and become a more capable person even if you never read anything else from the rationality community again. Maybe reading stuff from people who have achieved great things and had great ideas and who have not been influenced by the rationality community (which, by the way, describes most people who have achieved great things and had gre... (read more)

Life can be better than you think

I remember reading SquirrelInHell's posts earlier and I'm really sorry to hear that. Is there any more public information regarding the circumstances of the suicide? Couldn't find anything with google.

2ChristianKl3yAs far as I know there's no public information. There's some nonpublic information ; )
What is a reasonable outside view for the fate of social movements?

The podcast Rationally Speaking recently had an episode on the Mohists, a "strikingly modern group of Chinese philosophers active in 479–221 BCE." They discuss what caused the movement to die out and draw comparisons between it and the Effective Altruism movement.

One night, without sleep

Have you heard about the EA Hotel? Or considered moving to a country with a very low cost of living?
