All of sapphire's Comments + Replies

I didn't have time or energy! This shit was going parabolic! Whatever happened to Aumann's agreement theorem!

I will note this is both a cool as fuck project and the token is like 2x+ from when I posted. So idk why I'm getting downvoted. I guess LessWrong just isn't into finance for personal gain or to look at dope stuff.

7 · Jonas Vollmer · 5d
I think it's more that you don't argue why you believe what you believe and instead just assert that it's cool, and the whole thing looks a bit sloppy (spelling mistakes, all-caps, etc.).

The LessWrong/EA communities' 'inside game' strategy for AI safety is genuinely embarrassing to me as a longtime community member. Quokkas gonna quokka, but this is ridiculous.

3 · greenrd · 10d
As one of the people who advocated this strategy early on, I want to point out what I pointed out then: working inside AI capabilities labs probably grants you access to insights about what is going on, how things work, and what is (going to be) important that you wouldn't necessarily get from outside. That said, it's not clear to me that the community's entire strategy is 'inside game'. I know at least one community member who is hostile to OpenAI; I guess Pause AI protesting outside OpenAI counts as those of us in Pause AI being hostile to it; and I guess you yourself also count as a community member who is hostile to OpenAI? Is MIRI working with OpenAI or other corporate labs?
3 · quetzal_rainbow · 11d
I downvoted this comment because it's genuinely incomprehensible.
4 · habryka · 13d
Do you have a rough estimate of how much it went up in the last 3 months?

Serious advice: Your close friends and partners should actively admire you and how you live your life. Don't settle for less. You can keep less-close relationships across that divide. But your closest circle should see you as a hero.

2 · Dagon · 1mo
I don't exactly disagree, but I'd rather not frame it in terms of expectations for others' perception of you. YOU should actively admire the way you live your life and make decisions. That's somewhat different from admiring yourself, as you should also have the humility to recognize that a lot of contingent context went into any successes you have.

Emmett Shear reminds us that if you are playing Street Fighter [2 Turbo, presumably] then the solution to the so-called ‘cheese’ moves that seem overpowered is not to ban them, it is to use them until someone shows you or figures out the counter, then everything is fine.

This works exactly because the game is well-designed, with good counters to every such move. If that was not true, this would fail. It also relies on having enough data to find the counter-moves, and enough practice to learn them, to get to the new equilibrium. It does genuinely ruin a diff

... (read more)

I don't really get why this wouldn't get funded. 

2 · momom2 · 1mo
I'm not sure either, but here's my current model. Even though it looks pretty likely that AISC is an improvement on no-AISC, there are very few potential funders:
1) EA-adjacent charitable organizations.
2) People from AIS/rat communities.
Now, how to explain their decisions? For the former, my guess would be a mix of not having heard of / received an application from AISC and preferring to optimize heavily towards top-rated charities. AISC's work is hard to quantify, as you can tell from the most upvoted comments, and that's a problem when you're looking for projects to invest in, because you need to avoid being criticized for that kind of choice if it turns out AISC is crackpottery / a waste of funds. The Copenhagen interpretation of ethics applies hard there for an opponent with a grudge against the organization. For the latter, it depends a lot on individual people, but here are the possibilities that come to mind:
- Not wanting to donate anything but feeling obligated to, which leads to large donations to few projects once you feel strongly enough to break the status quo bias.
- Being especially mindful of one's finances and donating only to preferred charities, because of a personal attachment (again, not likely to pick AISC a priori) or because they're provably effective.
To answer 2): can you say why you don't donate to AISC? Your motivations are probably very similar to those of other potential donors here.

A culture that denigrates being altruistic/nice in small ways makes it really hard to know who to trust. I understand there is an aversion to 'virtue signaling', but accepting rat/EA culture really feels like it disables my intuitive (and honestly decently reliable) sense for who is trustworthy. Imo there is a good reason people check things like 'how do they treat waiters' when deciding who is a trustworthy partner or friend. That stuff cannot replace explicit logical altruism. But it seems like a mistake to jettison being kind in smaller ways throughout your life.

4 · Elizabeth · 1mo
I think this is true, but it applies at least as strongly to a culture that disproportionately rewards small altruism. If tipping an extra 5% makes people much more willing to invest in your start-up, people will tip better.

Progress is mostly illegal, or at least extremely hobbled by regulations. When people are free to innovate there is usually quite rapid progress. There are a few exceptions like fundamental physics, but existing physics explains the things we actually interact with fairly well, except at extremely high energies. I have all sorts of very viable and productive business ideas that would help people. But they are either illegal or regulations make them too expensive to start up. This is the norm. If you want to improve things you need to look for areas where innovation is still doable without incurring gigantic costs.

A browser extension that colors reddit usernames red if the user is a likely bot.
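A minimal TypeScript content-script sketch of what this could look like. Everything here is illustrative: the comment doesn't say how bots would be detected, so isLikelyBot is a placeholder heuristic, and the a.author selector assumes old-reddit comment markup.

```typescript
// Color likely-bot usernames red on reddit comment pages.

function isLikelyBot(username: string): boolean {
  // Placeholder heuristic only: names ending in "bot" or matching the
  // default Adjective_Noun1234 pattern. A real extension would want a
  // proper classifier or an external bot-detection API.
  return /bot$/i.test(username) || /^[A-Z][a-z]+[-_][A-Z][a-z]+\d{2,4}$/.test(username);
}

function colorBotUsernames(): void {
  // Old-reddit author links carry the "author" class (assumption).
  document.querySelectorAll<HTMLAnchorElement>("a.author").forEach((link) => {
    if (isLikelyBot(link.textContent ?? "")) {
      link.style.color = "red";
    }
  });
}

colorBotUsernames();
// Re-run as new comments load dynamically.
new MutationObserver(colorBotUsernames).observe(document.body, {
  childList: true,
  subtree: true,
});
```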

I must have a different relationship to social media than most rats. Whenever I open Twitter I get a constant dopamine rush from all the stuff I find cute or funny. Here are some examples to demonstrate that maybe I am just easy to please:

[example images not preserved in this export]
I don't even live in SF, but I am visiting right now and having so much fun. Filled with joy at the idea of tetra having a lovely trip.

Note: I have been informed by my hotel mate that I cried writing this comment in the bathroom. I had to reassure her the audible crying had nothing to do with her.

I thought about this for a while. My real opinion is that 'being afraid of a scar on your chest' just isn't a great way to live. And so I am not exactly against the traditional masculine gender role discouraging people from being afraid of that stuff. Arguably the 'traditional masc gender role' only discourages expressing such fears, not having them. I think there is a place for expressing negative... (read more)

1 · mesaoptimizer · 1mo
I now see that your claims may mainly be exhortations to yourself, or a more direct reflection of how you relate to yourself. I feel like I understand you better now, and I'm glad about it.

Estrogen seems to make a lot of girls more interested in men over time. Possibly progesterone is involved, idk. So you may be meeting girls who are still figuring out their new sexuality with a new hormone balance. I don't think it's well understood. But certainly a surprising number of trans girls predict they will be lesbian but end up with some sexual interest in cis men.

1 · Michael Roe · 1mo
So, yes, I know at least one person who reports becoming attracted to men after starting HRT. But I was more referring to the social dark matter effect, where the true amount of something is greater than is reported.

I will say I tried extremely hard to be a good EA. It basically drove me insane. The community is extremely unsupportive. I basically decided I'm retired. I put in enough years in the misery mines and donated more than enough money (most of what I did ended in failure, but I cashed out 7 figs of crypto and donated a lot). I will be a nice, friendly, generous person by the extremely low standards of my actually existing society. But otherwise I will just do what I want. I cried enough tears and sacrificed enough of the windfalls I was legally entitled to keep.

1 · Karthik Tadepalli · 1mo
For what it's worth, your "small and vulnerable" post is what convinced me that people can really have an unbelievable amount of kindness and compassion in them, a belief that made me much more receptive to EA. Stay out of the misery mines pls!
7 · Chris_Leong · 1mo
Thank you for your service!

People metaphorically run parts of the code themselves all the time! It's quite common for people to work through proofs of major theorems themselves. As a grad student it is expected you will make an effort to understand the derivations of as many of the foundational results in your sub-field as you can. A large part of the rationale is pedagogical, but it is also good practice. It is definitely considered moderately distasteful to cite results you don't understand, and good mathematicians do try to minimize it. It's rare that an important theorem has a proof t... (read more)

4 · Garrett Baker · 1mo
Yeah, that seems like a feature of math that violates assumption 2 of argument 1. If people are actually constantly checking each other's work, and never citing anything they don't understand, that leaves me much more optimistic. This seems like a rarity. I wonder how this culture developed.

Long, complicated proofs almost always have mistakes. So in that sense you are right. But it's very rare for the mistakes to turn out to be important or hard to fix.

In my opinion the only really logical defense of Academic Mathematics as an epistemic process is that it does seem to generate reliable knowledge. You can read through this thread: https://mathoverflow.net/questions/35468/widely-accepted-mathematical-results-that-were-later-shown-to-be-wrong. There just don't seem to be very many recent results that were widely accepted but proven wrong lat... (read more)

2 · Garrett Baker · 1mo
If you had a lot of very smart coders working on a centuries-old operating system, never once running it, where every function takes 1 hour to 1 day to understand, and each coder is put under a lot of pressure to write useful functions, not so much to show that others' functions are flawed, and you pointed out that we don't see many important functions being shown to be wrong, I wouldn't even expect the code to compile, never mind run even after all the syntax errors are fixed! The lack of important results being shown to be wrong is evidence, and even more interesting evidence is (I've heard) that when important results are shown to be wrong, there's often a simple fix. I'm still skeptical though, because it just seems like such an impossible task!

I used to feel CONSTANTLY gaslit reading the EA forum. For some reason this has substantially reduced in recent months. It feels like people now actually lay out their reasoning. This is an example of a post that feels 'honest' to me and would have been really unusual a few years ago: https://forum.effectivealtruism.org/posts/qCF5kETxnk3HkfiEf/cause-generality-is-hard-if-some-causes-have-higher-roi.

For example: For years people would comment that EA seemed increasingly 'about' AI Risk. The response was almost always to point out most EA money goes to nea... (read more)

I don't have much trouble being empathetic or loving toward Biden/Trump/Putin either. They are all people with many good and bad qualities. I can imagine myself trying to help. For example, if they were ever subjected to solitary confinement I think I would at least call my Senator and ask that they receive genuinely humane treatment. It is hard to imagine more, idk, 'personal' situations since I'm a normal person and they are (current or former) world leaders.

I genuinely feel terrible for Adolf Hitler.

Adolf was close with his mother, but she died when he was eighteen. His father was horribly abusive to almost everyone, and was certainly not kind to Adolf. Adolf wanted to be an artist. He tried and failed twice to enter the Academy of Fine Arts. He was deemed physically unfit for the army and had to ask for special permission to serve. He was injured while serving and later gassed! Despite his injuries and seeming lack of aptitude as a soldier he was at the front lines a huge ... (read more)

That poem was not written by Hitler.

According to this website and other reputable-seeming sources, the German poet Georg Runsky published that poem, "Habe Geduld", around 1906.

On 14 May 1938 a copy of this poem was printed in the Austrian weekly Agrarische Post, under the title 'Denke es'. It was then falsely attributed to Adolf Hitler.

It appeared in English translation for the first time in John Toland's Hitler biography (1976). Toland made the mistake of identifying it as a genuine Hitler poem, supposedly written in 1923.

0 · Feel_Love · 1mo
I hope you feel better with time. I think it's important to note that universal love, including compassion for those suffering, is always a pleasant feeling. It doesn't hurt the way pity or lamenting might; there's no grief in it, just well-wishing. This is an important point. While unconditional love has no boundaries, including time, it can be a major complication to start the effort with past beings or hypothetical future ones as the object. It's usually easier to start with one (or many) of the countless beings who are experiencing life right now. For the exercise of this post, a better case study than Hitler might be Putin or Trump (or Biden, etc.). This way, we don't have to additionally posit time travel, simulations, alternate universes, or what death entails. I love this sentiment and the personal details you shared. Learning about Hitler's good qualities was great too. Thank you!

The best version of this I know is Fractal in NYC. The main idea is everyone lives in the same apartment building in Brooklyn. There is also a common area in unit 1G. Works great. Not very family-focused though. But the founders of Fractal are looking into a sister project with more family focus.

Good book value. It might trade under book, but it's presumably not going to zero when it has decent book value.

1 · MichaelStJules · 2mo
Unless I'm misreading, it looks like there's a bunch of volume + open interest in put options with strike prices of around $5, but little volume + open interest in options with lower strike prices (some at $2.50, but much less). $5.50 for January 5th, $5 for January 19th, $5 for February 16th. Much more volume + open interest for put options in general for February 16th. So if we take those seriously and I'm not misunderstanding, the market expects a chance it'll drop below $5 per share, so a drop of at least ~70%. There's more volume + open interest in put options with strike prices of $7.50, and even more at $10, for February 16th.
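As a back-of-envelope check on the "~70%" figure (a sketch; the spot price is my assumption, reverse-engineered from the quoted drop, not stated in the thread):

```typescript
// If the stock trades around $16.7, a fall to the $5 strike is:
const spot = 16.7;   // assumed spot price, not given in the thread
const strike = 5;
const dropToStrike = 1 - strike / spot;
console.log(`${(dropToStrike * 100).toFixed(0)}% drop to reach the $5 strike`); // ≈ 70%
```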

As you know I don't find the EMH consistently true. The argument for why it's more than ~40% to go through is linked.

1 · MichaelStJules · 2mo
Why is the downside only -60%?

Yes. You can read the sources to see why it's more than 40%.

People disagreeing with you, on public sites and especially on their own blogs, is not abuse! 

3 · Cornelius Dybdahl · 2mo
It is the bad faith engagement which I deem abusive, especially given the context, not the disagreement.

I have no idea, sorry.

I don't think we should help him convince other people of a position that seems to have driven him kinda insane.

It is also kind of funny to me that the post references clarity in the title, but I honestly don't even know what Zack thinks about when people should transition. To be clear, I think we should be supportive of people who transition. And people should transition iff they think it will make them happier. But whatever the best practical policies are, I seriously doubt Zack's philosophical point of view is going to be prudent to promote or adopt.

2 · tailcalled · 2mo
Not sure what "a position" is referring to. Do you mean his beliefs about categorization? His distrust of rationalists? I think lots of people agree with both of these without obsessively writing blog posts and losing sleep, so I don't think you can attribute his problems solely to this.
9 · Said Achmiz · 2mo
Zack actually has a post which addresses this sort of question quite directly: http://unremediatedgender.space/2021/Sep/i-dont-do-policy/

"This means rationalists should be in favor of life extension, getting rich as individuals, and getting personal access to the most powerful artificial general intelligence tools that can be controlled. "

Uhhhh, yes, they should do this instead of becoming obsessed with this type of stuff. Though 'can be controlled' is certainly load-bearing.

2 · Cornelius Dybdahl · 2mo
The insanity is more reasonably attributed to being met with constant abuse (which your comment is ostensibly an example of) than to his positions on epistemology or the ontology of gender. Also, Zack has already explained that he has something to protect, which is existentially threatened by his detractors. The implication of your sentiment seems to be that he should simply give up on what is precious to him and pick the winning side. This is not the standard you would be applying if you were engaging in good faith.

Here you're appealing to winning on an individual level, which creates coordination problems. If Zack is doing something wrong because he is losing at an individual level, then sufficiently powerful coalitions get to control what is right or wrong by controlling the individual incentives, which seems like A Problem.

If we think Zack has a point on the object level, but some force is preventing him from winning, then it seems logical for rationalists to coordinate to help him win. If we think Zack is wrong on the object level, then it seems like it would be more appropriate to explain to him his mistake on the object level, rather than to appeal to the political challenges he faces.

1 · Gerald Monroe · 2mo
Following this logically, to win the most you make the best bets, and you need more resources (more time to live, more money) so that you can make more total bets and thus win more. This means rationalists should be in favor of life extension, getting rich as individuals, and getting personal access to the most powerful artificial general intelligence tools that can be controlled. (This is why AI pause advocacy, at least at the GPT-4 capability level, seems 'weird' for a 'rational' individual to advocate for. A much stronger model can likely be controlled, and if you think it can't, how do you know this?)

If a friend or partner wanted to have a child with me I'd potentially be down. Though we would need to have extremely aligned views on non-coercive parenting. Also I'm a sleepy boy so we gotta be aligned on low-effort parenting too.

You can buy GDR common shares via LSE on Interactive Brokers.

Farmed animals are currently inside a non-extinction X-risk. 

My current allocation to AI is split something like this:

 

Ticker       % of portfolio
AMAT         4.25
AMD          3.55
ANET         4.30
ASML         8.07
CDNS         4.04
GFS          1.28
GM           1.84
GOOG         7.90
INTC         6.17
KLAC         2.33
LLY          2.88
LRCX         4.62
MRVL         1.87
MSFT         14.65
MSFT Calls   2.78
MU           4.38
ONTO         0.24
RMBS         1.00
SMSN         7.88
SNPS         2.77
TSM          10.45
TXN          2.74
4 · sapphire · 13d
Obviously this is up a fuckton
2 · Jonas Vollmer · 4mo
How did you get SMSN exposure?
5 · Jonas Vollmer · 4mo
This looked really reasonable until I saw that there was no NVDA in there; why's that? (You might say high PE, but note that Forward PE is much lower.)

I recommend SGOV for getting safe interest. It effectively just invests in short-term Treasuries for you. Very simple and straightforward. Easier than buying bonds yourself. I do not think 100 percent or more equities is a good idea right now given that we might get more rate increases. Obviously do not buy long-term bonds. I'm not a prophet, just saying how I am handling things.

I have supported myself for almost a decade now via speculation / gambling / arbitrage. I almost never find the Kelly criterion all that useful in my own life. If a bet is really juicy go as hard as you can while finding the downside tolerable. If a bet isn't QUITE JUICY I usually pass.
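For reference, a minimal sketch of the criterion being passed over here, using the standard Kelly formula for a binary bet (the example probabilities are mine, chosen to illustrate 'really juicy' vs. not quite):

```typescript
// Kelly fraction for a binary bet: p = win probability, b = net odds
// (profit per unit staked on a win). Standard formula: f* = (b*p - (1 - p)) / b.
function kellyFraction(p: number, b: number): number {
  return (b * p - (1 - p)) / b;
}

// A juicy bet: 60% to win at even odds (b = 1) -> stake 20% of bankroll.
console.log(kellyFraction(0.6, 1).toFixed(2)); // "0.20"

// A marginal bet: 52% at even odds barely clears zero -> the kind to pass on.
console.log(kellyFraction(0.52, 1).toFixed(2)); // "0.04"
```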

2 · Garrett Baker · 4mo
Yeah, I'd expect that for that strategy you would not want to use the Kelly criterion, and it seems more useful when you're relatively uncertain about the quality of your bet.
2 · the gears to ascension · 4mo
that seems like saying "alignment will not be solved" to me.

Where do you live in Ohio? In Columbus poly isn't very taboo, ime living in cbus. But Columbus is the biggest city.

Multi, aren't you a fellow leftist? Why did you summon these demons.

This is a specific claim about what specific people should do.

-2 · trevor · 5mo
We're basically doomed to continue talking past each other here. You don't seem to be willing to give tons of detail about how, exactly, the Community Health And Special Projects team is too corrupt to function. I'm not willing to give tons of detail about external threats that are vastly more significant than any internal drama within EA, which means I can't explain the details demonstrating why external threats to EA actually dominate the calculus of how important the Community Health And Special Projects team is, or whether it should be disbanded.
5 · Rebecca · 5mo
I would not consider CEA to be part of the rationality community.

Every retrospective I know of has shown them to do a terrible job. Note the failures are not even obviously ideological. They have protected the normal sort of abuser. But they also protected Kathy Forth, who was a serial false accuser (yes, they banned her from some events, but she was still active in EA spaces until her suicide).

Would you expect to see retrospectives of cases where they did a good job? If an investigation concludes that "X made these accusations about Y but we determined them to be meritless", then there are good reasons for neither CEA nor X to bring further attention to those accusations by including them in a public retrospective. Or in cases where accusations are determined to have merit, it may still be that the victims don't want the case to be discussed in public any more than strictly necessary. Or there may be concerns of a libel suit from the wrongdoer, limiting what can be said openly.

I am extremely, extremely against disposing of important-sounding EA institutions based off of popular will, as this is a vulnerability that is extremely exploitable by outsiders and we should not create precedent/incentives to exploit that vulnerability.

If I'm wrong about this specific case, it's because of idiosyncratic details and because I didn't do as much research on this particular org relative to other people here. If I was wrong in this specific case, it would be a very weak update against my stance that EA orgs should be robust against sudden pub... (read more)

For at least five years I have been telling victims* that there is no obvious advice on what they should do, but that they should absolutely avoid any formal community process. In particular, totally avoid the community health team.

I assume the community health team is mostly or entirely staffed by good, well-intentioned people. But I have personally treated it as way too corrupt in function. Whenever someone from the community health team messages me I just ignore them. Once you think an institution is corrupt, the SAFEST response is to minimize contact. By ... (read more)

2 · Algon · 5mo
What community health team are you talking about?

Weight maintenance system I use and recommend (sketched as code below):

1 - Weigh yourself daily, right in the morning.

2 - Have a narrow target band, e.g. 140-142 inclusive.

3 - If you are below the band, eat something decadent. Maybe cake or ice cream!

4 - If above, eat super low-cal but have fun! A fresh-fruit-only day is dope!!!
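A minimal sketch of the decision rule, assuming the 140-142 example band from the list above; the advice strings just paraphrase rules 3 and 4.

```typescript
type Advice = "eat something decadent" | "low-cal but fun day" | "business as usual";

// Daily decision rule: weigh in each morning, compare to the target band.
function dailyAdvice(weight: number, low = 140, high = 142): Advice {
  if (weight < low) return "eat something decadent"; // rule 3: below the band
  if (weight > high) return "low-cal but fun day";   // rule 4: above the band
  return "business as usual";                        // inside the band
}

console.log(dailyAdvice(139)); // "eat something decadent"
console.log(dailyAdvice(143)); // "low-cal but fun day"
```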

2 · [comment deleted] · 5mo
2 · mako yass · 5mo
Isn't weight gained/maintained from eating sugary things all downside in terms of health? I currently don't really have any impulse to eat stuff like that. I get my fats from grass-fed beef or mackerel, or olive oil. I have no noticeable body fat. But there are some mild dysfunctions indicating that I could have less than a body is supposed to (reduced energy level when hungry, low gains, an annoying eustachian tube thing).

I would wire you guys 300-400K today if I wasn't still worried about the theory that 'AI Safety is actually a front for funding advancement of AI capabilities'. It is a quixotic task to figure out how true that theory is or what actually happened in the past, never mind why. But the theory seems at least kind of true to me and so I will not be donating.

It's unlikely to be worth your time to try to convince me to donate. But maybe other potential donors would appreciate a reassurance it's not actively net-negative to donate. For example several people ... (read more)

I'm not sure which of the people "have ties to dangerous organizations such as Anthropic" in the post (besides Shauna Kravec & Nova DasSarma, who work at Anthropic), but of the current fund managers, I suspect that I have the most direct ties to Anthropic and OAI through my work at ARC Evals. I also have done a plurality of grant evaluations in AI Safety in the last month. So I think I should respond to this comment with my thoughts.

I personally empathize significantly with the concerns raised by Linch and Oli. In fact, when I was debating joining Eval... (read more)

FWIW, I am also very worried about this and it feels pretty plausible to me. I don't have any great reassurances, besides me thinking about this a lot and trying somewhat hard to counteract it in my own grant evaluations, but I only do a small minority of grant evaluations on the LTFF these days. 

I do want to clarify that I think it's unlikely that AI Safety is a front for advancing AI capabilities. I think the framing that's more plausibly true is that AI Safety is a memespace that has undergone regulatory capture by capability companies and people i... (read more)

My own professional opinion, not speaking for any other grantmakers or giving an institutional view for LTFF etc:

Yeah I sure can't convince you that donating to us is definitely net positive, because such a claim wouldn't be true.

So basically I don't think it's possible to do robustly positive actions in longtermism with high (>70%? >60%?) probability of being net positive for the long-term future[1], and this number is even lower for people who don't place the majority of their credence on near- to medium-term extinction risk timelines.

I don't think... (read more)

7 · NicholasKross · 6mo
Have you looked at Orthogonal? They're pretty damn culturally inoculated against doing-capabilities-(even-by-accident), and they're extremely funding constrained.

FOUR ORIENTATIONS TOWARD LOVE

There is an emotion. It is something like 'the joy of connection'. It's the emotion that can make just sitting near a friend fun. Or hearing about their day. People definitely vary in how much they feel this emotion by default.

Let's call this emotionC.


Honestly, emotionC + 'sexual interest' explains most of what almost everyone means by romance. But this has, for various reasons both logical and historical, gotten quite confused.

There are really FOUR main ways people orient toward this situation:

1) TRAD solution: More or le... (read more)

2 · Vanessa Kosoy · 6mo
I'm confused about your definition of the "Lib" solution. AFAIU your taxonomy is:
* Trad: Emotion C is allowed with anyone of the same sex and (optionally) your monogamous heterosexual lover; sex is only allowed with your monogamous heterosexual lover.
* "Lib": Emotion C is allowed with ???; sex is only allowed with your monogamous lover.
* "Lefty": Both emotion C (beyond some threshold) and sex are only allowed with your monogamous lover.
* "Communism": Emotion C and sex are allowed with anyone you want.

Personally, I'm a fan of hierarchical poly. You can have unlimited emotion C and sex with multiple people, but when resources get scarce, your primary gets priority. Like you said, people like stability, hence once you merge your utility functions with your primary, you can do things like "I won't leave Alice for Bob, even if Bob seems a locally better option, because a priori both Alice and I prefer the world where I stay with Alice no matter what, to a world where with 50% probability I meet Bob and leave Alice for him and with 50% she meets Carol and leaves me for her."

Also, it's true that hierarchical poly requires you to navigate difficult questions like "how much time is it okay to spend with my secondary, if it comes at the price of spending time with my primary?". But I don't think this is fundamentally different from what happens in monogamy. In a monogamous relationship you also have to contend with questions like "how much time is it okay to spend with my platonic friends, or on hobbies that I don't share with my lover, or even on work, if it comes at the price of spending time with my lover?". I don't think you can ever have clear-cut deontological rules for this kind of thing; you have to do it the hard way and actually search for the mutually-optimal consequentialist solution.

[Disclaimer: I've been poly for only approximately 5 years]

My honest opinion is that Ziz got several friends of mine killed. So I don't exactly have a high opinion of her. But I have never heard of Ziz referring to themselves as LaSota. It's honestly toxic not to use people's preferred names. It's especially toxic if they are trans, but the issue isn't restricted to trans people. So I'd strongly prefer people refer to Ziz as Ziz.

I think when it comes to people who get people killed, it's justified to reveal all the names they go by in the interest of public safety, even if they don't like it. 

8 · ChristianKl · 8mo
What exactly do you mean by the word 'toxic'? 

I think this position has some merit, though I disagree. I think Ziz is a name that is hard to Google and get context on, and also feels like it's chosen with intimidation in mind. "LaSota" is me trying to actively be neutral and not choose a name that they have actively disendorsed, but while also making it a more unique identifier, not misgendering them (like their full legal name would), and not contributing to more bad dynamics by having a "cool name for the community villain", which I really don't think has good consequences.

-3 · drethelin · 8mo
Also for practical purposes it's much more clear who is being referred to in the local context, especially since there's tons of writing from/about Ziz. Plus it's just a much cooler name for a community villain.

FWIW this is around the amount of progress I was expecting in 2016. For better or worse I updated very hard toward very short timelines once AlphaGo was released in 2015. Amusingly, at the time I figured we had 10-20 years until strong AGI, which gives an average of 2030. I prefer to say strong AGI since as far as I'm concerned publicly available AGI was released in late 2022.

Arguably EA/Rationality needed much simpler and less nuanced messaging on how to deal with AI capabilities companies. We really should have gone with 'absolutely do not help or work for companies increasing AI capabilities. Only work directly on safety.' Nuance is cool and all, but the nuanced messaging arguably just ended up enabling Anthropic and OpenAI.
