Jalex Stark

I work as a quant in NYC to stay sane, have access to mentors, and fund some side projects. If you're undecided about a quant job, PM me and I am happy to talk. www.jalexstark.com

Comments

Are long-form dating profiles productive?

Writing my dating profile was a good use of my time even before I shared it with anybody. I had an insufficiently strong sense of what kind of relationship I want and why other people might want to have it with me. The exercise of "make a freeform document capturing all of that" was very helpful for focusing my mind towards figuring it out -- much more so than the exercise of "fill in dating app textboxes in a way that seems competitive for the swiping game". (This is just a special case of "writing an essay teaches you a lot" -- something I'd like to take advantage of more often.)

It took about one workday of writing effort to put mine together, and it has resulted in 2 high-quality dates (on the order of 10k micromarriages) in the past 5 months. This is competitive with the rest of my current tools for turning effort into dating prospects.
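(For readers unfamiliar with the unit: a micromarriage is, by the usual definition, a one-in-a-million chance of eventually leading to marriage. A minimal back-of-the-envelope sketch of the arithmetic above -- the per-date probability is my assumption, not a figure from the comment:)

```python
# Back-of-the-envelope for "2 dates ~ 10k micromarriages".
# One micromarriage = a one-in-a-million chance of leading to marriage.
MICROMARRIAGE = 1e-6

p_marriage_per_date = 0.005  # assumed: ~0.5% per high-quality date (my guess)
dates = 2

micromarriages = dates * p_marriage_per_date / MICROMARRIAGE
print(f"{micromarriages:,.0f} micromarriages")  # 10,000 at these numbers
```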

ETH is probably undervalued right now

Which trade are you advocating for? "Long crypto"? Reversion (aka "buying the dip")? Long ETH vs. short BTC?

All of these are plausible opinions, and it's not crazy to allocate some of your portfolio based on them -- but a trade consists of a price and a size. Do you think you should have 0.1% of your net worth in ETH, or 30%? Does that change if ETH goes to 100 or 3000 next week? Do your arguments apply equally well elsewhere (Solana?)
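(To make "price and size" concrete, here is a toy Kelly-style sizing sketch -- my illustration, not the commenter's method. Every number in it is a hypothetical input; the point is that a size only falls out once you commit to a probability and payoffs.)

```python
# Toy Kelly-criterion sizing: an opinion like "long ETH" becomes a trade
# only once you attach a probability and payoffs, which pin down a size.
# All numbers are hypothetical placeholders, not a recommendation.

def kelly_fraction(p_up: float, up_mult: float, down_mult: float) -> float:
    """Kelly-optimal bankroll fraction for a two-outcome bet.

    p_up:      assumed probability the trade works
    up_mult:   fractional gain if it does (1.0 = it doubles)
    down_mult: fractional loss if it doesn't (0.5 = it halves)
    """
    q = 1.0 - p_up
    return (p_up * up_mult - q * down_mult) / (up_mult * down_mult)

# Hypothetical view: 40% chance ETH doubles, 60% chance it halves.
f = kelly_fraction(p_up=0.40, up_mult=1.0, down_mult=0.5)
print(f"Kelly fraction of bankroll: {f:.0%}")  # 20% at these numbers
```

In practice people size well below full Kelly, but even the toy version shows why "0.1% or 30%?" has no answer until the probabilities are on the table.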


Beauty and the Beast

It's a piece of fiction about someone using a funky language model tool to write autobiographical fiction.

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.

If you launch the nukes, you also die, and we spend a lot of time worrying about that. Why?

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.

So you have a crisp concept called "unbounded utility maximizer", such that some AI systems are one, some aren't, and the ones that aren't are safe. Your plan is to teach everyone where that sharp conceptual boundary is, and then what? Convince them to walk back over the line and stay there?

Do you think your mission is easier or harder than nuclear disarmament?

Does the rationalist community have a membership funnel?

I think I get what you're saying now; let me try to rephrase. We want to grow the "think good and do good" community. We have a lot of, let's say, "recruitment material" that appeals to people's sense of do-gooding, so unaligned people who vaguely want to do good might trip over the material and get recruited. But we have less of that on the think-gooding side, so there's a larger pool of unaligned people who want to think good that we could recruit.

Does that seem right? 

Where does the Atlas Fellowship fall on your scale of "recruits do-gooders" versus "recruits think-gooders"?

Does the rationalist community have a membership funnel?

I think the most important claim you make here is that trying to fit into a cultural niche called "rationality" makes you a more effective researcher than trying to fit into a cultural niche called "EA". I think this is a plausible claim (e.g. I feel this way about doing a math or philosophy undergrad degree over an economics or computer science one), but I don't intuitively agree with it. Do you have any arguments in favor?

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.

Pushing which button? They're deploying systems and competing on how capable those systems are. How do they know the systems they're deploying are safe? How do they define "not-unbounded-utility-maximizers" (and why is it not a solution to the whole alignment problem)? What about your "alignment-pilled" world is different from today's world, wherein large institutions already prefer not to kill themselves?

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.

How does that distinguish between AGI and not-yet-AGI? How does that prevent an arms race?

[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.

Is there any concrete proposal that meets your specification of "don't kill yourself with AGI, please"?
