GeneSmith

Comments

We need a standard set of community advice for how to financially prepare for AGI

I had always assumed that any organization trying to destroy the world with an engineered pathogen would basically release whatever they made and then hope it did its work.

IDK, this topic gets into a lot of information-hazard territory, where I don't really want to speculate because I don't want to spread ideas for how to make the world a lot worse.

We need a standard set of community advice for how to financially prepare for AGI

For example, if there's a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI?

No, which is why I "invest" in making bad outcomes a tiny bit less likely with monthly donations to the EA Long-Term Future Fund, which funds AI safety research and other X-risk mitigation work.

We need a standard set of community advice for how to financially prepare for AGI

It does take substantially longer to get to Mars than to get to any isolated pocket on Earth: a Mars transfer takes roughly six to nine months, while incubation periods are typically measured in days or weeks. So unless the pandemic's incubation period is longer than the journey to Mars, it's likely that Martians would know that passengers aboard the ship were infected before it arrived.

We need a standard set of community advice for how to financially prepare for AGI

Wait, what? The sum of the net worth of those who consider themselves members of the rationalist community is MUCH greater due to crypto than it was before. What definition of "success" are you using which so devalues that outcome?

I'm mostly referring to the narrative from this post. There have been some successes, but those have mostly been due to a very small number of huge winners. And in the case of the biggest winner of all, Vitalik Buterin, he actually ended up joining the rationalist community AFTER he started Ethereum.

Do you really want default advice? I'd rather have correct advice, and I'd rather still have correct personal behavior, regardless of advice. "Correct", in this case, means "best possible experienced outcome", not "sounds good" or "best prediction at this point but still wrong".

I probably wasn't as clear as I could have been in the original post. What I mean by "default advice" is a set of actions people can take if they believe there is a decent chance AGI will be created in their lifetimes and want to prepare for it, but are not willing to spend the time developing a detailed personal plan.

For example, if you believe the efficient market hypothesis, you can act on that belief by buying low-cost index funds. I'm thinking it would be useful to have a similar easy option for people who buy that we will likely see AGI in our lifetimes.
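To make that concrete, here's a toy sketch of what such a default rule could look like: a baseline index allocation tilted toward an AI-exposed basket in proportion to your subjective probability of AGI. The function name, asset labels, and linear tilt are all made-up placeholders for illustration, not investment advice.

```python
# Toy "default allocation" rule: start from the EMH default (a total-market
# index fund) and shift a slice toward AI-exposed assets in proportion to
# your subjective P(AGI in your lifetime). All names and numbers here are
# illustrative assumptions, not recommendations.

def default_allocation(p_agi: float, max_tilt: float = 0.3) -> dict:
    """Return portfolio weights for a given subjective probability of AGI."""
    if not 0.0 <= p_agi <= 1.0:
        raise ValueError("p_agi must be a probability")
    tilt = p_agi * max_tilt                # fraction moved out of the index
    return {
        "total_market_index": 1.0 - tilt,  # the plain EMH option
        "ai_exposed_basket": tilt,         # hypothetical basket of AI-heavy firms
    }

print(default_allocation(p_agi=0.5))
# {'total_market_index': 0.85, 'ai_exposed_basket': 0.15}
```

The point is just that a single number (your subjective P(AGI)) could parameterize an otherwise boring default, the same way risk tolerance parameterizes a stock/bond split.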

Why would the stock go up, as opposed to the employees in control just absconding with (or being absorbed into) the AGI and the stock becoming irrelevant? Or someone else learning from the success and turning it into an actual financial boon. Or any of a billion other sequences that would make it a dumb idea to pick a stock based on number of papers published in a narrow topic that may or may not correlate with AGI creation.

True, and this is why I said I am not particularly satisfied with my current strategy. I still think in the scenario where AGI has been created or is close to being created, Google's stock price is likely to go up more than an index fund of all stocks on the market.
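For what it's worth, that intuition can be framed as a crude expected-value comparison. Every number below is an invented assumption purely for illustration, and it ignores the scenarios above where equity claims stop being enforceable at all:

```python
# Crude expected-return comparison under an "AGI arrives" scenario.
# All probabilities and multipliers are invented for illustration only.

p_agi = 0.2                          # subjective chance AGI arrives soon
index_base, index_agi = 1.07, 1.5    # index gross return without / with AGI
stock_base, stock_agi = 1.07, 5.0    # single AI-lab stock without / with AGI

ev_index = (1 - p_agi) * index_base + p_agi * index_agi
ev_stock = (1 - p_agi) * stock_base + p_agi * stock_agi

print(f"index EV: {ev_index:.2f}, single-stock EV: {ev_stock:.2f}")
# index EV: 1.16, single-stock EV: 1.86
```

The single stock only dominates if you trust both the upside multiplier and the assumption that shareholders actually capture the value, which is exactly what the parent comment questions.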

Open and Welcome Thread – June 2021

It seems unlikely to me that the things we do post-AGI would remain the same. If you had the lamp from Aladdin and the genie actually worked as described, would your life remain the same? Would you still spend your time baking cakes?

I know that I personally would try to enhance the capabilities of myself and those I care about (assuming they were amenable). To comprehend calculus as Newton did, or numbers as Ramanujan did, would, I think, be an experience far more interesting than baking cakes or taking part in my usual hobbies. And there are thousands of other ways in which I would use my allotment of AI power to change my own experience.

I suspect this would be true for many people, so that self-augmentation via AGI would fundamentally change the experience of being human.

What does such a world look like? I have a very hard time visualizing it. Would power tend to concentrate even more than it does now? How would AI reconcile competing human interests?

We need a standard set of community advice for how to financially prepare for AGI

I know from some interviews I've watched that Musk's main reason for investing in AI startups is to have inside info about their progress so he can monitor what's going on. Perhaps he's just not really paying that much attention? He always has like 15 balls in the air, so perhaps he just doesn't realize how bad Vicarious's safety work is.

Come to think of it, if you or anyone you know has contact with Musk, this might be worth mentioning to him. He clearly cares about AI going well and has been willing to invest resources in improving those odds in the past, via OpenAI and then Neuralink. So perhaps he just doesn't know that Vicarious is being reckless when it comes to safety.

We need a standard set of community advice for how to financially prepare for AGI

Which biotech in particular?

As far as genetic engineering goes, I was thinking about writing up a post on that myself, to the effect of "why you should [or should not] consider having your kids via IVF."

But I haven't done much research on transformative biohazards like engineered pandemics and am wary of writing such a post.

Open and Welcome Thread – June 2021

Anyone have reading recommendations for fiction or even just a summary description of what a positive future with AI looks like? I've been trying to decide what to work on for the rest of my career. I really want to work on genetics, but worry that, like every other field, it's basically going to become irrelevant since AI will do everything in the future.

TEAM: a dramatically improved form of therapy

Your not-completely-fictitious example is interesting.

I have these types of self-critical thoughts fairly often. Just reading your “validation” of the thought makes me feel oddly calm.

My prior for this sort of “novel therapy that works so much better than the others” is that it probably doesn’t. Scott Alexander wrote a blog post about this back in the day, in which he noted that the effect sizes of new therapies almost always decline over time as they are studied by outside researchers rather than by the inventors of the new form of therapy.

But perhaps I am wrong. Or perhaps the technique works much better for people who like that type of therapy. That’s certainly something we see evidence of in the therapy literature.

Thankfully I am not at a point in my life where I actively need therapy anymore, but if I do get back to that point I will give this a look.
