"This is strategically relevant because I'm imagining AGI strategies playing out in a world where everything is already going crazy, while other people are imagining AGI strategies playing out in a world that looks kind of like 2018 except that someone is about to get a decisive strategic advantage." -Christiano
This is a tangent, but I don't know when I would comment on this otherwise. I think one of the biggest potential effects of an accelerating timeline is that things get really memetically weird and unstable. This point was harder to make before covid, but imagine the incoherence of institutional responses getting much worse than that. In a world where memetic conflict ramps up, local ability to do sensemaking with your peers gets worse as a side effect: people randomly yelling at you that you need to be paying attention to X. The best thing I know how to do (which may be wholly inadequate) is to invest in high-trust connections and joint meaning-making. This seems especially likely to be undervalued in a community of high-decouplers. Spending time in peacetime practicing convergence on things that don't have high stakes, like working through each other's emotional processing backlogs, practices some of the same moves that become critical in wartime, when lots of people are losing touch with any sort of consensus reality because there are now polarized, competing consensus realities. To tie it back to the portfolio question, I do expect to see worse instability in peer groups when some people are making huge sums and others are getting wiped out by a high-variance economy.
This is another reason to take /u/trevor1's advice and limit your mass media diet today. If you think propaganda is going to keep slowly ramping up in terms of effectiveness, then you want to avoid boiling the frog by becoming slightly crazier each year. Ideally you should really try to find some peers who prefer not to mindkill themselves either.
Yeah, that also feels right to me. I have been thinking about setting up some fund that buys up a bunch of the equity held by safety researchers, so that the safety researchers don't have to also blow up their financial portfolio when they press the stop button or do some whistleblowing or whatever, and that does seem pretty good incentive-wise.
I'm interested in helping with making this happen.
You are implying that it is hard to get Samsung exposure. Why? On their website [1] they list several ISINs. Some of them I can buy through my usual broker. They aren't special.
[1] https://www.samsung.com/global/ir/stock-information/listing-Info/
I think 'go to grad school' may be treated too harshly here. In particular
(NoahK) Also, for most readers I imagine that career capital is their most important asset. A consequence of AGI is that discount rates should be high and you can't necessarily rely on having a long career. So people who are on the margin of e.g. attending grad school should definitely avoid it.
but then,
(Zvi) Career capital is one form of human capital or social capital. Broadly construed, such assets are indeed a large portion of most people's portfolios. I'd rather be 'rich' in the sense of having my reputation, connections, family and skills than 'rich' in the sense of having my investment portfolio, in every possible sense. This is one reason not to obsess too much about your exact portfolio configuration per se, and worry more about building the right human and social capital instead.
(which point gets broadly acknowledged in the conversation.)
I think 'grad school' is getting treated as 'place to get skills which will pay off financially later', where the above analysis makes sense (skills-derived-income-later should be discounted somewhat). This also makes sense in the context of this conversation, which is mostly about wealth stuff. But grad school is also
On these grounds it plays pretty nicely for the right sort of person and place. (Having joined Oxford ~1yr ago I can already speak positively about these three factors.)
Grad school presumably also has a somewhat funding-dependent analysis; it's probably bad to go into a bunch of debt to attend. In my case, I'm 'funded', but since I came from a very high tech salary, it's effectively quite an obscene pay cut (and one which I may yet regret on a personal level).
There's some evidence from 2013 suggesting that long-dated, out-of-the-money call options have strongly negative EV; a common explanation is that some buyers like gambling and drive up prices. See this article. I've also heard that over the last decade some hedge funds therefore adopted the strategy of writing OTM calls on stocks they hold to boost their returns, and that some of these hedge funds disappeared a couple of years ago.
Has anyone looked into 1) whether this has replicated more recently, and 2) how much worse (if at all) it makes some of the suggested strategies?
Also, to be clear, nothing in this post constitutes investment advice or legal advice.
&
(Also I know enough to say up front that nothing I say here is Investment Advice, or other advice of any kind!)
&
None of what I say is financial advice, including anything that sounds like financial advice.
I usually interpret this sort of statement as an invocation to the gods of law, something along the lines of "please don't smite me", and certainly not intended literally. Indeed, it seems incongruous to interpret it literally here: the whole point of the discussion, as I'm understanding it, is to provide potentially useful ideas about investing strategies. Am I supposed to pretend that it's just, like, an interesting thought experiment? Or is there some other interpretation of your disclaimer I'm not seeing?
I think you should view "investment advice" here as a term of art for the kind of thing that investment advisors do, that comes with some of the legal guarantees that investment advisors are bound to.
I agree that in a colloquial sense this post of course contains advice pertaining to making investments.
I do feel pretty confused about the legal situation here and what liability one incurs for talking about things that are kind of related to financial portfolios and making investments.
Very interesting conversation!
I'm surprised by the strong emphasis on shorting long-dated bonds. Surely there's a big risk of nominal interest rates coming apart from real interest rates, i.e. lots of money getting printed? I feel like it's going to be very hard to predict what the Fed will do in light of 50% real interest rates, and Fed interventions could plausibly hurt your profits a lot here.
(You might suggest shorting long-dated TIPS, but those markets have less volume and higher borrow fees.)
I meant something like the Fed intervening to buy lots of bonds (including long-dated ones), without particularly thinking of YCC, though perhaps that's the main regime under which they might do it?
Are there strong reasons to believe that the Fed wouldn't buy lots of (long-dated) bonds if interest rates increased a lot?
if AGI goes well, economics won't matter much. helping slow down AI progress is probably the best way to purchase shares of the LDT utility function handshake: in winning timelines, whoever did end up solving alignment will have done so thanks to having had the time to pay the alignment tax on their research.
if AGI goes well, economics won't matter much.
My best guess as to what you mean by "economics won't matter much" is that (absent catastrophe) AGI will usher in an age of abundance. But abundance can't be unlimited, and even if you're satisfied with limited abundance, that era won't last forever.
It's critical to enter the post-AGI era with either wealth or wealthy connections, because labor will no longer be available as an opportunity to bootstrap your personal net worth.
what i mean is that despite the fundamental scarcity of negentropy-until-heat-death, aligned superintelligent AI will be able to better allocate resources than any human-designed system. i expect that people will still be able to "play at money" if they want, but pre-singularity allocations of wealth/connections are unlikely to be relevant to what maximizes nice-things utility.
it's entirely useless to enter the post-AGI era with either wealth or wealthy connections. in fact, it's a waste not to have spent that wealth on increasing-the-probability-that-AGI-goes-well while money was still meaningful.
aligned superintelligent AI will be able to better allocate resources than any human-designed system.
Sure, but allocate to what end? Somebody gets to decide the goal, and you get more say if you have money than if you don't. Same as in all of history, really.
As a concrete example, if you want to do something with the GPT-4 API, it costs money. When someday there's an AGI API, it'll cost money too.
the GPT-4 API has not taken over the world. there is a singular-point-in-time at which some AI will take over everything with a particular utility function and, if AI goes well, create utopia.
Sure, but allocate to what end?
whatever utility function it's been launched with, which is not particularly representative of who currently has money. it's not somebody who decides resource-allocation-in-the-post-singularity-future, it's some-utility-function, and the utility function is picked by whoever built the thing, and they're unlikely to type a utility function saying "people should have control over the future proportional to their current allocation of wealth". they're a lot more likely to type something like "make a world that people would describe as good under CEV".
It's true that if the transition to the AGI era involves some sort of 1917-Russian-revolution-esque teardown of existing forms of social organization to impose a utopian ideology, pre-existing property isn't going to help much.
Unless you're all-in on such a scenario, though, it's still worth preparing for other scenarios too. And I don't think it makes sense to be all-in on a scenario that many people (including me) would consider to be a bad outcome.
the point i was trying to make is that if you expect someone to reliably implement LDT, then you can expect to be rewarded for helping them (actually helping them) solve alignment, because they'd be the kind of agent who, if alignment is solved, will retroactively allocate some of their utility function handshake to you.
LDT-ers reliably one-box, and LDT-ers reliably retroactively-reward people who help them, including in ways that they can't perceive before alignment is solved.
it's not about "doing something nice", it's about LDT agents who end up doing well retroactively repaying the agents who helped them get there, because being the kind of agent who reliably does that causes them to more often do well.
The point i was trying to make is that if you expect someone to reliably implement LDT, then you can expect to be rewarded for helping them, because they'd be the kind of agent who, if alignment is solved, will retroactively allocate some of their utility function handshake to you.
Yes, and the point I am making is that this is not what LDT is or how it works. LDT agents perform prudentbot, not fairbot. An AGI will only reward you with cooperation if you conditionally cooperate, on something you're unable to "condition" on because it would mean looking at the AGI's code and analyzing it beyond what anyone is capable of at present.
i have read that post before and i do not think that it applies here? can you please expand on your disagreement?
Tamsin Leake does not have the kind of info on who/what will control the lightcone that would allow them to cooperate in PDs.
you don't need to know this to probabilistically-help whoever will control the lightcone, right? if you take actions that help them-whoever-they-are, then you're getting some of that share from them-whoever-they-are. (i think?)
you don't need to know this to probabilistically-help whoever will control the lightcone, right? if you take actions that help them-whoever-they-are, then you're getting some of that share from them-whoever-they-are. (i think?)
My point is not that you can't affect the outcome of the future. That may also be impossible, but regardless, any intervention you make will be independent of whether or not the person you're rewarding gives you a share of the lightcone. You can't actually tell in advance whether or not that AI/person is going to give you that share, in the sense that would incentivize someone to give it to you after they've already seized control.
why wouldn't it be because they're maximizing their utility via acausal trade?
do you also think people who don't-intrinsically-value-reciprocity are doomed to never get picked up by rational agents in parfit's hitchhiker? or doomed to two-box in newcomb?
to take an example: i would expect that even if he didn't value reciprocity at all, yudkowsky would reliably cooperate as the hitchhiker in parfit's hitchhiker, or one-box in newcomb, or retroactively-give-utility-function-shares-to-people-who-helped-if-he-grabbed-the-lightcone. he seems like the-kind-of-person-who-tries-to-reliably-implement-LDT.
High growth rates mean there is a higher opportunity cost to lending money, since you could invest it elsewhere and get a higher return, reducing the supply of loans. They also mean more demand for loans, since if interest rates are low relative to growth, people will borrow to buy assets that appreciate faster than the interest rate.
Also, to be clear, nothing in this post constitutes investment advice or legal advice.
I often see this phrase in online posts related to investment, legal, medical advice. Why is it there? These posts obviously contain investment/legal/medical advice. Why are they claiming they don't?
I guess that the answer is related to some technical meaning of the word "advice", which is different from its normal language meaning. I guess there is some law that forbids you from giving "advice". I would like to know more details.
Edit: This question was answered in a previous comment.
I bought calls with approximately 30 delta, since that is a region with relatively low IVs and also where volga (positive convexity with respect to implied volatility) is maximized.
My intention is to rebalance the calls either when they have 3 months to expiry or when the cash delta drifts too far from the target cash delta (with "too far" being a high bar here).
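For readers who want to sanity-check where the greeks sit for their own parameters, here is a minimal Black-Scholes sketch. All inputs (spot, rate, expiry, a flat 30% IV, no dividends or skew) are made-up illustrative assumptions, so it won't reproduce the real IV surface that the "~30 delta maximizes volga" claim above is implicitly pricing off; it just shows how delta, vega, and volga vary across strikes.

```python
import math

def _pdf(x):
    # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _cdf(x):
    # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_greeks(spot, strike, years, rate, vol):
    """Black-Scholes delta, vega, and volga (vomma) for a European call, no dividends."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * years) / (vol * math.sqrt(years))
    d2 = d1 - vol * math.sqrt(years)
    delta = _cdf(d1)
    vega = spot * _pdf(d1) * math.sqrt(years)   # sensitivity of price to vol
    volga = vega * d1 * d2 / vol                # sensitivity of vega to vol
    return delta, vega, volga

# Illustrative (made-up) parameters: spot 100, 2 years to expiry, 4% rate, flat 30% IV.
spot, years, rate, vol = 100.0, 2.0, 0.04, 0.30
for strike in range(100, 261, 20):
    delta, vega, volga = call_greeks(spot, strike, years, rate, vol)
    print(f"strike={strike:3d}  delta={delta:4.2f}  vega={vega:6.2f}  volga={volga:7.2f}")
```

On a real surface with skew, the strike that maximizes volga per dollar of premium can sit at a quite different delta than in this flat-vol toy, so treat the output as a way to explore the shape of the greeks rather than a confirmation of any particular strike choice.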
I recommend SGOV for getting safe interest. It effectively just invests in short-term treasuries for you; very simple and straightforward, and easier than buying bonds yourself. I do not think 100 percent or more equities is a good idea right now, given that we might get more rate increases. Obviously do not buy long-term bonds. I'm not a prophet, just saying how I am handling things.
I absolutely do not recommend shorting long-dated bonds. However, if I did want to do so as a retail investor, I would maintain a rolling short in CME treasury futures. The longest future is UB. You'd need to roll your short once every 3 months, and you'd also want to adjust the size each time, given that the changing CTD means that the same number of contracts doesn't necessarily represent the same amount of risk each expiry.
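To make the sizing point concrete, here is a rough sketch (not from the comment above; the CTD DV01s and conversion factors are made-up placeholders) of keeping dollar duration roughly constant across quarterly rolls by approximating each contract's DV01 via its cheapest-to-deliver bond:

```python
# Sketch of re-sizing a rolling UB short so its dollar duration stays roughly
# constant across rolls. All numbers are made-up placeholders, not market data.

def futures_dv01(ctd_dv01: float, conversion_factor: float) -> float:
    """Approximate DV01 of one futures contract via its cheapest-to-deliver bond."""
    return ctd_dv01 / conversion_factor

def contracts_for_target(target_dv01: float, ctd_dv01: float, conversion_factor: float) -> int:
    """Number of contracts to short so the position's DV01 matches the target."""
    return round(target_dv01 / futures_dv01(ctd_dv01, conversion_factor))

target = 5_000.0  # keep roughly $5k of DV01 short at the long end
old_contracts = contracts_for_target(target, ctd_dv01=260.0, conversion_factor=0.78)
new_contracts = contracts_for_target(target, ctd_dv01=310.0, conversion_factor=0.82)
print(old_contracts, new_contracts)  # counts differ because risk per contract changed
```

The point is just that the contract count falls out of a target DV01 each roll rather than being held fixed.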
Current cryptocurrencies are useful because they might be the only vaguely legal way to make the financial agreements that the AI wants, and AIs might have an easier time extending and using them than humans. It's not about it being a good information platform, it's about it avoiding the use of institutional intermediaries that the government pretends are illegal.
Assuming AIs don't soon come up with even better crypto/decentralization solutions: I hadn't considered that smart contracts being too complicated (and thus insecure) might no longer hold true once AI assistants and cyberprotection scale up. Especially ZK, a natural language for AIs.
Why's that? They seem to be going for AGI, can afford to invest billions if Zuckerberg chooses, their effort is led by one of the top AI researchers and they have produced some systems that seem impressive (at least to me). If you wanted to cover your bases, wouldn't it make sense to include them? Though 3-5% may be a bit much (but I also think it's a bit much for the listed companies besides MS and Google). Or can a strong argument be made for why, if AGI were attained in the near term, they wouldn't be the ones to profit from it?
So this all makes sense and I appreciate you all writing it! Just a couple notes:
(1) I think it makes sense to put a sum of money into hedging against disaster e.g. with either short term treasuries, commodities, or gold. Futures in which AGI is delayed by a big war or similar disaster are futures where your tech investments will perform poorly (and depending on your p(doom) + views on anthropics, they are disproportionately futures you can expect to experience as a living human).
(2) I would caution against either shorting or investing in cryptocurrency as a long-term AI play. As patio11 has discussed in his Bits About Money (most recently in "A review of Number Go Up, on crypto shenanigans" (bitsaboutmoney.com)), cryptocurrency is absolutely rife with market manipulation and other skullduggery; shorting it can therefore easily result in losing your shirt even in a situation where cryptocurrencies otherwise ought to be cratering.
Go and make a brokerage account with Schwab or Fidelity (whichever seems less annoying to set up)
Personally I use Wealthfront because the UI is gorgeous. I say this after having used Vanguard. I haven't used the others though. Referral link.