Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
It does indeed work pretty well on desktop. However, doing this on my iPhone produced the following result screen:
I could not confidently tell you which comment I was supposed to be linked to. All three visible comments at the top of the screen strike me as candidates.
Did you read the relevant section of the FAQ I added? I could list more examples, but I feel like that section is relatively clear.
I don't know what design process led to this, but even after all these years it throws me off every time.
One of the reasons is search indexing. Another reason is that scrolling precisely to a position in a comment tree is just very hard. Users scroll before the full page has loaded, and whenever I do user interviews with the in-context link version, people fail to identify which comment they were linked to something like 30% of the time. I think there must be some clever and good UI solution, but I haven't found one after a few dozen hours of trying.
Relevant Patio11 tweet: https://x.com/patio11/status/1933975792721207316
I think the so-called Bitcoin treasury companies have just reinvented exchange tokens: there is an asset with X real world utility but not naturally leverageable. It should flow to place in world where most leverage is bolted onto it; immediately incentive compatible. Repeat 100x
And then “Holy %}*]^ how did so much of it end up in a place with grossly deficient risk management?!”
(I understand that MicroStrategy is the opposite of leveraged exposure from the common shareholder’s perspective but if someone with hands on keyboard believes they are allowed leverage if they hold more exchange tokens then the model happens regardless of whether that is true.)
(See, for example, the trading fund which believed that the more FTT it held the more cash it could licitly borrow from an affiliated entity’s depositors to deploy against many interesting aims. They were wrong about that, obviously.)
Sure! I don't think I said anywhere that buyers should be spending more effort sussing out lemons? The lemon market example is trying to introduce a simple toy environment in which a transition from a non-adversarial to an adversarial information environment can quickly cause lots of trades to no longer happen and leave approximately everyone worse off.
At least my model of the reader benefits from an existence proof of this, as I have found even this relatively simple insight to frequently be challenged.
You say you tried to narrow the scope to "creditworthiness" rather than "trustworthiness," but I don't know what that means.
By creditworthiness, in this post, I mean the literal degree to which you are happy to transfer some specific resource that you own into someone else's stewardship with the expectation that you will get it back (or make a positive return in expectation). Creditworthiness is here specific to a resource that is transferable. Dollars are the most obvious case. Social capital can sometimes also be modelled this way, though it gets more tricky. Creditworthiness does not need to extend into trustworthiness in general.
For example, as investors face very limited liability for investing in fraudulent institutions, seeing someone willing to break the law (or be generally untrustworthy) can sometimes increase expected returns! In those situations creditworthiness (which I here try to measure in expectations of good stewardship or expected future profit) and trustworthiness (which would be measured in a broader propensity to not fuck people over) come strongly apart.
whether there's an intangible asset or verified capacity corresponding to the promise is exactly the controversy at issue.
I think I am confused what "controversy" you are talking about here. I agree with you that in practice, the line here is very hard to identify (as one would expect in a high-level adversarial information game).
My main aim with this post is largely to create a model that explains some situations where in-retrospect there is IMO little uncertainty that something of this shape went wrong.
Like, the specific sentence I objected to was: "so it seems like the problem is in the currency conversion between money and nonfinancial credit".
And I think in the model and situations I outline, I am confused how you could end up with this impression? Like, I think the central dynamic with FTX was their ability to translate money[1] into more financial credit (in the form of customer deposits). Yes, there might have been some nonfinancial credit intermediary steps, and of course they also did lots of other things that are worth analyzing, but the thing that produced a positive feedback loop is the step where they could convert funds in their stewardship into more creditworthiness, which resulted in them getting more and more assets in their custody.
Trying to think harder about what you were saying, I thought your objection might be that there are too many legitimate cases in which you of course want to translate assets under your management into more assets under your management, i.e. by producing assets that are more valuable than the resources you were given stewardship over. So I tried to clarify that I was talking about a dynamic where you spend/irrecoverably lose resources to increase perceived creditworthiness, not where you make good use of resources that actually increase future expected returns.
Bribing third-party evaluators is of course an example of what I am talking about, but it strikes me as too narrow, and most importantly it doesn't capture the central feedback loop of this creditworthiness bubble that I think explains many of the relevant dynamics that I go into in my new last section. Yes, I agree you should pay attention to someone bribing third-party evaluators, but even in that situation, one of the key variables that determines how bad it is to bribe third-party evaluators is whether, by bribing the third-party evaluator with a dollar, you end up with more than one additional dollar under your stewardship. That returns ratio really matters and is what I am trying to draw attention to, and I am not sure whether you are objecting to it as a thing, or just don't find it interesting, or have some other objection.
Broadly construed here to include cryptocurrency
All good! I believe you that you were commenting in good faith.
Regarding formatting: I do really recommend you switch out your double paragraph breaks for single paragraph breaks. It looks somewhat broken.
FWIW, this comment isn't up to the standards of comments I want on my post (and my guess is also LW in general, though this isn't a general site-wide mod comment). Please at least get the formatting right, but even beyond that, it seems pretty naively political in ways that I think tend to be unproductive. (I considered just deleting it, but it seemed better to leave a record of it.)
You're going pretty far by saying if you like weighing tradeoffs, then Lightcone isn't for you.
That... is not a thing I said, or even anywhere close to what I said. I think you can do better than this and figure out what I meant to say.
We have a bunch of UI that I would need to modify only a tiny bit to get you #5, I think.
If you imagine the UI at lesswrong.com/autocompleteSettings, but with a "copy to clipboard" button at the bottom, and a user search menu at the top (instead of just having Gwern, Eliezer and Scott), would that work for you?
(Note that the list-plus icon button you get when hovering over a list entry allows you to mark all elements above the item you are hovering over as checked, so you don't have to click on dozens of comments manually.)