hath

Teenager trying to become stronger. Operating by Crocker's Rules.

Meritxell has made the serious error of mentioning that she didn't fully grasp some of what Keltham said earlier about stock companies.

Keltham is currently explaining how a Lawful corporation has an internal prediction market, which forecasts the observable results on running various possible projects that company could be trying, which in turn is used to generate an estimate of marginal returns on marginal internal investment; this prevents a corporation from engaging in obvious madness like accepting an internal project with 6% returns while turning down another internal project with expected 10% returns.

The wider market, obviously, would also like to invest all its money where it'll get the highest returns; but it's usually not efficient to offer the broader market a specialized sub-ownership of particular corporate subprojects, since the ultimate usefulness of corporate subprojects is usually dependent on many other internal outputs of the company.  It doesn't do any good to have a 'website' without something to sell from it.  Sure, if everyone was an ideal agent, they'd be able to break things down in such a fine-grained way.  But the friction costs and imperfect knowledge are such that it's not worth breaking companies into even smaller ownable pieces.  So the wider stock market can only own shares of whole corporations, which combine the outputs and costs of all that company's projects.

Thus any corporation continuously buys or sells its own stock, or rather, has standing limit orders into the stock market to buy various quantities if the price goes low or sell various quantities if the price goes high, at prices that company sets depending on its internal belief about the returns from investing or not investing in the marginal subprojects being considered.  If the company isn't considered credible by the wider market, its stock will go lower and the company will automatically buy that stock, which leaves them less money to invest in new projects internally, and means that they only invest in projects with relatively higher returns - doing less total investment, but getting higher returns on the internal investments that they do start.  Conversely if the wider market thinks a company's promises to do a lot with money are credible, the stock price will go up and money will flow into that company until they no longer have internal investment prospects that credibly beat the broader market.

This may sound complicated, and it is probably a relatively more complicated part of the machinery that is necessarily implied by the existence of distinct stock corporations in the first place.  But the alternative, if you zoom out and look at the whole planet of dath ilan, is that a corporation in one place would be investing in a project with internally expected returns of 6%, and somebody on the other side of the planet would be turning down a project with market-credible returns of 10%, which means you could reorganize the whole planet and do better in a predictable way.  So whatever does happen as a consequence of the existence of stock corporations, it has to be not that.
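
(An aside from me, not part of the quote: here's a minimal Python sketch of the decision rule Keltham is describing, with made-up project names and return figures. The idea is just that a company funds internal projects only while their prediction-market-estimated returns beat the best alternative use of the money, whether that's buying back its own underpriced stock or simply returning the cash to the broader market.)

```python
# A toy sketch, not dath ilani canon: fund internal projects in order of
# expected return, and treat buying back the company's own stock (or just
# holding the broad market) as the marginal alternative use of capital.

def allocate_capital(projects, own_stock_return, market_return):
    """projects: (name, expected_return) pairs from the internal prediction
    market. own_stock_return: return implied by buying back the company's
    own stock at its current price. market_return: return available from
    the broader market."""
    hurdle = max(own_stock_return, market_return)
    funded, rejected = [], []
    for name, expected_return in sorted(projects, key=lambda p: -p[1]):
        (funded if expected_return > hurdle else rejected).append(
            (name, expected_return)
        )
    return funded, rejected

# With a hypothetical 8% hurdle, the 10% project gets funded and the 6% one
# doesn't -- the "obvious madness" of doing it the other way never happens.
funded, rejected = allocate_capital(
    [("website", 0.06), ("new_factory", 0.10)],
    own_stock_return=0.08,
    market_return=0.05,
)
```

The standing limit orders in the quote are this same comparison run continuously: when the stock gets cheap, buying it back beats every remaining project and the company shrinks its internal investment; when it gets expensive, new money flows in until no credible internal project still beats the broader market.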

Some form of drastic action on Meritxell's part is obviously required if she wants to get back on track to having sex with this person.  What does she do, if anything?

Some quotes from Planecrash that I might collect into a full post:

Upcoming Posts

Now that I'm back from [Atlas Fellowship+SPARC+EAG+Future Forum], I have some post ideas to write up. A brief summary:

Agency and Authority: an actual in-depth, gears-level explanation of agency, parenting, the two kinds of respect and the moral conflation around that respect, how those in power are incentivized to make their underlings more legible and predictable to them, arbitrarily high punishments and outcome matrices, absolute control and concessions, the incentives facing those not in power and how those incentives turn you into less of an agent, and how the best solution is to create a "good person who follows orders" mask that hopefully never breaks or bleeds into the rest of your character, which you then wear while you plot to get out of the situation.

I've gotten the same questions a couple of times from different people, and want to just write up the responses as a post so I don't have to keep rewriting them.

  • How did you get into rationality/EA/alignment?
  • Why did you hate high school so much?
  • What was [Atlas/SPARC/EAG/FF] like?
  • What's it like being a minor in those communities?
  • So, what exactly do you do? Well, okay, what are you planning to do?

I gave a lightning talk at SPARC and Future Forum called Memento, Memory, and Note-Taking, which I ended up teaching as a full-on class at SPARC. I want to develop that into a full post.

I also want to migrate all of my notes from Obsidian to Notion, and have some plans for what I want to include in my Notion; this will probably make it onto LW at some point.

I've also made some progress on what I call "putting myself back together"; in retrospect, that's what I've spent the past month doing. I might publicly reflect on some of the personal growth and introspection I've done during that time.

I'd like to write "An Intro to Rationalist Culture" at some point, because it's incredible to see the different social norms that rationalists have developed; the most important prerequisite is the ability to see and talk about social norms on the meta-level, and to change said norms as a result.

There are also some other ideas that seem important to me:

  • Fleshing out what "hath culture" looks like
  • I need to figure out how to live in a world on fire; writing down how I cope with that now might help.

This isn't even a definitive list of all the post ideas I have (the actual list is like 5x the size), but these are the ones I plan on writing soon.

I'll be at Capla-Con this weekend, if anyone else here is going.

I’m really sorry to hear that, man. It’s honestly a horrible thing that this is what happens to so many people; it’s another sign of a generally inadequate civilization.

For what it’s worth, the first chapter of Smarter Faster Better is explicitly on motivation, and how to build it from nothing. It mentions multiple patients with brain injuries who were able to take back control over their own lives because someone else wanted to help them become agentic. I think reading that might help.

On another note, thank you for being open about this. I appreciate all comments on my posts, especially the ones actively related to the subject; your comment wasn’t complaining, and it was appreciated. Best of luck to you in the future.

Not only is this post great, but it led me to read more James Mickens. Thank you for that! (His writings can be found here).

Intercom doesn't change in Dark Mode. Also, the boxes around the comment section are faded, and the logo in the top left looks slightly off. Good job implementing it, though, and I'm extremely happy that LW has this feature.

If you are going to downvote this, at least argue why. 

Fair. Should've started with that.

To the extent that rationality has a purpose, I would argue that it is to do what it takes to achieve our goals,

I think there's a difference between "rationality is systematized winning" and "rationality is doing whatever it takes to achieve our goals". That difference requires more time to explain than I have right now.

if that includes creating "propaganda", so be it.

I think that if this works like they expect, it truly is a net positive.

I think that the whole AI alignment thing requires extraordinary measures, and I'm not sure what specifically that would take; I'm not saying we shouldn't do the contest. I doubt you and I have a substantial disagreement as to the severity of the problem or the effectiveness of the contest. My above comment was more "argument from 'everyone does this' doesn't work", not "this contest is bad and you are bad".

Also, I wouldn't call this contest propaganda. At the same time, if this contest were "convince EAs and LW users to have shorter timelines and higher chances of doom", it would be reacted to differently. There is a difference: convincing someone to have a shorter timeline isn't the same as trying to explain the whole AI alignment thing in the first place, but I worry that we could take that too far. I think that (most of) the responses John's comment got were good, and reassure me that the OPs are actually aware of/worried about John's concerns. I see no reason why this particular contest will be harmful, but I can imagine a future where pivoting mainly to strategies like this has some harmful second-order effects (which would need their own post to explain).

You didn't refute his argument at all; you just said that other movements do the same thing. Isn't the entire point of rationality that we're meant to be truth-focused, and winning-focused, in ways that don't manipulate others? Are we not meant to hold ourselves to the standard of "Aim to explain, not persuade"? Just because others in the reference class of "movements" do something doesn't mean it's immediately something we should replicate! Is that not the obvious, immediate response? Your comment proves too much; it could be used to argue for literally any popular behavior of movements, including canceling/exiling dissidents.

Do I think that this specific contest is non-trivially harmful at the margin? Probably not. I am, however, worried about the general attitude behind some of this type of recruitment, and the justifications used to defend it. I become really fucking worried when someone raises an entirely valid objection, and is met with "It's only natural; most other movements do this".

Can confirm that this is all accurate. Some of it is much less weird in context. Some of it is much, much weirder in context.

Yeah, my reaction to this was "you could have done a much better job of explaining the context" but:

"Your writing would be easier to understand if you explained things," the student said.

That was me, so I guess my opinion hasn't changed.
