Interested in math, Game Theory, etc.
It does not seem economically possible to build a city that is cheap to live in without locking the price of land down in some way, at some point. It is not obvious how to do this well, but eminent domain seems to be a necessary component of it.
From their website:
They're interested in better solutions. "Impossible" just means "it's never been done".
Are there papers on this? (Or did you find it somewhere else?)
This post was a long read, and well worth it. I'm looking forward to the rest of the series!
the way we our social environment
the way our social environment
Great things about GreaterWrong:
[On LW] if a comment is automatically minimized and buried in a long thread, then even with a link to it, the comment is hard to find - at best, the black line on the side briefly indicates which one it is. This doesn't seem to be a problem on GreaterWrong.
Example: Buried comment, not buried.
Prior to the bookmark feature being released, I used a comment in my shortform (without upvotes) to keep bookmarks*, and I haven't switched to using the new feature yet.
*posts, questions, comments and pages**
**Some of these are hard to find.
Currently, posts can be bookmarked but not comments (or shortform posts). Will tags only be for posts as well?
You seem to be using the words "goal-directed" differently than the OP.
And in different ways throughout your comment.
Is a bottle cap goal-directed? Sure, it was created to keep stuff in, and it keeps doing a fine job of that.
It is achieving a purpose. (State of the world.)
Conversely, am I goal-directed? Maybe not: I just keep doing stuff, and it's only after the fact that I can construct a story saying I was aiming at some goal.
You seem to have a higher standard for people. I imagine you exhibit goal-directed behavior with the aim of maintaining certain equilibria/homeostasis - eating, sleeping, as well as more complicated behaviors that enable those. This is more than a bottle cap does, and a more difficult job than the one performed by a thermostat.
Is a paperclip maximizer goal-directed? Maybe not: it just makes paperclips because it's programmed to, and has no idea that that's what it's doing, any more than the bottle cap knows it's holding in liquid or the twitching robot knows it's twitching.
This sounds like a machine that makes paperclips without optimizing - not a maximizer. (Unless the twitching robot is a maximizer.) "Opt" means "to make a choice (from a range of possibilities)" - you do this; the other things, not so much.
goals are a feature of the map, not the unmapped territory.
You don't think that a map of the world (including the agents in it) would include goals? (I can imagine a counterfactual where someone is put in different circumstances, but continues to pursue the same ends, at least at a basic level - eating, sleeping, etc.)
To onlookers, it potentially looks like I stopped because I couldn't find anything more to say.
This seems measurable, in principle.
One way it plausibly might not have been a success: I suspect that, having limited my total investment, I spent more effort on many of these comments than I would have otherwise. If these arguments would have ended just as quickly in any case, then this tactic caused me to spend more time and effort on them.
This seems unlikely given the number of times you didn't hit the limit.
Anyway, I haven't needed that clause yet.
Probably a result of the non-collaborative nature/feel.