If it’s worth saying, but not worth its own post, then it goes here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.


As well as trying out combining welcome threads and open threads, I thought I'd try highlighting some frontpage comments I found especially insightful in the last month, for further discussion:

  • Scott Garrabrant wrote a comment on how Embedded Agency and Agent Foundations research are like science in relation to ML approaches to AI alignment, which are more like engineering. The comment helped me think about how I go about formalising and solving problems more generally.
  • Rohin Shah wrote a comment on basic definitions of the alignment problem, contrasting a motivation-competence split versus a definition-optimization split. (It is then followed by a convo on definitions between Paul and Wei which gets pretty deep into the weeds - I'd love to read a summary here from anyone else who followed along.)

Most of the land around you is owned by people who don't know you, who don't support what you're doing, who don't particularly want you to be there, and who don't care about your community. If they can evict the affordable vegan food dispensary and replace it with a cheesecake factory that will pay higher rent, they will do that, repeatedly, until your ability to profit from your surroundings as a resident is as close to zero as they can make it without driving you away to another city, and if you did go to another city, you would watch the same thing happen all over again. You are living in the lands of tasteless lords, who will allow you to ignore the land-war that's always raging, it's just a third of your income, they tell you, it's just what it costs.

That's not what it costs. We can get it a lot cheaper if we coordinate. And whatever we use to coordinate can probably be extended to arranging a much more livable sort of city.

So I've been thinking a lot about what it would take to build a city in the desert where members' proximity desires are measured, clustered and optimised over, and where rights to hold land are awarded and revoked on that basis. There would be no optimal method, but we don't need an optimal method. All we need is something that works well enough to beat the clusterfuck of exploitation and alienation that is a modern city. The system would gather us all together and we would be able to focus on our work.

I'll need more algorithms before I can even make a concrete proposal. Has anyone got some theory on preference aggregation algorithms? I feel like if I can learn a simple, flexible preference graph order recovery algorithm, I'll be able to do a lot with that.

It'll probably involve quadratic voting on some level. Glen Weyl has a lot of useful ideas.

I don't think you need to build a city from scratch. It's sufficient to converge on a (partially?) abandoned city with cheap real estate. This is basically what gentrification is.

Version 0.01 of a new city is to simply get together a group of people who want to work on projects uninterrupted, buy or rent a cheap house in a town the public has forgotten about, and live/work there. 10 or 20 housemates is plenty to feel a sense of community. The EA Hotel is a recent experiment with this. I just spent 6 months there and had a great experience. They're doing a fundraiser now if you want to contribute.

Experimenting with new gentrification strategies sounds like a cool idea, I'm just skeptical of building new real estate in the middle of nowhere if there's plenty of real estate in the middle of nowhere which is already available. (Also, I think your post would benefit from a more even-handed presentation.)

I'm certainly interested in playing with reallocation systems in existing cities, but if we can go beyond that, we must.

"Gentrification", for me includes the effect where land prices increase without any increase in value. That pricing does useful work by allocating land to its most profitable uses. It does that through costly bidding wars and ruthless extraction of rent, which have horrible side-effects of reducing the benefits regular people derive from living in cities by, I'd guess, maybe 80%? (Reminder: not only is your rent too damn high, but so is the rent of the businesses you frequent), allocating vast quantities of money to the landowning class, who often aren't producing anything (especially often in san fransisco). If we can make a system that allocates land to its most productive use without those side-effects, then we no longer need market-pricing as a civic mechanism, and we should be trying like hell to get away from it. Everyone should be trying like hell to get away from it, but people who believe they have a viable mostly side-effect-free substitute should be trying especially hard.

A large part of the reason I'm attracted to the idea of building in a rural or undeveloped area is that it will probably be easier to gain the use of eminent domain in that situation. If we're building amid farmland, and we ask the state for the right to buy land directly adjacent to the city at a price of, say... double the inflation-adjusted price of local farmland as of the signing of the deal, it's hard to argue that anyone loses out much. There wasn't really much of a chance that land was going to rise to that price on its own; any rise would have been an obvious exploitation of the effects of the city. If you ask for a similar privilege on urban land, forced sale at a capped price is a lot messier (and, of course, the price cap will be something like 8x higher); for one thing, raising land prices in response to adjacent development is just what landowners are used to in cities, and they will throw a very noisy fit if someone threatens that.

No comment on the voting strategy, just wanted to focus on the idea that "the value of the land is mostly the proximity of other people, so why not coordinate and move to a new cheap place together?"

First, I wonder whether it is actually true. As far as I know, most cities are at a place that has some intrinsic value, such as a crossing of trade roads, a port, or a mine. I wonder how much this is necessary, and how much it is just history's way to solve the chicken-and-egg problem of coordination by saying "first movers come here because of the intrinsic advantage, everyone else moves here because someone already moved here before them".

On the one hand, for many people "the value is the proximity of neighbors" is true. If you have a shop, you want to have many customers near you. If you are an employee, you want many employers near you, and vice versa. People move to e.g. Silicon Valley because of everything that is already in Silicon Valley; if you could somehow teleport the whole of Silicon Valley into a not-very-awful place, this dynamic would probably remain. On the other hand, you have cities like Detroit, where removing an important piece (jobs in the car industry) made everything fall apart; the "proximity to many neighbors" was not enough to save it. So having many people at the same place is not necessarily a recipe for success; the whole "ecosystem" needs to be in some kind of balance, which would be difficult to achieve with a new city.

Second, yeah, coordinating people is hard. Look at the Free State Project, where people coordinated to move to the same US state. It took them a few years to coordinate 20,000 people, just to move to existing cities, with existing infrastructure and job opportunities, within the USA. How long would it take to coordinate people to move somewhere in a desert, and how many people would actually go there?

There are many kinds of commerce I don't know much about. I'm going to need help figuring out what a weird city with an extremely low cost of living will need in order to become productive. The industries I do know about are fairly unlikely to require proximity to a port, but even within that set... a lot of them will want proximity to manufacturing, and manufacturing in turn will want to be near a port?

Can you think of any reasons we couldn't make the coordinated city's counterpart to the FSP's Statement of Intent contract legally binding, imposing large fines on anyone who fails to keep to their commitment (while attempting, where possible, to provide exceptions for people who can prove they were not in control of whatever kept them from keeping it)? Without that, I doubt those commitments will amount to much.

For a lot of people a scheme like this will be the only hope they'll ever have of owning (a share in) any urban property, if they can be convinced of the beneficence of the reallocation algorithms (I imagine there will be many opportunities to test them before building a fully coordinated city). I don't really understand what it is about the FSP that libertarians find so exciting, but I feel like the coordinated city makes more concrete promises of immediate and long-term QoL than the FSP ever did. Note that the allocator includes the promise of finding ourselves surrounded by like-minded individuals.

Can you think of any reasons we couldn't make the coordinated city's counterpart to the FSP's Statement of Intent contract legally binding, imposing large fines on anyone who fails to keep to their commitment?

Because then even fewer people would sign it. And the remaining ones will be looking for loopholes.

For a lot of people a scheme like this will be the only hope they'll ever have of owning (a share in) any urban property

Unfortunately, those would be most scared of the "large fines".

They have very little to be afraid of if their commitment is true, and if it's not, we don't want it. The commitment thing isn't just a marketing stunt. It's a viability survey. The data has to be good.

I guess I should add, on top of the process for forgiving commitments under unavoidable mitigating circumstances, there should be a process for deciding whether the city met its part of the bargain. If the facilities are not what was promised, fines must be reduced or erased.

Update on preference graph order recovery

I decided to stop thinking about the Copeland method (the method where you count how many victories each candidate has had and sort everyone according to that). They don't mention it in the analysis (pricks!), but the flaw is so obvious I'm not gonna be humble about this.

Say you have a set of order judgements like this:

< = { (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (s p) (p u) (p u) (p u) (p u) }

It's a situation where the candidate "s" is a strawman. No one actually thinks s is good. It isn't relevant and we probably shouldn't be discussing it. (But we must discuss it, because no informed process is setting the agenda, and this system will be responsible for fixing the agenda. Being able to operate in a situation where the attention of the collective is misdirected is mandatory)

p is popular. p is better than the strawman, but that isn't saying much.

u is the ultimate, and is known by some to be better than p in every way. There is no controversy about that, among those who know u.

Under the Copeland method, u still loses to p because p has fought more times and won more times.
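
For concreteness, here's a minimal sketch of that victory-counting scheme run on the example set (the function and variable names are just illustrative):

```python
# Sketch of the victory-counting scheme described above: score each candidate
# by how many individual judgements it wins, then sort by that score.
from collections import Counter

# The example judgement set: a pair (x, y) means x < y, i.e. y wins that judgement.
judgements = [("s", "p")] * 9 + [("p", "u")] * 4

def victory_count_ranking(judgements):
    wins = Counter(winner for _, winner in judgements)
    candidates = {c for pair in judgements for c in pair}
    return sorted(candidates, key=lambda c: wins[c], reverse=True)

print(victory_count_ranking(judgements))
# -> ['p', 'u', 's']: p outranks u purely because p has fought (and won) more
#    often, even though every judgement comparing p and u says u is better.
```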

The Copeland method is just another popularity contest. It is not meritocratic. It cannot overturn an incumbency by helping a few trusted seekers to spread word about their finding. It does not spread findings. It cannot help new things rise to prominence. Disregard the Copeland method.

---

A couple of days ago I started thinking about defining a metric by treating every edge in the graph (every judgement) as having a "charge", then defining a way of reducing serial wires and a way of reducing parallel wires, then getting the total charge between each pair of points (it'll have time complexity n^3 at first, but I can think of lots of ways to optimise that; I wouldn't expect much better from a formal objective measure), then assembling that into a ranking.

Finding serial and parallel reducers with the right properties didn't seem difficult (I'm currently looking at parallel(a, b)→ a + b and serial(a, b)→ 1/(1/a + 1/b)). That was very exciting to realise. The current problem is, it's not clear that every tangle can be trivially reduced to an expression of parallels and serials, consider the paths between the top left and bottom right nodes in a network shaped like "▥", for instance.
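
A toy sketch of those two reducers in code (the example network and weights are made up, just to show how the combination rules behave):

```python
# Treat each judgement edge's "charge" like a conductance:
# parallel edges add, serial edges combine harmonically.
def parallel(a, b):
    return a + b

def serial(a, b):
    return 1.0 / (1.0 / a + 1.0 / b)

# Two routes of evidence from x to z:
#   route 1: x -> y -> z with edge charges 2 and 3, reduced serially
#   route 2: a direct x -> z edge of charge 1, combined in parallel
route1 = serial(2.0, 3.0)      # 1 / (1/2 + 1/3) = 1.2
total = parallel(route1, 1.0)  # 1.2 + 1.0 = 2.2
print(total)
# As noted above, not every tangle reduces to serial/parallel steps like this;
# the "▥"-shaped network is exactly the kind of case that gets stuck.
```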

Calculating the conductance between two points in a tangled circuit may be a good analogy here... and I have a little intuition that this would be NP-hard in the most general case despite being deceptively tractable in real-world cases. Someone here might be able to dismiss or confirm that. I'm sure it's been studied, but I can't find a general method, nor a proof of hardness.

If true, it would make this not so obviously useful as a formal measure sufficient for use in elections.

I vaguely remember a comment, possibly from a post in the last year or two, where someone said something like, "The highest return, under appreciated, life improvements you could make right now are fixing the relationships with your family and those close to you [... some other stuff...]". Does anyone remember this comment and or have a link to it?

I don't remember the comment, but it reminds me of something I think I might have read in Crucial Confrontations... which might have been referred to me by someone in the community, so that might be a clue??? haha, idk at all

Why is it that philosophical zombies are unlikely to exist? In Eliezer's article Zombies! Zombies?, the argument seemed mostly to be against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it possible that individuals like Dennett are themselves philosophical zombies?

Also, what are LessWrong's views on the idea of a continuous consciousness? CGPGrey brought up this issue in The Trouble with Transporters. Does a continuous self exist at all, or is our perception of being a continuous conscious entity existing throughout time just an illusion?

In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it possible that individuals like Dennett are themselves philosophical zombies?

Nope, your "in other words" summary is incorrect. A philosophical zombie is not any entity without consciousness; it is an entity without consciousness that falsely perceives itself as having consciousness. An entity that perceives itself as not having consciousness (or not having qualia or whatever) is a different thing entirely.

This is mostly just arguing over semantics. Just replace "philosophical zombie" with whatever your preferred term is for a physical human who lacks any qualia.

This is mostly just arguing over semantics.

If an argument is about semantics, this is not a good response. That is...

Just replace "philosophical zombie" with whatever your preferred term is for

An important part of normal human conversations is error correction. Suppose I say "three, as an even number, ..."; the typical thing to do is to silently think "probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen." But in technical contexts, this is often a mistake; if I write a proof that hinges on the evenness of three, that proof is wrong, and it's worth flagging the discrepancy and raising it.

Technical contexts also benefit from specificity of language. If I have a term used to refer to the belief that "three is even," using that term to also refer to the belief that "three is odd" will be the source of no end of confusion. ("Threevenism is false!" "What do you mean? Of course Threevenism is true.") So if there is a technical concept that specifically refers to X, using it to refer to Y will lead to the same sort of confusion; use a different word!

That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused. Separately, it seems highly possible that people vary in their internal experience, such that some people experience 'qualia' and other people don't. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn't, then the standard argument doesn't go through for him. Whether that difference will end up being deep and meaningful or merely cosmetic seems unclear, and more likely discerned through psychological study of multiple humans, in much the same way that the question of mental imagery was best attacked by a survey.

This variability suggests qualia are a questionable thing to use as a foundation for other theories. For example, it seems to me like it would be unfortunate if someone thought it was fine to torture some humans and not others on the grounds that "only the qualia of being tortured is bad," because it seems to me like torturing humans is likely bad for different reasons.


That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused.

Suppose you made a human-level AI. Suppose there was some doubt about whether it was genuinely conscious. Wouldn't that amount to the question of whether or not it was a zombie?

Separately, it seems highly possible that people vary in their internal experience, such that some people experience ‘qualia’ and other people don’t. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn’t, then the standard argument doesn’t go through for him.

Or it's terminological confusion.

Suppose there was some doubt about whether it was genuinely conscious. Wouldn't that amount to the question of whether or not it was a zombie?

No. There are a few places this doubt could be localized, but it won't be in 'whether or not zombies are possible.' By definition we can't get physical evidence about whether or not it's a zombie (a zombie is in all physical respects similar to a non-zombie, except non-zombies beam their experience to a universe causally downstream of us, where it becomes "what it is like to be a non-zombie," and zombies don't), in exactly the same way we can't get physical evidence about whether or not we're zombies. In trying to differentiate between different physical outcomes, only physicalist theories are useful.

The doubt will likely be localized in 'what it means to be conscious' or 'how to measure whether or not something is conscious' or 'how to manufacture consciousness', where one hopes that answers to one question inform the others.

Perhaps instead the doubt is localized in 'what decisions are motivated by facts about consciousness.' If there is 'something it's like to be Alexa,' what does that mean about the behavior of Amazon or its customers? In a similar way, it seems highly likely that the inner lives of non-human animals parallel ours in specific ways (and don't in others), and even if we agree exactly on what their inner lives are like we might disagree on what that implies about how humans should treat them.

Also, what are LessWrong's views on the idea of a continuous consciousness?

It's kind of against the moderation guidelines of "Make personal statements instead of statements that try to represent a group consensus" for anyone to try to answer that question hahah =P

But, authentically relating just for myself as a product of the local meditations: There is no reason to think continuity of anthropic measure uh.. exists? On a metaphysical level. We can conclude from Clones in Rooms style thought experiments that different clumps of matter have different probabilities of observing their own existence (different quantities of anthropic measure or observer-moments) but we have no reason to think that their observer-moments are linked together in any special way. Our memories are not evidence of that. If your subjectivity-mass was in someone else, a second ago, you wouldn't know.

An agent is allowed to care about the observer-states that have some special physical relationship to their previous observer-states, but nothing in decision theory or epistemology will tell you what those physical relationships have to be. Maybe the agent does not identify with itself after teleportation, or after sleeping, or after blinking. That comes down to the utility function, not the metaphysics.

P-zombies are indeed all about epiphenomenalism. Go check out David Chalmers' exposition for the standard usage. I think the problem with epiphenomenalism is that it's treating ignorance as a positive license to introduce its epiphenomenal essence.

We know that the brain in your body does all sorts of computational work, and does things that function like memory, and planning, and perception, and being affected by emotions. We might even use a little poetic language and say that there is "someone home" in your body - that it's convenient and natural to treat this body as a person with mental attributes. But it is the unsolved Hard Problem of Consciousness, as some would say, to prove that the person home in your body is you. We could have an extra consciousness-essence attached to these bodies, they say. You can't prove we don't!

When it comes to denying qualia, I think Dennett would bring up the anecdote about magic from Lee Siegel:

"I'm writing a book on magic”, I explain, and I'm asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and supernatural powers. “No”, I answer: “Conjuring tricks, not real magic”. Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic."

Dennett thinks people's expectations are that "real qualia" are the things that live in the space of epiphenomenal essences and can't possibly be the equivalent of a conjuring trick.


P-zombies are indeed all about epiphenomenalism.

No, they are primarily about explanation.

But it is the unsolved Hard Problem of Consciousness, as some would say, to prove that the person home in your body is you. We could have an extra consciousness-essence attached to these bodies, they say. You can't prove we don't!

It has virtually nothing to do with personal identity.

Dennett thinks people's expectations are that "real qualia" are the things that live in the space of epiphenomenal essences and can't possibly be the equivalent of a conjuring trick.

If they are a trick, no one has explained how it is pulled off.

Zombie Dennett: which is more likely? That philosophers could interpret the same type of experience in fundamentally different ways, or that Dennett has some neurological defect which has removed his qualia but not his ability to sense and process sensory information?

Consciousness continuity: I know I’m a computationalist and [causalist?], and I am weakly confident that most LWers share at least one of these beliefs. (Speaking for others is discouraged here, so I doubt you’ll be able to get more than a poll of beliefs, or possibly a link to a previous poll.)

Definitions of terms: computationalism is the view that cognition, identity, etc. are all computations or properties of computations. Causalist is a word I made up to describe the view that continuity is just a special form of causation, and that all computation-preserving forms of causation preserve identity as well. (That is, I don’t see it as fundamentally different if the causation from one subjective moment to the next is due to the usual evolution of brains over time or due to somebody scanning me and sending the information to a nanofactory, so long as the information that makes me up isn’t lost in this process.)

Seems to me that when we think about animals, there are two opposite mistakes one can make. First is too much anthropomorphism: "the dog that is looking at the moon must be thinking about its existential problems, because that is what I would do during a sleepless night". Second is treating the animals as animal p-zombies: "yeah, the pig seems to suffer, but don't make a mistake, only humans can really suffer; the pig makes the suffering-like movements and noises for a completely unrelated reason".

As usual, the easiest way to get into one of these extremes is trying hard to avoid the other one.

"The Soul of an Octopus: A Surprising Exploration into the Wonder of Consciousness" seemed to me to avoid it better than many books (I only read the first half, unfortunately), but of course, the humans in it are kind of weird themselves :)
