Cole Wyeth

I am a PhD student in computer science at the University of Waterloo. 

My current research is related to Kolmogorov complexity. Sometimes I build robots, professionally or otherwise.

See my personal website colewyeth.com for an overview of my interests.

Sequences

Deliberative Algorithms as Scaffolding

Wiki Contributions

Comments

I think it's worth considering that Jaynes may actually be right here about general agents. His argument does seem to work in practice for humans: it's standard economic theory that trade works between cultures with strong comparative advantages. On the other hand, probably the most persistent and long-running conflict between humans that I can think of is the warfare over occupancy of Jerusalem. Of course there is an indexical difference in utility function here - cultures disagree about who should control Jerusalem. But I would have to say that under many metrics of similarity, this conflict arises from highly similar loss/utility functions. Certainly I am not fighting for control of Jerusalem, because I just don't care at all who has it - my interests are orthogonal in some high-dimensional space.

The standard "instrumental utility" argument holds that an unaligned AGI will have some bizarre utility function very different from ours, but the first step towards most such utility functions will be seizing control of resources, and that this will become more true the more powerful the AGI. But what if the resources we are bottlenecked by are only bottlenecks for our objectives and at our level of ability? After all, we don't go around exterminating ants; we aren't competing with them over food, we used our excess abilities to play politics and build rockets (I think Marcus Hutter was the first to bring this point to my attention in a lasting way). I think the standard response is that we just aren't optimizing for our values hard enough, and if we didn't intrinsically value ants/nature/cosmopolitanism, we would eventually tile the planet with solar panels and wipe them out. But why update on this hypothetical action that we probably will not in fact take? Is it not just as plausible that agents at a sufficiently high level of capability tunnel into some higher dimensional space of possibilities where lower beings can't follow or interfere, and never again have significant impact on the world we currently experience? 

I can imagine a few ways this might happen (energy turns out not to be conserved and deep space is the best place to build a performant computer; it's possible to build a "portal" of some kind to a more resource-rich environment, interpreted very widely; the most effective means of spreading through the stars turns out to be skipping between stars and ignoring planets), but the point is that the actual mechanism would be something we can't think of.

I have no reason to question your evidence, but I don't agree with your arguments. It is not clear that a million LLMs coordinate better than a million humans. There are probably substantial gains from diversity among humans, so the identical weights you mention could cut in either direction. An additional million human-level intelligences would have a large economic impact, but not necessarily a transformative one. Also, your argument for speed superintelligence is probably flawed; since you're discussing what happens immediately after the first human-level AGI is created, gains from any speedup in thinking should already be factored in and will not lead to superintelligence in the short term.

You have pointed out some important tradeoffs. Many of my closest friends and intellectual influences outside of lesswrong are religious, and often have interesting perspectives and ideas (though I can't say whether this is because of their religions, because of a common latent variable, or for some other reason). However, I do not think that the purpose of lesswrong is served by engaging with religious ideology here, and I think that avoiding this is probably worth the cost of losing some valuable perspectives.

As you've said, @Jeffrey Heninger does participate in the lesswrong community at its current level of hostility towards religion. I have read some of his other posts in the past and found them enjoyable and valuable, though I think I am roughly indifferent to this one being published. Why does this suggest to you that the community needs to be less hostile to religion, rather than more hostile or about as hostile as it is now? Presumably if it were less hostile towards religion, there would be more than the current level of religious discussion - do you think that would be better on the margin? I would also expect an influx of religious people below Jeffrey's level, not above it.

I'm open to starting a dialogue if you want to discuss this further. 

Perhaps AGI but not human level. A system that cannot drive a car or cook a meal is not human level. I suppose it's conceivable that the purely cognitive functions are at human level, but considering the limited economic impact I seriously doubt it. 

Hostility towards Others may be epistemically and ethically corrosive, but the kind of hostility I have discussed is also sometimes necessary. For instance, militaristic jingoism is bad, and I am hostile to it. I am also wary of militaristic jingoists, because they can be dangerous (this is an intentionally extreme example; typical religions are less dangerous).

There is a difference between evangelizing community membership and evangelizing an ideology or set of beliefs. 

Usually, a valuable community should only welcome members insofar as it can still maintain its identity and reason for existing. Some communities, such as elite universities, should and do have strict barriers to entry (though the specifics are not always ideal). The culture of lesswrong would probably be erased (that is, retreat to other venues) if lesswrong were mainstreamed and successfully invaded by the rest of the internet.

I generally agree that (most) true beliefs should be shared. Ideologies, however, are sometimes useful to certain people in certain contexts and not to other people in other contexts. Also, evangelism is costly, and it's easy to overestimate the value of your ideology to others.

The distinctions you're pointing out are subtle enough that it may be better to ask people instead of trying to infer their beliefs from such a noisy signal. I reject painting religious people as uniformly or even typically villainous, but it is probably fine to share a common frustration with some religions and religious institutions shutting down new ideas. I was not at secular solstice so I can't say for sure which better describes it, but based on the examples given the former seems at least as accurate.  


I appreciate the perspective. Personally I don't really see the point of a secular solstice. But frankly, the hostility to religion is a feature of the rationalist community, not a bug. 

Rejection of faith is a defining feature of the community and an unofficial litmus test for full membership. The community has a carefully cultivated culture that makes it a kind of sanctuary from the rest of the world, where rationalists can exchange ideas without reestablishing foundational concepts and repeating familiar arguments (along with many other advantages). The examples you point to do not demonstrate hostility towards religious people; they demonstrate hostility towards religion. This is as appropriate here as hostility towards factory farming is at a vegan group.

Organizations (corporate, social, biological) are all defined by their boundaries. Christianity seems to be unusually open to everyone, but I think this is partially a side effect of evangelism. It makes sense to open your boundaries to the other when you are trying to eat it. Judaism, in contrast, carefully enforces the boundaries of its spaces.

Lesswrong hates religion in the way that lipids hate water. We want it on the outside. I don't know about other rationalists, but I don't have a particular desire to seek it out and destroy it everywhere it exists (and I certainly wish no harm to religious people). I agree with you that too much hostility is harmful; but I don't agree that good organizations must always welcome the other.  


I don't find these intuitive arguments reliable. In particular, I doubt it is meaningful to say that reflective oracle AIXI takes the complexity of its own counterfactual actions into account when weighing decisions. This is not how its prior works or interacts with its action choice. I don't fully understand your intuition, and perhaps you're discussing how it reasons about other agents in the environment, but this is much more complicated than you imply, and probably depends on the choice of reflective oracle. (I am doing a PhD related to this). 

I think about this a lot too. One of my biggest divergences with (my perception of) the EA/rationalist worldview is a desire to leave "wilderness"/"wildness" in the world. I think that all of us should question our wisdom. 
