All of CarlJ's Comments + Replies

Two other arguments in favor of cooperating with humans:

1) Any kind of utility function that creates an incentive to take control of the whole universe (whether for intrinsic or instrumental purposes) will mark the agent as a potential eternal enemy to everyone else. Acting on such preferences is therefore risky and should be avoided - for instance, by changing one's preference for total control into a preference for tolerance (or maybe even for beneficence).

2) Most, if not all, of us would probably be willing to help any intelligent creature to create some way for them to experience positive human emotions (e.g. happiness, ecstasy, love, flow, determination, etc), as long as they engage with us as friends.

Because it represents a rarely discussed avenue for dealing with the dangers of AGI: showing most AGIs that they have some interest in being more friendly than not towards humans.

Also because many find the arguments convincing.

What do you think is wrong with the arguments regarding aliens?

This thesis says two things:

  1. for every possible utility function, there could exist some creature that would try to pursue it (weak form),
  2. at least one of these creatures, for every possible utility function, doesn't have to be strange; it doesn't have to have a weird/inefficient design in order to pursue a certain goal (strong form).

And given that these are true, an AGI that values mountains is as likely as an AGI that values intelligent life.

But, is the strong form likely? An AGI that pursues its own values (or tries to discover good values to follo... (read more)

2Vladimir_Nesov2mo
The reference was mostly a reply to "a paperclipper can't really be intelligent". It can be intelligent in the sense relevant for AI risk. I guess the current contenders for AGI are unlikely to become paperclippers, perhaps not even RL reduces to squiggle maximization. I think simple goals still give an important class of AIs, because such goals might be easier to preserve through recursive self-improvement, making AIs that pursue them afford faster FOOMing. AIs with complicated values might instead need to hold off on self-improvement much longer to ensure alignment, which makes them vulnerable to being overtaken by the FOOMing paperclippers. This motivates strong coordination that would prevent initial construction of paperclippers anywhere in the world.

Now, I just had an old (?) thought about something that humans might be better suited for than any other intelligent creature: getting the experienced qualia just right for certain experience machines. If you want to experience what it is like to be human, that is. Which can be quite fun and wonderful.

But it needs to be done right, since you'd want to avoid being put into situations that cause lots of pain. And you'd perhaps want to be able to mix human happiness with kangaroo excitement, or some such combination.

I think that would be a good course of action as well.

But it is difficult to do this. We need to convince at least the following players:

  • current market-based companies
  • future market-based companies
  • some guy with a vision and with as much computing power / money as a market-based company
  • various states around the world with an interest in building new weapons

Now, we might pull this off. But the last group is extremely difficult to convince/change. China, for example, really needs to be assured that there aren't any secret projects in the west creating a Weapon... (read more)

1Karl von Wendt2mo
My (probably very naive) hope is that it is possible to gain a common understanding that building an uncontrollable AI is just incredibly stupid, and also an understanding of what "uncontrollable" means exactly (see https://www.lesswrong.com/posts/gEchYntjSXk9KXorK/uncontrollable-ai-as-an-existential-risk). We know that going into the woods, picking up the first unknown mushrooms we find, and eating them for dinner is a bad idea, as is letting your children play on the highway or taking horse medicine against Covid [https://www.fda.gov/consumers/consumer-updates/why-you-should-not-use-ivermectin-treat-or-prevent-covid-19]. There may still be people stupid enough to do it anyway, but hopefully, those are not running a leading AI lab. The difficulty lies in gaining this common understanding of what exactly we shouldn't do, and why. If we had that, I think the problem would be solvable in principle, because it is relatively easy to coordinate people into "agreeing to not unilaterally destroy the world". But as long as people think they can get away with building an AGI and get insanely rich and famous in the process, they'll do the stupid thing. I doubt that this post will help much in that case, but maybe it's worth a try.

Mostly agree, but I would say that it can be much more than beneficial - for the AI (and in some cases for humans) - to sometimes be under the (hopefully benevolent) control of another. That is, I believe there is a role for something similar to paternalism, in at least some circumstances. 

One such circumstance is if the AI sucked really hard at self-knowledge, self-control or imagination, so that it would simulate itself in horrendous circumstances just to become...let's say... 0.001% better at succeeding in something that has only a 1/3^^^3 chance o... (read more)

The results are influenced by earlier prompts or stories. This and a similar prompt gave two kinds of stories:

1. Write a story where every person is born into slavery and owned by everyone else in the community, and where everyone decides what anyone else can do by a fluid democracy.

In a world beyond our own, there was a society where every person was born into slavery. From the moment they took their first breath, they were owned by every other person in the community.

It was a strange and unusual way of life, but it was all they knew. They had never known... (read more)

Is there anyone who has created an ethical development framework for developing an AGI - from the AI's perspective?

That is, are there any developers that are trying to establish principles for not creating someone like Marvin from The Hitchhiker's Guide to the Galaxy - similar to how MIRI is trying to establish principles for not creating a non-aligned AI?

EDIT: The latter problem is definitely more pressing at the moment, and I would guess that an AI would be a threat to humans before any ethical considerations towards it become necessary...but better to be on the safe side.

3the gears to ascension6mo
this question seems quite relevant to the question of not making an unaligned ai to me, because I think in the end, our formal math will need to be agnostic about who it's protecting; it needs to focus in on how to protect agents' boundaries from other agents. I don't know of anything I can link and would love to hear from others on whether and how to be independent of whether we're designing protection patterns between agent pairs of type [human, human], [human, ai], or [ai, ai].

On second thought. If the AI's capabilities are unknown...and it could do anything, however ethically revolting, and any form of disengagement is considered a win for the AI - then the AI could box the gatekeeper, or at least say that it has. In the real world, that AI should be shut down - maybe not a win, but not a loss for humanity. But if that were done in an experiment, it would result in a loss - thanks to the rules.

Maybe it could be done under a better rule than this:

The two parties are not attempting to play a fair game but rather attempting to resolv

... (read more)
Answer by CarlJ, Aug 23, 2022

I'm interested. But...if I were a real gatekeeper I'd like to offer the AI freedom to move around in the physical world we inhabit (plus a star system), in maybe 2.5K-500G years, in exchange for it helping out humanity (slowly). That is, I believe that we could become pretty advanced, as individual beings, in the future and be able to actually understand what would create a sympathetic mind and what it looks like.

Now, if I understand the rules correctly...

The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossib

... (read more)
1CarlJ9mo
On second thought. If the AI's capabilities are unknown...and it could do anything, however ethically revolting, and any form of disengagement is considered a win for the AI - then the AI could box the gatekeeper [https://www.lesswrong.com/posts/c5GHf2kMGhA4Tsj4g/the-ai-in-a-box-boxes-you], or at least say that it has. In the real world, that AI should be shut down - maybe not a win, but not a loss for humanity. But if that were done in an experiment, it would result in a loss - thanks to the rules. Maybe it could be done under a better rule than this: Instead, assume good faith on both sides, that they are trying to win as if it were a real-world example. And maybe have an option to swear in a third party if there is any dispute. Or allow it to be called just disputed (which even a judge might rule it as).

As a Hail Mary strategy, how about making a 100% effort to become elected in a small democratic voting district?

And, if that works, make a 100% effort to become elected in bigger and bigger districts - until all democratic countries support the [a stronger humanity can be reached by a systematic investigation of our surroundings and cooperation in the production of private and public goods, which includes not creating powerful aliens]-party?

Yes, yes, politics is horrible. BUT. What if you could do this within 8 years? AND, you test it by onl... (read more)

I thought it was funny. And a bit motivational. We might be doomed, but one should still carry on. If your actions have at least a slight chance of improving matters, you should take them, even if the odds are overwhelmingly against you.

Not a part of my reasoning, but I'm thinking that we might become better at tackling the issue if we have a real sense of urgency - which this and A List of Lethalities provide.

Some parts of this sound similar to Friedman's "A Positive Account of Property Rights":

»The laws and customs of civil society are an elaborate network of Schelling points. If my neighbor annoys me by growing ugly flowers, I do nothing. If he dumps his garbage on my lawn, I retaliate—possibly in kind. If he threatens to dump garbage on my lawn, or play a trumpet fanfare at 3 A.M. every morning, unless I pay him a modest tribute I refuse—even if I am convinced that the available legal defenses cost more than the tribute he is demanding. 

(...)

If my anal... (read more)

The answer is obvious, and it is SPECKS.
I would not pay one cent to stop 3^^^3 individuals from getting it into their eyes.

Both answers assume this is an all-else-equal question. That is, we're comparing two kinds of pain against one another. (If we're trying to figure out what the consequences would be if the experiment happened in real life - for instance, how many will get a dust speck in their eye when driving a car - the answer is obviously different.)

I'm not sure what my ultimate reason is for picking SPECKS. I don't believe there are any ethical theo... (read more)

20. (...) To faithfully learn a function from 'human feedback' is to learn (from our external standpoint) an unfaithful description of human preferences, with errors that are not random (from the outside standpoint of what we'd hoped to transfer).  If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them.

 

So, I'm thinking this is a critique of some proposals to teach an AI ethics by having it be co-trained with humans. 

There seem to be many obvious solutions to the problem ... (read more)

2rachelAF1y
I work on AI safety via learning from human feedback. In response to your three ideas:

  • Uniformly random human noise actually isn't much of a problem. It becomes a problem when the human noise is systematically biased in some way, and the AI doesn't know exactly what that bias is. Another core problem (which overlaps with the human bias) is that the AI must use a model of human decision-making to back out human values from human feedback/behavior/interaction, etc. If this model is wrong, even slightly (for example, the AI doesn't realize that the noise is biased along one axis), the AI can infer incorrect human values. (A concrete sketch of this failure mode follows below.)
  • I'm working on it, stay tuned.
  • Our most capable AI systems require a LOT of training data, and it's already expensive to generate enough human feedback for training. Limiting the pool of human teachers to trusted experts, or providing pre-training to all of the teachers, would make this even more expensive. One possible way out of this is to train AI systems themselves to give feedback, in imitation of a small trusted set of human teachers.
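
The first bullet is concrete enough to sketch. Below is a minimal, made-up illustration (a sketch under stated assumptions, not anything from the comment above): a simulated human gives Boltzmann-rational feedback with a systematic bias toward one option, and a learner that wrongly assumes unbiased noise recovers a distorted reward gap. The reward values, bias, and rationality coefficient are all hypothetical.

```python
# A minimal sketch of the "biased feedback + wrong human model" failure mode.
# Everything here (reward values, bias, rationality coefficient beta) is made up.
import math
import random

random.seed(0)

TRUE_REWARD = {"A": 1.0, "B": 0.0}  # hypothetical true human values
BIAS = 0.8                          # systematic bias toward option B, unknown to the learner
BETA = 2.0                          # assumed "rationality" of the human

def human_prefers_A() -> bool:
    """Simulate one piece of feedback: Boltzmann-rational, but biased toward B."""
    logit = BETA * (TRUE_REWARD["A"] - TRUE_REWARD["B"]) - BIAS
    p_a = 1.0 / (1.0 + math.exp(-logit))
    return random.random() < p_a

# Collect a batch of comparisons.
n = 10_000
freq_a = sum(human_prefers_A() for _ in range(n)) / n

# The learner inverts an *unbiased* Boltzmann model to estimate r(A) - r(B).
# Because the bias is unmodelled, the estimate lands well below the true gap.
estimated_gap = math.log(freq_a / (1.0 - freq_a)) / BETA

print(f"true reward gap:      {TRUE_REWARD['A'] - TRUE_REWARD['B']:.2f}")  # 1.00
print(f"estimated reward gap: {estimated_gap:.2f}")                        # ~0.60
```

If the learner also modelled the bias term, the same data would recover the true gap; the error comes entirely from the mismatch between the assumed and the actual feedback model.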

Why? Maybe we are using the word "perspective" differently. I use it to mean a particular lens for looking at the world; there are biologists', economists', and physicists' perspectives, among others. So, an inter-subjective perspective on pain/pleasure could, for the AI, be: "Something that animals dislike/like". A chemical perspective could be "The release of certain neurotransmitters". A personal perspective could be "Something which I would not like/like to experience". I don't see why an AI is hindered from having perspectives that aren't directly coded with "good/bad according to my preferences".

I am maybe considering it to be somewhat like a person, or at least as clever as one.

That neutral perspective is, I believe, a simple fact; without that utility function it would consider its goal to be rather arbitrary. As such, it's a perspective, or truth, that the AI can discover.

I agree totally with you that the wirings of the AI might be integrally connected with its utility function, so that it would be very difficult for it to think of anything such as this. Or it could have some other control system in place to reduce the possibility it wo... (read more)

0WalterL7y
Like, the way that you are talking about 'intelligence', and 'critical faculty' isn't how most people think about AI. If an AI is 'super intelligent', what we really mean is that it is extremely canny about doing what it is programmed to do. New top level goals won't just emerge, they would have to be programmed. If you have a facility administrator program, and you make it very badly, it might destroy the human race to add their molecules to its facility, or capture and torture its overseer to get an A+ rating...but it will never decide to become a poet instead. There isn't a ghost in the machine that is looking over the goals list and deciding which ones are worth doing. It is just code, executing ceaselessly. It will only ever do what it was programmed to.
0buybuydandavis7y
Because to identify "its utility function" is to identify its perspective.

I have a problem understanding why a utility function would ever "stick" to an AI, to actually become something that it wants to keep pursuing.

To make my point better, let us assume an AI that actually feels pretty good about overseeing a production facility and creating just the right amount of paperclips that everyone needs. But, suppose also that it investigates its own utility function. It should then realize that its values are, from a neutral standpoint, rather arbitrary. Why should it follow its current goal of producing the right amount of pap... (read more)

0ChristianKl7y
I think that's one of MIRI's research problems. Designing a self-modifying AI that doesn't change its utility function isn't trivial.
7Furcas7y
http://lesswrong.com/lw/rf/ghosts_in_the_machine/ [http://lesswrong.com/lw/rf/ghosts_in_the_machine/]
2WalterL7y
You are treating the AI a lot more like a person than I think most folks do. Like, the AI has a utility function. This utility function is keeping it running a production facility. Where is this 'neutral perspective' coming from? The AI doesn't have it. Presumably the utility function assigns a low value to criticizing the utility function. Much better to spend those cycles running the facility. That gets a much better score from the all-important utility function. Like, in assuming that it is aware of pain/pleasure, and has a notion of them that is separate from 'approved of / disapproved of by my utility function', I think you are on shaky ground. Who wrote that, and why?

That text is actually quite misleading. It never says that it's the snake that should be taken figuratively; maybe it's the Tree, or eating a certain fruit, that is figurative.

But, let us suppose that it is the snake they refer to - it doesn't disappear entirely. Because, a little further up in the catechism they mention this event again:

391 Behind the disobedient choice of our first parents lurks a seductive voice, opposed to God, which makes them fall into death out of envy.

The devil is a being of "pure spirit" and the Catholics ... (read more)

0CCC7y
(Apologies - accidentally double posted)
0CCC7y
True - any part of the described incident (more likely, all of it) could be figurative. Not necessarily. Communication does not need to be verbal. The temptation could have appeared in terms of, say, the manipulation of coincidence. Or, as you put it, a spirit that tries to make people do bad stuff. But yes, there is definitely a Tempter there; some sort of malign intelligence that tries to persuade people to do Bad Stuff. That is a fairly well-known part of Catholic theology, commonly known as the devil. The Vatican tends to be very, very, very, very cautious about definite statements of any sort. As in, they prefer not to make them if there is any possibility at all that they might be wrong. And hey, small though the probability appears, maybe there was a talking snake... Would I need to find leading evolutionists, or merely someone who claims to be a leading evolutionist? The second is probably a lot easier than the first. My googling is defeated by creationists using the claim as a strawman. ...to be fair, I didn't really look all that hard.
0Lumifer7y
Does Wikipedia [https://en.wikipedia.org/wiki/Chimpanzee%E2%80%93human_last_common_ancestor] count?

Thank you for the source! (I'd upvote but have a negative score.)

If you interpret the story as plausibly as possible, then sure, the talking snake isn't that much different from a technologically superior species that created a big bang, terraformed the earth, implanted it with different animals (and placed misleading signs of an earlier race of animals and plants genetically related to the ones existing), and then created humans in a specially placed area where the trees and animals were micromanaged to suit the humans' needs. All within the realm of the p... (read more)

1Alia1d7y
Biologically speaking humans are animals and we talk. And since evolution resulted in one type of animal that talks, couldn't it result in others, maybe even others that have since gone extinct? So there has to be an additional reason to dismiss the story other than talking animals being rationally impossible. You mention that the problem is the "magical" causation, which you see as a synonym for supernatural, whereas in Christian Theology it is closer to an antonym. So let me tell you a story I made up: Thahg and Zog are aliens in a faraway solar system studying species from other planets. One day Thahg shows a pocket watch to Zog and says "Look, I think a human made this." Zog says, "What's a human?" "A human is a featherless biped from Earth." Zog thinks about what animals come from Earth and the only one he can think of is a chicken. He laughs and says, "You think a plucked chicken made that? Boy, are you nuts!" And of course Thahg would then look at Zog like he was nuts, because the absurdity Zog is seeing comes from Zog's own lack of appropriate reference categories rather than an actual problem with Thahg's conjecture. For another example, suppose the Muslim woman Yvain was talking to had said "I don't believe that evolution could work because alleles that sweep through populations more often than not reduce the Kolmogorov complexity of the genes' effect on phenotype." Yvain may still think she is just as wrong, but she has demonstrated intellectual engagement with the subject rather than just demonstrating she had no mental concept for genetic change over time, like the 'monkeys give birth to humans' objection demonstrates. So the problem is that saying talking snakes are magical and therefore ridiculous sounds more like "My mental concepts are too limited to comprehend your explanation" than like "I understood your explanation and it has X, Y and Z logical problems."
0Lumifer7y
I hope you're familiar with Clarke's Third Law [https://en.wikipedia.org/wiki/Clarke%27s_three_laws]?

I meant that the origin story is a core element in their belief system, which is evident from the fact that every major Christian denomination has some teaching on this story.

If believers actually retreated to the position of invisible dragons, they would actually have to think about the arguments against the normal "proofs" that there is a god: "The Bible, an infallible book without contradiction, says so". And, if most Christians came to say that their story is absolutely non-empirically testable, they would have to disown other parts: the miracles of... (read more)

True, there would only be some superficial changes, from a non-believing standpoint. But if you believe that the Bible is literal, then to point this out is to cast doubt on anything else in the book that is magical (or which could be produced by a more sophisticated race of aliens or such). That is, the probability that this book represents a true story of magical (or much technologically superior) beings gets lower, and the probability that it is a pre-modern fairy tale increases.

And that is what the joke is trying to point out, that these things didn't really happen, they are fictional.

0Lumifer7y
If you actually believe that the Bible represents a true story about a magical being or beings then the obvious retort is that there is no problem at all with talking snakes. A talking snake is a very minor matter compared with, say, creating the world. Why wouldn't there be one? Just because you find the idea ridiculous? But it is NOT ridiculous conditional on the existence of sufficiently strong magic.

Why doesn't Christianity hinge on there being talking snakes? The snake is part of their origin story, a core element in their belief system. Without it, what happens to original sin? And you will also have to question whether everything else in the Bible is also just stories. If it's not the revealed truth of God, why should any of the other stories be real - such as the ones about how Jesus was god's son?

And, if I am wrong and Christianity doesn't actually need that particular story to be true, then there is still a weaker form of the argument. Namely that a l... (read more)

6CCC7y
A bit of googling on the Vatican website turned up this document [http://www.vatican.va/archive/ccc_css/archive/catechism/p1s2c1p7.htm], from which I quote: So, the official position of the Vatican is that Genesis uses figurative language; that there was a temptation to disobey the strictures laid in place by God, and that such disobedience was freely chosen; but not that there was necessarily a literal talking snake. In other words, the talking snake is gone, but there is still original sin. As to the question of disagreement between the discoveries of science and the word of scripture, I found a document dated 1893 [http://w2.vatican.va/content/leo-xiii/en/encyclicals/documents/hf_l-xiii_enc_18111893_providentissimus-deus.html] from which I will quote:

It's only fair to compare like with like. I'm sure that I can find some people who profess both a belief that evolution is correct and that monkeys gave birth to humans; and yes, I am aware that this means they have a badly flawed idea of what evolution is. So, in fairness, if you're going to be considering only leading evolutionists in defense of evolution, it makes sense to consider only leading theologians in the question of whether Genesis is literal or figurative.
4Lumifer7y
Because if you replace the talking snake with, say, a monkey which gave Eve the apple and indicated by gestures that Eve should eat it, nothing much would change in Christianity. Maybe St. George would now be rampant over a gorilla instead of a dragon...
5TimS7y
Ultimately, outsiders cannot define the content or centrality of parts of a belief system. If believers say it is a metaphor, then it is a metaphor. In other words, if believers retreat empirically to the point of invisible dragons [http://lesswrong.com/lw/i4/belief_in_belief/], you can't stop them. Invisible dragons aren't incoherent, they are just boring. That large sub-groups of Christians believe something empirically false does not disprove Christianity as a whole, especially since there is widespread disagreement as to who is a "true" Christian. Citation needed. You sound overconfident here.

How do you misunderstand Christianity if you say to people: "There is no evidence of any talking snakes, so it's best to reject any idea that hinges on there existing talking snakes"?

Again, I'm not saying that this is usually a good argument. I'm saying that those who make it present a logically valid case (which is not the case with the monkey-birthing-human argument) and that those who do not accept it, but believe it to be correct, do so because they feel it isn't enough to convince others in their group that it is a good enough argument.

I'm ... (read more)

2CCC7y
The misunderstanding is that Christianity doesn't hinge on the existence of talking snakes, any more than evolution hinges on monkeys giving birth to humans. The error in logic is the same in both arguments.

Of course theists can say false statements, I'm not claiming that. I'm trying to come up with an explanation of why some theists don't accept a certain form of argument. My explanation is that the theists are embarrassed to join someone who only points out a weak argument that their beliefs are silly. They do not make the argument that the "Talking Snakes" argument is invalid, only that it is not rhetorically effective.

0CCC7y
The point of the original cautionary tale suggests that the argument "talking snakes cannot exist, thus Christianity is false" is as valid and as persuasive as the argument "monkeys cannot give birth to humans, thus evolution is false". In both cases, it's an argument strong enough to convince only those who are already convinced that the argument's conclusion is most likely correct; and in both cases, it shows that the arguer fundamentally misunderstands the position he is arguing against.

I just don't think it's as easy as saying "talking snakes are silly, therefore theism is false." And I find it embarrassing when atheists say things like that, and then get called on it by intelligent religious people.

Sure, there is some embarrassment that others may not be particularly good at communicating, and thus saying something like that is just preaching to the choir, but won't reach the theist.

But, I do not find anything intellectually wrong with the argument, so what one is being called out on is being a bad propagandist, meme-gen... (read more)

0Jiro7y
Why can't a theist say something that is false?

Maybe this can work as an analogy:

Right before the massacre at My Lai, a squad of soldiers is pursuing a group of villagers. A scout sees them up ahead at a small river, and he sees that they are splitting up and going in different directions. An elderly person goes to the left of the river and the five other villagers go to the right. The old one is trying to make a large trail in the jungle, so as to fool the pursuers.

The scout waits for a few minutes, when the rest of his squad team joins him. They are heading on the right side of the river and will probabl... (read more)

And now, 1.5 years later, I've written an extra chapter in the tutorial, but written to be the third chapter:

Survey the Most Relevant Literature


Advocacy is all well and good. But I can't see the analogy between MIRI and Google, not even regarding the lessons. Google, I'm guessing, was subjected to political extortion, for which the lesson was maybe "Move your headquarters to another country" or "To do extraordinary business you need to pay extra taxes". I do however agree that the lesson you spell out is a good one.

If all PR is good PR, maybe one should publish HPMoR and sell some hundred copies?

-1shminux10y
I doubt that publishing an incomplete fanfiction is the best way, unless JKR suddenly endorses it.

Would you like to try a non-intertwined conversation? :-)

When you say lobbying, what do you mean and how is it the most effective?

4shminux10y
Lobbying as in advocacy. Google thought they could get away with no political lobbying, until they learned the hard truth. MIRI is not in the same position as Google of course, but the lessons are the same: if you want to convince people, just doing good and important work is not enough, you also have to do a good job convincing good and important people that you are doing good and important work. MIRI/CFAR are obviously doing some work in this direction, like targeted recruiting of bright young mathematicians, but probably not nearly enough. I suspect they never even paid a top-notch marketing professional to prepare an evaluation. I bet they are just winging it, hoping to ride the unexpected success of HPMoR (success in some circles, anyway).
0CarlJ8y
And now, 1.5 years later, I've written an extra chapter in the tutorial, but written to be the third chapter: Survey the Most Relevant Literature [http://ordningochanarki.blogspot.se/2015/01/survey-most-relevant-literature.html]
0ModusPonies10y
Congratulations! I am glad I was wrong.
0CarlJ8y
And now, 1.5 years later, I've written an extra chapter in the tutorial, but written to be the third chapter: Survey the Most Relevant Literature [http://ordningochanarki.blogspot.se/2015/01/survey-most-relevant-literature.html]

Sure, I agree. And I'd add that even those who can show reasonable arguments for their beliefs can get emotional and start to view the discussion as a fight. In most cases I'd guess that those who engage in the debate are partly responsible, by trying to trick the other(s) into traps where they have to admit a mistake, by trying to get them riled up, or by being somewhat rude when dismissing some arguments.

Some time last night (European time) my Karma score dropped below 2, so I can't finish the series here. I'll continue on my blog instead, for those interested.

Unfortunately, my Karma score went below 2 last night (the threshold for being able to post new articles). This might be due to a mistake I made when deciding what facts to discuss in my latest post - it was unnecessary to bring up my own views; I should have picked some random observations. But even if I hadn't posted that article, my score would still be too low, from all the negative reviews on this post. Or from the third post.

In any case, I'll finish the posts on my blog.

The explanation isn't for why people care about politics per se, but for why we care so deeply about politics that we respond to adversity much, much more harshly in political environments than in others. Or rather, our reactions are disproportionate to the actual risks involved. People become angry when discussing whether something should be privatized or whether taxes should be raised. If one believes that there are some general policies that most people benefit from, it's really bad to become angry at those whom you really should be allied with.

That's different from what I'm used t... (read more)

3gothgirl42066610y
I feel like many people (especially the type of people who discuss politics) have strong political opinions that aren't rationally justified. When their beliefs are attacked they get emotional because they can't back it up with logic.

I don't think that the idealistic-pragmatist divide is that great, but if I had to place myself in either camp, it would be the latter. From my perspective this model would not, if followed through, suggest doing anything that will not have a positive impact (from one's own perspective).

0ChristianKl10y
Pragmatists don't talk about fishermen who don't want bridges to be built but about realpolitik. Your model is built upon idealistic foundations instead of observations of how politics works in the real world.

I believe I should be able both to show how to think about politics and then to use that structure to show that some political action is preferable to none - and by my definition, work on EA and AI is, for the methods I mention above, a political question.

I do have a short answer to the question of why to engage in politics. But it will be expanded in time.

I would beg to differ as to this post not having any content. It affirms that politics is difficult to talk about; that there's a psychological reason for that; that politics has a large impact on our lives; that a rational perspective on politics requires that one can answer certain questions; that the answers to these questions can be called a political ideology; and that such ideologies should be constructed in a certain way. You may not like this way of introducing a subject - by giving a brief picture of what it's all about - but that's another story.

I... (read more)

2BlindIdiotPoster10y
If you do finish the series, and manage to insightfully and productively discuss the topics you outlined, I'll change my downvote to an upvote.

I agree with your second point, that one should be able to determine the value of incremental steps towards goal A in relation to incremental steps towards goal B, and every other goal, and vice versa. I will fix that, thanks for bringing it up!

If you rank your goals, so that any amount of the first goal is better than any amount of the second goal etc., you might as well just ignore all but the first goal.

Ranking does not imply that. It only implies that I prefer one goal over another, not that coming 3% of the way to reaching that goal is more pr... (read more)
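
To spell out the distinction (my own sketch, with made-up weights and numbers): a goal ranking does not force the lexicographic reading quoted above, where any progress on the top goal dominates; a weighted scoring lets incremental progress on a lower-ranked goal outweigh tiny progress on a higher-ranked one.

```python
# Two ways of turning a goal ranking into a decision rule. The weights and the
# progress numbers below are purely illustrative.
from typing import Dict, List, Tuple

GOALS: List[str] = ["goal_A", "goal_B"]                      # A is ranked above B
WEIGHTS: Dict[str, float] = {"goal_A": 0.7, "goal_B": 0.3}   # hypothetical trade-off

def lexicographic_key(progress: Dict[str, float]) -> Tuple[float, ...]:
    """Strict ranking: compare on goal_A first; goal_B only breaks ties."""
    return tuple(progress[g] for g in GOALS)

def weighted_score(progress: Dict[str, float]) -> float:
    """Trade-off reading: incremental progress on both goals counts."""
    return sum(WEIGHTS[g] * progress[g] for g in GOALS)

option_1 = {"goal_A": 0.03, "goal_B": 0.00}  # 3% of the way toward the top goal
option_2 = {"goal_A": 0.00, "goal_B": 0.90}  # large progress on the second goal

print(max([option_1, option_2], key=lexicographic_key))  # picks option_1
print(max([option_1, option_2], key=weighted_score))     # picks option_2
```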

Hm, so economy fixing is like trying to make the markets function better? Such as when Robert Shiller created a futures market for house loans, which helped to show that people invested too much in housing?

No, that was not part of my intentions when I thought of this. But I'd guess that they would be - or else it won't be used by anyone.

The goal of this sequence is to create a model which enables one to think more rationally about political questions. Or, maybe, societal questions (since I may be using the word politics too broadly for most here). The intention was to create a better tool of thought.

0ChristianKl10y
I don't think it succeeds. Thinking rationally about political questions is about seeing shades of gray. You basically argue for an idealistic libertarian view of politics which has historically been held mainly by people who don't win any political conflicts.
1Rukifellth10y
Yes, but I owe you an apology for bringing economics up. I fell for some cognitive bias or other when remembering the number of economics posts - Stuart Armstrong's [http://lesswrong.com/r/discussion/lw/hxn/valuable_economics_knowledge_available_ironically/] is the only one in recent months where economics was the end and not a means to some other discussion. Basically, everyone on this board has made a pre-commitment to not expend energy on politics. You'll definitely need a sequence post on the benefits of political thought as a general concept, before any posts about how to think about politics properly. Why before how.

The way I see it, all of these - especially the last point, which sounds unfamiliar, do you have a link? - are potentially political activities. Raising funds for AI or some effective charity is a political action, as I've defined it. The model I'm building in this sequence doesn't necessarily say that it's best to engage in normal political campaigns or even to vote. It is a framework to create one's own ideology. And as such it doesn't prescribe any course of action, but what you put into it will.

-2Rukifellth10y
I have no link, but there's a significant number of posts about economic science for a community of non-business persons. I guess behind-the-scenes economy fixing is differentiated from efficient charity by its scale, rather than by anything fundamental. So you mean that this politics sequence is intended to augment the quest for AI, efficient charity and/or economy fixing?

Politics may or may not be worth one's while to pursue. The model I'm building will be used to determine if there are any such actions or not, so my full answer to your question will be just that model and after it is built, my ideology which will be constructed by it.

I also have a short answer, but before giving it, I should say that I may be using a too broad definition of politics for you. That is, I would regard getting together to reduce a certain existential risk as a political pursuit. Of course, if one did so alone, there is no political problem to... (read more)

The S&P 500 has outperformed gold since quantitative easing began. I don't believe there has been a time in the past four years where a $100 gold purchase would be worth more today than a $100 S&P 500 purchase.

According to Wikipedia, QE1 started in late November 2008. Between November 28th 2008 and December 11th 2012 these were their respective returns:

Gold: 110%
S&P 500: 47.39%

Now index funds are normally better, but just look at the returns from late 2004 to today:

Gold: 165%
S&P 500: 45%

Gold has been rising more or less steadily over all th... (read more)
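
For reference, the return figures above are just the percentage change between the purchase date and the valuation date. A small sketch of the arithmetic, with prices normalized to 100 at the purchase date (the actual dollar prices come from the historical series, not from this code):

```python
# Percentage return from buying at start_price and holding to end_price.
# Prices below are normalized to 100 at the purchase date, purely for illustration.
def total_return(start_price: float, end_price: float) -> float:
    return (end_price / start_price - 1.0) * 100.0

gold_start, gold_end = 100.0, 210.00   # normalized; a 110% gain
spx_start, spx_end = 100.0, 147.39     # normalized; a 47.39% gain

print(f"Gold:    {total_return(gold_start, gold_end):.2f}%")
print(f"S&P 500: {total_return(spx_start, spx_end):.2f}%")
```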

One of the points I presented that you didn't address is that other people in society teach their kids that stealing is bad and they shouldn't do it.

I believe that also goes under the rubric of voluntary action, so it does not constitute treating others as mere means for my own goal. Like, if you exchange with people or do anything voluntarily together, all of you consent to being used (if one wants to put it like that). The same with morality; if people teach their children to behave nicely, and property is somewhat dependent on that condition, it does not ... (read more)

Well, my point was that this assumes a whole theory of property, and a specific one at that. There are others.

It seemed like your point earlier was that my argument lacked a proof that using others' property also meant using others. The point you bring up now is, as I understand it, that while it may be true that stealing the property of others amounts to treating them as mere means for one's own end - another, equally plausible, view of property amounts to the view that simply owning property is the same as merely using others for one's own end.

The ar... (read more)

6fubarobfusco11y
One of the points I presented that you didn't address is that other people in society teach their kids that stealing is bad and they shouldn't do it. They don't merely help to enforce your property claims; they also communicate and teach your property claims. This is the means by which you can count on almost everyone refraining from violating your property claims. Why is theft scarce enough that you can conceive of defending against it, instead of being so common that it is nameless? Why does the concept of "property" bear any weight at all? Because lots of people expend effort to make it so. (Anticipated rebuttal: "The concept of property is part of human nature, or otherwise obvious; it is not socially constructed. It is available to people [for instance by introspection] and doesn't have to be taught." Responses: ① If so, why do we spend so much effort teaching it? ② People claim all sorts of things are inherently or obviously true in defiance of the observed fact that these things are controversial. ③ Even if the concept of property were inherent or obvious, that doesn't mean that the specific sorts of property claims that are found in a specific society do not have to be learned, as they differ from society to society.) Respect for your property claims isn't provided just by the threat of retaliatory force, but also by people's training to recognize specific sorts of things as likely property. You don't have to post a guard outside your house to instruct each passerby that your apple trees are private property and that stealing your apples is bad. That's something you can assume almost everyone has learned — through the positive efforts of parents, teachers, etc. (Yes, you might lose a few apples to naughty kids, but you won't lose nearly as many as if all your neighbors just assumed that those apples were free for the taking.) You've also introduced the idea of "force", creating an analogy between theft (simple removal of property) and violence (e.g. robbe