All of Riothamus's Comments + Replies

What's up with Arbital?

I had not imagined a strict barter system or scaling of paid content; the objective in both cases is only to make up the difference between the value content producers want and the value they expect for the first wave.

The point of diminishing returns would be hard to judge for paid content, but perhaps the two strategies could work together: survey prospective content producers for the content they want to see, and then pay for the most popular subjects to draw the rest. Once you have enough content established to draw the first wave of voluntary conten... (read more)

What's up with Arbital?

I see finding high-quality content producers was a problem; you reference math explanations specifically.

I notice that people are usually good at providing thorough and comprehensible explanations in only their chosen domains. That being said, people are interested in subjects beyond those they have mastered.

I wonder if it is possible to approach quality content producers with the question of what content they would like to passively consume, and then try and approach networks of content producers at once. For example: find a game theory explainer who want... (read more)

Alexei (5y): Hmm, I'm skeptical a barter system would work. I don't think I've seen a successful implementation of it anywhere, though I do hear about people trying. Yes, we've considered paying people, but that's not scalable. (A good 3-5 page explanation might take 10 hours to write.)
OpenAI makes humanity less safe

None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.

Any given popular military authority can be read, but if you'd like a specialist in defense, try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.

OpenAI makes humanity less safe

I disagree, for two reasons.

  1. AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.

  2. Defense is a fundamentally harder problem than offense.

The simple illustration is geometry; defending a territory requires 360 degrees * 90 degrees of coverage, whereas the attacker gets to choose their vector.

This drives a scenario where the security trap prohibits non-deployment of military AI, and the fundamental problem of defense means the AIs will privilege offensive solutions to security problems. T... (read more)

roystgnr (5y): But attacking a territory requires long supply lines, whereas defenders are on their home turf. But defending a territory requires constant readiness, whereas attackers can make a single focused effort on a surprise attack. But attacking a territory requires mobility for every single weapons system, whereas defenders can plug their weapons straight into huge power plants or incorporate mountains into their armor. But defending against violence requires you to keep targets in good repair, whereas attackers have entropy on their side. But attackers have to break a Schelling point, thereby risking retribution from otherwise neutral third parties, whereas defenders are less likely to face a coalition. But defenders have to make enough of their military capacity public for the public knowledge to serve as a deterrent, whereas attackers can keep much of their capabilities a secret until the attack begins. But attackers have to leave their targets in an economically useful state and/or in an immediately-militarily-crippled state for a first strike to be profitable, whereas defenders can credibly precommit to purely destructive retaliation. I could probably go on for a long time in this vein. Overall I'd still say you're more likely to be right than wrong, but I have no confidence in the accuracy of that.
bogus (5y): What matters is not whether defense is "harder" than offense, but what AI is most effective at improving. One of the things AIs are expected to be good at is monitoring those "360 * 90 degrees" for early signs of impending attacks, and thus enabling appropriate responses. You can view this as an "offensive" solution since it might very well require some sort of "second strike" reaction in order to neuter the attack, but most people would nonetheless regard such a response as part of "defense". And "a huge surplus of distributed offensive power" is of little or no consequence if the equilibrium is such that the "offensive" power can be easily countered.
OpenAI makes humanity less safe

I am curious about the frequency with which the second and fourth points get brought up as advantages. In the historical case, multipolar conflicts are the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

whpearson (5y): Also, as a result, every country that has a computer science department can try and build something to protect itself if any other country messes up the control problem. If you have a moderate take-off scenario, that can be pretty important.
bogus (5y): "Powerful AI" is really a defense-favoring technique, in any "belligerent" context. Think about it: one of the things "AIs" are expected to be really good at is prediction and spotting suspicious circumstances (this is quite true even in current ML systems). So predicting and defending against future attacks becomes much easier, while the attacking side is not really improved in any immediately useful way. (You can try and tell stories about how AI might make offense easier, but the broader point is, each of these attacks plausibly has countermeasures, even if these are not obvious to you!) The closest historical analogy here is probably the first stages of WWI, where the superiority of trench warfare also heavily favored defense. The modern 'military-industrial complexes' found in most developed countries today are also a 'defensive' response to subsequent developments in military history. In both cases, you're basically tying up a whole lot of resources and manpower, but that's little more than an annoyance economically. Especially compared to the huge benefits of (broadly 'friendly') AI in any other context!
April 2017 Media Thread

I have high hopes that the ongoing catastrophe of this system will discredit the entire design philosophy of the project, and the structure of priorities that governed it. I want it to be a meta-catastrophe, in other words.

The site looks very good. How do you find the rest of it?

Jocko Podcast

Here is a method I use to good effect:

1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away by itself.

2) Find a substitution for those pros.

Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over. Personal example: I recently had a low-grade freakout over deciding to do a particular paperwork process that is famously slow and ... (read more)

[Link] How the Simulation Argument Dampens Future Fanaticism

On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we look at a different question where we redefine it to be the first question all over again?

The progressive case for replacing the welfare state with basic income

I think the basic income is an interesting proposal for a difficult problem, but I downvoted this post.

  1. This is naked political advocacy. Moreover, the comment is hyperbole and speculation. A better way to address this subject would be to try and tackle it from an EA perspective - how efficient is giving cash compared to giving services? How close could we come if we wanted to try it as charity?

  2. The article is garbage. Techcrunch is not a good source for anything, even entertainment in my opinion. The article is also hyperbolic and speculative, while bei

... (read more)
Open thread, Jul. 25 - Jul. 31, 2016

If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others

I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.

In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, whic... (read more)

TheAncientGeek (5y): Who's "we"? Lesswrongians seem pretty motivated to assert the correctness of physicalism and the wrongness of dualism, supernaturalism, etc. I'm not following that. Can you give concrete examples? What I had in mind was Aristotelian metaphysics, not Aristotelian physics. The metaphysics, the accident/essence distinction and so on, failed separately.
Open thread, Jul. 25 - Jul. 31, 2016

This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.

  1. We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
  2. We don't have a way of searching for new ontologies.

So it looks like all we have done is go from best possible explanation to best available explanation where some superior explanation occupies a space of almost-zero in our probability distribution.

TheAncientGeek (5y): If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others ... then I don't think the situation is quite that bad: it's partly true, but there are also criteria that span ontologies, like parsimony. The point is that we don't have a mechanical, algorithmic way of searching for new ontologies. (It's a very LessWrongian piece of thinking to suppose that means there is no way at all.) Clearly, we come up with new ontologies from time to time. In the absence of an algorithm for constructing ontologies, doing so is more of a creative process, and in the absence of algorithmic criteria for evaluating them, doing so is more like an aesthetic process. My overall points are that 1) philosophy is genuinely difficult... its failure to churn out results rapidly isn't due to a boneheaded refusal to adopt some one-size-fits-all algorithm such as Bayes... 2) ... because there is currently no algorithm that covers everything you would want to do. It's a one-word difference, but it's a very significant difference in terms of implications. For instance, we can't quantify how far the best available explanation is from the best possible explanation. That can mean that the use of probabilistic reasoning doesn't go far enough.
Open thread, Jul. 25 - Jul. 31, 2016

Echo chamber implies getting the same information back.

It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.

Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?

TheAncientGeek (5y): Without having a way of ranging across ontology-space, how can we distinguish the merits of different ontologies? But we don't have such a way. In its absence, we can pursue an ontology to the point of breakdown, whereupon we have no clear path onwards. It can also be a slow process ... it took centuries for scholastic philosophers to reach that point with the Aristotelian framework. Alternatively, if an ontology works, that is no proof that it is the best possible ontology, or the final answer ... again because of the impossibility of crawling across ontology-space.
Superintelligence via whole brain emulation

If the artificial intelligence from emulation is accomplished through tweaking an emulation and/or piling on computational resources, why couldn't it be accomplished before we start emulating humans?

Other primates, for example. Particularly in the case of the destructive-read and ethics-of-algorithmic-tweaks, animal testing will surely precede human testing. To the extent a human brain is just a primate brain with more computing power, another primate with better memory and clock speed should serve almost as effectively.

What about other mammals with culture and communication, like a whales or dolphins?

Something not a mammal at all, like Great Tits?

Open Thread, Aug. 15. - Aug 21. 2016

Is anyone in a position to offer some criticism (or endorsement) of the work produced at Gerwin Schalk's lab?

I attended a talk given by Dr. Schalk in April 2015, where he described a new method of imaging the brain, which appeared to be a better-resolution fMRI (the image in the talk was a more precise image of motor control of the arm, showing the path of neural activity over time). I was reminded of it because Dr. Schalk spent quite a bit of time emphasizing doing the probability correctly and optimizing the code, which seemed relevant when the recent criticism of fMRI software was published.

Advice to new Doctors starting practice

This is enough of a problem for small medical practices in the US that it outweighs a good bedside manner and confidence in the doctor's medical ability.

I am confident that this has a large effect on the success of an individual practice; it may fall under the general heading of business advice for the individual practitioner. Even for a single-doctor office, a good secretary and record system will be key to success.

This information comes chiefly from experience of and interviews with specialists (dermatology and gynaecology) in the US.

Advice to new Doctors starting practice

I know this is banal, but ensure excellent administration.

Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all this attached to the correct patient.

Scheduling, filing and communication. Lacking these, medical expertise is meaningless. So get the best damn admin and IT you can possibly afford.

anandjeyahar (5y): Very valid and good point (added). I briefly touched on it before too, but mostly had individual practitioners in mind rather than organized hospitals with administration and support. (India is moving towards a lot more of the organized-hospitals model, but IT is non-existent, and administration is mostly seat-in-the-ass jobs.)
Open thread, Jul. 25 - Jul. 31, 2016

Let me try to restate, to be sure I have understood correctly:

We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

TheAncientGeek (5y): Maybe they can[*], but it is not exactly a good thing... if you stick to one method of analysis, you will be in an echo chamber. [*] An example might be the way reality looks mathematical to physics, which some people are willing to take fairly literally.
Superintelligence and physical law

So am I correct in inferring that this program looks for any mathematical correlations in the data, and returns the simplest and most consistent ones?
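If that reading is right, the search could be sketched as a toy scoring loop: rate each candidate formula by its fit to the data plus a simplicity penalty, and return the winner. To be clear, the candidate set, the penalty weight, and all the names below are my own illustrative assumptions, not the actual method of the program under discussion.

```python
def fit_error(f, data):
    """Sum of squared errors of formula f over (x, y) pairs."""
    return sum((f(x) - y) ** 2 for x, y in data)

def best_formula(candidates, data, complexity_weight=0.1):
    """Pick the candidate minimizing fit error plus a simplicity penalty.

    candidates: list of (name, function, complexity) triples.
    """
    def score(c):
        name, f, complexity = c
        return fit_error(f, data) + complexity_weight * complexity
    return min(candidates, key=score)[0]

# Data generated by y = 2x + 1; the linear candidate fits perfectly
# and is simpler than the padded "cubic" that fits equally well.
data = [(x, 2 * x + 1) for x in range(10)]
candidates = [
    ("constant", lambda x: 5.0, 1),
    ("linear", lambda x: 2 * x + 1, 2),
    ("cubic-overfit", lambda x: 2 * x + 1 + 0.0 * x**3, 4),
]
print(best_formula(candidates, data))  # linear
```

The simplicity penalty is what breaks the tie between equally predictive formulas, which seems to be the "simplest and most consistent" behavior being asked about.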

Open thread, Jul. 25 - Jul. 31, 2016

This is a useful bit of clarification, and timely.

Would that change if there was a mechanism for describing the criteria for the best explanation?

For example, could we show from a body of evidence the minimum entropy, and therefore even if there are other explanations they are at best equivalent?

TheAncientGeek (5y): Equivalent in what sense? The fact that you can have equivalently predictive theories with different ontological implications is a large part of the problem. Another part is that you don't have exhaustive knowledge of all possible theories. Being able to algorithmically check how good a theory is would be a tall order, but even if you had such an algorithm it would not be able to tell you that you had hit the best possible theory, only the best out of the N fed into it.
Open thread, Jul. 25 - Jul. 31, 2016

There is a LessWrong wiki entry for just this problem:

The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.

Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusio... (read more)

TheAncientGeek (5y): If inference to the best explanation is included, we can't do that. We can know when we have exhausted all the prima facie evidence, but we can't know when we have exhausted every possible explanation for it. What you haven't thought of yet, you haven't thought of. Compare with the problem of knowingly arriving at the final and perfect theory of physics.
Open thread, Jul. 25 - Jul. 31, 2016

Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.

As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation, we can evaluate the evidence for it.

The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.

TheAncientGeek (5y): Does evidence have to be direct evidence? Or can something like inference to the best explanation be included? That is exactly the sort of situation where direct evidence is useless.
Arielgenesis (5y): How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?
Open thread, Jul. 18 - Jul. 24, 2016

Is there a procedure in Bayesian inference to determine how much new information in the future invalidates your model?

Say I have some kind of time-series data, and I make an inference from it up to the current time. If the data is costly to get in the future, would I have a way of determining when cost of increasing error exceeds the cost of getting the new data and updating my inference?

Lumifer (5y): Generally speaking, for this you need a meta-model, that is, a model of how your model will change (e.g. become outdated) with the arrival of new information. Plus, if you want to compare costs, you need a loss function which will tell you how costly the errors of your model are.
MrMind (5y): Unfortunately, to pull this off you need to look closely at both your model and the model of the error; there's no general method AFAIK.
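The meta-model framing above can be made concrete with a toy sketch: assume the tracked quantity drifts like a random walk, so the error variance of a stale estimate grows linearly with time since the last update, and fetch new data once the expected loss from that error exceeds the cost of the data. The random-walk assumption, the quadratic loss, and every name below are illustrative choices for this sketch, not a general procedure.

```python
def should_update(t_since_update, drift_var, loss_per_unit_var, data_cost):
    """Toy decision rule: is fresh data worth its cost yet?

    Meta-model assumption: the quantity follows a random walk, so the
    variance of the stale estimate's error grows linearly with time,
    Var(error) = drift_var * t. Under quadratic loss, expected loss is
    proportional to that variance.
    """
    expected_loss = loss_per_unit_var * drift_var * t_since_update
    return expected_loss > data_cost

# With unit drift and unit loss weight, data costing 5 becomes worth
# buying only after more than 5 time steps of staleness.
print(should_update(10, 1.0, 1.0, 5.0))  # True
print(should_update(3, 1.0, 1.0, 5.0))   # False
```

A real version would replace the linear-variance assumption with whatever drift the meta-model actually estimates, which is exactly the "no general method" caveat in the reply.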
Unofficial Canon on Applied Rationality

That doesn't mean that it's inherently impossible to transmit knowledge via writing, but it's hard.

Agreed. The more I consider the problem, the higher my confidence that investing enough energy in the process is a bad investment for them.

Another romantic solution waiting for the appropriate problem. I should look into detaching from the idea.

Unofficial Canon on Applied Rationality

I should amend my assumption to: uncontrolled transmission is inevitable. The strategy so far has been to use the workshops, and otherwise decline to distribute the knowledge.

The historical example should be considered in light of what the goals are. The examples you give are strategies employed by organizations trying to deny all knowledge outside of the initiated. Enforcing secrecy and spreading bad information are viable for that goal. CFAR is not trying to deny the knowledge, only to maximize its fidelity. What is the strategy they can use to maximize ... (read more)

ChristianKl (5y): I think most of the organisations I'm talking about don't have a binary initiate/non-initiate criterion whereby the initiated get access to all knowledge. As people learn more they get access to more knowledge. Most Scientologists haven't heard of Xenu. At least that was the case 10 years ago. LW dojos are a way for knowledge to be transmitted outside of workshops. I also think that alumni are generally encouraged to explain knowledge to other people. Peer-to-peer instruction has natural filters that reduce completely passive consumption. That doesn't mean that it's inherently impossible to transmit knowledge via writing, but it's hard.
Unofficial Canon on Applied Rationality

You have just described the same thing Duncan cited as a concern, only substituted a different motive; I am having trouble coming to grips with the purpose of the example as a result.

I propose that the method of organizing knowledge be considered. The goal is not to minimize the information, but to minimize the errors in its transmission. I assume transmission is inevitable; given that, segregating the information into lower-error chunks seems like a viable strategy.

ChristianKl (5y): You referred to historical techniques that are used. Generally, historical groups actually have defenses against lay people accessing knowledge, even if those lay people think they are experts and should be able to access the knowledge. Whether it's sworn secrecy, hiding knowledge in plain sight, or simple lies to mislead uninitiated readers, there's a huge toolbox. Presumably CFAR thinks that their workshop is a low-error chunk of consuming their material.
Unofficial Canon on Applied Rationality

We aren't at a point yet where we distinguish "basic" from "advanced" practices.

This is a good point; I have assumed that there would eventually be a hierarchy of sorts established. I was allowing for instruction being developed (whether by CFAR or someone else) even down below the levels that are usually assumed in-community. When Duncan says,

Picture throwing out a complete text version of our current best practices, exposing it to the forces of memetic selection and evolution.

I interpret this to mean even by people who hav... (read more)

ChristianKl (5y): "Eventually" is a key word. I think in ten years CFAR's curriculum will be more settled than it is today. Take trigger-action plans (TAPs). They are considered a basic. In the scientific literature and in CFAR's first workshops they were called "implementation intentions". CFAR found that it's useful to have a short word like TAP to make the concept more usable. That's not a change in something basic.

A while ago someone in this community tried to write a guide for a self-help technique. Let's call him Bob. Bob read the official guide. The official guide for the technique referenced a few ingredients that Bob didn't know. To do the technique properly the person doing it needs to do X and Y. Bob didn't know what X and Y were supposed to be. Bob, however, was widely read in the self-help literature, so he simply replaced X and Y with M and N while writing his own guide for the technique. Bob also didn't have much experience with actually using the technique. In that case I told Bob not to publish that guide, and I think the draft for the guide didn't circulate further or get more work.

I think before I went to my first self-help seminar I was like Bob. I spent 4 years spending 3 hours per day at a personal development forum where I was a moderator, so I thought I knew what I was talking about. LessWrong draws people like that, who read a lot but often don't practice techniques enough.

From this writeup, take the part about CoZE exercises. The writeup says that it's good to develop a playful attitude, but if you look at its step-by-step list, the steps it gives likely don't develop a playful attitude. I don't know the quality of CFAR's CoZE teachings, but if CFAR knows how to actually teach people to do them with a playful attitude, CFAR alumni do something that a person trying to follow the writeup won't do. Having read the writeup might make it harder for someone who comes to CFAR to actually understand what CFAR teaches as CoZE, because the person has already a preconve
Unofficial Canon on Applied Rationality

Sigh. I continue to forget how much of a problem that is. It is meant in the historical, rather than colloquial, meaning of the word. Since it apparently does not go without saying, the easily misunderstood term should never be used in official communication of any sort.

I apologize for the lack of clarity.

ChristianKl (5y): Do you know what the historical techniques happen to be? Let's take Maimonides, whose behavior is well described by Leo Strauss. There's a law in the Torah against teaching the secrets in the Torah outside of 1-to-1 teaching. If Leo Strauss is to be believed, Maimonides purposefully writes wrong things to mislead naive readers and keep advanced knowledge from them. If CFAR would write purposefully misleading things in their public material to pander to naive readers and keep advanced knowledge from them, that would produce problems. In the time of the internet, don't use words publicly that you wouldn't use in official communication.
Lumifer (5y): Call it something like "gnostic practices" so that hoi polloi have no idea what it means, but it sounds respectable :-)
Unofficial Canon on Applied Rationality

I wonder if it would be possible to screen out some of the misinterpretation and recombination hazards by stealing a page from mystery religions.

Adherents were initiated by stages into the cult; mastery of the current level of mysteries was expected before gaining access to the next.

Rather than develop a specific canon or doctrine, CFAR could build the knowledge that new practices supersede the old, basic practices must come before advanced practices, and precisely what practices should have been tackled previously and will be tackled next into everything ... (read more)

ChristianKl (5y): We aren't at a point yet where we distinguish "basic" from "advanced" practices. Most of what CFAR teaches is a 4-day workshop. CFAR doesn't try to teach anything that takes a year to understand. The idea that basics are somehow easy to understand also mistakes a lot about what learning deep knowledge is about. Basics are hard because they are fundamentals and affect a lot. When dancing Salsa there was the saying: "At congresses beginners take the intermediate classes, intermediates take the advanced classes and the advanced people take the beginners' classes". Today I was at my meditation/movement class and the teacher (with ~15 years in the method and likely much more than 10,000 hours of meditation) was saying that she still fails to have a good grasp on the basics of rhythm and that it eludes her.
Lumifer (5y): Yes, we should definitely make CFAR/LW look more like a cult :-/
The map of cognitive biases, errors and obstacles affecting judgment and management of global catastrophic risks

Thank you for doing this work. I think that a graphical representation of the scope of the challenge is an excellent idea, and merits continuous effort in the name of making communication and retention easier.

That being said, I have questions:

1) What is the source of that text document? The citations consist almost exclusively of works concerning nanomachines. None of the citations concern biases, and do not reference people like Bostrom or Kahneman despite clearly being familiar with their work (at least second hand).

2) Am I correct to infer that the divi... (read more)

turchin (5y): Thanks for your comment. I think of the document as a draft, and I published it to get some valuable feedback. In the text version of the document there is literature after each chapter, and Kahneman is there, maybe not as often as he should be. But most biases were "reinvented" by me, as well as the idea to use X and Y axes for typology and timing. It is an interesting idea to add collective biases. I am also thinking about adding a block of biases which impede scientific research, like publication bias. These will be collective biases. My first language is Russian and I used the help of an editor to spell-check and rewrite some parts of the map.
Zombies Redacted

This gives us these options under the Chalmers scheme:

Same input -> same output & same qualia

Same input -> same output & different qualia

Same input -> same output & no qualia

I infer the ineffable green-ness of green is not even wrong. We have no grounds for thinking there is such a thing.

Zombies Redacted

They are meant to be arbitrarily accurate, and so we would expect them to include qualia.

However, in the Chalmers vein consciousness is non-physical, which suggests it cannot be simulated through physical means. This yields a scenario very similar to the identical-yet-not-conscious p-zombie.

TheAncientGeek (5y): Who's "we"? They are only meant to be functional (input-output) duplicates, and a large chunk of the problem of qualia is that qualia are not in any remotely obvious way functions. If you think consciousness is non-physical, you would think the sims are probably zombies. You would also think that if you are a physicalist but not a computationalist. Physicalism does not guarantee anything about the nature of computational simulations. Chalmers' actual position is that consciousness supervenes on certain kinds of information processing, so that a sufficiently detailed simulation would be conscious: he's one of the "we".
Zombies Redacted

What do people in Chalmers's vein of belief think of the simulation argument?

If a person is plugged into an otherwise simulated reality, do all the simulations count as p-zombies, since they match all the input-output and lack-of-qualia criteria?

TheAncientGeek (5y): They don't exactly count as p-zombies, since they are functional simulations, not atom-by-atom duplicates. I call such zombies c-zombies, for computational zombies.

Do they lack qualia? How accurate are these simulations meant to be?

Zombies Redacted

I do not think we need to go as far as i-zombies. We can take two people, show them the same object under arbitrarily close conditions, and get the answer of 'green' out of both of them while one does not experience green on account of being color-blind.

gjm (5y): What do you infer from being able to do this? (Surely not that qualia are nonphysical, which is the moral Chalmers draws from thinking about p-zombies; colour-blindness involves identifiable physical differences.)
Market Failure: Sugar-free Tums

This looks like an information problem.

It is useful to remember that the market is an abstraction of aggregated transactions. The basic supply and demand graphs they teach us in early econ rely on two assumptions: rational agents, and perfect information.

I expect the imperfect information problem dominates in cases of new products, because producers have a hard time estimating return, and customers don't even know it exists. VCs are largely about developing a marginal information advantage in this space. Interestingly, all of the VCs I have personally inte... (read more)

Powering Through vs Working Around

It is worth keeping in mind that how to defeat X is not well-defined. The usual method for circumventing the planning fallacy is to use whatever the final cost was last time. What about cases where there isn't a body of evidence for the costs? Rationality is just such a case; while we have many well-defined biases, we have few methods for overcoming them.

As a consequence, I determine whether to work around X or defeat it primarily based on how frequently I expect it to come up. The cost of X I find less relevant for two reasons: one, I have a preference agains... (read more)
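The "use whatever the final cost was last time" heuristic is reference-class forecasting: ignore the inside-view plan and predict from the distribution of past actual costs. A minimal sketch, with hypothetical figures:

```python
# Past realized costs (hours) for similar projects; made-up numbers for illustration.
past_actual_costs = [12.0, 15.5, 9.0, 18.0, 14.5]

def reference_class_forecast(past_costs):
    """Predict the next cost as the mean of past realized costs,
    rather than from the new project's own (optimistic) plan."""
    return sum(past_costs) / len(past_costs)

print(reference_class_forecast(past_actual_costs))  # 13.8
```

This is exactly the move that fails for most biases: there is no comparable body of recorded outcomes to average over.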

Meme: Valuable Vulnerability

Military bonding is an interesting comparison. Training in a professional military relies on shared suffering to build a strong bond between the members of a unit. If we model combat as an environment so extreme that vulnerability is inescapable, the function of vulnerability as a bonding trait makes sense.

It also occurs to me that we almost universally have more control over how we signal emotions than how we feel them. The norm would therefore be that we feel more emotions than we show; by being vulnerable and signaling our emotions, other people can empathize instinctively and may feel greater security as a result.

Open Thread, January 11-17, 2016

What are your criteria for good foreign policy choices then? You have conveyed that you want Iraq to be occupied, but Libya to be neglected, so non-intervention clearly is not the standard.

My current best guess is 'whatever promotes maximum stability'. Also, how do you expect these decisions are currently made?

0sight6yI wouldn't object nearly as much to occupying Libya as to what Obama actually did. Namely, intervene just enough to force Gaddafi out and leave a huge mess. Actually I would still object, but that's because Gaddafi had previously abandoned his WMD program under US pressure. So getting rid of him now sends a very bad message to other third world dictators contemplating similar programs.
Is Spirituality Irrational?

I would also have an easier time with ASCII, but that's because I (and presumably you also) have been trained in how to produce instructions for machines. This is a negligible chunk of humanity, so I thought it was equally discountable.

I suppose the spiritual analogy would be an ordained priest praying on behalf of another person!

4gjm6yI reckon I could teach anyone of average or better intelligence to read books in hexadecimal ASCII codes in a day. I suspect a substantial majority of highly intelligent and musically inclined people could not learn to "read" pictures of vinyl records in a day, no matter how well taught.
Open Thread, January 11-17, 2016

As compared to what alternative? There is no success condition for large scale ground operations in the region. If the criticism of the current administration is "failed to correct the lack of strategic acumen in the Pentagon" then I would agree, but I wonder what basis we have for expecting an improvement.

It seems to me we can identify problems, but have no available solutions to implement.

0sight6yWell, not intervening in Libya for starters.
Is Spirituality Irrational?

A correct analogy between records and books would pair the phonograph with the text of the book written in hexadecimal ASCII. Both are designed to be interpreted by a machine for presentation to humans.

0gjm6yNot a bad analogy, but for me at least interpreting hexadecimal ASCII is much, much easier than interpreting images of vinyl records. [EDITED to add:] More explicitly, I can do the former, though it would be boring and greatly reduce my enjoyment of reading, but I'm not at all sure that I can do the latter at all without electronic assistance.
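For concreteness, a minimal Python sketch of what "hexadecimal ASCII" text looks like and how mechanically it round-trips (the sample sentence is my own, not from the thread):

```python
# Each character becomes its two-digit hex byte value; a machine (or a very
# patient human with a lookup table) can decode it back into the original text.
text = "Call me Ishmael."
hex_ascii = text.encode("ascii").hex()
print(hex_ascii)  # 43616c6c206d65204973686d61656c2e

decoded = bytes.fromhex(hex_ascii).decode("ascii")
assert decoded == text
```

This is what makes gjm's point plausible: the hex encoding is a fixed, learnable symbol substitution, whereas a record groove is a continuous physical waveform with no character-sized units to memorize.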
"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism"

Until someone demonstrates the utility of engaging in criticism of particular political groups, I will continue to treat it as noise.

We already know out-groups don't use the word rationality the way we do. We also know that assuming others share our information and frame of reference is an error. There is no new information here.

Open Thread, January 11-17, 2016

The thing to consider about the economy is that the president is not only not responsible for it, but mostly irrelevant to it. An easy way to see this is the 2008 stimulus packages. Critics of the president frequently share the graph of national debt growing sharply immediately after he took office - ignoring that the package was demanded by Congress and supported by his predecessor, who wore a different color shirt.

A key in evaluating a president is the difference between what he did, what he could have done, and what people think about him. Consider that the part... (read more)

0sight6ySo what about Libya? What about the fight against ISIS? The former was a quick-strike operation that caused the country in question to go to hell fast. The latter is an example of things going to hell so badly after a "successfully ended operation" that we had to intervene again.
0[anonymous]6yWhat like Libya? Or the fight against ISIS? The former is an example of a fast intervention that caused things to go straight to hell. The latter is an example of him "ending an operation" and things going to hell so badly that he had to intervene again.
[Link] Introducing OpenAI

If I am reading that right, they plan to oppose Skynet by giving everyone a Jarvis.

Does anyone know their technical people, and whether they can be profitably exposed to the latest work on safety?

[Link] A rational response to the Paris attacks and ISIS

According to RAND's historical analysis of every resolved insurgency since 1944, the best predictor of success in defeating one is achieving conventional military superiority. Details here:

[Link], an argument analysis platform

This looks to be a very good example of the dangers of a little bit of rationality, or a little bit of intelligence. The layout encourages deploying Fully General Counter Arguments. There appears to be no mechanism to ensure the information on which the arguments are based is either good, or agreed upon.

Stupid questions thread, October 2015

Ah - I appear to have misread your comment, then.

Would I be correct in limiting my reading of your remarks to rebutting the generalization you quoted?

Stupid questions thread, October 2015

I find it most relevant to planning and prediction. It helps greatly with realizing that I am not an exception, and so I should take the data seriously.

In terms of things that changed when my beliefs did, I submit the criminal justice system as an example. I now firmly think about crime in terms of individuals being components of a social system, and I am exclusively interested in future prevention of crime. I am indifferent to moral punishment for it.

Stupid questions thread, October 2015

You have oversimplified to uselessness.

A common counter-example is people who do not want the job, for example because it pays less than their current lifestyle costs. That isn't laziness; it is the smart economic decision.

You are also assuming that the trouble of traveling to and from an interview is where the stress and effort lies. I would only credit that as the case if they had a high-demand skill set and were traveling across the country for the in-person interview, which is highly unlikely to apply to someone drawing unemployment benefits. The stress and effort stems from preparation before and performance during an interview, neither of which apply if the goal is to fail at it.

1Jiro6yA counterexample is useful to rebut a generalization. But I didn't say that all people who are punished are people who don't play fair; I said that some people who are punished are people who don't play fair. You can't use a counterexample against a point which says "there are some examples of X"; it's perfectly consistent for there to be some examples, and some other cases that are not examples. I am assuming that that stress is enough to discourage some lazy people. It needn't be a large percentage of the total stress to discourage lazy people; it could be that deliberately failing an interview is only 10% of the stress of a normal interview, but a sufficiently lazy person is unwilling to undergo even 10%.
Rationality Reading Group: Part K: Letting Go

The most interesting segment of this section was The Ritual. I find the problem of how to go about making an effective practice very interesting. I would also like to draw attention to this section:

"I concede," Jeffreyssai said. Coming from his lips, the phrase was spoken with a commanding finality. There is no need to argue with me any further: You have won.

I experienced a phenomenon recently that tends to act as a brake on letting go: the commentary following concession. I was having a conversation with someone, and expressed an opinion. T... (read more)

[Link] Tetlock on the power of precise predictions to counter political polarization

How does this idea square with elections in the United States? Consider pollsters; their job is to make specific predictions based on understood methods using data gathered with also understood methods.

Despite what was either fraud or tremendous incompetence on the part of ideological pollsters in the last Presidential election cycle, and the high degree of public attention paid to it, polarization has not meaningfully decreased in any way I can observe.

I therefore expect that making the candidates generate specific predictions would have little overall effect on polarization.
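As a sketch of what scoring "specific predictions" would actually look like, here is the standard Brier score (mean squared error between stated probabilities and binary outcomes; 0 is perfect, higher is worse), applied to made-up pollster numbers:

```python
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event happened, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: a pollster said 70% / 80% / 40% for three races,
# and the first two events occurred.
print(brier_score([0.7, 0.8, 0.4], [1, 1, 0]))  # roughly 0.097
```

The mechanism exists and is cheap to run; the comment's skepticism is about whether publishing such scores changes anyone's mind, not about whether they can be computed.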
