All of taygetea's Comments + Replies

Why Do You Keep Having This Problem?

A point I think others missed here is that in the TV example, there's more data than in the situations the OP talks about, so mscottveach can say there's a disparity instead of just having the hatemail. Maybe more situations should involve anonymous polling.

The First Fundamental
Crossbow is closer to Mars than pen

If you treat war and conflict as directed intentionality along the lines of the Book of Five Rings, then this is something akin to a call to taking actions in the world rather than spilling lots of words on the internet.

3mbzrl3yThat's how I interpreted this sentence. A little more like "you can't get to Mars by thinking, but by doing," but the war reference makes sense with Mars and the crossbow. The quote that comes to mind is from Miyamoto Musashi, and appears in several places in R:A-Z: "The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this."

I think people tend to need a decent amount of evidence before they start talking about someone looking potentially abusive. Then the crux is "does this behavior seem normal or like a predictive red flag?". In those cases, your lived experience directly influences your perception. Someone's actions can seem perfectly fine to most people. But if some others experience spooky hair-raising flashes of their questionably abusive father or a bad ex, that's evidence. The people who didn't think anything was weird brush off the others as oversensitive, ... (read more)

1Duncan_Sabien4yStrong endorsement of all of this.

This post puts me maybe 50% the way to thinking this is a good idea from my previous position.

My largest qualm about this is well-represented by a pattern you seem to show, which starts with saying "Taking care of yourself always comes first, respect yourself", then getting people to actually act on that in simple, low-risk low-involvement contexts, and assuming that means they'll actually be able to do it when it matters. People can show all the signs of accepting a constructed social norm when that norm is introduced, without that meaningfully ... (read more)

4Duncan_Sabien4yI'm unlikely to revise the aesthetics, but a) the particular operationalization/expression of those aesthetics, and b) the boundary/balance between both the aesthetics and other people's agency are fully open to debate, iteration, and consensus. The whole point is to test out the aesthetic as it exists, to see whether it produces a better life for people, so it's important not to compromise it until some actual testing has taken place. But imagine e.g. a constructed social norm is approved of, proves to be problematic twice, and has one week left before its originally established "re-evaluate" point—I posit you get much better data out of seeing what happens if you keep the norm firmly in place, see the fallout for a week, watch people grumble and adjust, and then re-evaluate on schedule, than if you just constantly say "NOPE, DIDN'T WORK, SCREW THAT." I think there's a strong instinct to buck norms and update in the moment, and that this is a pendulum swing thing—it's good that we do this a lot more than we did two decades ago, but it's bad that we do it as much as we do. There's value in learning to live with rules that don't change, or rules that are slightly stupid, and by setting rules firmly in place for e.g. three weeks at a time, I think you capture some of that value, at a low price in terms of loss of the flexibility thing. Does that seem coherent/a valid response to your qualm? Another way to say this is that I think the bar for "discard this norm" should be raised one notch higher from (straw description) "it bothered one of us once" to "it bothered several of us several times." If you keep it past the former, I think you see interesting effects in how people shape themselves around one another, and I think there's some valuable effect from transferring some sovereignty back from the individual to the social fabric (i.e. everybody's not just quittable at all times).
The Adventure: a new Utopia story

Would you expect to be able to achieve that - maybe eventually - within the world described?

Definitely. I expect the mindspace part to actually be pretty simple. We can do it in uncontrolled ways right now with dreams and drugs. I guess I kind of meant something like those, only internally consistent and persistent and comprehensible. The part about caring about base reality is the kind of vague, weak preference that I'd probably be willing to temporarily trade away. Toss me somewhere in the physical universe and lock away the memory that someone's keep... (read more)

3Stuart_Armstrong4yThat doesn't seem too hard. Actually being in the physical universe might be deemed to be too expensive (or you might have to go to great lengths to earn that possibility). Removing that memory is fine, especially with your permission, but the Powers might add a weak superstition or belief in providence to replace the specific knowledge that someone is watching. That was partially Boon's role here, but that was exploring increased intelligence rather than more quirky possibilities. Basically there are some areas of mindspace that are completely ruled out (continual pain without enjoyment but no motivation to change that), some that are permitted only if they're very rare, some that are allowed for most but not all, and some that are allowable for anyone (e.g. pleasure sensations). As usual, the Powers prefer to use social tools and tricks to enforce those proportions, coercively intervening very rarely.
The Adventure: a new Utopia story

This was great. I appreciate that it exists, and I want more stories like it to exist.

As a model for what I'd actually want myself, the world felt kind of unsatisfying, though the bar I'm holding it to is exceptionally high-- total coverage of my utility-satisfaction-fun-variety function. I think I care about doing things in base reality without help or subconscious knowledge of safety. Also, I see a clinging to human mindspace even when unnecessary. Mainly an adherence to certain basic metaphors of living in a physical reality. Things like space and direction and talking and sound and light and places. It seems kind of quaintly skeuomorphic. I realize that it's hard to write outside those metaphors though.

6Stuart_Armstrong4yCheers :-) Would you expect to be able to achieve that - maybe eventually - within the world described? It's partially that, and partially indicative of the prudence in the approach. Because a self-modifying human mind could end up almost anywhere in mindspace, I conceived of the Powers going out of their way to connect humans with their "roots". There's the extended "humanish" mindspace, where agents remain moral subjects, but I'm conceiving the majority of people to remain clustered to a smaller space around baseline human (though still a huge mindspace by our standards). But you're right, I could have been less skeuomorphic (a word to savour). I can only plead that a) it would have meant packing more concepts into a story already over-packed with exposition, and b) I would have had to predict what metaphors and tools people would have come up with within virtual reality, and I'm not sure I'd have come up with convincing or plausible ones (see all those "a day in the life of someone in 50 years time" types of stories).
A quick note on weirdness points and Solstices [And also random other Solstice discussion]

For context, calling her out specifically is extremely rare, people try to be very diplomatic, and there is definitely a major communication failure Elo is trying to address.

I could be more diplomatic. But I'd still have to name katie for the sake of sparing Alicorn the confusion about her own child-raising.

A quick note on weirdness points and Solstices [And also random other Solstice discussion]

Replied above. There's a strong chilling effect on bringing up that you don't want children at events.

A quick note on weirdness points and Solstices [And also random other Solstice discussion]

From what I've seen, it's not rare at all. I count... myself and at least 7 other people who've expressed the sentiment in private across both this year and last year (it happened last year too). It is, however, something that is very difficult for people to speak up about. I think what's going on is that different people care about differing portions of the solstice (community, message, aesthetics, etc) to surprisingly differing degrees, may have sensory sensitivities or difficulty with multiple audio input streams, and may or may not find children positiv... (read more)

6ChangeMyMind4yI typically don't mind children being present at events (if taken outside if they begin screaming) and don't have particularly strong sensory issues. I imagine that people with either of those would have had an even worse time than I did.
How does personality vary across US cities?

Ah, I spoke imprecisely. I meant what you said, as opposed to things of the form "there's something in the water".

How does personality vary across US cities?

I think you have the causality flipped around. Jonah is suggesting that something about Berkeley contributes to the prevalence of low conscientiousness among rationalists.

0John_Maxwell4yPreviously on LW: Self control may be contagious [http://lesswrong.com/lw/1ml/self_control_may_be_contagious/]
6JonahS4yWhat I had in mind was that the apparent low average conscientiousness in the Bay Area might have been one of the cultural factors that drew rationalists who are involved in the in-person community to the location. But of course the interpretation that you raise is also a possibility.
Open thread, Dec. 12 - Dec. 18, 2016

Nicotine use and smoking are not at all the same thing. Did you read the link?

0Pimgd4yI did not read the link. But I also think that drugging myself like that for this is not OK.
CFAR’s new focus, and AI Safety

To get a better idea of your model of what you expect the new focus to do, here's a hypothetical. Say we have a rationality-qua-rationality CFAR (CFAR-1) and an AI-Safety CFAR (CFAR-2). Each starts with the same team, works independently of each other, and they can't share work. Two years later, we ask them to write a curriculum for the other organization, to the best of their abilities. This is along the lines of having them do an Ideological Turing Test on each other. How well do they match? In addition, is the newly written version better in any case? I... (read more)

Making Less Wrong Great Again

I logged in just to downvote this.

Why CFAR? The view from 2015

I could very well be in the grip of the same problem (and I'd think the same if I was), but it looks like CFAR's methods are antifragile to this sort of failure. Especially considering the metaethical generality and well-executed distancing from LW in CFAR's content.

1Lumifer5yWhat does that mean?
Why CFAR? The view from 2015

There are a few people who could respond who are both heavily involved in CFAR and have been to Landmark. I don't think Alyssa was intending for a response to be well-justified data, just an estimate. Which there is enough information for.

Ask and ye shall be answered

Unrelated to this particular post, I've seen a couple people mention that all your ideas as of late are somewhat scattered and unorganized, and in need of some unification. You've put out a lot of content here, but I think people would definitely appreciate some synthesis work, as well as directly addressing established ideas about these subproblems as a way of grounding your ideas a bit more. "Sixteen main ideas" is probably in need of synthesis or merger.

5Gunnar_Zarncke6yI don't think this is a very charitable view. I admit that I did propose to add a Wiki page for structure, but not because of a lack of quality but rather the opposite: because I see this as very valuable albeit dry matter. I wished more people would pick up on this important FAI (or rather UFAI-prevention) work. Can somebody propose ideas for how to improve uptake? I will start with one: reduce perceived dryness by adding examples or exercises.
7Stuart_Armstrong6yI agree. I think I've got to a good point to start synthesising now.
Stupid Questions September 2015

To correct one thing here, the Bussard ramjet has drag effects. It can only get you to about 0.2c, which makes it pretty pointless to bother with if you already have that kind of command over fusion power.

Rudimentary Categorization of Less Wrong Topics

I would not call this rudimentary! This is excellent. I'll be using this.

Didn't someone also do this for each post in the sequences a while back?

7ScottL6yDo you mean the article summaries [http://wiki.lesswrong.com/wiki/Less_Wrong/Article_summaries]?
Lesswrong real time chat

There's been quite a bit of talk about partitioning channels. And the #lesswrong sidechannels sort of handle it. But it's nowhere near as good. I'm starting to have ideas for a Slack-style interface in a terminal... but that would be a large project I don't have time for.

0metaperture6yI haven't tried it, but this looks like it could be useful for that project: https://github.com/evanyeung/terminal-slack [https://github.com/evanyeung/terminal-slack]
Open Thread August 31 - September 6

Alright, I'll be a little more clear. I'm looking for someone's mixed deck, on multiple topics, and I'm looking for the structure of cards, things like length of section, amount of context, title choice, amount of topic overlap, number of cards per large scale concept.

I am really not looking for a deck that was shared with easily transferable information like the NATO alphabet, I'm looking for how other people do the process of creating cards for new knowledge.

I am missing a big chunk of intuition on learning in general, and this is part of how I want t... (read more)

0eeuuah6yI could send you some of my anki cards, but I don't know that you'll get useful structural information out of them. They tend to be pretty random bits that I think I'll want to know or phrases I want to build associations between. For most things, I take actual notes (I find that writing things down helps me remember the shape of the idea better, even if I never look at them), and only make flashcards for the highest value ideas. It took me several months of starting and quitting anki to start to get the hang of it, and I'm still learning how to better structure cards to be easier to remember and transmit useful information. I found this blog post [http://www.jackkinsella.ie/2011/12/05/janki-method.html] and the two it links to at the top to be useful descriptions of an approach to learning, which incorporates anki among other things
0Barry_Cotter6yBased on my own experience I strongly suspect the only way to do this is to fail repeatedly until you succeed. That said, the following rules are very, very good. If you really, really want an example I can send you my Developmental Psychology and Learning and Behaviour deck. It consists of the entirety of a Cliff's Notes kind of Developmental Psychology book, a better dev psych book's summary section and an L&B book's summary section. In retrospect the Cliff's Notes book was a mistake but I've invested enough in it now that I may as well continue it; most of the cards are mature anyway. I would recommend finding a decent book on the topic you're learning, and writing your own summaries or heavily rewording their summaries and using lots and lots of cloze deletions. I just found this guide to using Anki: http://alexvermeer.com/anki-essentials/ [http://alexvermeer.com/anki-essentials/] It's possible it may be worth looking at. If you really want my deck pm me your email address. http://super-memory.com/articles/20rules.htm [http://super-memory.com/articles/20rules.htm] Here again are the twenty rules of formulating knowledge. You will notice that the first 16 rules revolve around making memories simple! Some of the rules strongly overlap. For example, "do not learn if you do not understand" is a form of applying the minimum information principle, which again is a way of making things simple:
- Do not learn if you do not understand
- Learn before you memorize - build the picture of the whole before you dismember it into simple items in SuperMemo. If the whole shows holes, review it again!
- Build upon the basics - never jump both feet into a complex manual because you may never see the end. Well remembered basics will help the remaining knowledge easily fit in
- Stick to the minimum information principle - if you continue forgetting an item, try to make it as simple as possible. If it does not help, see the remaining rules (cloze deletion, graphics, mnemonic techniques, conve
0Elo6yI don't know if this question will help: What is the least-bad way of doing the thing you want to do that you can think of? (apologies I can be no help because I don't anki; but I wonder if answering this question will help you)
Open Thread August 31 - September 6

Is anyone willing to share an Anki deck with me? I'm trying to start using it. I'm running into a problem likely derived from having never, uh, learned how to learn. I look through a book or a paper or an article, and I find it informative, and I have no idea what parts of it I want to turn into cards. It just strikes me as generically informative. I think that learning this by example is going to be by far the easiest method.

4Vaniver6yThere are many shared Anki decks [https://ankiweb.net/shared/decks/]. In my experience, the hardest thing to get correct in Anki is picking the correct thing to learn, and seeing someone else's deck doesn't work all that well for it because there's no guarantee that they're any good at picking what to learn, either. Most of my experience with Anki has been with lists, like the NATO phonetic alphabet, where there's no real way to learn them besides familiarity, and the list is more useful the more of it you know. What I'd recommend is either picking selections from the source that you think are valuable, or summarizing the source into pieces that you think are valuable, and then sticking them as cards (perhaps with the title of the source as the reverse). The point isn't necessarily to build the mapping between the selection and the title, but to reread the selected piece in intervals determined by the forgetting function.
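The "forgetting function" Vaniver mentions is what spaced-repetition schedulers approximate. A minimal sketch of the idea, loosely based on the SM-2 algorithm that Anki descends from (the constants and the simplifications here are illustrative, not Anki's exact implementation):

```python
def next_interval(prev_interval_days, ease, quality):
    """One review step of a simplified SM-2-style scheduler.

    quality: 0-5 self-rating of recall; below 3 resets the card.
    Returns (new_interval_days, new_ease).
    """
    if quality < 3:
        return 1, ease  # forgot: see the card again tomorrow
    # harder recalls shrink the ease factor, but never below 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return round(prev_interval_days * ease), ease

# a card answered "good" (quality 4) repeatedly gets exponentially growing gaps
interval, ease = 1, 2.5
history = []
for _ in range(4):
    interval, ease = next_interval(interval, ease, quality=4)
    history.append(interval)
print(history)  # intervals stretch out with each successful review
```

The point of rereading "in intervals determined by the forgetting function" is exactly this exponential stretch: each successful recall pushes the next review further out, so mature cards cost almost nothing to maintain.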
Magnetic rings (the most mediocre superpower) A review.

Does anyone have or know anyone with a magnetic finger implant who can compare experiences? I've been considering the implant. If the ring isn't much weaker, that would be a good alternative.

1Username6yI have two magnetic implants, and would be happy to answer questions (see also the AMA I did about two years ago: https://reddit.com/r/IAmA/comments/1vvg7j/ [https://reddit.com/r/IAmA/comments/1vvg7j/] ). The sensations are as OP described, though mine are small enough that I don't have any issues with knives/ferrous materials moving to stick to my fingers. Judging by OP's 20cm range on microwaves, this smaller size is negated by the fact that my magnets sit a lot closer to the nerves - I believe we feel just about the same strength of fields.
0Elo6yI have asked two to comment. will post up their replies.
MIRI Fundraiser: Why now matters

So, to my understanding, doing this in 2015 instead of 2018 is more or less exactly the sort of thing that gets talked about when people refer to a large-scale necessity to "get there first". This is what it looks like to push for the sort of first-mover advantage everyone knows MIRI needs to succeed.

It seems like a few people I've talked to missed that connection, but they support the requirement for having a first-mover advantage. They support a MIRI-influenced value alignment research community, but then they perceive you asking for more mone... (read more)

Bragging Thread July 2015

That's a pretty large question. I'd love to, but I'm not sure where to start. I'll describe my experience in broad strokes to start.

Whenever I do anything, I quickly acclimate to it. It's very difficult to remember that things I know how to do aren't trivial for other people. It's way more complex than that... but I've been sitting on this text box for a few hours. So, ask a more detailed question?

1Elo6yWhat did you mean at first when you described "smashed a bunch of imposter syndrome"? I have suggested to a friend that the feelings they were experiencing were a vein of imposter syndrome and the response I received was along the lines of, "I can't have imposter syndrome; in order to have imposter syndrome I would have to have done something worthwhile compared to others". Of course this comes from a person with Honours in Psychology who is also a concert pianist. I just have a really big ego and can't relate. I am no imposter because I don't work like that. If I was in a room where I felt I was an imposter I would actually be an imposter - hanging out to gather all the secret-room insider-information I was trying to gather. Can you describe the things that changed your imposter syndrome from "screaming at me about how I am not good enough" to "background noisy noise about things"?
Bragging Thread July 2015

This month (and a half), I dropped out of community college, raised money as investment in what I'll do in the future, moved to Berkeley, got very involved in the rationalist community here, smashed a bunch of impostor syndrome, wrote a bunch of code, got into several extremely promising and potentially impactful projects, read several MIRI papers and kept being urged to involve myself with their research further.

I took several levels of agency.

2[anonymous]6yDetails man! What code did you write? What projects are you involved in? What did you raise money for?
5Elo6yI know a few people with varying forms of Imposter syndrome. I have never felt the similar experience and would like to bridge the gap of understanding, and see if I can pull some advice out of your experience. Can you explain more?
2shminux6yWow!
Open Thread, May 11 - May 17, 2015

Hi. I don't post much, but if anyone who knows me can vouch for me here, I would appreciate it.

I have a bit of a Situation, and I would like some help. I'm fairly sure it will be positive utility, not just positive fuzzies. Doesn't stop me feeling ridiculous for needing it. But if any of you can, I would appreciate donations, feedback, or anything else over here: http://www.gofundme.com/usc9j4

Open Thread, May 11 - May 17, 2015

I've begun to notice discussion of AI risk in more and more places in the last year. Many of them reference Superintelligence. It doesn't seem like a confirmation bias/Baader-Meinhof effect, not really. It's quite an unexpected change. Have others encountered a similar broadening in the sorts of people you encounter talking about this?

5Manfred6yYup. Nick Bostrom is basically the man. Above and beyond being the man, he's a respectable focal point for a sea change that has been happening for broader reasons.
If you could push a button to eliminate one cognitive bias, which would you choose?

Typical Mind Fallacy. Allows people to actually cooperate for once. One of the things I've been thinking about is how one person's fundamental mind structure is interpreted by another as an obvious status grab. I want humans to better approximate Aumann's Agreement Theorem. Solve the coordination problem, solve everything.

0MalcolmOcean6yI really like this. There would be, I think, a strong and weak version of not having typical mind fallacy. The weak one is where you simply stop short of unreasonable assumptions of how much others will be like you (less certainty but fewer mistakes). The strong version would be having actually really accurate and precise models of other minds. It seems plausible that someone who had the weak one might grow towards the strong one if they were very curious and attentive.
In what language should we define the utility function of a friendly AI?

Determining the language to use is a classic case of premature optimization. Whatever the case, it will have to be provably free of ambiguities, which leaves us with programming languages. In addition, in terms of the math of FAI, we're still at the "is this Turing complete" sort of stage in development. So it doesn't really matter yet. I guess one consideration is that the algorithm design is going to take way more time and effort than the programming, and the program has essentially no room for bugs (Corrigibility is an effort to make it easie... (read more)

Against the internal locus of control

I think I see the problem. Tell me what your response to this article is. Do you see messy self-modification in pursuit of goals at the expense of a bit of epistemic rationality to be a valid option to take? Is Dark == Bad? In your post, you say that it is generally better not to believe falsehoods. My response to that is that things which depend on what you expect to happen are the exception to that heuristic.

Life outcomes are in large part determined by your background that you can't change, but expecting to be able to change that will lead you to ignor... (read more)

Futarchy and Unfriendly AI

I can't say much about the consequences of this, but it appears to me that both democracy and futarchy are efforts to more closely approximate something along the lines of a CEV for humanity. They have the same problems, in fact. How do you reconcile mutually exclusive goals of the people involved?

In any case, that isn't directly relevant, but linking futarchy with AI caused me to notice that. Perhaps that sort of optimization style, of getting at what we "truly want" once we've cleared up all the conflicting meta-levels of "want-to-want", is something that the same sorts of people tend to promote.

Bitcoin value and small probability / high impact arguments

Nitpick: BTC can be worth effectively less than $0 if you buy some then the price drops. But in a Pascalian scenario, that's a rounding error.

More generally, the difference between a Mugging and a Wager is that the wager has low opportunity cost for a low chance of a large positive outcome, and the Mugging is avoiding a negative outcome. So, unless you've bet all the money you have on Bitcoin, it maps much better to a Wager scenario than a Mugging. This is played out in the common reasoning of "There's a low chance of this becoming extremely valuable.... (read more)

2Ander6yNo, that would mean that you have an investment loss. Bitcoin is still worth $X each, whatever the new market price is. When you buy something and it goes down in value, its not worth less than $0, its just worth less than you paid for it.
0vbuterin6ySo a wager is about a positive outcome, but there is a standard knockdown argument saying that the wager argument is incorrect precisely because of the possibility of negative outcomes, ie. G' sending you to hell for worshipping G, if it turns out the G' and not G is real. A mugging is about avoiding a negative outcome, but my proposed argument shows how not cooperating with the mugging can also avoid a negative outcome. Bitcoin is actually a third category: investing in BTC has a probability of a very positive outcome, but it is not the case that either (i) investing in BTC has a probability of a very negative outcome (well ok some future government may do a witch hunt of BTC holders, but everyone agrees that's 5 orders of magnitude less likely than BTC taking over), or (ii) not investing in BTC has a probability of a very positive outcome. It's very specifically a question of how to weigh a small probability of a large gain ($34k per coin) versus a very high probability of a small loss (-$245 per coin from BTC dropping to zero). Precisely.
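The weighing vbuterin describes reduces to a break-even probability. The $34k upside and $245 price are the figures from the comment; the calculation below is just that arithmetic made explicit, not any claim about Bitcoin itself:

```python
# Figures from the comment above: a possible $34,000 upside per coin
# versus losing the ~$245 purchase price if BTC drops to zero.
gain = 34_000
loss = 245

# Buying has positive expected value when p * gain > (1 - p) * loss,
# i.e. when the probability p of the upside exceeds loss / (gain + loss).
break_even_p = loss / (gain + loss)
print(f"break-even probability: {break_even_p:.4%}")  # roughly 0.7%
```

So under these numbers, believing the optimistic scenario has much more than a ~0.7% chance makes the bet positive-EV, which is why the comment frames it as a small probability of large gain against a near-certain small loss.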
Michael Oakeshott's critique of something-he-called-rationalism

The entire point of "politics is the mind-killer" is that no, even here is not immune to tribalistic idea-warfare politics. The politics just get more complicated. And the stopgap solution until we figure out a way around that tendency, which doesn't appear reliably avoidable, is to sandbox the topic and keep it limited. You should have a high prior that a belief that you can be "strong" is Dunning-Kruger talking.

0[anonymous]6yOkay, but feeling no passion, literally, no blood pressure rising isn't a strong evidence there with few false positives? Does it have many false positives? Sandboxing is okay, better than total taboo, this is why I recommended a quarantine. Or a biweekly thread.
The great decline in Wikipedia pageviews (condensed version)

This would rely on a large fraction of pageviews being from Wikipedia editors. That seems unlikely. Got any data for that?

1Xerographica6yNo data, like I said... I did find this... That confirms a decline in editors... and by extension... a decline in edits/pageviews... but no idea what fraction of the total pageviews decline it represents. It's probably pretty small. The Google explanation probably represents a much higher fraction. For a while Wikipedia seemed to frequently be at the top of numerous search results. This would of course equate to considerable pageviews. Now it seems like Wikipedia results aren't ranked as high or as frequently as they used to be.
Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice"

You could construct an argument about needing to reinforce explicitly using system-2 ethics on common situations to make sure that you associate those ethics implicitly with normal situations, and not just contrived edge cases. But that seems to be even a bit too charitable. And also easily fixed if so.

Where can I go to exploit social influence to fight akrasia?

From my experiences trying similar things over IRC, I have found that the lack of anything holding you to your promises definitely is a detriment to most people. I have found a few for whom that's not the case, but that's very much the exception. That's definitely a failure mode to look out for, doing this online (especially in text) won't work for many people. In addition, this discrepancy can create friction between people.

The general structure of the failure tends to be one person feeling vaguely bad about not talking as much, or missing a session. And... (read more)

The Fermi paradox as evidence against the likelyhood of unfriendly AI

Relating to your first point, I've read several stories that talk about that in reverse. AIs (F or UF is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem that's solvable with a wider definition of the sort of stuff it's supposed to be Friendly to, and I'd hope aliens would think of that, but it's certainly possible.

1Vladimir_Nesov8y(Terminological nitpick: You can't usually solve problems by using different definitions.) Goals are not up for grabs. FAI follows your goals. If you change something, the result is different from your goals, with consequences that are worse according to your goals. So you shouldn't decide on object level what's "Friendly". See also Complex Value Systems are Required to Realize Valuable Futures [http://intelligence.org/files/ComplexValues.pdf].
Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96

According to Quirrell, yes, they are. "Anything with a brain". And I notice that you've only looked at what we've directly seen. The presence of spells like all the ones you mentioned leads me to think that you can do more directed things with spells Harry hasn't come across yet.

Norbert Wiener on automation and unemployment

the second, cybernetic, industrial revolution "is [bound] to devalue the human brain, at least in its simpler and more routine decisions."

It certainly seems like he considered it, at least on a basic level, enough to be extrapolated.

Open thread, July 23-29, 2013

Well, I did say it far outweighed it. Even that's less of an inconvenience in my mind, but that's getting to be very much a personal preference thing.

Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96

Creating arbitrary animals that are barely alive, don't need food, water, air, or movement, and made of easily workable material which is also good as armor seems like a good place to start, and also within the bounds of magic. This isn't as absurd as it seems. Essentially living armor plates. You'd want them to be thin so you could have multiple layers, and to fall off when they die, and various similar things. Or maybe on a different scale, like scale or lamellar armor.

1CAE_Jones8yOogely Boogely? Summoning a desk and transfiguring it into a pig? Petrifying numerous terminally ill people, transfiguring them into something small and stable (aka the ringmione hypothesis), and using a finite to turn them into a shield? Filling a mokeskin pouch with chilled snakes? (Imagines Voldemort constantly casting AK at Harry, who constantly shouts snake and pulls something out of his pouch). Or maybe even Serpensortia, if the conjured snake counts for purposes of AK (it can be finited, after all). Or one could just summon a cloud of spiders ("The Amazing Spider-Mage! Not to be confused with Spider-Muggle!) In canon, Faux-Moody demonstrated AK on a spider. Are spiders still vulnerable to AK in MoR?
0ikrase8yOh god. What if you used nanotech to make your skin be made out of patches with very small brains...
Open thread, July 23-29, 2013

The messiness and potential for really unpleasant sounds, in my mind, far outweighs the need for a specific type of dry-erase marker. Though that might be related to how easily sounds can be unpleasant to me in particular.

0[anonymous]8yI meant that it's obvious that a given piece of chalk will work, whereas a given dry-erase marker may have dried up without obviously looking like it's dried up.