All of TsviBT's Comments + Replies

TsviBT · 6d · Ω11168

So for example, say Alice runs this experiment:

Train an agent A in an environment that contains the source B of A's reward.

Alice observes that A learns to hack B. Then she solves this as follows:

Same setup, but now B punishes (outputs high loss) A when A is close to hacking B, according to a dumb tree search that sees whether it would be easy, from the state of the environment, for A to touch B's internals.

Alice observes that A doesn't hack B. Then Bob looks at Alice's results and says,

"Cool. But this won't generalize to future lethal systems because it doe... (read more)

The main way you produce a treacherous turn is not by "finding the treacherous turn capabilities," it's by creating situations in which sub-human systems have the same kind of motive to engage in a treacherous turn that we think future superhuman systems might have.

When you say "motive" here, is it fair to reexpress that as: "that which determines by what method and in which directions capabilities are deployed to push the world"? If you mean something like that, then my worry here is that motives are a kind of relation involving capabilities, not somet... (read more)

2 · paulfchristiano · 7d
I think if you train AI systems to select actions that will lead to high reward, they will sometimes learn policies that behave well until they are able to overpower their overseers, at which point they will abruptly switch to the reward hacking strategy to get a lot of reward. I think there will be many similarities between this phenomenon in subhuman systems and superhuman systems. Therefore by studying and remedying the problem for weak systems overpowering weak overseers, we can learn a lot about how to identify and remedy it for stronger systems overpowering stronger overseers. I'm not exactly sure how to cash out your objection as a response to this, but I suspect it's probably a bit too galaxy-brained for my taste.

Creating in vitro examples of problems analogous to the ones that will ultimately kill us, e.g. by showing agents engaging in treacherous turns due to reward hacking or exhibiting more and more of the core features of deceptive alignment.

 

A central version of this seems to straightforwardly advance capabilities. The strongest (ISTM) sort of analogy between a current system and a future lethal system would be that they use an overlapping set of generators of capabilities. Trying to find an agent that does a treacherous turn, for the same reasons as a f... (read more)

6 · paulfchristiano · 7d
The main way you produce a treacherous turn is not by "finding the treacherous turn capabilities," it's by creating situations in which sub-human systems have the same kind of motive to engage in a treacherous turn that we think future superhuman systems might have. There are some differences and lots of similarities between what is going on in a weaker AI doing a treacherous turn and a stronger AI doing a treacherous turn. So you expect to learn some things and not others. After studying several such cases it seems quite likely you understand enough to generalize to new cases.

It's possible MIRI folks expect a bigger difference in how future AI is produced. I mostly expect just using gradient descent, resulting in minds that are in some ways different and in many ways different. My sense is that MIRI folks have a more mystical view about the difference between subhuman AI systems and "AGI." (The view "stack more layers won't ever give you true intelligence, there is a qualitative difference here" seems like it's taking a beating every year, whether it's Eliezer or Gary Marcus saying it.)

Also, "No one knows how to make AI systems that try to do what we'd want them to do."

I'm asking what reification is, period, and what it has to do with what's in reality (the thing that bites you regardless of what you think).

2 · Gordon Seidoh Worley · 17d
This seems straightforward to me: reification is a process by which our brain picks out patterns/features and encodes them so we can recognize them again and make sense of the world given our limited hardware. We can then think in terms of those patterns and gloss over the details because the details often aren't relevant for various things. The reason we reify things one way versus another depends on what we care about, i.e. our purposes. [https://www.lesswrong.com/posts/agvmvrzM6um462DC2/the-purpose-of-purpose]

How do they explain why it feels like there are noumena? (Also by "feels like" I'd want to include empirical observations of nexusness.)

2 · Gordon Seidoh Worley · 17d
To me this seems obvious: noumena feel real to most people because they're captured by their ontology. It takes a lot of work for a human mind to learn not to jump straight from sensation to reification, and even with training there's only so much a person can do because the mind has lots of low-level reification "built in" that happens prior to conscious awareness. Cf. noticing [https://www.lesswrong.com/tag/noticing]

In those scenarios, does Half-Ass seem like more of a "thing"?

IDK, but I like the question.

I'd say that what does seem like a thing is [insertion f-sort] where the fraction f is a parameter. Then [insertion 1/2-sort] is like [this particular instance of me picking up my water bottle and taking a drink], and [insertion f-sort] is like [me picking up my water bottle and taking a drink, in general].

Unless there's something interesting about [insertion 1/2-sort] in particular, like for example if there's some phase transition at 1/2 or something. Then I'd e... (read more)
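The [insertion f-sort] family can be sketched directly (a minimal illustration; the function name and the choice of floor for the cutoff are my assumptions, not anything specified above):

```python
def insertion_f_sort(a, f):
    """Insertion sort that only processes the first floor(f * len(a))
    elements; the rest of the list is left untouched.  f = 1.0 gives
    ordinary insertion sort; f = 0.5 is "Half-Ass" (insertion half-sort)."""
    a = list(a)  # work on a copy
    limit = int(f * len(a))  # the loop bound "i < length(A) * f"
    for i in range(1, limit):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a
```

So [insertion 1/2-sort] is just the f = 1/2 instance of the parameterized family, which is the sense in which the family, rather than the instance, seems like the "thing".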

Yeah, VoI seems like a better place to defer. Another sort of general solution, which I find difficult but others might find workable, is to construct theories of other perspectives. That lets there be sort of unlimited space to defer: you can do something that looks like deferring, but is more precisely described as creating a bunch of inconsistent theories in your head, and deferring to people about what their theory is, rather than what's true. (I run into trouble because I'm not so willing to accept others' languages if I don't see how they're using words consistently.)

(See my response to the parent comment.)

What I mean is, suppose the deferred-to has some belief X. This X is a refined, theoretically consilient belief to some extent, and to some extent it isn't, but is instead pre-theoretic; intuitive, pragmatic, unreliable, and potentially inconsistent with other beliefs. What happens when the deferred-to takes practical, externally visible action, which is somehow related to X? Many of zer other beliefs will also play a role in that action, and many of those beliefs will be to a large extent pre-theoretical. Pre-theoreticalness is contagious, in action: theo... (read more)

But you should be clear about just what it is you're studying

The post discusses the thingness of things. Seven, for example, seems like a thing--an entity, an object. I naturally mentally relate to seven in many of the same ways that I naturally mentally relate to a table. So the question is, what if anything is in common between how we relate to each of those entities that seem like things?

Half-Ass is a reasonable example of a somewhat non-thing, according to the hypothesis in the post. It refers one fairly strongly to "half" and to "insertion sort", but "insertion sort" barely refers one back to Half-Ass, and likewise "half".

1 · localdeity · 24d
Hmmm... perhaps the concept you're going for is "a thing that my brain thinks is worth remembering (and/or categorizing, naming, or otherwise having a handle for)"? Which, of course, would be highly subjective and context-specific.

Suppose you were teaching CS students, and it was for some reason very common for your students to make that specific error of replacing "i < length(A)" with "i < length(A)/2", because ... maybe you're working with a high-level language, but you're implementing this part in assembly for speed, and the language runtime actually represents the integer N as 2*N because of pointer tagging [https://en.wikipedia.org/wiki/Tagged_pointer] and it's common for students to forget that part; or maybe this is a compiler bug, triggered in rare but known circumstances (e.g. when the compiler decides to put the length into a certain register which is treated specially). Then you find it useful to know how Half-Ass behaves, so you can better test and diagnose your students' programs, or create a test case to detect systems with the buggy version of that compiler (perhaps for white-hat or black-hat security purposes).

In those scenarios, does Half-Ass seem like more of a "thing"? And if those scenarios were real, but also there were plenty of "civilian" programmers who'd never used that language or that buggy compiler and probably never will, would it be valid for those programmers to say "No, I don't think that qualifies as a thing"?

Hmm, you seem to be attaching significance to the words I used in the name. I would have thought that "whether something qualifies as a thing" was mostly independent of whatever words people had come up with when trying to name it. (The most descriptive name would be "insertion half-sort", incidentally.)

Things are reified out of sensory experience of the world (though note that "sensory" is redundant here), and the world is the unified non-thing

Okay, but the tabley-looking stuff out there seems to conform more parsimoniously to a theory that posits an external table. I assume we agree on that, and then the question is, what's happening when we so posit?

2 · Gordon Seidoh Worley · 21d
Yep, so I think this gets into a different question of epistemology not directly related to things but rather about what we care about, since positing a theory that what looks to me like a table implies something table-shaped about the universe requires caring about parsimony. (Aside: It's kind of related, because to talk about caring about things we need reifications that enable us to point to what we care about, but I think that's just an artifact of using words—care is patterns of behavior and preference that we can reify and call "parsimonious" or something else, but which exist prior to being named.) If we care about something other than parsimony, we may not agree that the universe is filled with tables. Maybe we slice it up quite differently and tables exist orthogonal to our ontology.

You might be interested in Quine's collection of essays Theories and Things, where he defends some version of "things as space-time regions". I'm pretty skeptical of your version though; or at least, I'm interested in why 7 seems like an object.

2 · the gears to ascenscion · 25d
Chess is a grammar for physical systems, which I started trying to write out, before realizing I don't know the rules well enough. But anyway, it defines a network of position representations connected in a grid pattern with state transition constraints; the chess grammar can be implemented by many physical substrates, e.g. a physical board with a grid on it, or a list of randomly shuffled board location names on a whiteboard - changing the projected geometry doesn't change the game unless the movement metric changes, so I'd still classify the aggregate physical system as chess. It could be quite valid to argue that the variety of possible instances of that grammar includes many physical systems which are not single objects, due to binding the word "object" only to physical systems that have connected internal molecular bonds or such things. I'll have to check out the reference.

if you define the central problem as something like building a system that you'd be happy for humanity to defer to forever.

[I at most skimmed the post, but] IMO this is a more ambitious goal than the IMO central problem. IMO the central problem (phrased with more assumptions than strictly necessary) is more like "building a system that's gaining a bunch of understanding you don't already have, in whatever domains are necessary for achieving some impressive real-world task, without killing you". So I'd guess that's supposed to happen in step 1. It's debata... (read more)

4 · davidad · 1mo
I’d say the scientific understanding happens in step 1, but I think that would be mostly consolidating science that’s already understood. (And some patching up potentially exploitable holes where AI can deduce that “if this is the best theory, the real dynamics must actually be like that instead”. But my intuition is that there aren’t many of these holes, and that unknown physics questions are mostly underdetermined by known data, at least for quite a long way toward the infinite-compute limit of Solomonoff induction, and possibly all the way.) Engineering understanding would happen in step 2, and I think engineering is more “the generator of large effects on the world,” the place where much-faster-than-human ingenuity is needed, rather than hoping to find new science. (Although the formalization of the model of scientific reality is important for the overall proposal—to facilitate validating that the engineering actually does what is desired—and building such a formalization would be hard for unaided humans.)

Oh I see, the haploid cells are, like, independently viable and divide and stuff.

4 · Metacelsus · 7mo
Yes, you can let them divide and then use the usual (destructive) sequencing. And also yes, sequencing meiotic cousins of sperm (or polar bodies, in the case of eggs) is a promising concept. Unfortunately primary spermatocytes won't do meiosis if you isolate them; the environment of the seminiferous tubules is very important. So you would have to be able to track them within the tubules in an organ culture system.

Polar body biopsies for eggs are much more feasible (I have done them, although I haven't sequenced the polar bodies). Unfortunately the efficacy of selection is limited by the number of eggs.

Cool! Is it known how to sequence the haploid cell? Can you get a haploid cell to divide so you can feed it into PCR or something? (I'm a layperson w.r.t. biology.) I just recently had an idea about sequencing sperm by looking at their meiotic cousins and would be interested in talking in general about this topic; email at gmail, address tsvibtcontact. https://tsvibt.blogspot.com/2022/06/non-destructively-sequencing-gametes-by.html

5 · TsviBT · 7mo
Oh I see, the haploid cells are, like, independently viable and divide and stuff.

I haven't looked really, seems worth someone doing. I think there's been a fair amount of experimentation, though maybe a lot of it is predictably worthless (e.g. by continuing to inflict central harms of normal schooling), I don't know. (This post is mainly aimed at adding detail to what some of the harms are, so that experiments can try to pull the rope sideways on supposed tradeoffs like permissiveness vs. strictness or autonomy vs. guidance.) I looked a little. Aside from Montessori (which would take work to distinguish things branded as Montessori vs... (read more)

[To respond to not the literal content of your comment, in case it's relevant: I think some teachers are intrinsically bad, some are intrinsically great, and many are unfortunately compelled or think they're compelled to try to solve an impossible problem and do the best they can. Blame just really shouldn't be the point, and if you're worried someone will blame someone based on a description, then you may have a dispute with the blamer, not the describer.]

criticism of schools unrealistic

Well, it's worth distinguishing (1) whether/what harms are being ... (read more)


Afterthoughts:

-- An attitude against pure symbolism is reflected in the Jewish prohibition against making a bracha levatala (= idle, null, purposeless). That's why Jews hold their hands up to the havdalah candle: not to "feel the warmth of Shabbat" or "use all five senses", but so that the candle is being actually used for its concrete function.

-- An example from Solstice of a "symbolic" ritual is the spreading-candle-lighting thing. I quite like the symbolism, but also, there's a hollowness; it's transparently symbolic, and on some level what's communicat... (read more)

2 · Gordon Seidoh Worley · 1y
Interesting, I really love it and miss when Solstice doesn't include the candle lighting. But then I just quite like rituals in general and in fact enjoy the rituality of everyday life (the ritual of walking into a room and turning on the light, the ritual of sitting down at my desk and opening my laptop and typing in my password, etc.). I wonder what causes some folks to like rituals so much and others to find them deeply uncomfortable? This seems quite relevant to figuring out how to design something like a Solstice celebration that people will like.
Answer by TsviBT · Mar 31, 2021 · Ω721

I speculate (based on personal glimpses, not based on any stable thing I can point to) that there's many small sets of people (say of size 2-4) who could greatly increase their total output given some preconditions, unknown to me, that unlock a sort of hivemind. Some of the preconditions include various kinds of trust, of common knowledge of shared goals, and of person-specific interface skill (like speaking each other's languages, common knowledge of tactics for resolving ambiguity, etc.).
[ETA: which, if true, would be good to have already set up before crunch time.]

In modeling the behavior of the coolness-seekers, you put them in a less cool position.

It might be a good move in some contexts, but I feel resistant to taking on this picture, or recommending others take it on. It seems like making the same mistake. Focusing on the object level because you want to be [cool in that you focus on the object level], that does have the positive effect of focusing on the object level, but I think also can just as well have all the bad effects of trying to be in the Inner Ring. If there's something good about getting into the Inn... (read more)

1 · Dirichlet-to-Neumann · 2y
Exactly this. The whole point of the Inner Ring (which I did not read, but judging by the review and my knowledge of Lewis/Christian thought and virtue ethics) is that you should aim at the goods that are inherent to your trade or activity (i.e., if you are a coder, writing good code), and not care about social goods that are associated with the activity. Lewis then makes a second claim (which is really a different claim) that you will also reach social goods through sincerely pursuing the inherent goods of your activity.

I agree that the epistemic formulation is probably more broadly useful, e.g. for informed oversight. The decision theory problem is additionally compelling to me because of the apparent paradox of having a changing caring measure. I naively think of the caring measure as fixed, but this is apparently impossible because, well, you have to learn logical facts. (This leads to thoughts like "maybe EU maximization is just wrong; you don't maximize an approximation to your actual caring function".)

In case anyone shared my confusion:

The while loop where we ensure that eps is small enough so that

bound > bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

is technically necessary to ensure that bad1() doesn't surpass bound, but it is immaterial in the limit. Solving

bound = bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

gives

eps >= (1/3) (1 - e^{ -[bound - bad1()] / [next - this] })

which, using the log(1+x) = x approximation, is about

(1/3) ([bound - bad1()] / [next - this] ).
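Spelling out that solve (my reconstruction, using the comment's own symbols; the constant 1/3 corresponds to 1 − p1):

```latex
% Solve  bound = bad1 + (next - this) * log((1-p1)/(1-p1-eps))  for eps:
\begin{align*}
\log\frac{1-p_1}{1-p_1-\varepsilon}
  &= \frac{\mathrm{bound}-\mathrm{bad1}}{\mathrm{next}-\mathrm{this}} \\
1-p_1-\varepsilon
  &= (1-p_1)\, e^{-[\mathrm{bound}-\mathrm{bad1}]/[\mathrm{next}-\mathrm{this}]} \\
\varepsilon
  &= (1-p_1)\Bigl(1 - e^{-[\mathrm{bound}-\mathrm{bad1}]/[\mathrm{next}-\mathrm{this}]}\Bigr)
\end{align*}
% With log(1+x) ~ x, this is approximately
% (1 - p1) * [bound - bad1()] / [next - this].
```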

Then Scott's comment gives the rest. I was worried about the

... (read more)

Could you spell out the step

every iteration where mean(E[prev:this]) ≥ 2/5 will cause bound - bad1() to grow exponentially (by a factor of 11/10 = 1 + (1/2)(−1 + 2/(5p1)))

a little more? I don't follow. (I think I follow the overall structure of the proof, and if I believed this step I would believe the proof.)

We have that eps is about (2/3)(1-exp([bad1() - bound]/(next-this))), or at least half that, but I don't see how to get a lower bound on the decrease of bad1() (as a fraction of bound-bad1() ).

1 · Scott Garrabrant · 7y
You are correct that you use the fact that 1+eps is approximately e^(eps). The concrete way this is used in this proof is replacing the ln(1+3eps) you subtract from bad1 when the environment is a 1 with 3eps = (bound - bad1) / (next - this), and replacing the ln(1-3eps/2) you subtract from bad1 when the environment is a 0 with -3eps/2 = -(bound - bad1) / (next - this)/2.

Therefore, you subtract from bad1 approximately at least (next-this)((2/5)(bound - bad1) / (next - this) - (3/5)(bound - bad1) / (next - this)/2). This comes out to (bound - bad1)/10.

I believe the inequality is the wrong direction to just use e^(eps) as a bound for 1+eps, but when next-this gets big, the approximation gets close enough.
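As an arithmetic check on the last step (my working, using only the comment's own quantities, with x = (bound − bad1)/(next − this)):

```latex
\begin{align*}
(\mathrm{next}-\mathrm{this})\Bigl[\tfrac{2}{5}\,x \;-\; \tfrac{3}{5}\cdot\tfrac{x}{2}\Bigr]
  &= (\mathrm{next}-\mathrm{this})\,\Bigl(\tfrac{2}{5}-\tfrac{3}{10}\Bigr)\,x \\
  &= \tfrac{1}{10}\,(\mathrm{bound}-\mathrm{bad1})
\end{align*}
```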

(Upvoted, thanks.)

I think I disagree with the statement that "Getting direct work done." isn't a purpose LW can or should serve. The direct work would be "rationality research"---figuring out general effectiveness strategies. The sequences are the prime example in the realm of epistemic effectiveness, but there's lots of open questions in productivity, epistemology, motivation, etc.

This still incentivizes prisons to help along the death of prisoners that they predict are more likely than the prison-wide average to repeat-offend, in the same way average utilitarianism recommends killing everyone but the happiest person (so to speak).

-9 · VoiceOfRa · 8y
3 · lululu · 8y
Hmmm, yes. Yikes. Additional thought needed.

I see. That could be right. I guess I'm thinking about this (this = what to teach/learn and in what order) from the perspective of assuming I get to dictate the whole curriculum. In which case analysis doesn't look that great, to me.

Ok that makes sense. I'm still curious about any specific benefits that you think studying analysis has, relative to other similarly deep areas of math, or whether you meant hard math in general.

0 · JonahS · 8y
I think that analysis is actually the easiest entry point to the kind of mathematical reasoning that I have in mind for people who have learned calculus. Most of the theorems are at least somewhat familiar, so one can focus on the logical rigor without simultaneously having to worry about understanding what the high level facts are.

Seems like it's precisely because of the complicated technical foundation that real analysis was recommended.

What I'm saying is, that's not a good reason. Even the math with simple foundations has surprising results with complicated proofs that require precise understanding. It's hard enough as it is, and I am claiming that analysis is too much of a filter. It would be better to start with the most conceptually minimal mathematics.

Even great mathematicians ran into trouble playing fast and loose with the real numbers. It took them about two hundred y

... (read more)
0 · JonahS · 8y
Oh, sure, in expressing agreement with Epictetus I was just saying that I don't think that you get the full benefits that I was describing from basic discrete math. I agree that some students will find discrete math a better introduction to mathematical proof.

Could you say more about why you think real analysis specifically is good for this kind of general skill? I have pretty serious doubts that analysis is the right way to go, and I'd (wildly) guess that there would be significant benefits from teaching/learning discrete mathematics in place of calculus. Combinatorics, probability, algorithms; even logic, topology, and algebra.

To my mind all of these things are better suited for learning the power of proof and the mathematical way of analyzing problems. I'm not totally sure why, but I think a big part of it i... (read more)

1 · Gram_Stone · 8y
This is also somewhat in reply to your elaboration in this comment [http://lesswrong.com/lw/mac/the_value_of_learning_mathematical_proof/cfoc]. Just some data points:

In regards to this topic of proof, and more generally to the topic of formal science, I have found logic a very useful subject. For one, you can leverage your verbal reasoning ability, and begin by conceiving of it as a symbolization of natural language, which I find for myself and many others is far more convenient than, say, a formal science that requires more spatial reasoning or abstract pattern recognition. Later, the point that formal languages are languages in their own right is driven home, and you can do away with this conceptual bridge.

Logic also has helped me to conceive of formal problems as a continuum of difficulty of proof, rather than proofs and non-proofs. That is, when you read a math textbook, sometimes you are instructed to Solve, sometimes to Evaluate, sometimes to Graph; and then there is the dreaded Show That X or Prove That X! In a logic textbook, almost all exercises require a proof of validity, and you move up over time, deriving new inference rules from old, and moving onto metalogical theorems. Later returning to books about mathematical proof, I found things much less intimidating. I found that proof is not a realm forbidden to those lacking an innate ability to prove; you must work your way upwards as in all things.

Furthermore, in regards to this: In my opinion, very significant and complex results in logic are arrived at quite early in comparison to the significance of, and effort invested in, results in other fields of formal science.

And in regards to this: I have found that in continuous mathematics I have walked away from proofs with a feeling best expressed as, "If you say so," as opposed to discrete mathematics and logic, where it's more like, "Why, of course!"
0 · JonahS · 8y
I agree with Epictetus' comment.
3 · JeremyHahn · 8y
Personally I think real analysis is an awkward way to learn mathematical proofs, and I agree discrete mathematics or elementary number theory is much better. I recommend picking up an Olympiad book for younger kids, like "Mathematical Circles, A Russian Experience."
0 · Epictetus · 8y
I think the main thrust of the article was less about the power of mathematics and more about the habits of close reading and careful attention to detail required to do rigorous mathematics. Seems like it's precisely because of the complicated technical foundation that real analysis was recommended. Theorems have to be read carefully, as even simple ones often have lots of hypotheses. Proofs have to be worked through carefully to make sure that no implicit assumptions are being introduced. Even great mathematicians ran into trouble playing fast and loose with the real numbers. It took them about two hundred years to finally lay rigorous foundations for calculus.

PSA: If you wear glasses, you might want to take a look behind the little nosepads. Some... stuff... can build up there. According to this unverified source it is oxidized copper from glasses frame + your sweat, and can be cleaned with an old toothbrush + toothpaste.

9 · Dorikka · 8y
Sounds like the only disutility of the stuff is that it annoys some people, but it can't annoy you if you don't notice it... so why bring it up?

There are ten thousand wrong solutions and four good solutions. You don't get much info from being told a particular bad solution. The opposite of a bad solution is a bad solution.

1 · Jiro · 8y
So ask a series of "which of X and Y would you prefer that we do". The demon always prefers the worst thing, but is constrained to truthfully describe its preferences. This is a single bit of data, but it's really useful.
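Jiro's scheme can be sketched as a linear scan (a minimal sketch; the function names and the toy demon are my assumptions): since the demon truthfully reports preferring the worse option of any pair, discarding whichever option it prefers leaves the best one.

```python
def best_option(options, demon_prefers):
    """Find the best option using only "which of X and Y would you
    prefer?" queries.  demon_prefers(x, y) truthfully returns the one
    the demon prefers -- i.e., the worse outcome for us -- so we keep
    the option the demon does NOT prefer."""
    best = options[0]
    for candidate in options[1:]:
        if demon_prefers(best, candidate) == best:
            best = candidate  # the demon prefers `best`, so it is worse
    return best

# Toy demon: prefers lower-utility (smaller) outcomes.
print(best_option([3, 1, 4, 1, 5], min))  # prints 5
```

Each query extracts one bit, and n − 1 queries suffice to isolate the demon's least-preferred (i.e., our best) of n candidate solutions.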

Lol yeah ok. I was unsure because alexa says 9% of search traffic to LW is from "demetrius soupolos" and "traute soupolos" so maybe there was some big news story I didn't know about.

0 · Viliam_Bur · 8y
Probably yes, see: http://www.alexa.com/siteinfo/hpmor.com

I'd say your first thought was right.

She noticed half an hour later on, when Harry Potter seemed to sway a bit, and then hunch over, his hands going to cover up his forehead; it looked like he was prodding at his forehead scar. The thought made her slightly worried; everyone knew there was something going on with Harry Potter, and if Potter's scar was hurting him then it was possible that a sealed horror was about to burst out of his forehead and eat everyone. She dismissed that thought, though, and continued to explain Quidditch facts to the historicall

... (read more)

As a simple matter of fact, Voldemort is stronger than Harry in basically every way, other than Harry's (incomplete) training in rationality. If Voldemort were a good enough planner, there's no way he could lose; he is smarter, more powerful, and has more ancient lore than any other wizard. If Voldemort were also rational, and didn't fall prey to overconfidence bias / planning fallacy...

Well, you can be as rational as you like, but if you are human and your opponent is a superintelligent god with a horde of bloodthirsty nanobots, the invincible Elder Ligh... (read more)

2 · Eli Tyre · 3y
Ah. But he would want to be more careful than that, because there's a prophecy, and Voldemort got burned the last time a prophecy was involved. So he goes out of his way to tear it apart, by bringing Hermione back, for instance, which required the stone, and having the other Tom swear an unbreakable vow.
4 · Velorien · 8y
Yup. So the solution is not to make your villain a superintelligent god with a horde of bloodthirsty nanobots, the invincible Elder Lightsaber, and the One Thing to Rule Them All to begin with. Eliezer took the risk of setting up an incredibly powerful villain, and it is to his credit as a writer that up until the very end he made us believe that he was capable of writing a satisfying resolution anyway. Frankly, he still might. There are four chapters left, and Eliezer is nothing if not capable of surprising his audience. And as a Naruto fan, he might also have come across Bleach (another of the Big Three shounen series), and learned from its author already having made the exact same mistake.

A brief and terrible magic lashed out from the Defense Professor's wand, scouring the hole in the wall, scarring the huge chunk of metal that lay in the room's midst; as Harry had requested, saying that the method he'd used might identify him.

Chapter 58

I'm kind of worried about this... all the real attempted solutions I've seen use partial transfiguration. But if we take "the antagonist is smart" seriously, and given the precedent for V remembering and connecting obscure things (e.g. the Resurrection Stone), we should assume V has protections ... (read more)

Didn't V see at least the results of a Partial Transfiguration in Azkaban (used to cut through the wall)? Doesn't seem like something V would just ignore or forget.

1 · Nornagest · 8y
I believe Voldemort was unconscious at the time, following a magical feedback mishap at the conclusion of his duel with Bahry. Bellatrix was awake, but probably not very coherent after eleven years in Azkaban, and Voldemort strikes me as the type to dismiss confusing reports from unreliable underlings.

Since they are touching his skin, does he need his wand to cancel the Transfiguration?

3 · jkadlubo · 8y
No. He just learned to dispel Transfiguration without a wand when he dispelled the one on Hermione's body.
4 · Astazha · 8y
Reduce, re-use, recycle.

This is persuasive, but... why the heck would Voldemort go to the trouble of breaking into Azkaban instead of grabbing Snape or something?

4 · Astazha · 8y
VM said he broke into Azkaban to find out where his wand was; there's also the flesh of the servant thing. Using her Dark Mark is a secondary benefit.
5 · arundelo · 8y
In Chapter 61 [http://hpmor.com/chapter/61] Dumbledore says:
0 · bramflakes · 8y
You can't Apparate within the Hogwarts wards.
4 · Jost · 8y
I rather doubt it; he might still be “guarding” that corridor. On the other hand, Lucius Malfoy should be there. His reaction might be interesting, given his previous, rather unusual encounters with Harry …

FYI, each sequence is (very roughly) 20,000 words.

2 · Paul Crowley · 8y
Assuming it is slower to read than the standard 200 wpm, that's still only a couple of hours each; seems doable!

(Presumably Parseltongue only prevents willful lies.)

Quirrell also claims (not in Parseltongue):

Occlumency cannot fool the Parselmouth curse as it can fool Veritaserum, and you may put that to the trial also.

It seems like what you can say in Parseltongue should only depend on the actual truth and on your mental state. What happens if I Confundus / Memory Charm someone into believing X? Can they say X in Parseltongue? If they can say it just because they believe it, then Parseltongue is not so hard to bypass; I just Confundus myself (or get someone t... (read more)

If Parseltongue depended only on the actual truth of the world, Voldemort would have won already, because you can then pull single bits of arbitrary information out of the aether one at a time.
