kbear
kbear has not written any posts yet.

Experimental result (pseudodeterminism): Computer experiments show that the function typically has only one local maximum in the sense that we cannot find any other local maximum.
a lot hinges on this. i would be interested to learn about the experimental setup.
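to make the question concrete, the minimal setup i would imagine is a multistart search: if many independent restarts all converge to the same point, then in that limited sense we "cannot find any other local maximum". everything below (the stand-in objective f, the ranges, the restart count) is illustrative, not taken from the post:

```python
# multistart hill-climbing: maximize f from many random initializations
# and count how many distinct local maxima the restarts actually find.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # stand-in objective; the post's actual function would go here
    return -np.sum((x - 1.0) ** 2)

rng = np.random.default_rng(0)
found = []
for _ in range(100):
    x0 = rng.uniform(-10.0, 10.0, size=2)
    res = minimize(lambda x: -f(x), x0)  # maximize f by minimizing -f
    found.append(tuple(np.round(res.x, 3)))

print("distinct local maxima found:", len(set(found)))
```

the pseudodeterminism claim is then a statement about how this count behaves, which is why the restart distribution and tolerances matter.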
Totoro is perfectly constructed in this way. not just in that scene, but credits to credits: the main driver of the plot is tucked just out of reach, on the upper shelf, for the characters as for the young audience.
a book that succeeds in this way is Holes. upon returning to it as an adult, i found that i had simply not read (or internalized) the central and most moving chapter.
both works also manage a further trick: they make this confusion central to the conflict. contrast with Frozen and Up, where these framing events are better understood as worldbuilding than as narrative.
any non-literal storytelling tool (satire, allegory, allusion, theme, ...) can be straightforwardly used to discriminate audiences according to their sophistication. it is rarer (and more enjoyable!) when the simultaneous readings apply to the literal events (and without any tricks that would warrant the "psychological" or "unreliable" qualifiers). i cannot think of other examples at this time.
The arationality of emotion is easy to notice if you have a mood disorder.
isn't this a bit like saying "the arationality of the conscious mind is easy to notice if you have a thought disorder"?
Call me crazy, but I think unflinching analysis is pretty good! What is the alternative?
first, i disagree with the author of the original essay. the rationalist community clearly does engage with emotional and moral realities.
that said, from a faith-based (as opposed to acts-based) perspective, the supposed lack of engagement does undermine the arguments made. it is not easy to make this perspective clear within the rules of logical argument, but my best attempt goes something like this:
humans have many drives. most of these are self-serving, if not outright selfish. only one ("compassionate attending" maybe?) is good and just and trustworthy. in the absence of that one, some other motivation will...
Merely pointing out that the system starts from freedom and ends with "despotism" and that its conclusion is "monstrous" to you... is not enough. It's not a real argument.
sure it is! consider a classic troll argument that 1=0. we can conclude that some premise or step of reasoning must be false, even if we are unable to locate the step. collaborative inquiry would have us then work together to determine the gap.
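for concreteness, one standard such argument (my reconstruction; the details are illustrative, not from the thread): let a = b. then

a^2 = ab
a^2 - b^2 = ab - b^2
(a + b)(a - b) = b(a - b)
a + b = b
2b = b, so 2 = 1, and subtracting 1 from both sides, 1 = 0.

the flaw hides in plain sight: canceling (a - b) divides by zero. we are entitled to reject the conclusion before we have found that step.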
here, the contradiction is moral rather than logical: "i cannot stand the world that this argument implies is necessary." nonetheless, a response of "well, you need to engage with the reasoning, not just the conclusion" rather misses the tone. we should prefer to work with our potentially dissatisfied friend to better understand our own argument, and the range of conclusions it could support.
yes. if we were capable of protecting them, we should have done so. not sure what other conclusion to draw.
if by your post you intended something like "it is in the US and China's mutual best interest to take the following course of action [...]" then, sure -- i strongly agree with this! but it seems prudent to phrase this as a prediction, rather than as a moral recommendation.
does the will of the taiwanese people have no bearing?
seems false, or at least uncharitable. do you expect that such people would self-report along the lines of "i don't take ideas seriously"? it seems more likely to me that they would report something like "i value family", and mean it. you may find the idea simple, but it is certainly an idea, and they certainly take it seriously.
put another way, this social conservatism came from somewhere, and is itself an idea. the assumption -- that arguments that worked to change your behavior would not change their behavior -- can be explained in two ways: either they do not take ideas seriously, as you suggest, or they value different things than you do.
if we assume the base universe looks something like the "objective" version of this universe, then my subjective experience requires vastly less information than the base universe. much of that could be deduplicated across the other variations: the positions of the asteroids only need to be simulated once, for instance.
the assumption seems decent to me, as i expect the simulators to dream of variations on their own circumstances.
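a toy sketch of the deduplication i have in mind (everything here -- the component names, the store -- is my own illustration, not from the post):

```python
# content-addressed storage: identical component states are stored once,
# so variations that share a background (the "asteroids") share its bits.
import hashlib
import pickle

store = {}  # hash of state -> state

def put(state):
    key = hashlib.sha256(pickle.dumps(state)).hexdigest()
    store[key] = state  # identical states collapse onto one entry
    return key

# the shared background is simulated (and stored) exactly once
asteroids = put({"positions": [(1.0, 2.0), (3.0, 4.0)]})

# two variations differ only in the observer, not in the background
variation_a = {"asteroids": asteroids, "observer": put({"mood": "curious"})}
variation_b = {"asteroids": asteroids, "observer": put({"mood": "anxious"})}

print(len(store))  # 3 entries, not 4: the asteroid field is deduplicated
```

the marginal cost of each extra variation is then only the observer-side diff, which is the sense in which the subjective experience is cheap.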
two questions:
- this seems to include an assumption regarding the points between training samples. if we take the point masses as the known values, then with this step we're adding some interpolation between them. (that is, if we were "really training" these things, then the integral would look like a sum, since it would only have support at those (x,y) that represent points in our training data. i spell this out below.)
- have you tried adversarially constructing \mu such that the integral has multiple maxima? if so, what did you run into? i worry that this could be a case where "random" examples mostly
...
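to spell out the sum-versus-integral distinction from the first bullet (my notation; \ell stands in for whatever loss the post integrates): training on n samples optimizes

\hat{L}(\theta) = \sum_{i=1}^{n} \ell(\theta; x_i, y_i),

which is (up to the constant factor n) the integral against the empirical measure \mu_n = \frac{1}{n} \sum_i \delta_{(x_i, y_i)}, supported only at the training points. writing instead

L(\theta) = \int \ell(\theta; x, y) \, d\mu(x, y)

for a smooth \mu commits to some particular interpolation between those points.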