If we are not inventive enough to find a menace not obviously shielded by lead+ocean, more complex tasks like, say, actually designing FOOM-able AI are beyond us anyway…
You say "presumably yes". The whole point of this discussion is to listen to everyone who will say "obviously no"; their arguments would automatically apply to all weaker boxing techniques.
How much evidence do you have that you can count accurately (or make a correct request to a computer and interpret the results correctly)? How much evidence that probability theory is a good description of events that seem random?
Once you get as much evidence for atomic theory as you have for the weaker of the two claims above, describing your degree of confidence requires more effort than just naming a number.
I guess that understanding the univalence axiom would be helped by understanding the implicit equality axioms.
The univalence axiom states that two isomorphic types are equal; this means that if type A is isomorphic to type B, and type C contains A, then C has to contain B (and a few similar requirements).
Requiring that two types are equal whenever they are isomorphic means prohibiting anything we could write that distinguishes them (i.e. anything that does not handle them equivalently).
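For reference, a schematic statement (a sketch only; presentations differ in how equivalence is defined): univalence says that the canonical map idtoeqv : (A = B) -> (A ≃ B), which sends a proof of equality to an equivalence, is itself an equivalence. So (A = B) ≃ (A ≃ B), and in particular every equivalence between A and B yields an equality, which can then be transported along any type C containing A.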
Could you please clarify your question here?
Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts?
I try to interpret it (granted, I interpret it within my worldview, which is different) and I cannot see the question here.
I am not 100% sure that even PA has a model, but I find it likely that even ZFC does. But if I say that ZFC has a model, I mean a model where the formula parts are numbered by the natural numbers derived from my notion of successive moments of time.
The link is good, but I guess a direct explanation of this simple thing could be useful.
It is not hard to build an explicit map between R and R² (more or less by interleaving the binary expansions of the numbers).
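For concreteness, a toy sketch of the interleaving direction, pairs in [0,1)² to [0,1), truncated to finitely many bits (the genuine bijection needs extra care with expansions ending in all 1s, which I gloss over here):

```python
def interleave(x, y, bits=32):
    """Map (x, y) in [0,1)^2 to a single number in [0,1) by interleaving
    the first `bits` binary digits of x and y."""
    z = 0.0
    for i in range(1, bits + 1):
        x_digit = int(x * 2**i) % 2       # i-th binary digit of x
        y_digit = int(y * 2**i) % 2       # i-th binary digit of y
        z += x_digit / 2**(2 * i - 1)     # goes to an odd position of z
        z += y_digit / 2**(2 * i)         # goes to an even position of z
    return z

# 0.5 = 0.1000..., 0.25 = 0.0100... => interleaved 0.1001... binary = 0.5625
print(interleave(0.5, 0.25))
```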
So the claim of the Continuum Hypothesis is:
For every property of real numbers P there exists a property of pairs of real numbers Q such that:
1) ∀x (P(x) -> ∃! y Q(x,y))
2) ∀x (¬P(x) -> ∀y¬Q(x,y))
(i.e. Q describes a mapping from the support of P to R)
3) ∀x1,x2,y: ((Q(x1,y) ∧ Q(x2,y)) -> x1=x2)
(i.e. the map is an injection)
4) (∀y ∃x Q(x,y)) ∨ (∀x∀y (Q(x,y)-> ...
ZFC is the universally, unequivocally best definition of a set
Worse. You are being tricked into believing that ZFC is a definition of a set at all, while it is just a set of restrictions on what we would tolerate.
In some sense, if you believe that there is only one second-order model of the natural numbers, you have to decide which properties of natural numbers you can range over; as Cohen taught us, this involves making a lot of set-theoretic decisions, the continuum hypothesis being only one of them.
Well-foundedness (WF) in the V_alpha case seems quite simple: you build an externally-countable chain of subsets which simply cannot be represented as a set inside the first model of ZFC. So the external WF is not broken because the element-relation inside the models is different, and the inner WF is fine because the chain of inner models of the external ZFC is not an inner set.
In the standard case your even-numbers explanation nicely shows what goes on — quoting is involved.
I need to think a bit to say what would halt our attempts to build a chain of transitive countable models...
Ah, sorry, I didn't notice that the question is about a model of ZFC inside a "universe" modelling ZFC+Con^∞(ZFC)
Actually, in NBG you have explicitness of assumptions and of first-order logic — and at the same time the axiom of induction is a single axiom.
Actually, if you care about cardinality, you need a well-specified set theory more than just the axioms of the reals. The second-order theory has a unique model, yes, but it relies on the notion of "all" subsets, so it just smuggles in some set theory without specifying it. As I understand, this was the motivation for Henkin semantics.
And if you look for a set theory (explicit or implicit) for reals as used in physics, I am not e...
Well, technically not every model of ZFC has a ZFC-modelling element. There is a model of "ZFC+¬Con(ZFC)", and no element of this monster can be a model of ZFC. Not even with nonstandard element-relation.
It may be that goal-orientation without made-up rules is fun; as a good person, one needs to follow some of the sillier moral norms that made sense a mere two hundred years ago.
It seems that in weak formulations it can be confirmed.
Have you read "Through the Language Glass" by Deutscher?
Choosing better words for some situations does train you in some skills. It looks like people distinguish colours more quickly if the colours have different names. For example, a Russian speaker will notice the difference between "closer to sky blue" and "closer to navy blue" faster than an English speaker, because of the habit of classifying them as different colours. Deutscher cites a few different studies of that kind.
Apparently, language...
Actually, what you may wonder is whether utility of increased status just has a complex shape for you.
For example, I can imagine some situation of having too little status, but in most cases I get what is enough personally for me before even trying.
Actually, whatever license you use, your content will be copied around.
If you use a proprietary license after taking CC-BY core content, copying your content will be less legal and less immoral.
A pistol to the mouth seems to require a mouthful of water for a high chance of success.
Everything you said is true.
Also, it can even be that you cannot rewire your existing brain while keeping all its current functionality and not increasing its size.
But I look at the evidence about learning (including learning to see via photoelements stimulating non-visual neurons, and learning new motor skills). Also, it looks like selection for brain size went quite efficiently during human evolution, and we just want to shift the equilibrium. I do think that building an upload at all would require a good enough understanding of cortex structure that you would be able to increase the neuron count and then learn to use the improved brain with normal learning methods.
A good upload could increase its short-term working memory capacity for distinct objects to match more complex patterns.
On the other hand, the better you are, the more things you learn just because they are easy enough to learn to be worth your time.
Well, it does look like killing a big species is similar to freezing a body to absolute zero.
I am ready to buy that the 0%-10% killers are distributed according to the promised power law, if you buy that reducing the human population from X to kX costs -log k. Note that this doesn't alter your observed power law at all.
Why? Because you have to be stronger than the random resistances (due to genetics, lifestyle, health conditions, etc.) present in different slices of the population.
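To unpack the arithmetic (my reading of the cost model, not necessarily the full original argument): C(X -> kX) = -log k composes additively, since C(X -> k1·k2·X) = -log k1 - log k2 = C(X -> k1·X) + C(k1·X -> k1·k2·X). And for a small killed fraction f (so k = 1-f), -log(1-f) ≈ f, so in the 0%-10% range the cost is nearly proportional to the fraction killed and the observed power law stays intact; the cost only blows up as k -> 0.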
Also, any virus will evolve while it infects a billion humans; there is pressure to spread better and to leave the host alive.
And a long incubation period cuts both ways — either you are stealthy for years, or you are spread by air next week.
You would say these words if you wanted to check whether you have found a "stable" equilibrium.
If you consider that "0 is not a probability", there is a point in checking for that.
Does anyone consider them proofs in the ordinary sense?
I could ask him, but given that the experience of verifying a ZKP is an example of personal/non-transferrable evidence, I see no question here.
And in some sense, ZK proofs are usually proofs of knowledge. If you represent the ZK prover as a black box with the secret information inside, which uses its random number generation log and communication log as the sources for calculating its next message, then access to this black box in such a form is enough to extract some piece of data. This piece of data makes the proven statement...
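For concreteness, here is a toy sketch of that extraction idea for a Schnorr-style proof of knowledge; the tiny group, the fixed randomness and all the specific numbers are illustrative assumptions, not the protocol under discussion:

```python
# Toy Schnorr-style proof of knowledge of x with y = g^x (mod p),
# in a small group of prime order q = 11 inside Z_23^*, generator g = 4.
p, q, g = 23, 11, 4

class Prover:
    """Black-box prover: its next message is a function of its secret,
    its (logged) randomness and the communication so far."""
    def __init__(self, x, r):
        self.x, self.r = x, r                     # secret and randomness log

    def commitment(self):
        return pow(g, self.r, p)                  # t = g^r

    def response(self, c):
        return (self.r + c * self.x) % q          # s = r + c*x (mod q)

x_secret = 7
y = pow(g, x_secret, p)

prover = Prover(x_secret, r=5)
t = prover.commitment()

# An honest run verifies: g^s == t * y^c (mod p)
c1 = 3
s1 = prover.response(c1)
assert pow(g, s1, p) == (t * pow(y, c1, p)) % p

# "Rewind" the black box: same randomness log, different challenge.
c2 = 8
s2 = prover.response(c2)
assert pow(g, s2, p) == (t * pow(y, c2, p)) % p

# Two accepting transcripts with the same commitment reveal the secret:
# s1 - s2 = (c1 - c2) * x (mod q)  =>  x = (s1 - s2) / (c1 - c2) (mod q)
x_extracted = ((s1 - s2) * pow(c1 - c2, -1, q)) % q   # modular inverse (Python 3.8+)
print(x_extracted == x_secret)                        # True
```

The point is exactly the one above: anything that can answer two challenges for one commitment already "contains" the witness.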
Whether it is a cognitive disability is not a useful question; the question is whether there is something that is cost-effective to offer to these people.
My main point was that having this situation is not incompatible with being on LessWrong.
About cheapness: you oversimplify. A good recorded video lecture requires noticeably less effort to produce than a good textbook. And even simple lectures for a big audience are quite good w.r.t. scalability.
Depends on the amount of effort you are willing to spend. I have a local Squid caching proxy (maybe privoxy or 3proxy or something else is better for you); I have set up URL rewriting and, among other things, I block some URL patterns for various reasons. It is not too hard to set up, and there are various ways to do this with various proxies.
People are different.
As far as I see around, there are people with various optimal bite sizes.
For something I do want to consume in entirety, I prefer long-form writing; there are people who prefer smaller-sized pieces or smaller-sized pieces with a rare chance to interrupt and ask a question.
I learn better from text; there are people who understand spoken words better. Spoken words have intonations and emotional connotations (and often there are relevant gestures at the same time); text reading speed can be changed without any loss.
So, I wouldn't discou...
As I understand Eliezer's definition: "Your text is a proof if it can convince me that the conclusion is true."
As I understand Uspenskiy's definition: "Your text is a proof if it can convince me that the conclusion is true and I am willing to reuse this text to convince other people."
The difference is whether the mere text convinces me that I myself can also use it successfully. Of course this has to rely on social norms for convincing arguments in some way.
Disclosure: I have heard the second definition from Uspenskiy first-person, and I have never seen Eliezer in person.
Well, it does look likely (not guaranteed - just 50% likely) that the primary target for the strike would be The Enemy (China-USA-Russia-EU-India-whoever). From what is publicly known, the prepared plans from the 20th century referred to first-strike/revenge dynamics...
Risking extinction on Earth could be done just to slightly improve your chances of not being enslaved in the aftermath, or at least of not letting The Enemy get away less destroyed than you. It means that you spend all you can on your selected targets.
Africa would (except South Africa, maybe) w...
If we consider malicious actions (like a big war), such colonies on Earth are just too convenient for military installations to be left untouched.
A Mars colony is too far away to deliver a sneak attack, so wiping it out is not as urgent; and after 3 hours there are no rockets left anywhere on Earth for the next 5 years.
Also, I am not sure that the cheapest way to "filter CO2 and CO, boost O2" is more expensive than "filter all possible chemical weapons and kill all designer bacteria".
Whether a war that really destroys every human (maybe not ...
Yup, I tried sitting at the desk with The Thing there, it is annoyingly uncomfortable.
My chair is actually rotatable and even has wheels; it is just worn enough that it requires slight effort to make it roll or rotate.
I noticed that when standing it is easier to shift around than when sitting, which is sometimes nice. Also, when sitting you can sometimes get your legs numb because of compressing something not intended to be compressed - no such problem when standing.
By the way, I tried almost shoulder-height position for notebook (abuse of a preexistin...
I do work standing at the computer for extended time sometimes at my workplace (at my own whim and for purely comfort-related reasons).
I received a SIGTHTBABW (the signal sent by Unix to its programmers to remind them that There Has To Be A Better Way), took some materials I had stashed for this project, and assembled Something. It is somewhat strange and could be way more symmetrical, but I work in a nice enough place to afford not caring about that.
So, it is a very light thing out of plastic which I put on top of my desk (and then the notebook goes on top). ...
It is a much more segmented popularity contest because who the "right" people are varies.
Follower count is a global instantly updated popularity contest, which may be considered worse.
Well, maybe Rational_Brony wants to find posts with detailed explanation of some position/fact with a summary of corroborating evidence.
I'm a wordy person who likes to take it slow and read things leisurely
I treat that as a preference for 1k-ish posts over ≤140-character posts.
On many forums, posting a medium-length essay without too much polishing would be just "business as usual"; on some others it would be "weird but OK". On Twitter it is declared impossible if you use it as intended. You could use Twitter as an RSS-like stream for ...
Well, if at least one of my upvotes is from a person who tested my solution, I would be a partial counterexample - I do look at my karma for the latest posts, and although it is not a good motivation per se, using the upvote count as a data point for what is valued by LW (and when) is a minor amusement for me.
You really think no one would mark their own posts as fluff? I would mark some of my posts which are more conversation than public comment, but relevant enough to some people not to make them PMs.
Actually, maybe requiring 3 fluff votes to mark a post as fluff would be a better alternative to the current three-downvotes rule.
How will it help? 140 characters is simply too short a form for some kinds of posts.
Maybe "span.monthly-score {display:none;} span.score {display:none;}" in userstyle would help?
For the record: anti-kibitzer (I force-start it via a domain-specific user script) hides your cumulative score, but not last-month score.
Well, starting from vastly different priors and interpreting the evidence according to the intermediate posterior probabilities at the moment of evaluation could reasonably lead to different conclusions. Especially in areas where controlled experiments are hard to do, and so the evidence may be relatively weak.
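As a toy illustration of the prior half of that (the numbers are made up, and it ignores the order-of-evaluation part):

```python
def posterior(prior, likelihood_ratio):
    """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same modest piece of evidence (likelihood ratio 4) lands people with
# different priors at very different posteriors.
for prior in (0.01, 0.30, 0.90):
    print(prior, round(posterior(prior, 4), 3))
# 0.01 -> 0.039, 0.30 -> 0.632, 0.90 -> 0.973
```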
This seems an overly simplistic view. You need to specify your source of knowledge about the correlation between the quality of Omega's predictions and the decision theory used by the target of the prediction.
And even then, you need to be sure that your using an exotic DT will not throw Omega too much off the trail (note that erring in your case will not ruin the nice track record).
I don't say it is impossible to specify, just that your description could be improved.
Trying to suppress thoughts and emotions doesn't work? Really? Dunno, it worked well enough for me. Now, emotions from recurring situations are harder to deal with in a full-suppression way during the experience, but suppressing thoughts not currently tied to anything you are experiencing at the moment is simple enough for some people.
I do not think it is a black-or-white divide, of course - there are people better at it and worse at it by different amounts.
Or maybe that the improvements require a complex set of seemingly unrelated changes to occur.
For example, Adam Smith explained that the main driving force of change at some point was the appearance of new things for the very rich to buy, as this allowed a novel redistribution of wealth and power.
(I am not quoting the full argument here, and I am not claiming that the described factor was the only driving force of change, or even that I could argue on my own that it was the main one)
My impression is that there is a bigger problem: some LW-relevant issues become political inside LW.
I have no reason to claim that political threads help or hurt, though.
Note, though, that we are talking about a much smaller population - so you could spend quite a lot of land per capita on growing both ethanol source crops and fuel.
Current size of humankind is clearly unsustainable in this mode, of course.
But if saying something and creating a separate comment to ask for feedback became acceptable, trolls would create even more visible noise before they get into obviously malicious territory.
Probably it does lose coherence. What specifically that means has to be shown in the future by a working theory that accepts GR and QM as its limit cases...
Whether it will be any of the current research directions called quantum gravity or something else is hard to predict.
I have no intuitions here, as I am somewhere between a mathematician and a programmer, and I have catastrophically too little knowledge of physics to try to predict its unknown areas.
There seems to be a significant number of people who browse with anti-kibitzer and full-unhide.
If you want us to stop using such option combinations, maybe putting a warning into preferences would be a reasonable first step?
As far as I understand, there is still no satisfactory theory that would include both quantum mechanics and general relativity (i.e. the possibility for spacetime itself not to be fixed).
I would expect that in unified theory spacetime structure would be a part of the state undergoing quantum superposition.
So, if I post some honest argument but make a couple of stupid mistakes (I hope such a post would get downvoted to around -5), anyone who explains to me what I have missed will be punished?
Just in case - synchronising the definitions.
I usually consider something transitive if "if X≥Y and Y≥Z, then X≥Z" holds for all X, Y, Z.
If this holds, preferences are transitive. Otherwise, there are some X,Y,Z: X≥Y, Y≥Z, Z>X. I would call that cyclical.
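A tiny sketch of that check over a finite set of options (the relation here is a made-up cyclical example):

```python
from itertools import product

def is_transitive(items, prefers):
    """prefers(a, b) means "a >= b"; check that X>=Y and Y>=Z imply X>=Z for all triples."""
    return all(prefers(x, z)
               for x, y, z in product(items, repeat=3)
               if prefers(x, y) and prefers(y, z))

# Cyclical preferences: X >= Y, Y >= Z, but Z > X (i.e. not X >= Z).
relation = {('X', 'X'), ('Y', 'Y'), ('Z', 'Z'),
            ('X', 'Y'), ('Y', 'Z'), ('Z', 'X')}
print(is_transitive(['X', 'Y', 'Z'], lambda a, b: (a, b) in relation))  # False
```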
And you are completely right.
I meant that designing a working FOOM-able AI (or a non-FOOMable AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.
I.e. walking the walk is harder than talking the talk.