All of vi21maobk9vp's Comments + Replies

And you are completely right.

I meant that designing a working FOOM-able AI (or a non-FOOMable AGI, for that matter) is vastly harder than finding a few hypothetical high-risk scenarios.

I.e. walking the walk is harder than talking the talk.

If we are not inventive enough to find a menace not obviously shielded by lead+ocean, more complex tasks like, say, actually designing a FOOM-able AI are beyond us anyway…

I... don't believe that. I think that making a FOOM-able AI is much easier than making an AI that can break out of a (considerably stronger) lead box in solar orbit.

You say "presumably yes". The whole point of this discussion is to listen to everyone who will say "obviously no"; their arguments would automatically apply to all weaker boxing techniques.

All the suggestions so far that might allow an AI without conventional outputs to get out would be overcome by the lead box+ocean defenses. I don't think that containing a strong AI is likely to be that difficult a problem. The really difficult problem is containing a strong AI while getting anything useful out of it.

How much evidence do you have that you can count accurately (or make a correct request to a computer and interpret the results correctly)? How much evidence that probability theory is a good description of events that seem random?

Once you get as much evidence for atomic theory as you have for the weaker of the two claims above, describing your degree of confidence requires more effort than just naming a number.

I guess that understanding the univalence axiom would be helped by understanding the implicit equality axioms.

The univalence axiom states that two isomorphic types are equal; this means that if type A is isomorphic to type B, and type C contains A, then C has to contain B (and a few similar requirements).

Requiring that two types are equal if they are isomorphic means prohibiting anything that we can write to distinguish them (i.e. anything that would fail to handle them equivalently).
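In symbols (this formulation is the standard one from homotopy type theory, not taken from the comment above), univalence says that for any two types A and B, the canonical map sending an equality to an equivalence is itself an equivalence:

```latex
(A = B) \simeq (A \simeq B)
```

So any construction C respecting equality automatically respects equivalence, which is exactly the "C containing A must contain B" condition.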

Could you please clarify your question here?

Why, as you come to believe that Zermelo-Fraenkel set theory has a model, do you come to believe that physical time will never show you a moment when a machine checking for ZF-inconsistency proofs halts?

I try to interpret it (granted, I interpret it within my worldview, which is different) and I cannot see the question here.

I am not 100% sure whether even PA has a model, but I find it likely that even ZFC does. But if I say that ZFC has a model, it means that this is a model where formula parts are numbered by the natural numbers derived from my notion of subsequent moments of time.

The link is good, but I guess a direct explanation of this simple thing could be useful.

It is not hard to build an explicit map between R and R² (more or less by interleaving the binary notations of the numbers).
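The digit-interleaving idea can be sketched on finite binary digit strings (for actual reals the full bijection needs extra care with dual representations like 0.0111... = 0.1000..., which this sketch ignores):

```python
def interleave(a: str, b: str) -> str:
    # Merge two equal-length binary digit strings digit by digit:
    # even positions of the result come from a, odd positions from b.
    return "".join(x + y for x, y in zip(a, b))

def deinterleave(s: str) -> tuple[str, str]:
    # Inverse map: split the even- and odd-indexed digits back apart.
    return s[0::2], s[1::2]
```

For example, the pair (0.101..., 0.010...) maps to the single expansion 0.100110..., and the two projections recover the original pair.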

So the claim of the Continuum Hypothesis is:

For every property P of real numbers there exists a property Q of pairs of real numbers such that:

1) ∀x (P(x) -> ∃! y Q(x,y))

2) ∀x (¬P(x) -> ∀y¬Q(x,y))

(i.e. Q describes a mapping from the support of P to R)

3) ∀x1,x2,y: ((Q(x1,y)^Q(x2,y)) -> x1=x2)

(i.e. the map is an injection)

4) (∀y ∃x Q(x,y)) ∨ (∀x∀y (Q(x,y)-> ... (read more)

ZFC is the universally, unequivocally best definition of a set

Worse. You are being tricked into believing that ZFC is a definition of a set at all, while it is just a set of restrictions on what we would tolerate.

In some sense, if you believe that there is only one second-order model of the natural numbers, you have to decide which properties of natural numbers you can range over; as Cohen has taught us, this involves making a lot of set-theoretical decisions, with the continuum hypothesis being only one of them.

Well-foundedness in the V_alpha case seems quite simple: you build an externally-countable chain of subsets which simply cannot be represented as a set inside the first model of ZFC. So the external WF is not broken because the element-relation inside the models is different, and the inner WF is fine because the chain of inner models of the external ZFC is not an inner set.

In the standard case your even-numbers explanation nicely shows what goes on — quoting is involved.

I need to think a bit to say what would halt our attempts to build a chain of transitive countable models...

Ah, sorry, I didn't notice that the question is about a model of ZFC inside a "universe" modelling ZFC+Con^∞(ZFC).

Actually, in NBG you have explicitness of assumptions and of first-order logic, and at the same time the axiom of induction is a single axiom.

Actually, if you care about cardinality, you need a well-specified set theory more than just axioms of the reals. A second-order theory has a unique model, yes, but it has the notion of "all" subsets, so it just smuggles in some set theory without specifying it. As I understand it, this was the motivation for Henkin semantics.

And if you look for a set theory (explicit or implicit) for reals as used in physics, I am not e... (read more)

Well, technically not every model of ZFC has a ZFC-modelling element. There is a model of "ZFC+¬Con(ZFC)", and no element of this monster can be a model of ZFC. Not even with nonstandard element-relation.

Eliezer Yudkowsky:
Linked impressive authority says the model has a ZFC-model-encoding element, plus enough nonstandard quoted ZFC axioms in-model that in-model ZFC doesn't think it's a ZFC-model-encoding element. I.e., the formula for "semantically entails ZFC" is false within the model, but from outside, using our own standard list of axioms, we think the element is a model of ZFC.

It may be that goal-orientation where there are no made-up rules is fun; being a good person requires following some of the more stupid moral norms that made sense some puny two hundred years ago.

It seems that in weak formulations it can be confirmed.

Have you read "Through the Language Glass" by Deutscher?

Choosing better words for some situations does train you in some skills. It looks like people distinguish colours quicker if they have different names for them. For example, a Russian speaker will notice the difference between "closer to sky blue" and "closer to navy blue" faster than an English speaker, because of the habit of classifying them as different colours. Deutscher cites a few different studies of that kind.

Apparently, language... (read more)

Actually, what you may wonder is whether the utility of increased status just has a complex shape for you.

For example, I can imagine some situation of having too little status, but in most cases I get what is enough personally for me before even trying.

Actually, whatever license you use, your content will be copied around.

If you use a proprietary license after taking CC-BY core content, copying your content will be less legal and less immoral.

A pistol to the mouth seems to require a mouth full of water for a high chance of success.

Everything you said is true.

Also, it can even be that you cannot rewire your existing brain while keeping all its current functionality and not increasing its size.

But I look at the evidence about learning (including learning to see using photoelements stimulating non-visual neurons, and learning new motor skills). Also, it looks like selection for brain size went quite efficiently during human evolution, and we want just to shift the equilibrium. I do think that building an upload at all would require a good enough understanding of cortex structure that you would be able to increase the neuron count and then learn to use the improved brain using the normal learning methods.

A good upload could increase its short-term working memory capacity for distinct objects to match more complex patterns.

Okay, sure, but here's the hitch: Even if you gave me a whole bunch of nanobots that could rewire my brain any way I wanted, I would have no clue how to do that. I'm not sure the modern establishment of neurology has any good idea of how you'd do that. I know for sure that nobody on Earth knows how to do that in a safe way that is guaranteed not to cause psychosis, seizures, or other glitches down the line. It's going to take serious, in-depth, and expensive research to figure out how to make these changes in a sane way.

On the other hand, the better you are, the more things you learn just because they are easy enough to learn to be worth your time.

Well, it does look like killing off a big species is like freezing a body to absolute zero.

I am ready to buy that the 0%-10% killers are distributed according to the promised power law, if you buy that reducing the human population from X to kX costs -log k. Note that this doesn't alter your observed power law at all.

Why is this? Because you have to be stronger than the random resistances (due to genetics, lifestyle, health conditions, etc.) present in different slices of the population.

Also, any virus will evolve while it infects a billion humans; there is pressure to spread better and to leave the host alive.

And a long incubation period cuts both ways — either you are stealthy for years, or you are spread by air next week.

You would say these words if you wanted to check whether you have found a "stable" equilibrium.

If you consider that "0 is not a probability", there is a point in checking for that.

Does anyone consider them proofs in the ordinary sense?

I could ask him, but given that the experience of verifying a ZKP is an example of personal/non-transferable evidence, I see no question here.

And in some sense, ZKP proofs are usually proofs of knowledge. If you represent the ZK prover as a black box with secret information inside, which uses its random number generation log and communication log as sources to calculate its next message, then access to this black box in such a form is enough to extract some piece of data. This piece of data makes the proven statement... (read more)

Whether it is a cognitive disability is not a useful question; the question is whether there is something that is cost-effective to offer to these people.

My main point was that having this situation is not incompatible with being on LessWrong.

About cheapness: you oversimplify. A good recorded video lecture requires noticeably less effort to produce than a good textbook. And even simple lectures for a big audience are quite good w.r.t. scalability.

Depends on the amount of effort you are willing to spend. I have a local Squid caching proxy (maybe privoxy or 3proxy or something else would be better for you); I have set up URL rewriting, and among other things I block some URL patterns for various reasons. It is not too hard to set up, and there are various ways to do this with various proxies.
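For a sense of scale, URL-pattern blocking in Squid can be as small as the following squid.conf fragment (the ACL name and file path here are made up for the example; the exact setup in the comment above may differ):

```
# Deny any request whose URL matches a regex listed in the file
acl blocked_patterns url_regex -i "/etc/squid/blocked.regex"
http_access deny blocked_patterns
```

The quoted file holds one regular expression per line; URL rewriting proper would additionally use a url_rewrite_program helper.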

People are different.

As far as I can see around me, there are people with various optimal bite sizes.

For something I do want to consume in entirety, I prefer long-form writing; there are people who prefer smaller-sized pieces or smaller-sized pieces with a rare chance to interrupt and ask a question.

I learn better from text; there are people who understand spoken words better. Spoken words have intonations and emotional connotations (and often there are relevant gestures at the same time); text reading speed can be changed without any loss.

So, I wouldn't discou... (read more)

Someone who is familiar with the relevant cognitive science is encouraged to correct me if it turns out that my current contrarian opinion is merely the result of my ignorance, but---I'm inclined to just call that a cognitive disability. To be sure, if you happen to be so lucky as to have a domain expert nearby who is willing to spend time with you to clear up your misconceptions, then that's a wonderful resource and you should take advantage of it. But human labor is expensive and text is cheap; people who understand something deeply enough to teach it well have better things to do with their lives than give the same lecture dozens of times. What happens when you want to know something that no one is willing to teach you (at an affordable price)? To be so incompetent at reading as to actually be dependent on a flesh-and-blood human to talk you through every little step every time you want to understand something complicated is a crippling disability, much much worse than not being able to walk. I weep for those who are cursed to live with such a hellishly debilitating condition, and look forward to some future day when our civilization's medical technology has advanced enough to cure this awful disease.

As I understand Eliezer's definition: "Your text is a proof if it can convince me that the conclusion is true."

As I understand Uspenskiy's definition: "Your text is a proof if it can convince me that the conclusion is true and I am willing to reuse this text to convince other people."

The difference is whether the mere text convinces me that I myself can also use it successfully. Of course this has to rely on social norms for convincing arguments in some way.

Disclosure: I have heard the second definition from Uspenskiy in person, and I have never seen Eliezer in person.

Does Uspenskiy have an opinion on zero-knowledge proofs? They differ from standard proofs in that they have a probability of being wrong (which can be made as small as you want), and the key property that if I use one to convince you of something, you aren't able to use it to convince anyone else.

Well, it does look likely (not guaranteed - just 50% likely) that the primary target for the strike would be The Enemy (China-USA-Russia-EU-India-whoever). From what is publicly known, the prepared plans from the 20th century referred to first-strike/revenge dynamics...

Risking extinction on Earth could be done just to slightly improve your chances of not being enslaved in the fallout, or at least of not letting The Enemy get away less destroyed than you. It means that you spend all that you can on your selected targets.

Africa would (except South Africa, maybe) w... (read more)

If we consider malicious actions (like a big war), such colonies on Earth are just too convenient for military installations to be left untouched.

A Mars colony is too far away to be able to launch a sneak attack; so wiping it out is not as urgent, and after 3 hours there are no rockets anywhere on Earth for the next 5 years.

Also, I am not sure that the cheapest way to "filter CO2 and CO, boost O2" is more expensive than "filter all possible chemical weapons and kill all designer bacteria".

Whether a war that really destroys every human (maybe not ... (read more)

If we can transport 80,000 humans to Mars and keep them alive there long-term, we have the tech necessary to wipe out those Martian colonies as surely as comparable ones on Earth.

Yup, I tried sitting at the desk with The Thing there, it is annoyingly uncomfortable.

My chair is actually rotatable and even has wheels; it is just worn enough that it requires slight effort to make it roll or rotate.

I noticed that when standing it is easier to shift around than when sitting, which is sometimes nice. Also, when sitting your legs can sometimes go numb because of compressing something not intended to be compressed - no such problem when standing.

By the way, I tried an almost shoulder-height position for the notebook (abuse of a preexistin... (read more)

I do work standing at the computer for extended time sometimes at my workplace (at my own whim and for purely comfort-related reasons).

I received a SIGTHTBABW (signal sent by Unix to its programmers to remind them that There Has To Be A Better Way), took some materials I have stashed for this project and assembled Something. It is somewhat strange and could be way more symmetrical, but I work in a nice enough place to afford not caring about that.

So, it is a very light thing out of plastic which I put on top of my desk (and then the notebook goes on top). ... (read more)

I tried 'position 2' and it works fairly well; 3 doesn't work for me though because my chair rotates. I also occasionally sit for short periods at my desk, even though the keyboard tray is just below shoulder height and I have to tilt my gaze upwards about 20 degrees to see the screen. I can also set my laptop on the top (monitor) level of the desk (not optimal for typing), or use it sitting down in a variety of positions. If I pay attention, I do notice that I shift around a decent amount while standing, most likely only because it's uncomfortable to stand perfectly still for any length of time, as you mentioned. I've only been using the standing desk for 2 weeks; I'm not sure whether I'll revert back to sitting all the time if I do get a high stool, but I'm considering getting one so I can have the option.

It is a much more segmented popularity contest, because who the "right" people are varies.

Follower count is a global, instantly updated popularity contest, which may be considered worse.


Well, maybe Rational_Brony wants to find posts with a detailed explanation of some position/fact, with a summary of corroborating evidence.

I'm a wordy person who likes to take it slow and read things leisurely

I treat that as a preference for 1k-ish posts over ≤140-character posts.

On many forums, posting a medium-length essay without too much polishing would be just "business as usual"; on some others it would be "weird but OK". On Twitter it is declared impossible if you use it as intended. You could use Twitter as an RSS-like stream for ... (read more)


Well, if at least one of my upvotes is from a person who tested my solution, I would be a partial counterexample - I do look at my karma for my latest posts, and although it is not a good motivation per se, using the upvote count as a data point for what is valued by LW, and when, is a minor amusement for me.

You really think no one would mark their own posts as fluff? I would mark some of my posts which are more conversations than public comments, but relevant enough for some people not to make them PMs.

Actually, maybe 3 fluff votes to mark as fluff would be a better alternative to the current three-downvotes rule.

How will it help? 140 characters is simply too short a form for some kinds of posts.


Maybe "span.monthly-score {display:none;} span.score {display:none;}" in userstyle would help?

For the record: the anti-kibitzer (I force-start it via a domain-specific user script) hides your cumulative score, but not the last-month score.

Well, starting from vastly different priors and interpreting the evidence according to the intermediate posterior probabilities at the moment of evaluation could reasonably lead to different conclusions. Especially in areas where controlled experiments are hard to do, and so the evidence may be relatively weak.

This seems an overly simplistic view. You need to specify your source of knowledge about the correlation between the quality of Omega's predictions and the decision theory its prediction target uses.

And even then, you need to be sure that your use of an exotic DT will not throw Omega too much off the trail (note that erring in your case will not ruin the nice track record).

I don't say it is impossible to specify, just that your description could be improved.

Sure, it would also be nice to know that your wearing blue shoes will not throw off Omega. In the absence of any such information (we can stipulate if need be) the analysis is correct.

Trying to suppress thoughts and emotions doesn't work? Really? Dunno, it worked well enough for me. Now, emotions from recurring situations are harder to deal with in a full-suppression way during the experience, but suppressing thoughts not currently tied to anything you are experiencing at the moment is simple enough for some people.

I do not think it is a black-or-white divide, of course - there are people better at it and worse at it by different amounts.

I can shut off most unwanted thoughts by finding something else to concentrate on, such as a video game.
Wegner, D. M. 1994. "Ironic processes of mental control." Psychological Review 101: 34-52.

The above paper gives a more nuanced explanation, that the success of mental control depends on the lack of simultaneous cognitive load, which seems to fit what you're saying. But there also seem to be quite a lot of papers in which people asked to suppress are compared to controls, with the result that suppressors fail and even experience a rebound after they stop trying. For example:

Wegner, D. M., D. J. Schneider, S. R. Carter, and T. L. White. 1987. "Paradoxical Effects of Thought Suppression." Journal of Personality and Social Psychology 53: 5-13.

Lavy, E. H., and M. A. Van den Hout. 1990. "Thought Suppression Induces Intrusion." Behavioural and Cognitive Psychotherapy 18: 251-58.

Marcks, B. A., and D. W. Woods. 2005. "A comparison of thought suppression to an acceptance-based technique in the management of personal intrusive thoughts: A controlled evaluation." Behaviour Research and Therapy 43: 433-445.

And there's even a book summarizing the literature: Rassin, E. 2005. Thought Suppression. Oxford, UK: Elsevier.

Or maybe that the improvements require a complex set of seemingly unrelated changes to occur.

For example, Adam Smith explained that the main driving force of change at some point was the appearance of new things for the very rich to buy, as this allowed a novel redistribution of wealth and power.

(I am not quoting the full argument here, and I am not claiming that the described factor was the only driving force of change, or even that I can claim on my own that it was the main force.)

My impression is that there is a bigger problem - some LW-relevant issues become political inside LW.

I have no reason to claim that political threads help or hurt, though.

Note, though, that we are talking about a much smaller population - so you could spend quite a lot of land per capita on growing both the ethanol source and fuel.

The current size of humankind is clearly unsustainable in this mode, of course.

With a much smaller population you start losing all sorts of other advantages, especially economies of scale and comparative advantage.

But if saying something and creating a separate comment to ask for feedback became acceptable, the trolls would create even more visible noise before they get into obviously malicious territory.

Probably it does lose coherence. What specifically that means has to be shown in the future by working theory that accepts GR and QM as its limit cases...

Whether it will be any of the current research directions called quantum gravity or something else is hard to predict.

I have no intuitions here, as I am somewhere between a mathematician and a programmer and have catastrophically too little knowledge of physics to try to predict its unknown areas.

There seems to be a significant number of people who browse with the anti-kibitzer and full-unhide.

If you want us to stop using such option combinations, maybe putting a warning into preferences would be a reasonable first step?

As far as I understand, there is still no satisfactory theory that would include both quantum mechanics and general relativity (i.e. the possibility for spacetime not to be the same).

I would expect that in a unified theory the spacetime structure would be part of the state undergoing quantum superposition.

That's true. I was wondering what the standard claim that "MWI is just decoherence" has to say about the spacetime. Does it also decohere into multiple outcomes? If so, how? Does it require quantum gravity to understand? In this case "just decoherence" is not a valid claim.

So, if I post some honest argument but make a couple of stupid mistakes (I hope that such a post will get downvoted to around -5), anyone who explains to me what I have missed will be punished?

I agree that this is a failure mode, but it's not an absolute one: people could explain to you via PM. Then you'd be free to edit the comment, and if its score floated back up, discussion could ensue below.

Yes, this policy decision doesn't happen to be one-sided. What you describe seems to be a comparatively rare event, though. If you actually want to get better, you'll have opportunities other than particularly downvoted blunders to seek feedback, and there is an obvious solution of making a non-downvoted separate comment that asks for feedback in such cases, so that said feedback would not be punished.

Just in case - synchronising the definitions.

I usually consider a relation transitive if "X≥Y and Y≥Z imply X≥Z" holds for all X, Y, Z.

If this holds, the preferences are transitive. Otherwise, there are some X,Y,Z: X≥Y, Y≥Z, Z>X. I would call that cyclical.
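The definition above can be checked mechanically on any finite relation; a minimal sketch (the function name and the example pairs are just illustrative):

```python
def is_transitive(rel: set[tuple[str, str]]) -> bool:
    # "X >= Y and Y >= Z imply X >= Z": for every chained pair of
    # relation entries, the composed pair must also be in the relation.
    return all(
        (x, z) in rel
        for (x, y) in rel
        for (y2, z) in rel
        if y == y2
    )
```

For example, {A≥B, B≥C, A≥C} passes, while the cyclical {A≥B, B≥C, C≥A} fails because A≥C is missing.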
