All of lavalamp's Comments + Replies

[Link] The Dominant Life Form In the Cosmos Is Probably Superintelligent Robots

An extremely low prior probability of life is an early great filter.

2014 Less Wrong Census/Survey

Done. Thank you for running these.

Ethical frameworks are isomorphic

Check out the previous discussion Luke linked to:

It seems there's some question about whether you can phrase deontological rules consequentially--to make this more formal, that question needs to be settled. My first thought is that the formal version of this would say something along the lines of "you can achieve an outcome that differs by only X%, with a translation function that takes rules and spits out a utility function, which is only polynomially larger." It's not clear t…
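The translation the comment gestures at could be sketched as follows; the rule predicates, the action strings, the penalty value, and the `rules_to_utility` helper are all invented for illustration, not a claim about how such a translation must work.

```python
# Sketch of a rules-to-utility translation: each deontological rule is a
# predicate over an action history, and the derived utility function charges
# a flat penalty per violated rule. Rules, actions, and the penalty value
# are all hypothetical.

def rules_to_utility(rules, penalty=-1000):
    """Return a utility function that charges `penalty` for each violated rule."""
    def utility(history):
        return sum(penalty for rule in rules if not rule(history))
    return utility

no_murder = lambda history: "murder" not in history
no_theft = lambda history: "theft" not in history

u = rules_to_utility([no_murder, no_theft])
print(u(["walk", "talk"]))     # 0: no rules violated
print(u(["murder", "theft"]))  # -2000: both rules violated
```

A utility function built this way is only linearly larger than the rule set, but it also shows where the "X% difference" question bites: the flat penalties impose a badness ordering the original rules never specified.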

Ok. If I ever get to work with this I will let you know; perhaps you can help/join.
Ethical frameworks are isomorphic

(Sorry for slow response. Super busy IRL.)

If a consequentialist talks about murder being bad, they mean that it's bad if anybody does it.

Not necessarily. I'm not saying it makes much sense, but it's possible to construct a utility function that values agent X not having performed action Y, but doesn't care if agent Z performs the same action.

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

a) After reading Luke's link below, I'm still not certain if…

Ethical frameworks are isomorphic

If indeed the frameworks are isomorphic, then actually this is just another case of humans allowing their judgment to be affected by an issue's framing. Which demonstrates only that there is a bug in human brains.

Ethical frameworks are isomorphic

I think so. I know they're commonly implemented without that feedback loop, but I don't see why that would be a necessary "feature".

Ethical frameworks are isomorphic

Which is why I said "in the limit". But I think, if it is true that one can make reasonably close approximations in any framework, that's enough for the point to hold.

Ethical frameworks are isomorphic

Are you saying that some consequentialist systems don't even have deontological approximations?

It seems like a rule of the form "Don't torture... unless by doing the torture you can prevent an even worse thing" provides a checklist to compare badness, but I'm not convinced.

Actually, this one is trivially true, with the rule being "maximize the relevant utility". I am saying the converse need not be true.
How long will Alcor be around?

How does it change the numbers if you condition on the fact that Alcor has already been around for 40 years?

Reminds me of John C. Wright's comments on the subject here.
Although in reality it makes a big difference, in my model it does not - my model varies only the size of the company, since that's all I could find good data on. I found another source saying that the age of a company was about 30% more important in predicting its survival than its size, but because it was a complicated regression I was unable to exclude terms that had absolutely nothing to do with cryonics. It is probable that you should shade the probability of Alcor surviving up and the probability of KryoRus surviving down to account for this.
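The effect of conditioning on 40 years of survival can be sketched numerically; the constant 1%-per-year hazard rate and the `p_survive` helper are invented for illustration (the model above actually varies company size, and real hazard rates fall with age, which is exactly the 30% effect mentioned).

```python
# How conditioning on 40 years of survival changes a survival estimate.
# The constant 1%-per-year hazard rate is invented for illustration; the
# post's model uses company-size data, and real hazard rates fall with age.

def p_survive(years, annual_hazard=0.01):
    return (1 - annual_hazard) ** years

p_100 = p_survive(100)                           # unconditional: reach year 100
p_100_given_40 = p_survive(100) / p_survive(40)  # given 40 years already survived

print(round(p_100, 3))           # 0.366
print(round(p_100_given_40, 3))  # 0.547 (== p_survive(60) under constant hazard)
```

Under a constant hazard the update is just "only the remaining 60 years matter"; with an age-dependent hazard the conditional probability would shift even further up, which is the direction the comment suggests shading Alcor.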
Siren worlds and the perils of over-optimised search

Absolutely, granted. I guess I just found this post to be an extremely convoluted way to make the point of "if you maximize the wrong thing, you'll get something that you don't want, and the more effectively you achieve the wrong goal, the more you diverge from the right goal." I don't see that the existence of "marketing worlds" makes maximizing the wrong thing more dangerous than it already was.

Additionally, I'm kinda horrified about the class of fixes (of which the proposal is a member) which involve doing the wrong thing less effect…

Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth?

90% agree, one other thing you may not know: both dropbox and google drive have options to automatically upload photos from your phone, and you don't have to sync your desktop with them. So it's not clear that they merely double the needed space.

Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth?

I think your expanded point #6 fails to consider alternative pressures for hard drive & flash memory. Consider places like dropbox; they represent a huge demand for cheap storage. People probably (?) won't want huge(er) drives in their home computers going forward, but they are quite likely to want cloud storage if it comes down another order of magnitude in price. Just because people don't necessarily directly consume hard drives doesn't mean there isn't a large demand.

Consider also that many people have high MP digital cameras, still and video. Those files add up quickly.

This is a good point that I didn't address in the post. I'd thought about it a while back but I omitted discussing it in the post. A few counterpoints:

* Dropbox is all about backing up data that you already have. Even if everybody used Dropbox for all their content, that would still only double the need for storage space (if Dropbox stores everything at 3 locations, then it would 4X the need for storage space). This doesn't create huge incentives for improvement.
* In practice, Dropbox and cloud services wouldn't multiply storage space needs by that much, because a lot of content shared on these would be shared across devices (for instance, Amazon's Cloud Music Service doesn't store a different copy of each track for each buyer, it just stores one, or a few, copies per track). And many people won't even keep local copies. This would reduce rather than increase local storage needs. Even today, many people don't store movies on their hard drives or on DVDs but simply rely on online streaming and/or temporary online downloading.

I should note that I'm somewhat exceptional: I like having local copies of things to a much greater extent than most people (I download Wikipedia every month so I can have access to it offline, and I have a large number of movies and music stored on my hard drive). But to the extent that the Internet and improved connectivity has an effect, I suspect it would range from something like multiplying demand by 4X (high-end) to actually reducing demand.

The point about still and video cameras is good, and I do see applications in principle that could be used to fill up a lot of disk space. I don't think there is a lot of demand for these applications at the current margin. How many people who aren't photographers (by profession or hobby) even think about the storage space of their photos on their hard drives? How many people shoot videos and store them on their hard drives to a level that they actually…
Siren worlds and the perils of over-optimised search

It sounds like, "the better you do maximizing your utility function, the more likely you are to get a bad result," which can't be true with the ordinary meanings of all those words. The only ways I can see for this to be true is if you aren't actually maximizing your utility function, or your true utility function is not the same as the one you're maximizing. But then you're just plain old maximizing the wrong thing.

Er, yes? But we don't exactly have the right thing lying around, unless I've missed some really exciting FAI news...
Be comfortable with hypocrisy

Hypocrisy is only a vice for people with correct views. Consistently doing the Wrong Thing is not praiseworthy.

Unfortunately, it's much easier to demonstrate inconsistency than incorrectness.

Siren worlds and the perils of over-optimised search

Ah, thank you for the explanation. I have complained about the proposed method in another comment. :)

Siren worlds and the perils of over-optimised search

The IC correspond roughly with what we want to value, but differs from it in subtle ways, enough that optimising for one could be disastrous for the other. If we didn't optimise, this wouldn't be a problem. Suppose we defined an acceptable world as one that we would judge "yeah, that's pretty cool" or even "yeah, that's really great". Then assume we selected randomly among the acceptable worlds. This would probably result in a world of positive value: siren worlds and marketing worlds are rare, because they fulfil very specific criteri…
Why? This agrees with my intuition: ask for too much and you wind up with nothing.
Siren worlds and the perils of over-optimised search

TL;DR: Worlds which meet our specified criteria but fail to meet some unspecified but vital criteria outnumber (vastly?) worlds that meet both our specified and unspecified criteria.

Is that an accurate recap? If so, I think there's two things that need to be proven:

  1. There will with high probability be important unspecified criteria in any given predicate.

  2. The nature of the unspecified criteria is such that it is unfulfilled in a large majority of worlds which fulfill the specified criteria.

(1) is commonly accepted here (rightly so, IMO). But (2) seems…

That's not exactly my claim. My claim is that things that are the best optimised for fulfilling our specified criteria are unlikely to satisfy our unspecified ones. It's not a question of outnumbering (siren and marketing worlds are rare) but of scoring higher on our specified criteria.
This proposes a way to get an OK result even if we don't quite write down our values correctly.
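A toy simulation of this claim, with everything (the noise model, the "acceptable" threshold) synthetic: it only illustrates how the single top scorer on a specified criterion tends to be dominated by specification error, while a randomly chosen merely-acceptable world is fine.

```python
# Synthetic illustration: each "world" has a bounded true value plus a
# heavy-tailed specification error; selecting the single highest scorer on
# the specified criterion mostly selects for error, while a random
# "acceptable" world is fine. All distributions here are invented.
import random

random.seed(0)
worlds = []
for _ in range(100_000):
    true_value = random.uniform(0, 1)    # the unspecified, bounded criteria
    error = random.expovariate(1.0)      # heavy-tailed gap between IC and values
    worlds.append((true_value + error, true_value))

acceptable = [tv for score, tv in worlds if score > 1.0]
best_score, best_true = max(worlds)

print(sum(acceptable) / len(acceptable))  # average true value of an acceptable pick
print(best_score, best_true)              # top scorer: huge score, ordinary true value
```

This matches the reply above: it's not that bad worlds outnumber good ones among the acceptable set; it's that the extreme scorers are extreme mostly in the error term.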
Open Thread February 25 - March 3

Possibly of interest: Help Teach 1000 Kids That Death is Wrong.

(have not actually looked in detail, have no opinion yet)

Weighting the probability of being a mind by the quantity of the matter composing the computer that calculates that mind

I think you're getting downvoted for your TL;DR, which is extremely difficult to parse. May I suggest:

TL;DR: Treating "computers running minds" as discrete objects might cause a paradox in probability calculations that involve self-location.

Changed it, that sounds better.
Rationality Quotes February 2014

I dunno. I'd be pretty happy with a system that produced reasonable output when staffed with idiots, because that seems like a certainty. I actually think that's probably why democracy seems to be better than monarchies-- it has a much lower requirement for smarts/benevolence. "Without suffering" may be a high bar, but the universe is allowed to give us problems like that! (And I don't think that democracy is even close to a complete solution.)

EDIT: Also, perhaps the entirety of the system should be to make sure that an "utter genius with leet rationality skillz" is in the top position? I'd be very happy with a system that caused that even when staffed by morons.

Seems to me that a system that incentivized putting smart people in high places would do better in the long run than one that was designed to be robust against idiocy and didn't concern itself with those incentives. The trick is making sure those incentives don't end up Goodharting themselves. Don't think I've ever heard of a system that's completely solved that problem yet.
The first AI probably won't be very smart

I think we just mean different things by "human level"-- I wouldn't consider "human level" thought running at 1/5th the speed of a human or slower to actually be "human level". You wouldn't really be able to have a conversation with such a thing.

And as Gurkenglas points out, the human brain is massively parallel-- more cores instead of faster cores is actually desirable for this problem.

Understanding and justifying Solomonoff induction

Ah, I see. Yeah, 1 bit in input bitstream != 1 bit of bayesian evidence.

Understanding and justifying Solomonoff induction

I think you mean BB(1000) bits of evidence?

I was measuring the Kolmogorov complexity of the evidence, but now that you mention it that does make for a bit of circular reasoning.
The first AI probably won't be very smart

1) Yes, brains have lots of computational power, but you've already accounted for that when you said "human-level AI" in your claim. A human level AI will, with high probability, run at 2x human speed in 18 months, due to Moore's law, even if we can't find any optimizations. This speedup by itself is probably sufficient to get a (slow-moving) intelligence explosion.

2) It's not read access that makes a major difference, it's write access. Biological humans probably will never have write access to biological brains. Simulated brains or AGIs probabl…

Jonathan Paulson:
1) I expect to see AI with human-level thought but 100x as slow as you or I first. Moore's law will probably run out sooner than we get AI, and these days Moore's law is giving us more cores, not faster ones.
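The speed arithmetic both comments rely on is simple compound doubling. Assuming, as the parent comment does, a 2x hardware speedup every 18 months (whether Moore's law actually delivers that, in serial speed rather than cores, is exactly what's disputed here):

```python
# Compound-doubling arithmetic for the "2x every 18 months" assumption:
# how many months until a given speedup factor is reached.
import math

def months_to_speedup(factor, doubling_months=18):
    return doubling_months * math.log2(factor)

print(months_to_speedup(2))    # 18.0
print(months_to_speedup(100))  # ~119.6 months, i.e. roughly a decade for 100x
```

So even granting the assumption, a 100x-too-slow AI would take about a decade of doublings to reach real time, which is why the serial-vs-parallel question matters.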
Understanding and justifying Solomonoff induction

I see, thanks!

You can't "count every possible program equally".

I did know this and should have phrased my sentence hypothetically. :)

Understanding and justifying Solomonoff induction

The only programs allowed in the Solomonoff distribution are ones that don't have any extended versions that produce the same output observed so far.

Did not know that! It seems like that would leave some probability mass unassigned, how do you rebalance? Even if you succeed, it seems likely that (for large enough outputs) there'll be lots of programs that have epsilon difference--that are basically the same, for all practical purposes.

Normalize! Solomonoff induction is just defined for binary data. Differences are a minimum of 1 bit, which is enough.
Understanding and justifying Solomonoff induction

I have been thinking that the universal prior is tautological. Given a program, there are an infinite number of longer programs which perform the same computation (or an indistinguishable variation) but only a finite number of shorter programs having this characteristic. If you count every possible program equally, you'll find that each short program represents a host of longer programs. However, now that I write this down, I'm no longer sure about it. Can someone say why/if it's wrong?

[EDIT note: This is completely different from what I originally wrote in response to lavalamp's question, because originally I completely misunderstood it. Sorry.]

You can't "count every possible program equally". (What probability will you give each possible program? If it's positive then your total probability will be infinite. If it's zero then your total probability will be zero. You can do a lot of probability-like things on a space with infinite total measure, in which case you could give every program equal weight, but that's not generally what one does.)

Leaving that aside: your argument suggests that whatever probabilities we give to programs, the resulting probabilities on outcomes will end up favouring outcomes that can be produced by simpler programs. That's true. But a universal prior is doing something stronger than that: it gives (in a particular way) higher probabilities to simpler programs as well. So outcomes generated by simpler programs will (so to speak) be doubly favoured: once because those simple programs have higher probabilities, and once because there are "more" programs that generate those outcomes.

In fact, any probability assignment with finite total probability (in particular, any with total probability 1, which is of course what we usually require) must "in the limit" give small probabilities to long programs. But a universal prior is much more specific, and says how program length corresponds to probability.
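The counting point can be made concrete; the fixed weight and the small prefix-free set below are invented for illustration.

```python
# Equal positive weight per program cannot be normalized: there are 2^n
# programs of length n, so any fixed weight w > 0 gives a diverging total.
# By contrast, 2^-length weights over a prefix-free set obey the Kraft
# inequality (they total at most 1). The sets here are toy examples.
w = 1e-9
equal_total = sum((2 ** n) * w for n in range(1, 31))
print(equal_total)  # already > 2 with lengths only up to 30; diverges as lengths grow

prefix_free = ["0", "10", "110", "1110"]  # no program is a prefix of another
print(sum(2.0 ** -len(p) for p in prefix_free))  # 0.9375 <= 1
```

This is the "finite total probability forces small weights on long programs" point in miniature: the 2^-length scheme is one specific way of meeting that constraint.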
Open Thread for January 8 - 16 2014

I'd like to agree with you, but how do I know you're not a concern troll?

The mathematical universe: the map that is the territory

I believe if you read my previous comments, you'll see that they all are attempts to do exactly this. I will bow out of this conversation now.

(Meta: you're tripping my troll sensors. I'm sorry if it's unintentional on your part. I'm just not getting the sense that you're trying to understand me. Or it's the case that the two of us just really cannot communicate in this forum. Either way, it's time to call it quits.)

EDIT: Your response to this has caused my P(you're trolling me) to rise from ~60% to ~95%.

You mean you had some even more nonsensical interpretations in mind, and chose the most charitable?
A proposed inefficiency in the Bitcoin markets

Ah, you're disagreeing with the model and phrasing it as "if that model were true, no one would sell you btc, but people are willing to sell, therefore that model is false." Do I understand?

If so, I do not agree that "if that model were true, no one would sell you btc" is a valid inference.

Essentially, the model says "there is free money lying on the ground, just picking it up is a 'guaranteed-positive-return trading strategy'." I am pointing out that free money lying on the ground is an illusion.
The mathematical universe: the map that is the territory

You have mentioned the Mathematical Universe hypothesis several times, and Tegmark's is a name very much associated with it, ...

Right, I know what the MUH is, I know who Tegmark is, I just don't recognize terms that are a combination of his name and (im)materialism. Please taboo your terms! I don't know what they mean to you!

If anything, let's call my position "non-distinctionalism"--I maintain that there's no other coherent model for the word "exist" than the one I mentioned earlier, and people who use the word "exist"…
Perhaps you could interpret their remarks according to the Principle of Charity: since their remarks are nonsense under your interpretation, they probably have a different one in mind.
A proposed inefficiency in the Bitcoin markets

What Vaniver said. Also, empirically, you can look at the current price/order book on an exchange and see that people are in fact willing to sell you these things. If my holdings represented a life-altering sum of money it would be time to take less risk and I would be one of those people.

Sigh. Again, look at the context. There is a claim, which happens to be wrong.
Double-thick transistors and other subjective phenomena

The computer doesn't hold cash (clearly), it has account # and password of a bank account (or private key to bitcoin addresses if it's a particularly risky computer). The two thin computers therefore only have half as much money to bet. (Or they'll overdraw their account.)

Sure, let's go with that.
A proposed inefficiency in the Bitcoin markets

Bitcoin is plenty liquid right now unless you're throwing around amounts > $1 mil or so.

Look at the grandparent: Given that the expected value for the change between today and tomorrow ((+250-200)/2=+25) is publicly known, I wonder who will sell him bitcoins for $1000 today. In other words, the situation as described is unstable and will not exist (or, if it will appear, it will be arbitraged away very very quickly).
The mathematical universe: the map that is the territory

Thanks for editing-- I'm still puzzled.

I also don't know what "Tegmarkian immaterialism" is and I'm not arguing for or against it. I do not know what "immaterialism" is and I'm also not arguing for or against that. (Meta: stop giving either side of my arguments names without first giving the names definitions!)

If anything, let's call my position "non-distinctionalism"--I maintain that there's no other coherent model for the word "exist" than the one I mentioned earlier, and people who use the word "exist"…

You have mentioned the Mathematical Universe hypothesis several times, and Tegmark's is a name very much associated with it, as WP states: "In physics and cosmology, the mathematical universe hypothesis (MUH), also known as the Ultimate Ensemble, is a speculative "theory of everything" (TOE) proposed by the theoretical physicist, Max Tegmark.[1]"

Your second sentence doesn't follow from your first. Someone can define "material existence" as existence in your sense, plus some additional constraint, such as the "world" in which the pattern is found being a material world. Standard arguments against MUH (etc.) are that they predict too much weirdness. But that is an argument against the truth of MUH, not for the coherence of materialism. However, you have not actually argued against the coherence of materialism. Your definition of existence doesn't require worlds to be material or immaterial, but it also doesn't require them to be neither.
The mathematical universe: the map that is the territory

I think we are having very significant communication difficulties. I am very puzzled that you think that I think that I'm arguing against MUH. I think something like MUH is likely true. I do not know what "Tegmarkian materialism" is and I'm not defending or attacking it. I also cannot make sense of some of your sentences; there seems to be some sort of editing problem.

I think you have been arguing against immaterialism, and that Tegmarkian MUH is a form of immaterialism. I have edited my previous comment.
The mathematical universe: the map that is the territory

So far, none of this tells us what immateriality is. But then it isn't easy to say what matter is either.

Yeah. There's supposedly two mysterious substances. My claim is that I can't see a reason to claim they're separate, and this thought experiment is (possibly) a demonstration that they're in fact the same thing. Then we still have one mysterious substance, and I'm not claiming to make that any less mysterious with this argument.

There are more than two options. If you had evidence of a bitstring corresponding to billions of years of biological develop…
EDITED: No, the claim of Tegmarkian immaterialism is not that there is another substance other than matter. You were previously saying that a log or record of mental-style activity was probably produced by a mind. This is an explanation of an argument that you said supports "something like the MUH". I still don't see how it does. I am also puzzled that you have been arguing against immaterialism throughout.
The mathematical universe: the map that is the territory

What would "immaterial existence" even mean?

I don't know exactly, but if "material existence" means something, so does "immaterial existence".

Hm. I don't think "material existence", if it's a thing, has a unique opposite.

I guess I'd define exists-in-the-real-world as equivalent to a theoretical predicate function that takes a model of a thing, and a digitized copy of our real world, and searches the world for patterns that are functionally isomorphic (given the physics of the world) to the model, and returns tr…

Failing to have a unique referent is not meaninglessness. That is rather beside the point, since none of that is necessarily material. Most people would interpret it as "exists, but is not made of matter". To cash that out, without contradiction, you need a notion of existence that is agnostic about materiality. You have given one above. Tegmarkians can input a maximal mathematical structure as their world, and then say that something exists if it can be pattern-matched within the structure.

So far, none of this tells us what immateriality is. But then it isn't easy to say what matter is either. For immaterialists, anything physics says about matter boils down to structures, behaviour and laws that are all mathematical, and therefore within some regions of Tegmarkia.

There are more than two options. If you had evidence of a bitstring corresponding to billions of years of biological development involving trillions of organisms--a much more complex bitstring than a mind, but not a mind--it might well be most probable to assign the production of a mind to that. I don't know if you realise it, but your argument was Paleyian.
The mathematical universe: the map that is the territory

What would "immaterial existence" even mean?

I think my claim is that the above argument shows that whatever that might be, it's equivalent to epistemological objectivism.

Specifically, to believe that they're separate, given the scenario where you simulate universes until you find a conscious mind and then construct a replica in your own universe, you have to believe both of the following at the same time:

(1) Mind X didn't have real memories/experiences until you simulated it in the "real" world (i.e., yours), and (2) proof of mind X's r…

I don't know exactly, but if "material existence" means something, so does "immaterial existence". I think your argument assumes that. You say that the simulated person must have had a pre-existence (ontology) because mathematicians agree about pi (epistemology).

Specifically, to believe that they're separate, given the scenario where you simulate universes until you find a conscious mind and then construct a replica in your own universe, you have to believe both of the following at the same time:

You seem to be assuming that if a mind has "memories", then it must have pre-existed, i.e. that the only way a mind can have "memories" at time T is by experiencing things at some previous time and recording them. Rather than assuming that there are infinite numbers of real but immaterial people floating around somewhere, I prefer to assume that "memories" are just data that don't have any intrinsic connection to prior events. I.e., a memory proper is a record of an event, but neurons can be configured as if there were a trace of an event that never happened. I don't see your point.
A proposed inefficiency in the Bitcoin markets

Perhaps this isn't obvious, but note that fully validating nodes do not need to store the entire block chain history, currently, just the set of unspent transaction outputs, and there are proposals to eliminate the need for validators to store even that. If it's just a gentleman's agreement then it would be doable but wouldn't really have any teeth against a motivated attacker.

That's a good point. To make this work, it'd probably make the most sense to treat the pre-published hash the same as unspent outputs. It can't be free to make these or you could…

There's nothing wrong with sacrificing coins (there are in fact, legitimate uses of that -- see the Identity Protocol for example). The problem is creating outputs which you know to be unspendable, but can't be proven by the deterministic algorithm the rest of the network uses (prefixing with RETURN).
A proposed inefficiency in the Bitcoin markets

Infinitely is a bit of an overstatement, especially if there's a fee to store a hash. I agree it might still be prudent to have a time limit, though. Miners can forget the hash once it's been referenced by a transaction.

The ability to pay to store a hash in the blockchain could be interesting for other reasons, like proving knowledge at a particular point in the past. There's some hacky ways to do it now that involve sending tiny amounts of BTC to a number of addresses, or I suppose you could also hash your data as if it were a pubkey and send 1e-8 btc to that hash-- but that's destructive.

If you make it part of the validation process, then every validating node needs to keep the full list of seen hashes. Nodes would never be allowed to forget hashes that they haven't seen make it onto the block chain, meaning that anyone could DoS bitcoin itself by registering endless streams of hashes.

Perhaps this isn't obvious, but note that fully validating nodes do not need to store the entire block chain history, currently, just the set of unspent transaction outputs, and there are proposals to eliminate the need for validators to store even that. If it's just a gentleman's agreement then it would be doable but wouldn't really have any teeth against a motivated attacker.

You certainly don't want to store data on the block chain by sending bitcoins to fake addresses, sacrificing coins, or other nonsense. That is inefficient and hurts the entire network, not just you. Please don't do it. There are already mechanisms to store hashes on the block chain which require no changes to the bitcoin protocol. You simply store the root hash of a Merkle list or prefix tree in the coinbase string, or a RETURN output of any transaction. An output can have zero value assigned to it, and if prefixed by the RETURN scripting opcode, is kept out of the unspent transaction output set entirely. I proposed one such structure for storing arbitrary data here. Just stick the root hash in a coinbase string or the PUSHDATA of a RETURN output, then provide the path to the transaction containing it and the path through the structure as proof.
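The commit-reveal idea running through this thread can be sketched as follows. This models the idea only, using Bitcoin-style double SHA-256; the function names and the toy "transaction" bytes are hypothetical, not part of any actual protocol.

```python
# Commit-reveal sketch: publish a hash commitment to a transaction first,
# reveal the transaction later, and let anyone check that the reveal matches
# the earlier commitment. Function names and the toy "transaction" bytes are
# hypothetical; this is not the real Bitcoin wire protocol.
import hashlib

def commit(tx_bytes: bytes) -> bytes:
    # Bitcoin-style double SHA-256 of the serialized transaction.
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).digest()

def verify(commitment: bytes, revealed_tx: bytes) -> bool:
    return commit(revealed_tx) == commitment

tx = b"spend from address A"   # stand-in for a serialized transaction
c = commit(tx)                 # this 32-byte digest is what gets published early
print(verify(c, tx))                           # True: the later reveal matches
print(verify(c, b"conflicting double-spend"))  # False
```

The commitment reveals nothing about the transaction (in particular, not the vulnerable pubkey), which is why the thread treats it as a hedge against quantum attacks on revealed keys.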
A proposed inefficiency in the Bitcoin markets

Thanks for the explanation, it seems like I'm not wildly misreading wikipedia. :)

It seems like the more qubits are required for this attack, the more likely we are to have a long warning time to prepare ourselves. The other attack of just cracking the pubkey when a transaction comes through and trying to beat the transaction, seems vastly more likely to be an actual problem.

Do you have any idea how I'd go about estimating the number of qubits required to implement just the SHA256(SHA256(...)) steps required by mining?

A proposed inefficiency in the Bitcoin markets

They don't.

Suppose you publish hash HT1 of a transaction T1 spending address A, and then several blocks later when you publish T1 itself, someone hacks your pubkey and publishes transaction T2 also spending address A. Miners would hypothetically prefer T1 to T2, because there's proof that T1 was made earlier.

In the case where someone had even earlier published hash HT0 of transaction T0 also spending address A, but never bothers to publish T0 (perhaps because their steal bot--which was watching for spends from A--crashed), well, they're out of luck, becau…

I see. I understand your proposal now at least. The downside is that it requires infinitely increasing storage for validating nodes, although you could get around that by having a hash commitment have a time limit associated with it.
A proposed inefficiency in the Bitcoin markets

I always forget about the RIPEMD-160 step. The bitcoin wiki claims that it's strictly more profitable to mine than to search for collisions, but it seems to me that that's a function of block reward vs. value of address you're trying to hack, so I don't know if I believe that.

It's unclear to me how you would actually implement this in a quantum computer; do you have to essentially build a set of quantum gates that implement RIPEMD-160(SHA256(pubkey))? Does this imply you need enough qubit…

Actually RIPEMD-160(SHA-256(pubkey(privkey))). That's a massive understatement.

Grover's algorithm can be used to reverse either RIPEMD-160 or SHA-256 with sqrt speedup. In principle it should also handle RIPEMD-160(SHA-256(x)), just with a lot more qubits. Shor's algorithm can be used to reverse the pubkey-from-privkey step. I'll hand-wave and pretend there's a way to combine the two into a single quantum computation [citation needed]. It's ... a lot of qubits. And you still need 2^80 full iterations of this algorithm, without errors, before you are 50% likely to have found a key. Which must be performed within ~10 minutes unless there is key reuse. So really it's of zero practical relevance in the pre-singularity foreseeable future.

Technically you are correct. But in no conceivable & realistic future will it ever be more profitable to search for collisions than to mine bitcoins, so for practical purposes the wiki is not wrong.
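The 2^80 figure follows from Grover's quadratic speedup; a quick check of the arithmetic for a 160-bit search space:

```python
# Grover's algorithm searches an unstructured space of size N in roughly
# sqrt(N) iterations, so a 160-bit preimage space (N = 2^160) needs about
# 2^80 quantum iterations, versus ~2^160 classical guesses.
import math

N = 2 ** 160
grover_iterations = math.isqrt(N)    # exact: the square root of an even power of two
print(grover_iterations == 2 ** 80)  # True
```

2^80 is still an astronomical iteration count, which is the comment's point about practical irrelevance.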
A proposed inefficiency in the Bitcoin markets

I think you misunderstood me-- the transaction could still be rejected when you try to get it included in a subsequent block if it's not valid. The hash of the transaction is just to prove that the transaction is the first spend from the given address; the transaction doesn't/can't get checked when the hash is included in the blockchain. Miners wouldn't be able to do it for free-- the protocol addition would be that you pay (from a quantum-safe address) to include a hash of a transaction into the blockchain. You publish the actual transaction some number o…

How does the miner know that there is no other conflicting transaction whose hash appeared earlier?
Welcome to Less Wrong! (5th thread, March 2013)

Me three-- I thought I was the only one, where are we all hiding? :)

A proposed inefficiency in the Bitcoin markets

That's true, but there's some precedent for picking a block that everyone agrees upon (that exists before quantum computers) and changing the rules at that block to prevent someone from rewriting the blockchain. A lot depends on how much warning we have.

It looks like making a cryptocoin that can survive quantum computers might be a high value activity.

A proposed inefficiency in the Bitcoin markets

I think a gigahertz quantum computer would have the hashing power of the current bitcoin network.

My math agrees with you. Looks like I was underestimating the effect of quantum computers.

Difficulty is currently growing at 30-40% per month. That won't last forever, obviously, but we can expect it to keep going up for a while, at least. Still, it looks like you'd need an unrealistic amount of ASICs to match the output of just 1000 quantum computers.

Given that, there'll probably be a large financial incentive to m…

In particular, the ability to roll back the blockchain breaks the patches you've mentioned for the fact that QC also breaks elliptic curve crypto.
A proposed inefficiency in the Bitcoin markets

This is not completely true-- since only hashes of the public key are posted until funds are spent from an address

So what? Then the attacker waits till someone spends his funds and double-spends them and gives the double-spending transaction a high processing fee.

This can be fixed by a protocol addition, which can be implemented as long as there's warning. (First publish a hash of your transaction. Once that's been included in a block, publish the transaction. Namecoin already does something like this to prevent that exact attack.)

No, that won't work. Blocks are rejected if any transaction contained within is invalid (this is required for SPV modes of operation, and so isn't a requirement that can be dropped). Therefore a miner that works on a block containing transactions he didn't personally verify can be trivially DoS'd by the competition. They would have a very large incentive not to include your transaction.