All of blogospheroid's Comments + Replies

Moloch's Toolbox (1/2)

The link is broken, I think + Didn't Alex Tabarrok do one better by creating the dominant assurance contract?

7 Sniffnoy, 4y: Blargh, thanks. Stupid pseudo-Markdown rich text editor... this thing is really awful. We really just need a straightforward comment box, ideally supporting preview for Markdown stuff, with optionally some sort of rich-text editor for those who want it; this current system is just an incomprehensible, uncontrollable mess.
MIRI's 2016 Fundraiser

Ouch! I donated $135 (and asked my employer to match as well) on Nov 2, India time. I had been on a brief vacation and just returned. Now I re-read and found it is too late for the fundraiser. Anyway, please take this as positive reinforcement for what it is worth. You're doing a good job. Take the money as part of the fundraiser or as an off-fundraiser donation, whatever is appropriate.

0 So8res, 5y: Thanks :-)
[Stub] The problem with Chesterton's Fence

This basically boils down to the root of the impulse to remove a Chesterton's fence, doesn't it?

Those who believe that these impulses come from genuinely good sources (e.g. learned university professors) like to take down those fences. Those who believe that these impulses come from bad sources (e.g. status jockeying, holiness signalling) would like to keep them.

The reactionary impulse comes from the basic idea that the practice of repeatedly taking down Chesterton's fences will inevitably auto-cannibalise and the system or the meta-system being used to de... (read more)

0 Stuart_Armstrong, 6y: Not really. A lot of fences seem to have been taken down for bad or at least objectionable reasons, and to have turned out either fine or not too bad. I'd point to a different distinction - effective fences tend to have more defenders than bad ones (on average). So by taking down a fence that's easy to take down, you're more likely to improve the situation. And what fences get taken down the most often? The easy ones. So my "argument" can say that it's ok to take down a fence, but that this might not apply to major/important ones that have remained untouched to date.
Bragging thread, December 2015

Donated $100 to SENS. Hopefully, my company matches it. Take that, aging, the killer of all!

[LINK] The Bayesian Second Law of Thermodynamics

I'm not a physicist, but aren't this and the linked Quanta article on Prof. England's work bad news (Great Filter-wise)?

If this implies self-assembly is much more common in the universe, then that makes it worse for the later proposed filters (i.e. makes them higher probability).

MIRI's 2015 Summer Fundraiser!

I donated $300 which I think my employer is expected to match. So $600 to AI value alignment here!

2 So8res, 6y: Nice. Thanks!
[link] FLI's recommended project grants for AI safety research announced

I feel for you. I agree with Salvatier's point in the linked page. Why don't you try to talk to FHI directly? They should be able to get some funding your way.

California Drought thread

Letting market prices reign everywhere while providing a universal basic income is the usual economic solution.

Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115

Guys, everyone on reddit/HPMOR seems to be talking about a spreadsheet with all solutions listed. Could anyone please post the link as a reply to this comment? Pretty please with sugar on top :)

2 9eB1, 7y: [link]
Open thread, Jan. 19 - Jan. 25, 2015

A booster for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.

To illustrate - You solve poverty, you still have to face climate change, you solve climate change, you still have to face biopathogens, you solve biopathogens, you still have to face nanotech, you solve nanotech, you still have to face SI. You solve SI correctly, the rest are all done. For people who use the cui bono argument, I think this answer is usually the best one to give.

0 JoshuaZ, 7y: This assumes that you get a very strong singularity with either a hard takeoff or a fairly fast takeoff. If someone doesn't assign that high a probability to AI engaging in recursive self-improvement, this argument will be unpersuasive.
Stupid Questions January 2015

Is anyone aware of the explanation for why technetium is radioactive while molybdenum and ruthenium, the two elements astride it in the periodic table, are perfectly stable? Searching on Google for why certain elements are radioactive gives results which are descriptive, as in X is radioactive, Y is radioactive, Z is what happens when radioactive decay occurs, etc. None seem to go into the theories which have been proposed to explain why something is radioactive.

8 RolfAndreassen, 7y: The dynamics of the strong nuclear force are not well understood when high numbers of nucleons are involved. By which I mean, we have some empirical models that kinda-sorta work for various regimes, given some tinkering with the constants, but we have no from-first-principles understanding. You by no means need to go as far as biology before you get into stuff we cannot calculate from the equations; but in this case we don't even know the equations all that well, because the strong-force constant (i.e., the equivalent of G in gravity and alpha in electromagnetism) varies drastically with the energy involved, and we don't know exactly how it varies. ("So why", you ask plaintively, "is it called a constant?" By analogy with G and alpha, which genuinely are constants so far as anyone knows.) So while nuclear dynamics are not my particular subfield of physics, I would be unsurprised to learn that the answer to your question is "N. N.'s PhD thesis, submitted 2025". One more observation: nuclear dynamics is the field in which physicists refer unironically to Magic Numbers [link]; that is, some numbers of protons and neutrons are particularly stable compared to their neighbours, and it's not quite clear why. Presumably there's some sort of symmetry involved.
2 Galap, 7y: Here's what I know about the matter: at low atomic number, isotopes that are more stable tend to be close to a 1:1 ratio of neutrons to protons. At high atomic number, this ratio approaches 3:2. I do not know why this is the case, and I believe it is not entirely understood by anyone. Also, this is not a very good predictor anyway. The real problem is that unlike electron energy levels in an atom, which are well known and easily approximable by various systems and techniques, the nuclear energy levels are not very well understood, and I think to an extent they are even difficult to measure. I believe it is known that unlike the electrons' spherical potential well, the nucleons are bound in a well that is a mixture of a spherical and cubic well, and the exact form is unknown, thus we can't predict the levels very well. I don't know why this is the case, and I believe it is not entirely understood by anyone else either. In short, I think that a good theoretical model that predicts these kinds of things has yet to come.
1 gwillen, 7y: Looking at [link], one of the patterns I see is that even numbers of protons and neutrons are systematically more stable than odd numbers. So that might answer the specific part of your question about its neighbors. (As to why even numbers, I don't know, but I bet it's related to spins.) EDIT: Apparently this is enough of a thing that it even has its own Wikipedia page. [link]
6 gedymin, 7y: The answer to the specific question about technetium is "it's complicated, and we may not know yet", according to physics Stack Exchange [link]. For the general question "why are some elements/isotopes less or more stable": generally, an isotope is more stable if it has a balanced number of protons and neutrons.
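The replies above circle around the liquid-drop picture; the textbook semi-empirical mass formula makes gwillen's even/odd observation concrete through its pairing term. A rough sketch (the coefficient values are common textbook choices, and the model is far too crude to settle the technetium question on its own):

```python
def semf_binding_energy(A, Z):
    """Total binding energy in MeV from the semi-empirical mass formula.

    A rough liquid-drop model with common textbook coefficients; real
    nuclear structure (shell effects, magic numbers) is not captured.
    """
    a_V, a_S, a_C, a_A, a_P = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z  # neutron number
    B = (a_V * A
         - a_S * A ** (2 / 3)
         - a_C * Z * (Z - 1) / A ** (1 / 3)
         - a_A * (A - 2 * Z) ** 2 / A)
    # Pairing term: even-even nuclei gain binding, odd-odd nuclei lose it.
    if Z % 2 == 0 and N % 2 == 0:
        B += a_P / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_P / A ** 0.5
    return B

# Along the A=98 isobar: Mo-98 (even-even, stable) vs Tc-98 (odd-odd, radioactive).
print(semf_binding_energy(98, 42) > semf_binding_energy(98, 43))  # True
```

Even this crude model makes the odd-odd technetium isotope less bound than its even-even molybdenum neighbour, though the full story (why every technetium isotope is unstable) needs shell-model detail it cannot supply.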
Approval-directed agents

I think this is a very important contribution. The only internal downside of this might be that the simulation of the overseer within the AI would be sentient. But if defined correctly, most of these simulations would not really be leading bad lives. The external downside is being overtaken by other goal-oriented AIs.

The thing is, I think in any design, it is impossible to tear away purpose from a lot of the subsequent design decisions. I need to think about this a little deeper.

Stupid Questions December 2014

How do they propose to move the black holes? Nothing can touch a black hole, right?

2 DanielLC, 7y: It can, as long as you don't mind that you won't get it back when you're done. You have to constantly fuel the black hole anyway. Just throw the fuel in from the opposite direction that you want the black hole to go.
6 gjm, 7y: Black holes feel gravity just like any other massive body. And they can be electrically charged. So you can move them around with strong enough gravitational and/or electric fields.
December 2014 Bragging Thread

Donated $300 to the SENS Foundation just now. My company matches donations, so hopefully a large cheque is going there. Fight Aging! is having a matching challenge for SENS, so even more moolah goes to anti-aging research. Hip hip hurray!

Open thread, Nov. 24 - Nov. 30, 2014

Weird fictional theoretical scenario. Comments solicited.

In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).

We meet an alien race, who are way more powerful than us and they show their values and see ours. We seek to cooperate on the prisoner's dilemma, but they defect. In our dying gasps, one of us asks them "We thought you were rational. WHY?..."

They reply " We follow a version of your m... (read more)

1 Document, 7y: Similar "problem"(?): Acausal trade with Azathoth [link]
8 Eliezer Yudkowsky, 7y: That's not how TDT works.

The whole scenario depends on a reification fallacy. You don't negotiate with, or engage in prediction theory games with, impersonal forces (and calling capitalism a force of nature seems a stretch to me).

Evolution is powerful, but that doesn't make it an intelligence, certainly not a superintelligence. We're not defecting against evolution, evolution just doesn't/can't play PD in the first place. But I'm also not sure how important the PD game is to this scenario, as opposed to the aliens just crushing us directly.

And as long as we're personifying evolution, an argument could be made that the triumph of human civilization would still be a win for evolution's "values", like survival and unlimited reproduction.

We follow a version of your meta-gol

... (read more)
4 Lumifer, 7y: Deification of natural forces is a standard human culture trait. A large proportion of early gods just personified natural phenomena. Shinto is a contemporary religion that still does that a lot.
Neo-reactionaries, why are you neo-reactionary?

So, is my understanding correct that your FAI is going to consider only your group/cluster's values?

0 [anonymous], 7y: Of course not.
Neo-reactionaries, why are you neo-reactionary?

Yes, that too.

Poland used a version of that when arguing with the European Union about its share in some commission, I don't remember which. It mentioned how much Poland's population might have been had they not been under attack from two fronts, the Nazis and the communists.

Neo-reactionaries, why are you neo-reactionary?

Not doing so might leave your AI vulnerable to a slower/milder version of this. Basically, if you enter a strictly egalitarian weighting, you are providing vindication to those who thoughtlessly brought children into the world and disincentivizing, in a timeless, acausal sense, those who are acting sensibly today and restricting reproduction to children they can bring up properly.

I'm not very certain of this answer, but it is my best attempt at the question.

0 [anonymous], 7y: Good grief. You know, we already have nation-states for this sort of thing. If people form coherent separate "groups", such that mixing the groups results in a zero-sum conflict over resources (including "utility function voting space"), then you just keep the groups separate in the first place. EDIT: Ah, the correct word here is clusters.
-1 Azathoth123, 7y: Not to mention those who persecuted and genocided ideological opponents.
Neo-reactionaries, why are you neo-reactionary?

I went from straight Libertarianism to Georgism to my current position of advocacy of competitive government. I believe in the right to exit and hope to work towards a world where exit gets easier and easier for larger numbers. My current anti-democratic position is informed by the amateur study of public choice theory and incentives. My formalist position is probably due to an engineering background and liking things to be clear.

When the fundamental question arises - what keeps a genuine decision maker, a judge or a bureaucrat in government (of a polity ... (read more)

Fixing Moral Hazards In Business Science

I have been thinking of a lot of incentivized networks and was almost coming to the same conclusion, that the extra cost and the questionable legality in certain jurisdictions may not be worth the payoff, and then the Nielsen scandal showed up in my newsfeed. I think there is a niche, just not sure where it would be most profitable. Incidentally, Steve Waldman also had a recent post on this - social science data being maintained in a neutral blockchain.

About the shipping of products and placebos to people, I see a physical way of doing it, but it is defin... (read more)

1 DavidLS, 7y: Your approach to blinding makes sense, and works. I thought we were trying for a zero-third-party approach though? I was giving more thought to a distributed solution during dinner, and I think I see how to solve the physical shipments problem in a scalable way. I'm still not 100% sold on it, but consider these two options:

* You ship both a placebo package and a non-placebo package to the participant, and have them flip a coin to decide which one to use. They either throw away or disregard the other package for the duration of the study.
* You ship N packages to Total/N participants. Each participant who receives N packages then randomly assigns himself a package, and randomly distributes the remaining (N-1) packages to other participants.

Both options require trusting the participant with assignment. Which feels wrong to me, but I'm not sure why...
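The second option can be sanity-checked with a toy simulation (the participant counts, the 50/50 treatment split, and the 'T'/'P' labels are illustrative assumptions, not part of the original proposal):

```python
import random

def distribute(total, n, seed=0):
    """Toy model of the second option: ship bundles of n packages to
    total/n seed participants; each seed keeps one package at random
    and forwards the remaining n-1 to other participants.
    Returns one package label per participant ('T' treatment, 'P' placebo)."""
    rng = random.Random(seed)
    packages = ["T"] * (total // 2) + ["P"] * (total - total // 2)
    rng.shuffle(packages)  # the shipper never learns who gets which label
    assignment = []
    for i in range(0, total, n):
        bundle = packages[i:i + n]
        rng.shuffle(bundle)            # seed participant picks one at random...
        assignment.append(bundle[0])
        assignment.extend(bundle[1:])  # ...and redistributes the rest
    return assignment

result = distribute(total=60, n=6)
print(len(result), result.count("T"), result.count("P"))  # 60 30 30
```

Every participant ends up with exactly one package and the overall treatment/placebo split is preserved; what the simulation cannot capture is the trust problem DavidLS flags, since the seed participants could cheat on the shuffle.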
Fixing Moral Hazards In Business Science

Hi David,

This is a worthwhile initiative. All the very best to you.

I would advise that this data be maintained on a blockchain-like data structure. It will be highly redundant and very difficult to corrupt, which I think is one of the primary concerns here.

6 DavidLS, 7y: Interesting. I'm hoping that by getting a trustworthy non-profit to host the site (and paying for a security audit) we can largely sidestep the issues. I spent a long time trying to create a way not to need the trusted third party, but I kept hitting dead ends. The specific dead end that hurt the most was blinding of physical product shipments. If we can figure out a way to ship both products and placebos to people without knowing who's getting what, I think we can do this :)
Contrarian LW views and their economic implications

Yes, I think so. Something I won't be able to do as a non-US investor.

Contrarian LW views and their economic implications

Invest in Quixey when they go in for the next round of funding, perhaps.

2 Larks, 7y: Interesting idea. Presumably one would have to be an accredited investor to do so?
A possible tax efficient swap mechanism for charity

Thanks, Toby. I expected that the legal risks would be quite an issue. Point noted. I had not expected this to be a new idea either; after all, it seemed too simple. I guess a more informal means is good for now. I hope the EA forum has a place for this discussion.

A possible tax efficient swap mechanism for charity

I think most charities are tax-deductible only in their own countries. Oxford's cross-country deductibility is more the exception than the rule. To be specific, I'll not get a tax deduction in India if I contribute to FHI. But if I swap with an Englishman who wanted to contribute to the Ramakrishna Mission or Child Relief and You (Indian charities), then we both benefit.

I agree on potential regulatory issues. That's why I wanted more opinions.

The Great Filter is early, or AI is hard

I'd like to repeat the comment I had made at "outside in" for the same topic, the great filter.

I think our knowledge of all levels – physics, chemistry, biology, praxeology, sociology – is nowhere near the level where we should be worrying too much about the Fermi paradox.

Our physics has openly acknowledged broad gaps in our knowledge by postulating dark matter, dark energy, and a bunch of stuff that is filler for – "I don’t know". We don't have physics theories that explain the smallest to the largest.

Coming to chemistry and biology, w... (read more)

5 MugaSofer, 7y: If you aren't sure about something, you can't just throw up your hands, say "well, we can't be sure", and then behave as if the answer you like best is true. We have math for calculating these things, based on the probability that different options are true. For example, we don't know for sure how abiogenesis works, as you correctly note. Thus, we can't be sure how rare it ought to be on Earthlike planets - it might require a truly staggering coincidence, and we would never know, for anthropic reasons. But, in fact, we can reason about this uncertainty - we can't get rid of it, but we can quantify it to a degree. We know how soon life appeared after conditions became suitable. So we can consider what kind of frequency that would imply for abiogenesis given Earthlike conditions and anthropic effects. This doesn't give us any new information - we still don't know how abiogenesis works - but it does give us a rough idea of how likely it is to be nigh-impossible, or near-certain. Similarly, we can take the evidence we do have about the likelihood of Earthlike planets forming, the number of nearby stars they might form around, the likely instrumental goals most intelligent minds will have, the tools they will probably have available to them ... and so on. We can't be sure about any of these things - no, not even the number of stars! - but we do have some evidence. We can calculate how likely that evidence would be to show up given the different possibilities. And so, putting it all together, we can put ballpark numbers to the odds of these events - "there is an X% chance that we should have been contacted", given the evidence we have now. And then - making sure to update on all the evidence available, and recalculate as new evidence is found - we can work out the implications.
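The update MugaSofer describes can be sketched numerically; the hypotheses and likelihoods below are made-up illustrative numbers, not real estimates, and the sketch deliberately omits the anthropic correction the comment mentions:

```python
# Two made-up hypotheses about abiogenesis, updated on the observation
# that life on Earth appeared soon after conditions became suitable.
prior = {"easy": 0.5, "hard": 0.5}
likelihood = {               # P(life appears this early | hypothesis) -- illustrative
    "easy": 0.5,             # if abiogenesis is easy, an early start is unsurprising
    "hard": 1e-3,            # if it is nigh-impossible, an early start is a fluke
}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior["easy"])  # ~0.998: "easy" dominates, before anthropic corrections
```

The point is only the mechanics: even without knowing how abiogenesis works, evidence about its timing moves the odds, and anthropic selection effects would then discount some of that update.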
Open thread, July 21-27, 2014

If a storm like the one described in the link had actually hit, then would people really be concerned with these fine differences?

0 ChristianKl, 7y: I don't see how a good time for partying and apocalypse are only distinguished by a fine difference. Anyone who would put serious thought and effort into reading and understanding ancient prophecies certainly would be concerned about the difference.
Open thread, July 21-27, 2014

This just showed up in my Google Reader.

My immediate thought was about this storm actually hitting in 2012. The Mayan apocalypse was predicted for that year. The civilizational challenge to rebuild would have been substantial. But even more, the epistemic state of the civilization that recovered would have been almost permanently compromised. It would appear to most people that an ancient prophecy of a civilization that was brutally crushed was actually true.

What w... (read more)

3 ChristianKl, 7y: The "Mayan apocalypse" isn't an ancient prophecy. From Wikipedia [link]:
Calling all MIRI supporters for unique May 6 giving opportunity!

Gave 3 small $10 donations over the last 3 hrs.

Weird question - why is MIRI classified as a >$2M charity? Did it actually pull in that much last year? I'm, for some reason, not able to open and check it myself.

4 raisin, 8y: For the year 2012, total revenue was $1,633,946. [link] The financials for 2013 don't seem to be available, but probably it was even higher then.
[LINK] Joseph Bottum on Politics as the Mindkiller

The points he makes would be familiar to those who've read Moldbug.

AALWA: Ask any LessWronger anything

Haven't read your book so not sure if you have already answered this.

What is your assessment of MIRI's current opinion that increasing the global economic growth rate is a source of existential risk?

How much risk is increased for what increase in growth?

Are there safe paths? (Maybe catch-up growth in India and China is safe?)

3 James_Miller, 8y: Greater economic growth means more money for AI research from companies and governments, and if you think that AI will probably go wrong then this is a source of trouble. But there are benefits as well, including increased charitable contributions for organizations that reduce existential risk and better educational systems in India and China which might produce people who end up helping MIRI. Overall, I'm not sure how this nets out. Catch-up growth is not necessarily safe because it will increase the demand for products that use AI and so increase the amount of resources companies such as Google devote to AI. The only safe path is someone developing a mathematically sound theory of friendly AI, but this will be easier if we get (probably via China) intelligence enhancement with eugenics.
[LINK] Why I'm not on the Rationalist Masterlist

I agree with Romeo Stevens's comment that the issues seem orthogonal. As an example (caveat: YMMV), Steve Sailer believes in HBD. However, he frequently cites lower growth in African-American wages as a reason to shut the American borders to low-skilled workers.

However, in today's environment, I'm not sure how many top-rated charities are HBD-believing. A neoreactionary charity aiming at improving Africa might do many things differently. And being a relatively new ideology, most policies would not have substantial support of data. Hence, at least in the current scenario, you would not find many people who were HBD-aware and contributed greatly to African charities. However, it is not intellectually inconsistent.

MIRI's Winter 2013 Matching Challenge

Paid $300 with my employer matching it, but the employer's contribution may come in only around Jan 15. Hope that isn't too late.

3 lukeprog, 8y: No, that'll count just fine. Thanks!
AI Policy?

David Brin believes that high-speed trading bots are a high-probability route to human-indifferent AI. If you agree with him, then laws governing the usage of high-speed trading algorithms could be useful. There is a downside in terms of stock liquidity, but how much that will affect overall economic growth is still a research area.

2 hylleddin, 8y: I see trading bots as a not-unlikely source of human-indifferent AI, but I don't see how a transaction tax would help. Penalizing high-frequency traders just incentivizes smarter trades over faster trades.
4 ChrisHallquist, 8y: This seems unlikely. How do the stock-trading bots make the jump to being good at anything other than trading stock? Maybe if it spurs a bunch of investment in natural-language processing so the bots can read written-for-humans information on the companies they're buying and selling stock in, but unless that ends up being a huge percentage of the investment in NLP, it would probably make more sense to worry about NLP directly.
4 Jayson_Virissimo, 8y: Does Brin give an argument for that? That significantly conflicts with my priors.
3 Lumifer, 8y: That seems highly unlikely to me.
5 [anonymous], 8y: I found at least one article relating to this from Brin. [link] In my opinion, here are two of the notable pieces of evidence he advances in that article: 1. The people involved are already spending billions of dollars on ways to get information processing slightly faster. 2. The people involved don't significantly value ethics or tight control.
AI ebook cover design brainstorming

Not exactly a march-of-progress line, but something like a chimp and Einstein on one corner and a server rack on the far end. Similar to the line diagram used in the Sequences to illustrate how much of a difference we're looking at. We are looking at appealing to numerate people, so it should not be overkill to have a graph.

Thought experiment: The transhuman pedophile

Ah.. Now you understand the frustrations of a typical Hindu who believes in re-incarnation. ;)

Help us name a short primer on AI risk!

Flash Crash of the Universe: The Perils of designed general intelligence

The flash crash is a computer triggered event. The knowledgeable amongst us know about it. It indicates the kind of risks expected. Just my 2 cents.

My second thought is way more LW specific. Maybe it could be a chapter title.

You are made of atoms: The risks of not seeing the world from the viewpoint of an AI

Open thread, August 26 - September 1, 2013

I seek help on a problem that I stumbled upon when thinking about a rational teleporter's story.

As typical of such protagonists, he finds that he can teleport and teleport a human's mass sideways with him, seemingly unharmed. As befits a rational protagonist, he experiments and finds out that he can teleport animals and after a demonstration to a very reluctant brother, he realises that he can teleport a human being, unharmed. After a crazy week of teleporting, he realises that he needs approximately 3 minutes to recuperate after a teleport to really do th... (read more)

0 ChristianKl, 8y: I think the correct solution probably involves hiring secretaries to manage the whole affair instead of bidding on a website.
3 Unnamed, 8y: One option is to have clients enter their source, destination, bid, and the range of times when they'd be willing to travel, and to have an algorithm for selecting the highest-revenue path (which won't always mean accepting the highest bid, if slightly lower bids can chain together destinations and sources). (There could also be a secondary market to fill any otherwise-empty legs in this path.) The operations problem of creating this algorithm seems related to the traveling salesman problem (it's a different problem, but may involve similar math). But I expect that, to maximize revenue, satisfying the highest-paying clients will be more important than efficiency in maximizing the number of paying clients per hour. Prices of trips are likely to vary by more than an order of magnitude, and trips with a fixed source, destination, and time would tend to get lower bids. He might even want to give some of these otherwise-empty trips away for free (e.g., as "upgrades" to airline passengers who were scheduled on that route, in order to build goodwill with airlines who are now sharing the airport with him, and who may be able to do him favors like transporting passengers who he has to cancel on). There are complications in terms of the timing of booking (regardless of whether there are secondary auctions), because some high-paying clients will want to book at the last minute and get a trip ASAP, while others will want to book in advance and have a guarantee that the trip will go as scheduled. So the revenue-maximizing strategy would probably involve a mix of trips booked in advance and trips booked closer to the departure time, with some preference for clients who are more flexible about timing or cancellations.
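The chaining idea in that first sentence can be sketched as a greedy heuristic (the city names and bid values are invented, and a serious version would look more like traveling-salesman-style optimization than this single greedy pass):

```python
def schedule(bids):
    """Greedy sketch of bid selection with chaining: always take the most
    valuable bid available, but prefer bids departing from wherever the
    teleporter just arrived, so no empty repositioning leg is needed.
    bids: list of (source, destination, value). Returns (order, revenue)."""
    remaining = sorted(bids, key=lambda b: b[2], reverse=True)
    order, revenue, here = [], 0, None
    while remaining:
        chained = [b for b in remaining if b[0] == here]
        pick = max(chained, key=lambda b: b[2]) if chained else remaining[0]
        remaining.remove(pick)
        order.append(pick)
        revenue += pick[2]
        here = pick[1]
    return order, revenue

trips, total = schedule([("NYC", "LON", 900), ("LON", "TYO", 700),
                         ("SFO", "NYC", 800), ("LON", "SFO", 400)])
print(total)  # 2800; the LON->TYO leg chains directly after NYC->LON
```

Greedy chaining captures the "slightly lower bids can link up" intuition but can miss the globally best path, which is why the comment reaches for TSP-like math.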
Open thread, August 26 - September 1, 2013

I'm not sure if you have already tested for this. Please have the test for hyperthyroidism done. My wife had a problem with a finger ache and after many tests, we eventually zeroed in on hyperthyroidism.

Do Earths with slower economic growth have a better chance at FAI?

I'm not sure that humane values would survive in a world that rewards cooperation weakly. Azathoth grinds slow, but grinds fine.

To oversimplify, there seem to be two main factors that increase cooperation, two basic foundations for law: religion and economic growth. Of these, religion seems to be far more prone to volatility. It is possible to get some marginally more intelligent people to point out the absurdity of the entire doctrine, and along with the religion, all the other societal values collapse.

Economic growth seems to be a far more promising foundatio... (read more)

Earning to Give vs. Altruistic Career Choice Revisited

If development of newer institutions is what you are interested in, you can choose to contribute to charter cities or seasteading. That would be an intermediate risk-reward option between a low-risk option like AMF and a high-risk, high-reward one like MIRI/FHI.

Mathematicians and the Prevention of Recessions

I had thought of another way that mathematicians could contribute to global welfare using mostly math skills.

A lot of newcomers to Bitcoin often mention that the calculations look wasted to them. Those who have read about Bitcoin know that this is not true, as the calculations are used to secure the network. The calculations really don't have other uses, though.

The essence of a Bitcoin-like problem is: tough to crack, but easy to verify once the solution is in. A talented mathematician/chemist team could team up to try to map protein folding or... (read more)
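The "tough to crack, easy to verify" asymmetry is exactly what a hashcash-style proof of work provides; a toy sketch (counting leading zero hex digits is a simplification of Bitcoin's actual difficulty-target mechanism):

```python
import hashlib

def mine(data, difficulty):
    """Search for a nonce whose SHA-256 hash of data:nonce starts with
    `difficulty` leading zero hex digits -- expensive to find."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

def verify(data, nonce, difficulty):
    """Checking a claimed solution takes a single hash -- cheap to verify."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block payload", difficulty=4)  # ~16^4 hash attempts on average
print(verify("block payload", nonce, 4))  # True, after just one hash
```

The open question in the comment is whether the expensive search can be replaced by something with independent scientific value (like a protein-folding instance) while keeping the one-hash-cheap verification.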

[LINK] Evidence-based giving by Laura and John Arnold Foundation

For 'evidence-based giving', GiveWell doesn't show up in the first two pages of Google results, but it does show up on the first page for the term 'evidence-based philanthropy'.

Bitcoins are not digital greenbacks

Right now, the best velocity measure seems to be coin days destroyed. But it is gameable. It is not being gamed in Bitcoin because nothing depends on it.

The closest GDP measure in a cryptocurrency with the structure of Bitcoin seems to be the sum of transaction fees. It can be gamed by early adopters, but that is true of almost every measure.
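Coin days destroyed is simple to compute from spent transaction inputs; a minimal sketch (the input format is invented for illustration):

```python
def coin_days_destroyed(inputs, now):
    """Sum over spent inputs of value * days since those coins last moved.
    inputs: list of (value_btc, day_last_moved); `now` is the current day.
    Old coins moving destroy many coin-days; rapidly churned coins barely any."""
    return sum(value * (now - last_moved) for value, last_moved in inputs)

# 10 BTC idle for 100 days vs 100 BTC that last moved yesterday:
old_coins = coin_days_destroyed([(10, 0)], now=100)    # 1000 coin-days
churned = coin_days_destroyed([(100, 99)], now=100)    # 100 coin-days
print(old_coins, churned)
```

This is why the measure damps the self-churn that inflates raw transaction volume, and also why gaming it requires patiently aging coins first.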

A Rational Altruist Punch in The Stomach

I guess after a point, the network takes care of itself, with self interest guiding the activities of participants. Of course, I could be wrong.

A Rational Altruist Punch in The Stomach

I agree to a certain extent. I just pointed out one thing, probably the only thing, that is fairly immune from the law, is expected to last fairly long, and rewards its participants.

I did mention, something like a blockchain, a peer to peer network that rewards its participants. Contrarians and even reactionaries can use something like this to preserve and persist their values across time.

A Rational Altruist Punch in The Stomach

The bitcoin blockchain looks like it will almost last forever, since there are many fanatics that would keep the flame lit even if there was a severe crackdown.

So, an answer for the extreme rational altruist seems to lie in how to encode the values of their trust in something like a bitcoin blockchain, a peer to peer network that rewards participants in some manner, giving them the motive to keep the network alive.

The bitcoin blockchain looks like it will almost last forever, since there are many fanatics that would keep the flame lit even if there was a severe crackdown.

That seems highly unlikely, unless you actually meant something like 'the successors to the bitcoin blockchain'. We already know that quantum computing is going to lop off a large fraction of the security in the hashes used in Bitcoin, and no cryptographic hash has so far lasted even a century.

5 Randy_M, 9y: Immortal fanatics? Or fanatics very good at inspiring equal zeal in future generations?