All of Plasmon's Comments + Replies

EMdrive paper published, nearly identical to leaked draft.

It remains extremely likely that the em drive will turn out to either not work or work only by known-but-improperly-accounted-for physics (see also: Pioneer anomaly).

MIRI's 2016 Fundraiser

Ah yes, pausing ghostery seems to fix it.

MIRI's 2016 Fundraiser

Clicking the "Donate now" button under "PayPal or Credit Card" does not seem to do anything other than refresh the page.

(browser: Firefox 48.0, OS: Ubuntu)

4So8res5yHuh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.
[Link] Op-Ed on Brussels Attacks

We know that whatever they did led to Paris and Brussels

Correlation / Causation?

2Lumifer6yEvidence. But do note that the OP explicitly asserts causation between police presence and radicalization.
Lesswrong 2016 Survey

I have taken the survey.

As I understand it, "This painting is beautiful" is completely equivalent to "I like (the visual aspects of) this painting".

Definitional arguments are not useful. Even using your interpretation, the point stands: the statement, properly understood, is empirical truth.

Why bring the brain into it?

No particular reason.

"This painting is beautiful" is a statement about the reaction of the speaker

That is what I mean, yes.

Or, paralleling Good_Burning_Plastic, a statement about the reaction of people generally

Whether we define beauty to be the reaction of the speaker, or the reaction of the majority of a certain group of people that are similar to the speaker, is not relevant: in both cases "This painting is beautiful" becomes an empirical truth instead of an "affective" truth.

No. "This painting is round" is a statement about the properties of the painting itself, independent of any observer. "This painting is beautiful" is a statement about the reaction of the speaker's brain upon seeing the painting. The syntactical similarity between those different kinds of statements in English (and all other natural languages that I know of) is unfortunate to say the least.

2RichardKennaway6yI was trying to get at the unnecessary turn into neuroscience. Why bring the brain into it? Why not say that "This painting is beautiful" is a statement about the reaction of the speaker? Or, paralleling Good_Burning_Plastic, a statement about the reaction of people generally (at least those raised in etc.)?
1Good_Burning_Plastic6yWell, not only the speaker, otherwise it'd be completely equivalent to "I like this painting" which it isn't. It is a claim about ambijective features of the painting -- more or less "this painting has certain features such that brains (at least those raised in cultural contexts similar to us) typically produce pleasant reactions".

I do not think we should dilute the meaning of the word "truth" like this.

If I say "This painting is beautiful", I mean "my brain produces a pleasant reaction upon seeing this painting". The latter sentence is empirical truth. See also 2-Place and 1-Place Words.

"This place feel right to me" -- true! Affectively true.

Also empirically true!

Shakespeare is truth

If by this, you mean "I like Shakespeare's writing" (an empirical truth), just say so.

2RichardKennaway6yIf I say "This painting is round", do I mean "my brain produces a sensation of roundness upon seeing this painting"?
Crazy Ideas Thread

All advertising for a product should be produced by a competitor of the company that makes the product. This should be required by law.

Truth in advertising laws should keep the thus-produced advertising more or less factual. It would be much less annoying and manipulative than current advertising.

0polymathwannabe7yI had a horrific vision of Terry Crews re-doing the Po-Po-Power ad campaign, this time for Axe. Please, no.
Debunking Fallacies in the Theory of AI Motivation

However, arguments such as "you can't exactly specify what you want it to do, so it might blackmail the president into building a road in order to reduce the map distance"

The reason that such arguments do not work is that you can specify exactly what it is you want to do, and the programmers did specify exactly that.

In more complex cases, where the programmers are unable to specify exactly what they want, you do get unexpected results that can be thought of as "the program wasn't optimizing what the programmers thought it should be optimi... (read more)

The reason that such arguments do not work is that you can specify exactly what it is you want to do, and the programmers did specify exactly that.

Which is part of my point. Because you can specify exactly what you want--and because you can't for the kinds of utility functions that are usually discussed on LW--describing it as having a utility function is technically true, but is misleading because the things you say about those other utility functions won't carry over to it. Yeah, just because the programmer didn't explicitly code a utility function d... (read more)

Debunking Fallacies in the Theory of AI Motivation

me and a pretty large community of real AI builders who consider a utility-function-based goal stack to be so unworkable that it will never be used in any real AI.

Just because the programmer doesn't explicitly code a utility function does not mean that there is no utility function. It just means that they don't know what the utility function is.

3Good_Burning_Plastic7yWhy would you expect an AI to obey the vNM axioms at all, unless it was designed to?
3[anonymous]7yNot true, except in a trivial sense.
7Jiro7yAlthough technically any AI has a utility function, the usual arguments about the failings of utility functions don't apply to unusual utility functions like the type that may be more easily described using other paradigms. For instance, Google Maps can be thought of as having a utility function: it gains higher utility the shorter the distance is on the map. However, arguments such as "you can't exactly specify what you want it to do, so it might blackmail the president into building a road in order to reduce the map distance" aren't going to work, because you can program Google Maps in such a way that it never does that sort of thing.
Language Learning and the Dark Arts.


I like the minimalist UI - compare with far more cluttered sites like memrise and fluentu.

Some famous scientists who believed in a god

you mustn't make a religious belief into a premise for science

I strongly disagree. If religion were true, that would be exactly what you should do.

Of course you can't mix up scientific work with religion.


That statement is widely accepted today, but it is only widely accepted because virtually all attempts to do so have failed.

What happened is the following: people did try to base science on religion, they did make interesting predictions based on religious hypotheses. By elementary Bayesian reasoning, if an observation would be evidence for a r... (read more)

-1[anonymous]7yThat kind of religion is quite alien to me so I can't say. I think we would have separate systems today if such discoveries had been made. A couple of centuries ago people explained different phenomena with different systems. Some phenomena used Aristotle's teachings, some used mechanics (as taught by Archimedes) and some used magic as a model. I view religion as dealing with what is currently, at least partly, beyond the realms of experimental science. For example, concepts like love, goodness and evil are concepts that religions offer to explain. Science doesn't have many theories concerning these concepts that are widely spread and accepted. We could use religious beliefs as premises, but since we can't prove these premises yet, we can't use them.
Some famous scientists who believed in a god

If you can only think of Francis Collins

I did say the only relatively well-known one, not the only one. Would you prefer if I used as an example Frank Tipler or Immanuel Velikovsky, both of whom make up exceedingly implausible hypotheses to fit their religious worldview, and are widely considered pseudoscientists because of that? Or Marcus Ross, who misrepresented his views on the age of the earth in order to get a paleontology PhD?

No, today's good theistic scientists, to the extent that they still exist, are precisely those who have stopped taking reli... (read more)

-2[anonymous]7yThat is extremely obvious, and about the first thing I said in this article is that you mustn't make a religious belief into a premise for science. Of course you can't mix up scientific work with religion.
Some famous scientists who believed in a god

Why did you mention him then?

He is the only well-known example of a modern theistic scientist that I can think of.

Why not mention Erwin Schrödinger or Heisenberg for example?

Both are dead, and I am not familiar with their thoughts on religion.

I looked up Schrödinger on Wikipedia, and there it is: "Despite being raised in a religious household, he called himself an atheist."

0[anonymous]7yHe was agnostic for most of his life. But you are right that at one point in his life he openly declared himself an atheist. I remembered wrong. Heisenberg on the other hand was openly a theist. If you can only think of Francis Collins, maybe you shouldn't base all your beliefs on just one person? Wikipedia on Schrödinger:
Some famous scientists who believed in a god

obscure hobby quite secret and separated from his scientific work

To someone who truly takes a certain religion seriously as a scientific hypothesis, attempting to extract non-obvious information from that religion's holy book is scientific work! The book was supposedly written by, or inspired by, an omnipotent being. How could they not expect to find important clues in there?

What do you base that last sentence on?

The complete and utter lack of modern theistic scientists looking for a soul-body communication organ, to name just one example.

There ma

... (read more)
0[anonymous]7yWhy did you mention him then? Why not mention Erwin Schrödinger or Heisenberg for example?
Some famous scientists who believed in a god

Besides, not all old theistic scientists based their science on religious premises.

Very true. They would hardly have made much progress if they did!

Many tried to do it though. Another example is Isaac Newton, who tried to extract scientific information from the bible.

My point here is not that their conclusions were wrong, but that their attitude towards religion was a scientific one, an attitude rarely seen in today's theists.

-1[anonymous]7yI don't fully agree though. There are many scientists today who are also theists. What do you base that last sentence on? As for Newton trying to extract information from the bible, as far as I remember this was in his senior years when he started with numerology. He kept this obscure hobby quite secret and separated from his scientific work as far as I know.
Some famous scientists who believed in a god

I see you mention Rene Descartes. He believed in the existence of souls, and, taking that hypothesis seriously, he concluded there has to be a way for the soul to send signals to the body. He went looking for an organ that might fulfil this purpose, and concluded that it is the pineal gland.

This conclusion is false, the true function of the pineal gland is known today, but it illustrates a point: the old theistic scientists tended to take religion seriously, they viewed it as a valid scientific hypothesis whose implications in the real world could be stud... (read more)

0[anonymous]7yYou are right :) I was amused when I first heard about his theories regarding the soul. We have to remember though, that he was a philosopher and not a real doctor or a scientist in the modern sense. Besides, not all old theistic scientists based their science on religious premises.
[FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122

The fact that quantum mechanics conserves energy is stronger evidence for the hypothesis that reality conserves energy than the fact that classical mechanics conserves energy. He is saying "our best model of reality conserves energy" which is very relevant.

0TobyBartels7yIf quantum mechanics allowed for small violations of energy conservation (which sometimes people even say that it does, on short time periods, although this is not really correct), then McGonagall's tranformation would still violate physical law as we know it. In physics, you don't always push everything down to the most fundamental theory, which is a good thing, since we don't actually have a most fundamental theory of physics. There is no such thing as ‘our best [single] model of reality’; there are some ways in which our quantum models are (so far) worse than our classical ones.
Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113

I make the following prediction : the transfiguration exercise of ch. 104 foreshadows the possibility of safely transfiguring a certain kind of explosive, that relies on containing several components that will explode upon contact. The ch. 104 exercise tells us that containment chambers can be formed first, and their contents afterwards, such that the bomb will not accidentally explode during transfiguration.

Open thread, Feb. 23 - Mar. 1, 2015

binding constraint that you absolutely must make sure he doesn't say certain words at school

What would happen if he did say those words at school? Would they expel him? Does he know what the consequences of saying those words at school are, and does he think these consequences are insufficiently bad to act as an effective deterrent?

The morality of disclosing salary requirements

There are other options. Especially in cases where requiring this is illegal, forging such a proof may be an option. Or, answer "If I give you proof that my previous salary was X, I will precommit to only accept this job if you pay at least X + 20%".

2Viliam_Bur7yOn one occasion I said my previous salary, but I also said that I believed I was seriously underpaid at my previous job, so I wanted X + 100%. In the end I received X + 70%. I had very good references though -- my former student was a developer at that company, so when the boss invited him to the interview, he said "well, he taught me what I know" (which wasn't literally true, but made a good impression).
Open thread, Jan. 26 - Feb. 1, 2015

Why would anyone be able to sell an item with a given pricing scheme like 1/n?

On downloaded, digital goods, this would be simple.

If their competitor is undercutting them on the first item, they'll never get a chance to sell the latter ones. And besides there's no reason such a scheme would be profit-maximizing.

Please see the numerical example in this comment

Open thread, Jan. 26 - Feb. 1, 2015

I imagine the following:

Suppose 2 movies have been produced, movie A by company A and movie B by company B. Suppose further that these movies target the same audience and are fungible, at least according to a large fraction of the audience. Both movies cost 500 000 dollars to make.

Company A sells tickets for 10 dollars each, and hopes to get at least 100 000 customers in the first week, thereby getting 1 000 000 dollars, thus making a net gain of 500 000 dollars.

Company B precommits to selling tickets priced as 10 f(n) dollars, with f(n) defined as 1 / ( 1 ... (read more)

2Punoxysm7yIf movie A sells for 9 dollars, people able to do a side-by-side comparison will never purchase movie B. Movie A will accrue 1.8 million dollars. I don't see what sublinear pricing has to do with it unless the audience is directly engaging in some collective buying scheme.
4Nornagest7yThe movie industry actually does this, more or less. It's not a monotonic function, which makes analysis of it mathematically messy, but it's common (albeit less common now than twenty years ago) for films to be screened for a while in cheaper second-run theaters after their first, full-priced run; and then they go to video-on-demand services and DVD, which are cheaper still. Wouldn't surprise me if similar things happened with 3D projectors and other value-added bells and whistles, but I don't have any hard data.
Open thread, Jan. 26 - Feb. 1, 2015

Sublinear pricing.

Many products are being sold that have substantial total production costs but very small marginal production costs, e.g. virtually all forms of digital entertainment, software, books (especially digital ones) etc.

Sellers of these products could set the product price such that the price for the (n+1)th instance of the product sold is cheaper than the price for the (n)th instance of the product sold.

They could choose a convergent series such that the total gains converge as the number of products sold grows large (e.g. price for nth item = ... (read more)
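The convergence claim lends itself to a quick numerical sketch. The exact f(n) in the comment is cut off above, so purely as an illustrative assumption take f(n) = 1/n², under which total revenue is bounded by base · π²/6 no matter how many copies sell:

```python
import math

BASE = 10.0  # price of the first copy, in dollars (assumed for illustration)

def price(n, base=BASE):
    """Price of the nth copy under the assumed schedule f(n) = 1/n^2."""
    return base / n**2

def total_revenue(copies, base=BASE):
    """Revenue from selling the first `copies` copies."""
    return sum(price(n, base) for n in range(1, copies + 1))

# Revenue grows with sales but stays below base * pi^2 / 6, about 16.45 dollars.
for copies in (10, 1_000, 100_000):
    print(copies, round(total_revenue(copies), 4))
```

Any summable series would do; the point is only that the seller's maximum possible revenue is fixed in advance, so the scale of the schedule has to be chosen above the total production cost.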

0DanielLC7yIf they have less gains, then in what way are they outcompeting other sellers? If they want to sell the most copies, they should just give them away, or better yet, pay people to take them. [] is essentially trying to solve the same problem in a different way. Instead of changing the product price as more units are sold, they ask folks to finance its fixed component directly, using a game-theoretic mechanism that increases total contributions superlinearly as more people choose to contribute. (This boosts the effectiveness of any single user's contributions through a "matching" effect). However, there is no distinction between "earlier" vs. "later" contributors; they're all treated the same. The underlying goal is to generalize the successful assurance-contract mechanism to goods and services that do not have a well-defined 'threshold' of feasibility, especially services that must be funded continuously over time.
4Punoxysm7yI don't get what you're getting at. Pricing is a well-studied area. Price discrimination based on time and exclusivity of 'first editions' and the like is possible, but highly dependent on the market. Why would anyone be able to sell an item with a given pricing scheme like 1/n? If their competitor is undercutting them on the first item, they'll never get a chance to sell the latter ones. And besides there's no reason such a scheme would be profit-maximizing.
8passive_fist7yA psychological effect could be at play. If you pay $10 for a product and this causes the next person to pay $9 for it, it's an incentive against being the first to buy it. You would wait until others have bought it before buying. Or you might think the product is being priced unfairly and refuse to buy at all. It seems that to counter this, you'd need another psychological effect to compensate. Like, for instance, offering the first set of buyers 'freebies' that actually have zero or near-zero cost (like 'the first 1000 people get to enter a prize-giving draw!')
2Slider7yOne could note that natural markets are going in this direction. For example Steam pretty reliably has games appear on sale a year or two after their release. Savvy consumers already know to wait if they can. This can get so bad that early access games hit sales before they are released! I tried to bring this topic up at a LessWrong meeting; I have been calling my thoughts on this direction "contributionism". There are some additional, even more radical suggestions. Instead of treating each new sale as a lesser amount, retroactively lower the price for purchases that have already happened (I am pretty sure the buyers don't mind). Otherwise there is this contention that if two customers are about to buy the product, each tries to make the other guy buy first so they get the cheaper price (which leads to a Mexican standoff that chills selling). Also, normally when a seller and a customer are negotiating a price, the seller wants to make it high and the buyer wants it to go low. However if the seller fixes the total amount of money he wants from all of his products, then the price negotiation is only about whether the buyer wants to opt in now, while it is higher, or later, when it is lower. And if the price retroactively changes you are "ultimately" going to be spending the same amount of money. If you attach your money early you get earlier access and run the risk that the product never hits high sales numbers (i.e. that you do not get any returns on it). However the more people attach money, the more the instant price lowers and the more money is prone to flow in. This can also be leveraged to overcome a coordination problem. Even if the current instant price is too much for you, the seller can ask how much you would honestly be willing to pay for it. (Answering this question too high will not cost you (too much) money.) Then the next customer who doesn't quite have enough buying willingness might still promise the same sum.
At some point that enough promisers have v
4Nornagest7yI think you could probably model Kickstarter as a sneaky version of this.
3Manfred7yFor the practical real-world analogue of this, look up price discrimination strategies. Anyhow, this doesn't work out very well for a number of reasons. In short antiprediction form, there's no particular reason why price discrimination should be monotonic in time, and so it almost certainly shouldn't.
2Lumifer7yWhy would the dynamic-price seller outcompete other sellers who are making more money? Besides, he would have the classic takeoff problem -- the first items would be (relatively) very expensive and nobody would buy them (the flat-price sellers are selling the same thing much cheaper).
0[anonymous]7yWhy would sellers doing this outcompete sellers who don't? Sellers reducing prices whenever they want, rather than precommitting to a set function, will have more information to base their prices on at the time they set each price, so I'd expect them to do better.
The Unique Games Conjecture and FAI: A Troubling Obstacle

I was unclear, of course it is real physics. By "real" I mean simply something that occurs in reality, which quantum nonlocality certainly does.

Quantum nonlocality - despite being named "nonlocality" - is actually local in a very important sense, just like the rest of physics: information never moves faster than c.

The Unique Games Conjecture and FAI: A Troubling Obstacle

Every single physical theory that is currently considered fundamental is local, from general relativity to quantum mechanics.

I dislike the Wikipedia article on the subject; it gives far too much credence to the fringe notion that maybe there is a way to exploit entanglement to get faster-than-light information transfer.

The quantum nonlocality article is much better, it correctly points out that

it (quantum nonlocality) does not allow for faster-than-light communication, and hence is compatible with special relativity.

1Jiro7yDoes quantum nonlocality count as not being real physics?
The Unique Games Conjecture and FAI: A Troubling Obstacle

Real physics is local. The graphs, to the extent that there are any, are embedded in metric spaces, have small upper bounds on the number of edges per vertex, are planar, and so on; generally there is plenty of exploitable structure beyond the pure graph-theoretical problem. This is why I do not think hardness results on abstract graph-theoretical problems will be a great obstacle for practical problems.

2JoshuaZ7yReal physics is local, but many practical problems require understanding how the local aspects interact. Consider for example the traveling salesman: this is a purely local problem in statement, but many versions or variants of it occur in practical contexts where actually solving hard instances matters. For example, the bottleneck version shows up in circuit board design. Similarly, graph bandwidth is "local" in nature but shows up in chip design, and tough instances do show up in practice. Similarly the pathwidth problem has practical applications in compiler design, general circuit design, and CPU design. This also fails to appreciate the central interesting thing about UGC: hardness of UGC translates into hardness of approximating many problems which are not phrased in a graph way. Many different NP-complete problems have practical applications, not just for an AI trying to go Foom but also industrial applications now. Consider for example the cutting stock problem, which has variants relevant to many different industries and where slightly better solutions really do lead to real savings. It is also worth noting that this shouldn't be surprising: most NP-complete problems are of the form where one can phrase the problem so that one has some sort of local information and one is interested in a solution that satisfies all the necessary local restrictions as well as some very weak global condition.
0Jiro7yWhere are you getting this from?
Open thread, Jan. 19 - Jan. 25, 2015

Recently, there has been talk of outlawing or greatly limiting encryption in Britain. Many people hypothesize that this is a deliberate attempt at shifting the Overton window, in order to get a more reasonable-sounding but still quite extreme law passed.

For anyone who would want to shift the Overton window in the other direction, is there a position that is more extreme than "we should encrypt everything all the time"?

-3James_Miller7yThe criminalization of all encryption in the U.S. is just one big terrorist attack away.

Frame attempts to limit the use of encryption as unilateral disarmament, and name specific threats.

As in, if the government "has your password", how sure are you that your password isn't eventually going to be stolen by Chinese government hackers? Putin? Estonian scammers? Terrorists? Your ex-partner? And you know that your allies over in (Germany, United States, Israel, France) are going to get their hands on it too, right? And have you thought about when (hated political party) gets voted into power 5 years from now?

A second good framing is us... (read more)

7fubarobfusco7yTo be a bit more specific than "we should encrypt everything all the time": Mandatory full-disk encryption on all computer systems sold, by analogy to mandatory seat belts in cars — it used to be an optional extra, but in the modern world it's unsafe to operate without it.

Assuming you just want people throwing ideas at you:

Make it illegal to communicate in cleartext? Add mandatory cryptography classes to schools? Requiring everyone to register a public key and having a government key server? Not compensating identity theft victims and the like if they didn't use good security?

Respond to what they probably meant

"What you mean is there's no way to write Pi with finitely many digits, in any basis."

pi=1 in base pi

... but that's not what you meant :)

As long as we're nitpicking, it should be 10.

0DanArmak7yA basis is normally defined as an integer. But you're right; there doesn't seem to be any reason not to extend this to reals. (Although what good is a non-computable basis to anyone I don't know...)
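The nitpick checks out mechanically: interpreting a digit string positionally, "10" in base b has value 1·b + 0, which is π when b = π (a throwaway sketch, not from the thread):

```python
import math

def value_in_base(digits, b):
    """Evaluate digits (most significant first) positionally in base b."""
    total = 0.0
    for d in digits:
        total = total * b + d
    return total

print(value_in_base([1, 0], math.pi))  # "10" in base pi is pi
print(value_in_base([1], math.pi))     # "1" in base pi is just 1, not pi
```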
1buybuydandavis7yFrom the article, why I may seem a little miffed about the whole thing:
Some recent evidence against the Big Bang

Another is an asymmetry in the average temperatures on opposite hemispheres of the sky. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look.

Why doesn't this just mean that we are moving w.r.t. the rest frame of the CMB? The signal is redshifted in the hemisphere we're moving away from, and blueshifted in the hemisphere we're moving towards, so it would look hotter in the hemisphere we're moving towards.
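To first order in v/c, motion produces a dipole of amplitude ΔT ≈ (v/c)·T. Plugging in the usually quoted solar-system velocity of about 370 km/s (my numbers, not from the thread) reproduces the measured ~3.4 mK dipole:

```python
T_CMB = 2.725        # K, mean CMB temperature
C = 299_792_458.0    # m/s, speed of light
V = 370_000.0        # m/s, rough solar-system velocity w.r.t. the CMB frame

dT = (V / C) * T_CMB  # first-order Doppler dipole amplitude
print(f"dipole amplitude ~ {dT * 1e3:.2f} mK")  # prints ~3.36 mK
```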

2CellBioGuy7yThe anisotropy you speak of from motion does exist. When you extract such a dipole from the data and renormalize though (you can very precisely calculate what the effects of motion would be and fit the direction to the radiation we observe) an asymmetry remains along a different axis.
6Luke_A_Somers7yYes, that is precisely correct. This is one anisotropy that you expect to see in any model, because it's a fact about us, not about the universe. The higher modes, however, suggest that the Big Bang wasn't homogeneous and isotropic. That doesn't make it not a BIG BANG in a general sense, nor not the Big Bang in the technical sense. It just means that there was more going on than we knew about. We already knew that.
Open thread, Dec. 29, 2014 - Jan 04, 2015

The prior distribution over hypotheses is a distribution over programs, which are bit strings, which are integers. The distribution must be normalizable (its sum over all hypotheses must be 1). All distributions on the integers go to 0 for large integers, which corresponds to having lower probability for longer / more complex programs. Thus, all prior distributions over hypotheses have a complexity penalty.

You could conceivably use a criterion like "pick the simplest program that is longer than 100 bits" or "pick the simplest program that starts with 101101", or things like that, but I don't think you can get rid of the complexity penalty altogether.
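For concreteness, here is one way to build such a normalized prior (my own construction, purely illustrative): weight each bit string p by 2^(−(2|p|+1)). Each length L then contributes 2^L · 2^(−2L−1) = 2^(−L−1), so the total mass sums to 1, and longer programs necessarily get less:

```python
from itertools import product

def prior(p: str) -> float:
    """Weight of program p; decays with length, so complexity is penalized."""
    return 2.0 ** (-(2 * len(p) + 1))

# Total mass over all bit strings of length < 16: equals 1 - 2^-16,
# approaching 1 as the length cutoff grows.
total = sum(prior("".join(bits))
            for L in range(16)
            for bits in product("01", repeat=L))
print(total)
```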

0gedymin7yI know what SI is. I'm not even pushing the point that SI is not always the best thing to do - I'm not sure if it is, as it's certainly not free of assumptions (such as the choice of the programming language / Turing machine), but let's not go into that discussion. The point I'm making is different. Imagine a world / universe where nobody has any idea what SI is. Would you be prepared to speak to them, all their scientists, empiricists and thinkers and say that "all your knowledge is purely accidental, you unfortunately have absolutely no methods for determining what the truth is, no reliable methods to sort out unlikely hypotheses from likely ones - while we, incidentally, do have the method and it's called Solomonoff induction"? Because it looks like what iarwain1 is saying implies that. I'm sceptical of this claim.
Open thread, Dec. 29, 2014 - Jan 04, 2015

Solomonoff induction justifies this : optimal induction uses a prior which weights hypotheses by their simplicity.

1gedymin7yLet me clarify my question. Why do you and iarwain1 think there are absolutely no other methods that can be used to arrive at the truth, even if they are sub-optimal ones?
Rationality Jokes Thread

Two is an odd prime number, because two isn't odd.

[link] On the abundance of extraterrestrial life after the Kepler mission

what happens if we find all these biologically feasible exoplanets that just don't have any life on them?

That would be evidence for an early filter over a late filter, so it would probably be good news.

0Luke_A_Somers7ys/probably/really, really/
If Many-Worlds Had Come First

Indeed. Truly reversing the measurement would involve also forgetting what the result of the measurement was, and Copenhagenists would claim this forgotten intermediate result does not count as a "measurement" in the sense of something that (supposedly) collapses the wave function.

Open thread, 25-31 August 2014

The Brussels meetup is typically in English.

2Sherincall7yAnd is also quite close to my new location. Thanks!
5Jiro7yThe biggest problem with the headline is that it assumes "immigrants" are a homogenous group. There could be some groups that are good for the country and some which are not, in which case you could still justify keeping the second group out. If you read the article it even says "However the report highlights that not all groups of migrants make a positive fiscal contribution to the UK and in some cases migrants can represent a burden for public finances." Also, this doesn't count losses in utility that happen to locals because of immigrants but don't involve collecting benefits, such as increases in crime rates or unemployment rates.
1skeptical_lurker7yThat's quite interesting, and does refute a lot of the people who argue against all immigration. But the UK does have some forms of border control.
Open thread, 14-20 July 2014

I wonder if there's any significance to the fact that there is only one (known?) material out of which a space elevator could be constructed on Earth. Most planets, I would expect, will be such that either no material is strong enough, or several materials are strong enough. Earth lies just on the boundary.

1Luke_A_Somers8yBoron Nitride nanotubes are strong enough, and sufficiently stickier that they'd probably be better than carbon nanotubes.
3wadavis8yIt is just the current state of technology that only one usable material exists (no fact checking has been done). Given time, a wide variety of eligible materials will be developed with appropriate properties. This is nothing special, just the endless advance of science in materials engineering.
4Manfred8yWe probably won't build a space elevator anyhow - though check out some other options [].
1Metus8yMaybe this is part of the great filter? No planets that have lower gravitational force are hospitable to intelligent life, and all planets that are hospitable to intelligent life have higher gravitational force. We are just incredibly lucky to lie just on the boundary. I am not suggesting that gravity causes intelligent life to evolve, but that the overall characteristic of such planets is such that etc.
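
The "only one known material" observation can be made quantitative with the standard constant-stress tether formula: the cross-section taper ratio from surface to geostationary orbit is exp(ρ·ΔΦ/σ), where ΔΦ ≈ 4.8×10⁷ J/kg is the effective-potential difference in Earth's rotating frame. A rough sketch (the material figures below are approximate textbook values, for illustration only):

```python
import math

# Taper ratio of a constant-stress space-elevator tether:
#   T = exp(rho * dPhi / sigma)
# where dPhi is the effective-potential difference (gravity minus
# centrifugal) between Earth's surface and geostationary radius.
GM = 3.986e14        # m^3/s^2, Earth's gravitational parameter
R = 6.378e6          # m, Earth's equatorial radius
r_geo = 4.2164e7     # m, geostationary radius
omega = 7.2921e-5    # rad/s, Earth's rotation rate

def phi(r):
    return -GM / r - 0.5 * (omega * r) ** 2

d_phi = phi(r_geo) - phi(R)   # ~4.8e7 J/kg

def taper(sigma, rho):
    """sigma: tensile strength (Pa); rho: density (kg/m^3)."""
    return math.exp(rho * d_phi / sigma)

# Approximate figures (illustrative, not design values):
print(taper(63e9, 1300))   # carbon nanotube: single-digit taper, feasible
print(taper(3.6e9, 1440))  # Kevlar: taper ~1e8, infeasible
print(taper(2e9, 7900))    # high-strength steel: astronomically large
```

Because the taper ratio is exponential in ρ/σ, the feasibility boundary is sharp: carbon nanotubes sit just on the usable side, while even the best conventional fibers are off by many orders of magnitude — which is why Earth ends up with essentially one candidate material.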
Separating university education from grading

the same institutions that educate students also give them their degrees and grades.

In the Netherlands and Flanders, there is a government organisation responsible for maintaining the quality of higher education. It is true that they do not grade the students, but they do look at past exams to verify if the questions asked were sufficiently difficult and if they were graded properly. They use experts from competing universities to help them make their judgements.

Open thread, 30 June 2014- 6 July 2014

The anecdote in this post, about Fermi, Rabi and Szilard considering keeping the possibility of practical nuclear fission a secret, may shed some light on the subject. He thinks that some knowledge is dangerous enough that people who know it may reasonably want to keep it secret.

(much more recently, there has been some controversy about the publication of a way of obtaining a particularly infectious strain of a certain virus, but I can't find any references for that right now)

6gwern8yThis is a perennial issue, occurring in various forms relating to the preservation of viruses like smallpox, the sequencing of their genomes, and increasing their virulence. Looking in Google News for 'virus research increase virulence', it seems the most recent such research would be [] / [] : EDIT: Sandberg provides an amazing quote on the topic: []
1Will_BC8yI think that I remember reading an even better example about publishing scientific results that might have furthered the Nazis' ability to produce a nuclear weapon in HPMOR, though I can't recall where it was exactly. I found that example persuasive, but I considered it a distasteful necessity, not a desirable state of affairs. Hence my confusion at Brennan's world, which I thought being set in the future of our world was perhaps post-Singularity, and therefore the epitome of human flourishing. Another commenter asked me if I wouldn't enjoy the thought of being a super-villain, and I thought, um, no, that would be terrible, so maybe there are some Mind Projection issues going on in both directions. I don't know the distribution of people who would gain positive utility from a world of conspiracies, but I'm sure there would be a great deal of disutility for some proportion of current people with current minds. I can see where that world might provide challenge and interest for its inhabitants, but I remain highly skeptical that it's a utilitarian optimum. Using my current brain and assuming stable values, it actually seems pretty dystopian to me, but I'll admit that's a limited way to look at things.
On Terminal Goals and Virtue Ethics

You said

There is in fact such a thing as a learner with a sub-Turing hypothesis class. Such a learner with such a primitive output as "in the class" or "not in the class" does not engage in world optimization, that is: its actions do not, to its own knowledge, skew any probability distribution over future states of any portion of the world outside itself. ... Now, what we've been proposing as an Oracle is even less capable.

which led me to think you were talking about an oracle even less capable than a learner with a sub-Turing hypot... (read more)

On Terminal Goals and Virtue Ethics

Physics is Turing-complete, so no, a learner that did not consider Turing-complete hypotheses could not model the outer environment.

0[anonymous]8yYou seem to have lost the thread of the conversation. The proposal was to build a learner that can model the environment using Turing-complete models, but which has no power to make decisions or take actions. This would be a Solomonoff Inducer approximation, not an AIXI approximation.
On Terminal Goals and Virtue Ethics

This thing you are proposing, an "oracle" that is incapable of modeling itself and incapable of modeling its environment (either would require Turing-complete hypotheses), what could it possibly be useful for? What could it do that today's narrow AI can't?

0[anonymous]8yA) It wasn't my proposal. B) The proposed software could model the outer environment, but not act on it.
On Terminal Goals and Virtue Ethics

I agree, of course, that none of the examples I gave ("primitive classifiers") are dangerous. Indeed, the "plans" they are capable of considering are too simple to pose any threat (they are, as you say, not Turing complete).

But that doesn't seem relevant to the argument at all. You claimed

a very general learning algorithm with some debug output, but no actual decision-theory or utility function whatsoever built in. That would be safe, since it has no capability or desire to do anything.

You claimed that a general learning algori... (read more)

0[anonymous]8y/facepalm There is in fact such a thing as a null output. There is in fact such a thing as a learner with a sub-Turing hypothesis class. Such a learner with such a primitive output as "in the class" or "not in the class" does not engage in world optimization, that is: its actions do not, to its own knowledge, skew any probability distribution over future states of any portion of the world outside itself. It does not narrow the future. Now, what we've been proposing as an Oracle is even less capable. It would truly have no outputs whatsoever, only input and a debug view. It would, by definition, be incapable of narrowing the future of anything, even its own internal states. Perhaps I have misused terminology, but that is what I was referring to: inability to narrow the outer world's future.
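
A "learner with a sub-Turing hypothesis class" whose only output is "in the class" / "not in the class" can be made concrete with a toy version-space learner over threshold hypotheses (a deliberately simple class, nowhere near Turing-complete; the design below is my own illustration, not anyone's proposal in the thread):

```python
# A learner whose hypothesis class is {x >= t : t a real threshold} --
# a sub-Turing class. Its only "output" is a membership verdict; it
# builds no world model and takes no actions.
class ThresholdLearner:
    def __init__(self):
        self.low = float("-inf")   # largest negative example seen
        self.high = float("inf")   # smallest positive example seen

    def observe(self, x, label):
        if label:                  # positive example: threshold <= x
            self.high = min(self.high, x)
        else:                      # negative example: threshold > x
            self.low = max(self.low, x)

    def classify(self, x):
        if self.low == float("-inf") or self.high == float("inf"):
            return "unknown"
        # Midpoint of the remaining version space as the threshold.
        t = (self.low + self.high) / 2
        return "in the class" if x >= t else "not in the class"

learner = ThresholdLearner()
for x, label in [(1.0, False), (5.0, True), (2.0, False), (4.0, True)]:
    learner.observe(x, label)
print(learner.classify(3.5))  # in the class      (threshold ~3.0)
print(learner.classify(2.5))  # not in the class
```

Nothing this learner emits narrows the future of anything outside itself, which is the sense of "safe" being claimed above — and, per the objection in the thread, also why it can do nothing that today's narrow AI can't.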
Load More