All of Ilverin's Comments + Replies

Are we in an AI overhang?

Is it more than 30% likely that in the short term (say, 5 years), Google isn't wrong? If you applied massive scale to the AI algorithms of 1997, you would get better performance, but would your result be economically useful? Is it possible we're in a similar situation today, where the real-world applications of AI are already good enough and additional performance is worth less than the money spent on extra compute? (Self-driving cars are perhaps the closest example: clearly they would be economically valuable, but what if the compute to train them cost 20 billion US dollars? Your competitors will catch up eventually; could you make enough profit in the interim to pay for that compute?)

Andy Jones: I'd say it's at least 30% likely that's the case! But if you believe that, you'd be pants-on-head loony not to drop a billion on the 'residual' 70% chance that you'll be first to market on a world-changing trillion-dollar technology. VCs would sacrifice their firstborn for that kind of deal.
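Taken at face value, the arithmetic behind that bet looks like this (a crude sketch using only the numbers in the comment; it ignores competition, timing, and the chance of capturing less than the full prize):

```python
# Back-of-the-envelope expected value of the bet described above.
p_scaling_adds_little = 0.3          # the "Google isn't wrong" scenario
p_first_mover_wins = 1 - p_scaling_adds_little
prize = 1e12                         # "trillion-dollar technology"
cost = 1e9                           # "drop a billion"

expected_gain = p_first_mover_wins * prize - cost
print(f"EV = ${expected_gain:,.0f}")  # $699,000,000,000
```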
Has Moore's Law actually slowed down?

How slow does it have to get before a quantitative slowing becomes a qualitative difference? AI Impacts (https://aiimpacts.org/price-performance-moores-law-seems-slow/) estimates that price/performance used to improve by an order of magnitude (base 10) every 4 years, but that it now takes 12 years.
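To put numbers on "quantitative slowing", here is the per-year rate implied by each figure (a sketch; the 4- and 12-year figures come from the AI Impacts estimate above, the conversion is mine):

```python
# Annual price/performance improvement implied by "10x every N years".
def annual_factor(years_per_10x: float) -> float:
    return 10 ** (1 / years_per_10x)

print(f"10x every 4 years  -> {annual_factor(4):.2f}x per year")   # ~1.78x
print(f"10x every 12 years -> {annual_factor(12):.2f}x per year")  # ~1.21x
```

An improvement rate of roughly 78% per year falling to roughly 21% per year is the kind of change the question is asking about.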

The Copernican Revolution from the Inside

With regard to "How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?"

I think a possibly better question might be: "How should you develop intellectually, in order to become the kind of person who would have considered both geocentrism and heliocentrism plausible, each with probability greater than 0.1 and less than 0.5, during the Copernican revolution?"

Edit: this may have caused confusion; an alternative phrasing of the same idea:

who would have considered geoce... (read more)

I disagree. The point of the post is not that these theories were on balance equally plausible during the Renaissance. It's written so as to overemphasize the evidence for geocentrism, but that's mostly to counterbalance standard science education.

In fact, one of my key motivations for writing it -- and a point where I strongly disagree with people like Kuhn and Feyerabend -- is that I think heliocentrism was more plausible during that time. It's not that Copernicus, Kepler, Descartes, and Galileo were lucky enough to be overconfident in the right direction, and really should just have remained undecided. Rather, I think they did something very right (and very Bayesian). And I want to know what that was.

Postmodernism for rationalists

Any idea why?

Is it possibly a deliberate strategy to keep average people away from the intellectual movement (which would result in increased intellectual quality)? If so, I as an average person should probably respect this desire and stay away.

Possibly there should be two communities for intellectual movements: one with a thickly walled garden, to develop ideas among quality intellectuals, and a separate one with a thinly walled garden, to convince a broader audience and drive adoption of those ideas?

scarcegreengrass: Yes, I think a big aspect of postmodernist culture is speaking in riddles because you want to be interacting with people who like riddles. I don't think that the ability to understand a confusingly-presented concept is quite the same thing as intellectual quality, however. I think it's a more niche skill.
Said Achmiz: You can't understand Zizek, or Zizek's "strategy", if you approach it in so straightforward a way. And that's the point. It's not "average" people who are being "kept away"; it's enlightened people who are being filtered for. By "enlightened", of course, I do not mean the Zen notion, or any such thing, nor do I even use the term normatively; I only mean that those who have independently had the experiences and reached the understandings necessary to apprehend what Zizek is saying will be able to do so. That is the filter. If Zizek explains to you in plain language what he is saying, you may understand it; but that is counterproductive, because if you need his points explained to you in plain language then you are not the sort of person he is speaking to. Conversely, if you listen to Zizek and do not understand him, you may later have the relevant experiences and reach the relevant understandings, and apprehend his points retroactively. Many things work this way.
Postmodernism for rationalists

Your comment is quite clear and presents an important idea, thank you.

Why is the original comment about coffee in the presentation lacking in context? Is it deliberately selectively quoted to have less context in order to be provocative?

SilentCal: Why speak in riddles? Because sometimes solving a puzzle teaches you more than being told the solution. As an observation about coffee, Zizek's statement is true in its way but not especially useful. His broader point is "you should think about history and context more." So he presents you with two physically identical items, coffee without milk and coffee without cream, so that you can be surprised by noticing that there's potentially an important difference, and that surprise will make you update towards considering context and history as well as present physical makeup.
ChristianKl: Zizek sounds just as ridiculous when you hear him speak in context.
quanticle: I think it's more that, on a slide, you necessarily have to remove context in order to keep the presentation legible (both metaphorically and literally) for the audience. Walls of text in tiny print don't make for good slides.
Postmodernism for rationalists

I think this is honest and I'm thankful to have read it.

Probably I'm biased and/or stupid, but with regard to Slavoj's comment “Coffee without cream is not the same as coffee without milk.” [this article's author requests being charitable to this comment], the most charitable I can convince myself to be is "maybe this postmodernist ideology is an ideology specifically designed to show how ideology can be stupid - in this way, postmodernists have undermined other stupid ideologies by encouraging deconstruction of ideology t

... (read more)
quanticle: It's an illustration of postmodernism's insistence on looking at the context of a thing in addition to the thing itself. A modernist would look at coffee-without-cream and coffee-without-milk and say, "So what, they're both black coffee, right?" But a postmodernist would say, "Yes, they're both black coffee, but the choices that led to each being black coffee were different." That history, that context, is different between the two coffees, and thus they're different. Another way of thinking about it is: "Is an (ex-)Jewish atheist different from an (ex-)Catholic atheist?"
The Rationalistsphere and the Less Wrong wiki

I think this might be confounded: the kind of people with sufficient patience or self-discipline or something (call it factor X) are the kind of people who both read the sequences in full and produce quality content. (This would cause a correlation between the two behaviors without the sequences necessarily causing improvement.)
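A toy simulation of that confounding story (illustrative assumptions throughout: the latent factor, effect sizes, and noise levels are invented to show the mechanism, not estimated from anything):

```python
# A latent "factor X" (patience/discipline) drives both behaviors,
# producing a correlation with no causal link between them.
import random

random.seed(0)
reads, quality = [], []
for _ in range(10_000):
    x = random.gauss(0, 1)                 # latent factor X
    reads.append(x + random.gauss(0, 1))   # propensity to read the sequences
    quality.append(x + random.gauss(0, 1)) # propensity to write good content

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

print(f"correlation: {corr(reads, quality):.2f}")  # ~0.5, with zero causation
```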

Less costly signaling

Here's a post by Scott Sumner (an economist with a track record) about how taxing positional goods does make sense:

http://www.themoneyillusion.com/?p=26694

Benquo: Paul's arguing for punitive taxes on positional goods for the sake of reducing wasteful consumption. I think Sumner's mostly trying to argue that the social costs of taxing the consumption of the rich are low. I agree with the latter point, for roughly the same reason I disagree with the former; I think wasteful conspicuous consumption's a side-effect of limited opportunities for more substantive consumption or investment.
Less costly signaling

The main problem with taxing positional goods is that the consumption just moves to another country.

I don't have an economics degree, but:

1) governments could cooperate to tax positional goods (such as with a treaty)

2) governments could repair the reduced incentive to work hard by lowering taxes on the rich

3) these two would result in lower prices for non-positional goods

4) governments could offset the lost tax revenue by reducing welfare spending, since (3) lowers the cost of living

The flaw I can think of (there are probably others) is that workers in positional goods industries might lose their jobs.

What other flaws are there or why isn't this happening already?

ChristianKl: https://en.wikipedia.org/wiki/Luxury_tax. Bush senior did pass such a tax, but the Clinton administration allowed it to be repealed.
Less costly signaling

Regarding 'relax constraints that make real resources artificially scarce': why not both your idea and the OP's idea to tax positional goods? In the long run the earth/our future light cone really is only so big, so don't we need any and all possible solutions to make a utopia?

Benquo: Attention is scarce. I wouldn't lobby against such a tax, but I would advise people not to put energy into advocating for one, because I think it's an especially inefficient solution.
Open thread, Oct. 10 - Oct. 16, 2016

Is there any product like an adult pacifier that is socially acceptable to use?

I am struggling with self-control to not interrupt people and am afraid for my job.

EDIT: In the meantime (or long-term, if it works), I'll use less caffeine (currently 400mg daily) to see if that helps.

MrMind: How about a lollipop? It's almost the same thing, and since Inspector Kojak it's become much more socially acceptable, even cool, if you pull it off well. If you are a woman, though, you'll likely suffer some sexual objectification (what a surprise!).
Lumifer: It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but it's hard to pull off nowadays unless you're old or a full-blown hipster.
SithLord13: Could chewing gum serve as a suitable replacement for you?
Open Thread - Aug 24 - Aug 30

Efficient charity: you don't need to be an altruist to benefit from contributing to charity

Effective altruism rests on two philosophical ideas: altruism and utilitarianism.

In my opinion, even if you're not an altruist, you might still want to use statistics to learn about charity.

Some people believe that they have an ethical obligation to cause net zero suffering. Others might believe they have an ethical obligation to cause only an average amount of suffering. In these cases, in order to reduce suffering to an acceptable level, efficient charity might be ... (read more)

Open Thread, May 18 - May 24, 2015

Disclaimer: I may not be the first person to come up with this idea

What if, for dangerous medications (such as, possibly, 2,4-dinitrophenol (DNP)), the medication were stored in a device that would only dispense a dose when it received a time-dependent cryptographic key generated by a trusted source at a supervised location (the pharmaceutical company, a government agency, or an independent security company)?

Could this be useful to prevent overdoses?

9eB1: There are already dispensing machines that dispense doses on a timer. They are mostly targeted at people who need reminding (e.g. Alzheimer's patients), though, rather than people who may want to take too much. I don't think the cryptographic security would be the problem in that scenario, but rather the physical security of the device. You would need some trusted way to reload it, and it would have to be very difficult to open even though it would presumably just be sitting on your table at home, which is a very high bar. It could possibly be combined with always-on tamper reporting and legal threats to make the idea of tampering with it less appealing, though.
Lumifer: If the dispensing device is "locked" against the user and you want to enforce dosing, you don't need any crypto keys. Just make the device have an internal clock and dispense a dose every X hours. In the general case, the device is externally controlled, and then the people who have control can do whatever they want with it. I'm still not seeing a particular need for a crypto key.
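For concreteness, here is a minimal sketch of how the time-dependent key could work, modeled on the standard TOTP construction (RFC 6238) using only Python's standard library. The 8-hour dosing window, the 6-digit code length, and the function names are illustrative assumptions, not anything from the original comments; and as 9eB1 notes, the hard part would be the physical tamper-resistance, not this crypto.

```python
import hashlib
import hmac
import struct
import time

DOSE_INTERVAL_S = 8 * 60 * 60  # assumption: one dose authorized per 8-hour window

def dose_code(shared_secret: bytes, timestamp: float) -> str:
    """TOTP-style 6-digit code for the dosing window containing `timestamp`."""
    counter = int(timestamp) // DOSE_INTERVAL_S      # index of the current window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10**6:06d}"

def dispenser_accepts(shared_secret: bytes, submitted: str) -> bool:
    """The locked device releases one dose only if the code matches this window."""
    return hmac.compare_digest(submitted, dose_code(shared_secret, time.time()))

# The trusted source (pharmacy/agency) computes the code; the device verifies it.
secret = b"per-device secret provisioned at manufacture"
print(dispenser_accepts(secret, dose_code(secret, time.time())))  # True
```

Because a code is valid only for the current window, a patient cannot stockpile codes in advance; but, per the replies above, nothing in software stops someone from prying the device open.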
Open thread, Dec. 8 - Dec. 15, 2014

Disclaimer: Not remotely an expert at biology, but I will try to explain.

One can think of the word "gene" as having multiple related uses.

Use 1: "Genotype". Even if we have different hair colors, we likely both have the same "gene" for hair, which could be considered shared with chimpanzees. If you could re-write DNA nucleobases, you could change your hair color without changing the gene itself; you would merely be changing the "gene encoding". The word "genotype" refers to a "function" which takes ... (read more)
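A toy sketch of the "gene vs. encoding" framing above (illustrative only; the class, the made-up sequences, and the hair-color mapping are assumptions for the example, not real genetics):

```python
# Toy model: the same abstract gene can carry different encodings (alleles),
# and the genotype acts like a function from encoding to observable trait.
from dataclasses import dataclass

@dataclass
class Gene:
    name: str      # the abstract gene, e.g. shared between humans and chimps
    encoding: str  # the concrete DNA spelling this individual carries

def phenotype(gene: Gene) -> str:
    """Maps an encoding to a trait; rewriting the encoding changes the trait,
    not the gene itself."""
    traits = {"ATG...A": "brown hair", "ATG...G": "blond hair"}  # made-up sequences
    return traits.get(gene.encoding, "unknown")

mine = Gene("hair color", "ATG...A")
yours = Gene("hair color", "ATG...G")
print(mine.name == yours.name)            # True: same gene
print(phenotype(mine), phenotype(yours))  # different traits, different encodings
```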

LINK: In favor of niceness, community, and civilisation

Thank you. I initially wrote my function with the idea of making it one (of many) "lower bound"(s) on how bad things could possibly get before debating dishonestly becomes necessary. Later, I mistakenly thought that "this works fine as a general theory, not just a lower bound".

Thank you for helping me think more clearly.

LINK: In favor of niceness, community, and civilisation

"How dire [do] the real world consequences have to be before it's worthwhile debating dishonestly"?

~~My brief answer is:~~

One lower bound is:

If the amount that rationality affects humanity and the universe is decreasing over the long term. (Note that if humanity is destroyed, the amount that rationality affects the universe probably decreases).

~~This is also my answer to the question "what is winning for the rationalist community"?~~

~~Rat~~... (read more)

Mestroyer: Downvoted for the fake utility function (http://lesswrong.com/lw/lq/fake_utility_functions/). "I won't let the world be destroyed because then rationality can't influence the future" is an attempt to avoid weighing your love of rationality against anything else. Think about it. Is it really that rationality isn't in control any more that bugs you, not everyone dying, or the astronomical number of worthwhile lives that will never be lived? If humanity dies to a paperclip maximizer, which goes on to spread copies of itself through the universe to oversee paperclip production, each of those copies being rational beyond what any human can achieve, is that okay with you?
Luck I: Finding White Swans

If the author could include a hyperlink to Richard Wiseman when he is first mentioned, it might prevent readers from being confused and not realizing that the post describes actual research. (I was confused in this way for about half of the article.)

ESRogs: Agreed, especially since the name Wiseman sounds like it could be symbolic. Also, what book is being talked about here?
Prisoner's Dilemma (with visible source code) Tournament

I wonder if there's a chance of the program that always cooperates winning or tying.

If all the other programs are extremely well-written, they will all cooperate with the program that always cooperates (or else they aren't extremely well-written, or they are violating the rules by attempting to trick other programs).

[This comment is no longer endorsed by its author]
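For illustration, a minimal sketch of the scenario this comment describes (the payoff values, function names, and the naive source-inspection check are assumptions about how such a tournament might work, not the actual tournament rules):

```python
# Toy model of one source-visible Prisoner's Dilemma round (run as a script
# so inspect.getsource can read the strategies' source code).
import inspect

def cooperate_bot(opponent_source: str) -> str:
    return "C"  # always cooperates, regardless of the opponent

def cautious_bot(opponent_source: str) -> str:
    # A "well-written" program: cooperates if the opponent visibly always
    # cooperates, otherwise defects.
    return "C" if 'return "C"  # always' in opponent_source else "D"

# Standard PD payoffs: (my points, their points).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b):
    move_a = a(inspect.getsource(b))
    move_b = b(inspect.getsource(a))
    return PAYOFFS[(move_a, move_b)]

print(play(cooperate_bot, cautious_bot))  # (3, 3): mutual cooperation
```

In this sketch the well-written bot recognizes the pure cooperator and cooperates, giving the (3, 3) outcome the comment predicts; against a defector, though, the pure cooperator scores 0, so its overall result depends on the rest of the field.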