Ilverin


Comments

Are we in an AI overhang?

Is it more than 30% likely that, in the short term (say 5 years), Google isn't wrong? If you applied massive scale to the AI algorithms of 1997, you would get better performance, but would the result be economically useful? Is it possible we're in a similar situation today, where the real-world applications of AI are already good enough and additional performance is worth less than the money spent on extra compute? (Self-driving cars are perhaps the closest example: clearly they would be economically valuable, but what if the compute to train them costs 20 billion US dollars? Your competitors will catch up eventually; could you make enough profit in the interim to pay for that compute?)
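
To make that last question concrete, here is a rough break-even sketch. Everything except the 20 billion dollar training cost mentioned above is a hypothetical placeholder, not a claim about real numbers:

```python
# Back-of-the-envelope: does a head start pay for a very expensive training run?
# Only the $20B figure comes from the comment; the rest are hypothetical inputs.

training_cost = 20e9        # USD of compute (from the comment above)
annual_profit = 3e9         # hypothetical extra profit per year from having the model first
lead_time_years = 5         # hypothetical years before competitors catch up

profit_during_lead = annual_profit * lead_time_years
print(f"Profit during lead time: ${profit_during_lead / 1e9:.0f}B")
print("Pays for itself" if profit_during_lead > training_cost else "Does not pay for itself")
```

With these placeholder numbers the head start earns $15B against a $20B training bill, which is the shape of the worry in the parenthetical.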

Has Moore's Law actually slowed down?

How slow does it have to get before a quantitative slowing becomes a qualitative difference? AI Impacts (https://aiimpacts.org/price-performance-moores-law-seems-slow/) estimates that price/performance used to improve by an order of magnitude (base 10) every 4 years, but that it now takes 12 years.
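
To make the size of that slowdown concrete, here is a small sketch (using only the AI Impacts figures quoted above) converting each regime into an implied yearly improvement factor:

```python
# Convert "10x every N years" into an implied yearly improvement factor,
# using the AI Impacts estimates above (4 years historically, 12 years now).

def yearly_factor(years_per_10x: float) -> float:
    return 10 ** (1 / years_per_10x)

old = yearly_factor(4)     # ~1.78x per year
new = yearly_factor(12)    # ~1.21x per year
print(f"Old regime: ~{old:.2f}x per year")
print(f"New regime: ~{new:.2f}x per year")
print(f"After a decade: ~{old**10:.0f}x vs ~{new**10:.1f}x")
```

Under the old schedule a decade buys roughly a 316x improvement; under the new one, roughly 7x.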

The Copernican Revolution from the Inside

With regard to "How should you develop intellectually, in order to become the kind of person who would have accepted heliocentrism during the Copernican revolution?"

I think a possibly better question might be "How should you develop intellectually, in order to become the kind of person who would have considered both geocentrism and heliocentrism plausible with probability less than 0.5 and greater than 0.1 during the Copernican revolution?"

Edit: the above may have caused confusion; an alternative phrasing of the same idea:

who would have considered geocentrism plausible with probability less than 0.5 and greater than 0.1, and would have considered heliocentrism plausible with probability less than 0.5 and greater than 0.1

Postmodernism for rationalists

Any idea why?

Is it possibly a deliberate strategy to keep average people away from the intellectual movement (which would result in increased intellectual quality)? If so, I, as an average person, should probably respect this desire and stay away.

Perhaps there should be two communities for an intellectual movement: one with a thickly walled garden, where quality intellectuals develop ideas, and a separate one with a thinly walled garden, meant to convince a broader audience and drive adoption of those ideas?

Postmodernism for rationalists

Your comment is quite clear and presents an important idea, thank you.

Why is the original comment about coffee lacking context in the presentation? Was it deliberately quoted selectively, with less context, in order to be provocative?

Postmodernism for rationalists

I think this is honest and I'm thankful to have read it.

Probably I'm biased and/or stupid, but with regard to Slavoj's comment "Coffee without cream is not the same as coffee without milk" [this article's author requests that we be charitable to this comment], the most charitable I can convince myself to be is: "maybe this postmodernist ideology is an ideology specifically designed to show how ideology can be stupid - in this way, postmodernists have undermined other stupid ideologies by encouraging the deconstruction of ideology to reveal its stupidity". I think that, yes, you could elaborate on the coffee comment to make it coherent (e.g. by talking about how a human can think about the absence of milk, or about the absence of cream, while drinking), but the comment isn't meaningful by itself.

I can't convince myself to say "maybe this comment about coffee is meaningful and I should learn to understand it better", so I'm not planning to study postmodernism.

The Rationalistsphere and the Less Wrong wiki

I think this might be confounded: the kind of people with sufficient patience or self-discipline or something (call it factor X) are the kind of people who both read the sequences in full and also produce quality content. (This would cause a correlation between the two behaviors without the sequences necessarily causing improvement.)

Less costly signaling

Here's a post by Scott Sumner (an economist with a track record) about how taxing positional goods does make sense:

http://www.themoneyillusion.com/?p=26694

Less costly signaling

The main problem with taxing positional goods is that the consumption just moves to another country.

I don't have an economics degree, but:

1) governments could cooperate to tax positional goods (for example via a treaty), so that consumption can't simply move abroad

2) governments could offset the reduced incentive to work hard by lowering other taxes on the rich

3) together, (1) and (2) would result in lower prices for non-positional goods

4) governments could make up for the lost tax revenue by reducing welfare spending, since (3) means each welfare dollar would buy more

One flaw I can think of (there are probably others) is that workers in positional-goods industries might lose their jobs.

What other flaws are there, and why isn't this happening already?

Less costly signaling

Regarding 'relax constraints that make real resources artificially scarce': why not both your idea and the OP's idea of taxing positional goods? In the long run, the Earth / our future light cone really is only so big, so don't we need any and all possible solutions to build a utopia?
