There is no guarantee that there exists some way for them to understand.
Consider the possibility that only people with a nontrivial level of understanding can work with 5TB+ amounts of data. It could be a practical boost in capability that comes from understanding storage technology principles and tools... maybe?
What level of sophistication do you think is un-idiot-proof-able? Nuclear missiles? Not-proven-to-be-friendly AI?
So someone has mentioned it on LW after all. Lots of singularitarian ideas depend heavily on exponential growth.
Thanks :) Can you elaborate a bit? Are you saying that I overreached, and that while there is often some transformed domain in which the model turns out to be simple, such a domain is not guaranteed to exist for every model?
Sorry, hadn't seen this (note to self: mail alerts).
Is this really true, even if we pick a similarly restricted set of models? I mean, consider a set of equations which can only contain products of powers of variables, like (x_1)^a (x_2)^b = const1, (x_1)^d (x_2)^e = const2.
Is this nonlinear? Yes. Can it be solved easily? Of course: taking logarithms transforms it into a set of linear equations.
That's what I'm kinda getting at: I think there is usually some transform that can convert your problem into a linear, or more generally an easy, problem. Am I more correct now?
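To make the log-transform trick above concrete, here's a minimal sketch. The specific exponents and constants are invented for illustration: the system x1^1 * x2^2 = 8, x1^2 * x2^1 = 8 is nonlinear in (x1, x2) but linear in (log x1, log x2), so an ordinary linear solver handles it.

```python
import numpy as np

# Nonlinear system (exponents/constants are made up for this example):
#   x1^1 * x2^2 = 8
#   x1^2 * x2^1 = 8
# Taking logs of both sides gives a *linear* system:
#   1*log(x1) + 2*log(x2) = log(8)
#   2*log(x1) + 1*log(x2) = log(8)
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])        # the exponents a, b / d, e
rhs = np.log([8.0, 8.0])          # logs of the constants
log_x = np.linalg.solve(A, rhs)   # plain linear algebra from here on
x1, x2 = np.exp(log_x)
print(x1, x2)  # -> 2.0 2.0  (check: 2 * 2^2 = 8 and 2^2 * 2 = 8)
```

The same trick works for any system of "monomial" equations, which is exactly why that restricted nonlinear family is as tractable as the linear one.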
What's with the downvoting?
I argue that AGW is the worst because it is the only one that strikes at very deep-seated human assumptions that may well be genetic/inherent.
The first obstacle to addressing AGW, even before coordination, is anchoring - we assume that everything only ever gets better, and that nothing ever gets worse. Further, a lot of systems are built on the assumption that there will always be a continuously expanding material economy. This is like the case where becoming slightly more rational, starting from a point of relatively complete irrationality, is likely to make one less effective:...
I assume you're talking about roughly 4 degrees of warming under business-as-usual conditions?
To pick the most important effect, it's going to impact agriculture severely. Even if irrigation can be managed, untimely heavy rain will still damage crops, and it can't be prevented from affecting them unless you build giant roofs.
If you are saying that all these effects can be defended against, I agree. But the key point is that our entire economy is built on a lot of things being extremely cheap. Erecting a giant roof over all sensitive cropland is far less techn...
Of course, "leading to global warming" is a subset of "harmful for the environment". Agreed on all counts.
Computing can't harm the environment in any way - it's within a totally artificial human space.
The others ("good") can harm the environment in general, but are much better for AGW.
*Longtime lurker. I've managed to fight akrasia and a genuine shortage of time to put my thoughts down into a post. I think it does deserve a post, but I don't have the karma or the confidence to create a top-level post.
Comments and feedback really welcome and desired: I've gotten tired of being intellectually essentially alone.*
There are many urgent problems in the world, yet Anthropogenic Global Warming (AGW) should be considered the defining crisis for humanity. For example, increasing drug-resistance in pathogens, reducing populations of endangered ...
Ah, that particular idea of all human pleasures being harmful to the environment is pretty much religious. It's not at all what the actual impact looks like.
Computing is basically blameless in the direct sense for global warming. We should probably enjoy it as much as possible. Electricity is good. Trains are good. Holidaying is good.
Air conditioning is bad. Air travel is bad. Short product lifetimes are bad.
The situation is far more positive than some make it out to be. Even the direst climate change predictions necessitate drastic changes in only some aspects of life.
AGW can't take away modern medicine or virtual reality from you.
What do you mean by "abdicate control over the physical world"?
I fit the profile described here quite well. Feel free to ask (I know I'm 6 years late, but that's the point of internet forums).
Apologies for commenting almost a decade after most of the comments here, but this is the exact same reason why "using nonlinear models is harder but more realistic".
The way we were taught math led us to believe that linear models form this space of tractable math, and nonlinear models form a somewhat larger space of mostly intractable math. This is mostly right, except that the space of nonlinear models is vastly larger than that of linear models. And that is precisely why linear models are mathematically tractable: they form such a small sp...
A vote for the statement that: sex-positive activism is (unarguably) an extremely "low priority" type of activism.
It might be better if you can find ways to change what you feel happy about.
Just my 2p.
It sounds like you're implying that most lies are easily found, and consequently, most unchallenged statements are truths.
That's really, really stretching my capacity to believe. Either you're unique in this ability, or you're committing the typical mind fallacy, i.e. assuming all people are at best only as good at lying as you are at sniffing lies out.
I can corroborate that. Indian Hindus believe that there are eons (longer) and numerous eras (shorter) consisting of 4 "yuga"s, during each of which humans generally become worse off... all great traits belong to the first yuga, and things go downhill toward the last one (in which we exist, obviously). After each era, a "pralaya" takes place, destroying everything. Then everything starts afresh.
Sigh.
I'm slowly becoming more and more convinced that a mass-usable but based-on-sound-principles debate/argument tool should be created, and a structure is taking shape in my mind. And somehow none of the tools I have seen can be adapted to fit this bill.
I have seen your extensive comments and articles on this subject here. So:
Are there any serious problems with going mass-based? I would like this because we need to gather more arguments, and that can't be done with a highly rigid and hard-to-use interface and model. That would limit it to those who are extremel
I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure centre stimulation, covered very well in this fun theory sequence article) against the costs of continuous improvement.
Some of these costs:
- Resource exhaustion: There is always the risk of using up resources early on relatively unimportant things, and then facing constraints for later, more important purposes. This risk materialises more often as we develop faster. Undoing m