In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states that it will be "impossible to decelerate AI capabilities," but Luke counters with "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." Before I read that dialogue, I had come up with three additional ideas on Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety," but I doubt he's right that it's impossible.

How do you prove something is impossible?  You might prove that a specific METHOD of getting to the goal does not work, but that doesn't mean there isn't another method.  You might prove that all the methods you know about do not work, but that doesn't prove there isn't some other option you don't see.  "I don't see an option, therefore it's impossible" is only an appeal to ignorance.  It's a common one, but it's incorrect reasoning regardless.  Think about it: can you think of a way to prove that a method that does work isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"?  We can say "I don't see it, I don't see it, I don't see it!" all day long.

I say: "Then Look!"

How often do we push past this feeling to keep thinking of ideas that might work?  For many, the answer is "never" or "only if it's needed".  The sense that something is impossible is subjective and fallible.  If we don't have a way of proving something is impossible, yet believe it to be impossible anyway, that's a belief.  What distinguishes it from a bias?

I think there's a common fear that you may waste your entire life on something that is, in fact, impossible.  That fear is valid, but it misses the obvious: as soon as you think of a plan to do the impossible, you'll be able to guess whether it will work.  The hard part is THINKING of a plan to do the impossible.  I'm suggesting that if we put our heads together, we can think of a plan that turns an impossible thing into a possible one.  Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the time, effort, and risk required to accomplish it are worth it.

Here's how I am going to proceed: 

Step 1: Come up with a bunch of impossible project ideas. 

Step 2: Figure out which one appeals to the most people. 

Step 3: Invent the methodology by which we are going to accomplish said project. 

Step 4: Improve the method as needed until we're convinced it's likely to work.

Step 5: Get the project done.

 

Impossible Project Ideas

  • Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster.  Luke's Ideas (Starts with "Persuade key AGI researchers of the importance of safety").  My ideas.
  • Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles in treating it. 
  • Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities. 
  • Understand the psychology of money

  • Rational Agreement Software:  If rationalists should ideally always agree, why not build an organized information resource designed to get us all to agree?  It would track the arguments for and against ideas in such a way that each piece can be verified logically and challenged, make the entire collection of arguments available in an organized form where nothing is repeated and no useless information is included, and be editable by anybody, like a wiki, so that the most rational outcome ends up displayed prominently at the top.  This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts.  Also, Gwern mentions in a post about critical thinking that argument maps increase critical thinking skills.  (A rough data-structure sketch follows this list.)
  • Discover unrecognized bias:  This is especially hard since we'll be using our biased brains to try and detect it.  We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
  • Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
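
To make the Rational Agreement Software idea a bit more concrete, here is a very rough sketch of the kind of data structure it might rest on. Everything in it - the Argument class, the naive scoring rule, the names - is my own illustrative guess, not a spec; a real tool would need edit history, logical verification of each piece, and a far better way of weighing challenges.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    """One claim in the map, with its supporting and opposing sub-arguments."""
    claim: str
    author: str
    supports: List["Argument"] = field(default_factory=list)
    attacks: List["Argument"] = field(default_factory=list)

    def score(self) -> float:
        """Naive standing score: every surviving support props the claim up,
        every surviving attack drags it down, applied recursively."""
        return 1.0 + sum(a.score() for a in self.supports) - sum(a.score() for a in self.attacks)

def ranked(arguments: List[Argument]) -> List[Argument]:
    """Display order: the best-standing claims float to the top of the page."""
    return sorted(arguments, key=lambda a: a.score(), reverse=True)

# A tiny example map with one supported, once-challenged claim.
root = Argument("We should maintain a shared argument map", "alice")
root.supports.append(Argument("Argument maps increase critical thinking skills", "bob"))
root.attacks.append(Argument("Maintaining the map costs volunteer time", "carol"))
print(ranked([root])[0].claim, root.score())  # -> the claim, with score 1.0
```

The point of the sketch is only that arguments, counter-arguments, and their standing can be represented explicitly; how challenges actually get adjudicated is the genuinely hard part.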

Add your own ideas below (one idea per comment, so we can vote them up and down), make sure to describe your vision, then I'll list them here.

 

Figure out which one appeals to the most people.

Assuming each idea is put into a separate comment, we can vote them up or down.  If they begin with the word "Idea", I'll be able to find them and put them on the list.  If your idea gets enough attention, it will obviously make sense at some point to create a new discussion for it.

 


This is not in the spirit of Eliezer's exhortation. I read that article as saying "When, in your travels, you encounter an impossible problem, that is not an excuse to give up, or otherwise an excuse for failure. It merely means you might try your best, and still fail. So hurry up and do it, and don't give us any of this 'try' crap either, because having tried harder doesn't excuse failure either."

I emphatically don't read it as saying "seek out impossible problems and try to do them". Your odds of failure are high, and your odds of producing useful results are low. None of these problems are things that you would think absolutely needed solving had they not already been identified as impossible.

3Epiphany12y
There's no reason to interpret that as "Never set out to do the impossible." Eliezer begins with "The virtue of tsuyoku naritai, 'I want to become stronger', is to always keep improving—to do better than your previous failures, not just humbly confess them." It is THAT spirit I refer to: "I want to become stronger." If you don't relate to the desire for impossible problems because you want to become stronger, then it's simple: this thread is not your cup of tea. I am not going to sit around waiting for opportunities to become stronger; I'm going to seek them out. If you don't relate to taking the initiative when it comes to finding a challenge, then go find some other thread you do relate to.
3buybuydandavis12y
That's fine, but attempting to bench press your car is not the most effective way to increase your bench press. You don't try to lift an impossible weight, you select a possible weight that stresses your capabilities. Also, I think the "I want to get stronger" ethos is taken in terms of incremental improvement, not in terms of "I want to be all powerful today".
0Epiphany12y
I completely redid my description in the original post. I think all your concerns have been addressed. Let me know how I did?
0buybuydandavis12y
You removed reference to "get stronger" so that no longer applies. I think you have a point about whether one can know if something is impossible. Also, even if you can't think of a solution, the attempt may allow you to solve some lesser problems.
0Epiphany12y
Thanks for the feedback. Look what I found - love it. I was doing a search for people on LessWrong saying "is impossible" so I could come up with some other examples of how believing things are impossible can be a bias, by coming up with ways to do them. I was surprised to see you say the same thing I did! No wonder this thread attracted your attention. (: I almost put your comment in the quotes discussion, before realizing that quoting LW comments there is forbidden for some reason. (: Any ideas for how to make this thread more successful?
0evand12y
Thinking about an impossible or merely very difficult problem, because you think that putting forth effort on it will make you stronger, is very different from what Eliezer is talking about. Ask yourself this: if you spend time working on one of the problems from this thread, and in the process become stronger and learn something, and eventually give up to work on something else, will your reaction be more like "I have failed" or "at least I learned something while failing"? If the latter, Eliezer's post is not relevant to you, and your attempts are not in its spirit.
0Epiphany12y
I rewrote my entire intro because of your post. Thanks for giving me complaints specific enough to go on. Now that I've explained my vision much better, do you feel like I've done a good job of addressing the concerns in your comment?
0evand12y
Yes, this is an improvement. Now I just think that you're going about things in suboptimal fashion, rather than also attempting to justify it with appeals to an article that goes against what you're doing. As I doubt recommending an alternative plan would increase your chance of success, I will simply wish you good luck. It would be awesome to see something good come of this! (FWIW, I don't think you picked a good example for "even Eliezer can be wrong". It seems too much like you made a very short search for an instance of Eliezer being wrong and stopped at the first plausible option, which wasn't a very good one.)

Idea: Rational Agreement Software

1billswift12y
I think this would be the most useful, even if it was only partially completed, since even a partial database would help greatly with both finding previously unrecognized biases and with the logic checking AI. It may even make the latter possible without the natural language understanding that Nancy thinks would likely be needed for it.
0Epiphany12y
What I'm seeing is that rational agreement software would require some kind of objective method for marking logical fallacies, which the logic-checking AI would obviously help with. I'm not sure why the rationalist agreement database would help with creating the logic-checking AI, unless you mean it could act as a sort of "wizard" where you go through your document with it one piece at a time and have a sort of "chat" with it about what the rationalist agreement database contains, fed to you in carefully selected little bits.
0buybuydandavis12y
I like the Rational Agreement Software project, which I'd consider improved collaboration software. That's a good project. That's an important project. That's the fastest way to superhuman AI - combining the talents of multiple humans with technology. That's probably the fastest way to solve all our problems - create an infrastructure to better harness our cognitive surplus. You seemed to focus on creating agreement; I think we'd be doing pretty well just to speed up the cycle time for improving our arguments and accurately communicating them. Get a bunch of people together, get them all typing, get them providing feedback, and iterate in a way that keeps track of the history but leaves an improved and concise summary at each iteration.
-2Epiphany12y
Sorting opinion and fact with code: when a statement is incorrect, it will tend to follow a certain pattern. Change out the subject words and you get the same pattern. For instance, hasty generalization: "All bleeps are bloops." "All gerbils are purple." "All Asians are smart." These are all false reasoning; the falseness is inherent to the sentence structure, such that if we swap "bleeps" and "bloops" for any other subjects, it's still a hasty generalization.

If we were to build a piece of software that allows users to flag a statement for review, the reviewer could be given the statement with different subject words. For instance, if someone argues a piece of obviously bad reasoning like "All black people are bad," the reviewer might be given "All oranges are bad." Without race to potentially trigger the reviewer's bias, the reviewer can plainly see that the sentence is a hasty generalization. This would help prevent bias and politics from interfering with the rational review of statements. If that's not good enough on its own, we could use it as part of a larger strategy.
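
A minimal sketch of the substitution step described above, assuming flagged statements arrive as plain strings and the charged terms to swap out have already been identified; the word list and function name are illustrative only:

```python
import random
import re

# Neutral nouns shown to the reviewer in place of the real, potentially charged subjects.
NEUTRAL_NOUNS = ["bleeps", "bloops", "gerbils", "oranges", "widgets"]

def neutralize(statement: str, charged_terms: list) -> str:
    """Swap the flagged terms for neutral nouns so the reviewer judges only
    the form of the statement, not its politically loaded content."""
    result = statement
    for term, noun in zip(charged_terms, random.sample(NEUTRAL_NOUNS, len(charged_terms))):
        result = re.sub(re.escape(term), noun, result, flags=re.IGNORECASE)
    return result

# What a reviewer might see for a flagged generalization:
print(neutralize("All black people are bad.", ["black people"]))
# -> e.g. "All oranges are bad."
```

The reviewer then marks the de-personalized sentence as fallacious (or not), and the verdict is mapped back to the original statement.
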
2Decius12y
"Every square is a rhombus." If the difference between a hasty generalization and a fact is that the fact is true, then to call something a hasty generalization we need to say something about its factualness.
0pragmatist12y
Not all claims of the form "all Xs are Ys" are false, and neither is every conclusion of the form "all Xs are Ys" a product of bad reasoning. Suppose your software were to replace "All electrons are negatively charged" with "All rabbits are highly educated". How is the reviewer supposed to react? Is she supposed to conclude that the original statement is false? Why?

You are using the phrase "hasty generalization" in a highly non-standard way here. Philosophers classify hasty generalization as an informal fallacy. The "informal" means that the content matters, not just the formal structure of the argument. Also, a hasty generalization is an argument, not a single sentence. An example of a hasty generalization would be, "I was cut off by an Inuit driver on the way to work today. All Inuits must be terrible drivers." The fallacy is that the evidence being appealed to (being cut off by one Inuit driver) involves much too small a sample for a reliable generalization to an entire population. But just looking at the formal structure of the argument isn't going to tell you this.

There are formal fallacies, where the fallacy in the reasoning is subject-independent. An example would be "affirming the consequent" -- arguing that since B is true and A implies B, A must also be true. You could build the kind of software you envisage for formal fallacies, but you'd need another strategy for catching and dealing with informal fallacies.
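
To make the formal/informal distinction above concrete, here is a toy, purely structural check for the one formal fallacy named, affirming the consequent. The Conditional type and function are illustrative inventions, not an existing library:

```python
from typing import NamedTuple

class Conditional(NamedTuple):
    antecedent: str  # the "A" in "A implies B"
    consequent: str  # the "B" in "A implies B"

def affirms_the_consequent(rule: Conditional, premise: str, conclusion: str) -> bool:
    """From 'A implies B' and 'B', concluding 'A' is fallacious regardless of
    what A and B actually say - that is what makes the fallacy formal."""
    return premise == rule.consequent and conclusion == rule.antecedent

# "If it rained, the street is wet. The street is wet. Therefore it rained."
rule = Conditional(antecedent="it rained", consequent="the street is wet")
print(affirms_the_consequent(rule, "the street is wet", "it rained"))  # True
```

A hasty generalization, by contrast, can't be caught this way: whether the cited sample is large enough depends entirely on the content, which is the point made above.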

Idea: Do everything on the final list that gets more than 10 upvotes

2Epiphany12y
We can try and tell ourselves that we're going to do all the projects that we like, but that's just not what's going to happen. What will really happen is that we'll probably have way more projects we like than resources. Making ourselves do projects because of an arbitrary number (spending years and thousands of dollars because ten people pressed a thumbs-up button) will most likely spread people thin, and that would make projects more likely to fail.

People, being intelligent, will sense this, and they will pick a limited number of projects that look like they have enough resources to get somewhere and stick to them. Then, while those projects are being done, new projects will be thought of. By the time people are free to work on a new project, there might be a hundred ideas that are better than our first batch.

Neither you nor I have the power to make these people do all the good projects. We just have to see whether enough of them are inspired by any of the ideas for there to be the right amount of momentum to get started. What we should really do is keep talking about ideas until something sparks them. Once we've got a fire big enough that we couldn't put it out if we wanted to, THEN we know an idea is going to happen. Our goal here should be to make a lot of sparks and let inspiration decide which ideas get chosen.
0[anonymous]12y
I doubt that even one of these ideas will be pursued to the point where the effort pays off (if I had to give you a probability, I'd say p=30%).

I have an impossible project that I want to do because it needs doing, not because it's impossible.

The Social Sciences are often very unscientific. I want to do to economics and foreign policy analysis what Jared Diamond and other similar authors have done with history. This is important because, you know, existential risk from nuclear wars or global warming or whatever else might kill us all. We can't have an AI or colonize space if we all die in the meantime. Making the Social Sciences more rigorous and subject to simple, empirical, bias-free review methods would definitely pay off. We need this.

Anyone have any ideas how to get started?

2Costanza12y
These two sentences may contradict each other. I'd suggest that Jared Diamond is famous as a multidisciplinary pop-sci author. I don't mean that as an insult to him at all. He has sold a lot of books and has interested the public in ideas, which is great as far as it goes. But if you want to bring more rigor to social science, I don't think Jared Diamond's writings on history, of all subjects, should be your model.

Maybe you should redefine your goal as popularizing science. That wouldn't be bad if you can do it well. Even so, if you want to popularize real science, you've got to get a taste for real rigor. One place to start would be diving deep into the mathematics of statistics. Beyond that, when reading popular social science of any kind, especially any big theory which explains all of history, set your bullshit detector on high. Just assume that Adam Smith, Karl Marx, John Maynard Keynes, Milton Friedman, and Paul Krugman are just wrong. A fortiori, Jared Diamond.
0chaosmosis12y
I understand placing a low prior on ideologues and pop social sciences in general. I don't believe Diamond should be considered either of those, though. I've read Guns, Germs, and Steel and most of Collapse, and I haven't really seen any attempts by him to sweep any problems under the rug. He didn't seem to be oversimplifying things, to me, when I read him. Could you recommend a criticism of Diamond's material to me?
0Costanza12y
I think you misunderstand me. Jared Diamond is a serious academic in good standing. I did not say he was an ideologue. Apparently, Professor Diamond has a doctorate in physiology, but is currently described as a professor of geography. He is not a professional historian. In any case, the discipline of History is noble, but it is not always described as a social science at all.

But both Guns, Germs, and Steel and Collapse are pop sci, not that there's anything wrong with that. They were marketed to an audience of intelligent nonexperts. They were never intended to be serious peer-reviewed academic studies. So that's three strikes against these works as bringing rigor to social science.

Again, this is not an attack on Professor Diamond at all. Carl Sagan's Cosmos was pop sci, and was wonderful. Richard Dawkins has written some great pop sci. So have E.O. Wilson, Stephen Hawking, etc. But their serious academic work is much more dense and technical, and was addressed to a far narrower and more critical audience. Rigorous works never, ever make it to the top of the New York Times bestseller list.

If you want a criticism of pop sci in general, it is that it might be used as an end-run around peer review. An unscrupulous academic might use his or her credentials to dazzle the public into metaphorically buying snake oil, maybe for the sake of celebrity and money. Beware of Stephen Jay Gould.
0chaosmosis12y
I misunderstood you earlier, yes. However, I think Guns, Germs, and Steel might be about as rigorous as that era of history can ever get. I've never encountered any historical arguments which cover such an unknown time period with such breadth and depth. If he were to increase the rigor of his arguments, we'd lose any chance at an overall picture.

Just because the books are accessible to the masses doesn't mean that the books aren't rigorous, which is what you almost seem to be implying with your comment above. Certainly, they're not perfectly scientific and can't be readily tested. But that can never happen in these fields, and the goal is only to move towards science as an ideal. You say that they weren't intended to be peer reviewed, but I guess I'm sort of confused as to why you believe that. There's nothing precluding experts from reviewing Diamond's findings, as far as I can see.

Regardless, there are some really, really, really bad social science arguments out there. If the average social science argument, or even some of the best social science arguments, reached a level of rigor and excellence comparable to Guns, Germs, and Steel, then the field would be improved a hundredfold. Maybe this means that I've got pathetic standards for what constitutes rigor, but I prefer to think that I'm being realistic, as I think improving IR and economics even to this level of rigor is already a near-impossible task.
0NancyLebovitz12y
An idea from Taleb which I find plausible: The more people there are, the harder prediction becomes. I believe this is the case because the more people there are, the more likely they are to invent things.
0chaosmosis12y
I don't necessarily disagree with the idea that people invent things, but generally I think that having help with something is better than trying to do it all yourself. I think having two different extremes interact and debate each other would probably yield more interesting truths than having one very smart moderate attempt to discover the truth on their own, provided that those extremes are being evaluated by a neutral and intelligent third party and those extremes are trying to court the third party to their point of view.

I think Taleb's quote is more about how all people attempt to predict the actions of all other people, and then act accordingly. Lots of behavior in the social sciences functions like a super-extreme version of a Keynesian Beauty Contest where all participants are both judges and players, there is no limit to how meta or recursive you go except whatever limit is imposed on you by your cognitive limits, and you have access to information that isn't just incomplete, but is often actually wrong. It's not physically impossible, like simulating a complicated simulation that simulates itself (and is thus bigger than itself, which is basically a contradiction), but it's in a similar sort of vein and degree of difficulty. Very meta and recursive, very difficult to measure things or to verify whether what you're doing is correct or if you even understand what you're doing.
-1Epiphany12y
Mmmm. Okay, this looks like a really good one. We need a title for it so I can add this to the list. "Make Social Sciences Rigorous" might work... but I think people are already trying to be rigorous, and "more rigorous" is kind of vague. We need a nice solid, concrete goal. Maybe there's a stricter, more specific term than "rigorous"... "logically consistent" or ... hmmm... what specific goals would you say would best express this vision?

I also feel a need to clarify the term "social sciences". You give examples like how there are too many unknowns in economics and foreign policy. This feels like two separate problems. In a way, they are. What you're saying here is "The way to solve all these problems in all these diverse areas is by making social sciences more rigorous." That, I can believe, for sure. However, I don't think that would be the entire solution. When it comes to anything political, there are also large masses of people involved in the decision-making process. They may choose the most rational, most scientifically valid option... or they might not. You might counter with "If we understood why they make decisions that are against their own best interests, we could wake them up to what's going on." Is that what you're envisioning? Would you spell out the whole line of reasoning?

P.S. I redid a lot of the original post - any suggestions?
0chaosmosis12y
The goal is vague because I don't know how to get started with it. I'm not quite sure what you're saying with the rest of your comment.

I understand that economics and foreign policy are basically two different areas. However, the policies of both fields interact quite a lot, and both disciplines use many of the same tools, such as game theory and statistical analysis. I would perhaps even argue that IR studies would be improved overall if they were widely conceived of as a subdiscipline of economics.

They also share many of the same problems. For example, in both fields there are large difficulties with comparing the results of economic and foreign policies to the results that other policies counterfactually would have had, because countries are radically different in one time period as compared to another, and because policies themselves are more or less appropriate for some countries than others. Figuring out how to apply the lessons of one time and place to another is more or less what I was envisioning when I said that I wanted to make the social sciences more empirical.

There are also problems with measuring variables in both fields. In science, it's relatively easy to determine what the output amount of energy from a system is, or the velocity of a specific object at a specific time. But in economics and IR, we have lots of trouble even understanding exactly what the inputs and outputs are or would be, let alone understanding their relationship with one another. For example, uncertainty is hugely important in IR and in economics, but it seems almost impossible to measure. Even more obvious things, like the number of troops in a certain country or the number of jobs in a specific sector, are often debated intensely by people within these fields. Without the ability to measure inputs or outputs of policy processes or the ability to compare those processes to the hypothetical effectiveness that other policies might have had, these fields

IDEA - write a syntax/static analysis checker for laws. Possibly begin with U.S. state law in a particularly simple state, and move up to the U.S. Code (U.S.C.) and the Code of Federal Regulations (C.F.R.). Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities. Gradually improve it to find real problems, and start convincing the legal profession to use it when drafting new legislation.

While it may not directly pertain to LessWrong, it is an awesomely hard problem that could have far-reaching impacts.
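
As a first, very small step toward the "conflicting/inconsistent definitions" part of this idea, here is a sketch that assumes statutes use the common drafting pattern '"term" means ...' and flags any term defined in two different ways. The regex and names are my own assumptions about the input format, not a real legal-drafting tool:

```python
import re
from collections import defaultdict

# Common drafting pattern: '"Term" means <definition>.'
DEFINITION = re.compile(r'"([^"]+)"\s+means\s+([^.]+)\.', re.IGNORECASE)

def conflicting_definitions(statute_text):
    """Collect every definition given for each quoted term; any term with two
    or more distinct definitions is returned for the drafter to reconcile."""
    seen = defaultdict(set)
    for term, definition in DEFINITION.findall(statute_text):
        seen[term.lower()].add(definition.strip().lower())
    return {term: sorted(defs) for term, defs in seen.items() if len(defs) > 1}

sample = (
    '"Vehicle" means any device for transporting persons or property. '
    '"Vehicle" means a motorized conveyance operated on a highway.'
)
print(conflicting_definitions(sample))
# -> {'vehicle': ['a motorized conveyance operated on a highway',
#                 'any device for transporting persons or property']}
```

Real statutes are nowhere near this regular, which is exactly why the full problem belongs on the "impossible" list; the sketch only shows that the cheapest checks are already automatable.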

7Costanza12y
I'm a lawyer. I'm also an enthusiast about applying computing technology to legal work generally, but not tech-savvy by the standards of LessWrong. But if I could help to define the problems a bit, I'd be happy to respond to PMs.

For example, the text of the U.S. Constitution is not long. Here's just one part of it: As you know, this small bit of text has been the subject of a lot of debate over the years. But here's another portion of the Constitution, not much shorter: There's arguably a lot of room for debate over these words as well, but as a practical matter, the subject almost never comes up. I'd suggest that doesn't mean that the ambiguity isn't potentially present in the text, and could be revealed if for some reason the government had a strong urge to quarter troops in private homes.

I think the text of the Motor Vehicles Code of Wyoming is much longer than the whole U.S. Constitution with all its amendments, but since Wyoming is not a populous state, and the code mostly deals with relatively mundane matters, there hasn't been a huge amount of published litigation over the precise meanings of the words and phrases in that text. It doesn't mean that there isn't just as much potential ambiguity within any given section of the Wyoming Motor Vehicles Code as there is in the First Amendment.

ETA: Law is made of words, and even at its best it is written in a language far, far less precise than the language of mathematics. Law is (among other things) a set of rules designed to govern the behavior of large numbers of people. But people are tricky, and keep on coming up with new and unexpected behaviors. Also, it's important to note that there are hierarchies of law in the U.S. I mentioned the U.S. Constitution to illustrate the potential complexity of law -- libraries have been written on the Bill of Rights, and the Supreme Court hasn't resolved every conflict just yet. If this seems daunting, it's because it is. But in some ways, the U.S. Constitution is the
2NancyLebovitz12y
It might also be a good way of making money.
0Epiphany12y
So we can see your vision, would you please describe how this would work?
0NancyLebovitz12y
My original thought was selling access to lawyers who are preparing cases. It could also be valuable to people who are trying to maneuver in complex legal environments-- executives and politicians and such. It seems to me that there should be a limited cheap or free version, but I'm not sure how that would work.
1Epiphany12y
Hmmm. Okay. So the reason this is profitable is because it's gotten SO hard to keep track of all the laws that even lawyers would be willing to pay for software that can help them check their legal ideas against the database of existing laws?
5Costanza12y
There's probably a bit of money in distilling legalese into simpler language. Nolo Press, for instance, is in that field. The real money in lawyering, however, is in applying the law to the available evidence in a very specific case. This is why some BigLaw firms charge hourly fees measured by the boatload. A brilliant entrepreneur able to develop an artificial intelligence application which could apply the facts to the law as effectively as a BigLaw firm should eventually be able to cut into some BigLaw action. That's a lot of money. This is a hard problem.

My personal favorite Aesop's fable about applying the facts to the law is Isaac Asimov's short story Runaround. Worth reading all the way through, but for our purposes, the law is very clear and simple: the three laws of robotics. The fact situation is that the human master has casually and lightly ordered the robot to do something which was unexpectedly very dangerous to the robot. The robot then goes nuts, spinning around in a circle. Asimov says it better of course:

In the real world, courts hardly ever decide that the law is indecipherable, and so the plaintiff should run around in a circle singing nonsense songs (but see Ashford v Thornton [(1818) 106 ER 149]). The moral of the story, however, is that there is ambiguity in the application of the simplest and clearest of laws.
0Epiphany12y
And so the whole human race spins in circles. Yes, I see. (: And so, do you propose that this software also takes out ambiguity? Do you see a way around that other than specifying exactly what to do in every situation? BTW, I rewrote the intro on the OP - any suggestions?
2NancyLebovitz12y
Now that I think about it, a program which can do a good job of finding laws which are relevant to a case, and/or ranking laws by relevance, would probably be valuable-- even if it's not as good as the best lawyers.
0NancyLebovitz12y
Any opinions on whether this is harder or easier than understanding natural language? In theory, legal language is supposed to be clearer (for experts) and more precise, but I'm not sure that this is true. It might be easier to write programs which evaluate scientific journal articles for contradictions with each other, the simpler sorts of bad research design, and such.
2Costanza12y
I'd say that legal language, at least in America, is absolutely well within the bounds of natural language, with all the ambiguity that implies. Certainly lawyers have their own jargon and "terms of art" that sound unfamiliar to the uninitiated, but so do airplane pilots and sailors and auto mechanics. It's still not mathematics.

There are a lot of legislators and judges, and they don't all use words in exactly the same ways. Over time, the processes of binding precedent and legal authority are supposed to resolve the inconsistencies within the law, but the change is slow. In the meantime, statutes keep on changing, and human beings keep on presenting courts with new and unexpected problems. And judges and legislatures are only people within a society and culture which itself changes. Our ideas about "moral turpitude" and "public policy" and what a "reasonable man" (or person) would do are subject to change over time.

In this way, the language of the law is like a leaky boat that is being bailed out by the crew. It's not a closed system.
0Epiphany12y
One bottleneck here would be that the programmer would also have to be able to understand legalese. To find someone with both specialties could be pretty hard.
0philh12y
(This would also need to be able to take case law into account.)
0Epiphany12y
I would like to see a few examples of the different types of mistakes that have ended up in real laws, and what you think we would gain by doing this.
0Dentin12y
I honestly don't know enough about law to provide the kind of detailed mistake you're looking for. My belief that it is a somewhat 'important' problem is circumstantial, but I think there's definitely gain to be had:

1) It is often said that bad law consistently applied is better than good law inconsistently applied, but all other things being equal, good law is better than bad law. It is generally accepted that it is possible to have 'good' law which is better than 'bad' law, and I take this as evidence that it's at least possible to have good law and bad law.

2) Law is currently pretty ambiguous, at least compared to software. These ambiguities are typically resolved at run time, by the court system. If we can resolve some of these ambiguities earlier with automated software, it may be possible to reduce the run-time overhead of court cases.

3) Law is written in an internally inconsistent language. The words are natural-language words, and do not have well-understood, well-defined meanings in all cases. A checker could plausibly identify and construct a dictionary of the most consistent words and definitions, and perhaps encourage new lawmakers to either use better words, define undefined words, or clarify the meaning of questionable passages. By reducing even a subset of words to a well-defined, consistent definition, the law may become easier to read, understand, and apply.

4) An automated system could possibly reduce the body of law in general by eliminating redundancy, overlapping logic, and obsolete/unreferenced sections. Currently, we do all of the above anyway, but we use humans and human brains to do it, and we allow for human error by having huge amounts of redundancy and failsafes. The idea that we could liberate even some of those minds to work on harder problems is appealing to me.
0Epiphany12y
What if we did this: if a program can detect "natural language words" and encourage humans to rewrite until the language is very, very clear, then this could open up the process of lawmaking to the other processing tasks you're describing, without having to write natural-language-processing software. It would also be useful to other fields where computer-processed language would be beneficial: THOSE fields could translate their natural language into language that computers can understand, then process it with a computer.

And if, during the course of using the software, the software is given access to both the "before" text (which it has marked as "natural language, please reword") AND the "after" text (the precise, machine-readable language which the human has changed it to), then one would have the opportunity to use those changes as part of a growing dictionary, from which it translates natural language into machine-readable language on its own. At which point, it would be capable of natural language processing.

I bet there are already projects like this one out there - I know of a few AI projects where they use input from humans to improve the AI, like Microsoft's Milo (ted.com has a TED Talk video on this), but I don't know if any of them are doing this translation of natural language into machine-readable language, and then back.

Anyway, we seem to have solved the problem of how to get the software to interpret natural language. Here's the million-dollar question: would it work, business-wise, to begin with a piece of software that acts as a text editor, is designed to highlight ambiguities, and anonymously returns the before and after text to a central database? If yes, all the rest of this stuff is possible. If no, or if some patent hoarder has taken that idea, then ... back to figuring stuff out. (:
2NancyLebovitz12y
An idea from a book called The Death of Common Sense-- language has very narrow bandwidth compared to the world, which means that laws can never cover all the situations that the laws are intended to cover.
2Costanza12y
This is the story of human law.

Idea: Solve FAI.

Idea: Unify general relativity and quantum mechanics. :-)

Idea: Build a profit-generating company that will accelerate fast and go far - one that is very good at this goal.

-2Epiphany12y
That's not impossible, Thomas.
3Thomas12y
Sorry. I didn't realize you really want to do the impossible. Good luck! I thought you wanted to do something "nearly impossible", which is what a big money generator is. If you don't agree, if you think that's easy, set the constraints high enough. Like 10^12 USD of profit in 5 years, for example.
-1Epiphany12y
Okay, I see now that the wording you used was just kind of vague. Now that you've added some numbers, it does look impossible. I think there's a difference between impossible and ridiculous, though - for instance, making every dollar on planet Earth in five years would just be ridiculous. Someone has to figure out where the line is - what monetary amount defines the boundary between possible and impossible on this, do you think? Also, do you want to expand on your idea, or are you ditching it?
1Thomas12y
Establish a prediction firm so accurate that you will be able to cash in all over Wall Street, the City of London, Frankfurt, and so on. All you need is a great prediction. I consider this idea old and obvious, but worth accomplishing. And not entirely impossible.
0Epiphany12y
Ah. Thank you for supporting my point (in the intro, which I just re-wrote) that we don't have good ways to prove that impossible things are impossible. I doubt you meant to do this, but if you think about it, I think you'll see that you did. (: So, with the assumption in mind that impossibility is subjective and unproven, what monetary goal do you predict most people would FEEL a sense of "impossibility" about? Or, if you'd rather, what goal would YOU feel a sense of impossibility about? That's what we need in order to make the idea above into an impossible one. (: Any suggestions on my new introduction?

Idea: Logic checking AI

0NancyLebovitz12y
As you say, do the impossible. I'm reasonably sure that checking for fallacies isn't possible without understanding natural language. I'm only reasonably sure because google has done more with translation than I would have thought possible without using understanding. Perhaps there's some way of using google's resources to catch at least a useful proportion of fallacies and biases.
0Epiphany12y
I'm not sure it would require a full understanding of natural language. There's got to be an 80/20-rule method by which this can be done. Really, there are only so many logical fallacies, and there might be some way to "hack" out fallacious statements by looking for certain patterns in the sentence structure, as opposed to actual sentence interpretation.

For example: "All gerbils are purple." The computer only needs to understand: "All (gibberish) are (different gibberish)." Hasty generalization pattern recognized.

For another example: "Gerbils are purple because purple is the color of gerbils." The computer understands: "(gibberish 1) are (gibberish 2) because (gibberish 2) is (blah blah blah) (gibberish 1)." Circular reasoning pattern recognized.

Yes, it would get more complicated than that, especially when people use complex or run-on sentences, or when the fallacy builds up across numerous sentences. But I still think it could do this with pattern recognition. Hmmm... it would also have to detect statements where points are being made (looking for the words "is", "are" and "because" might help) and avoid sentences that are pure matters of opinion ("I love ice cream because it's delicious!" - this might look something like (blah blah blah) because it's (subjective term)).

I somehow doubt Google would appreciate the leeching of their resources - unless you mean they've made it open source or something. Making it dependent on them would be a liability - if they notice the leeching of their resources, they'd surely create a new limit that would probably break the program.
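
Here is a minimal sketch of the pattern-only approach described above - no sentence interpretation, just structure. The patterns are deliberately crude and would over-flag (a true statement like "Every square is a rhombus" trips the same pattern, as Decius pointed out), so the output is only a flag for human review; everything here is illustrative:

```python
import re

PATTERNS = {
    # "All X are Y." - flag as a possible hasty generalization for review.
    "possible hasty generalization": re.compile(
        r"^\s*all\s+(\w+)\s+are\s+(\w+)", re.IGNORECASE),
    # "X are Y because Y is ... X" - the same terms on both sides of "because".
    "possible circular reasoning": re.compile(
        r"^\s*(\w+)\s+are\s+(\w+)\s+because\s+\2\s+is\b.*\b\1\b", re.IGNORECASE),
}

def flag(sentence: str) -> list:
    """Return the names of any suspicious structural patterns the sentence matches."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(sentence)]

print(flag("All gerbils are purple."))
# -> ['possible hasty generalization']
print(flag("Gerbils are purple because purple is the color of gerbils."))
# -> ['possible circular reasoning']
```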

Idea: Discover unrecognized bias

The first thing that comes to mind as a way to do this is comparing information while looking for inconsistencies. We already do that, but we do it according to certain patterns. By determining what those patterns are and consciously choosing to compare the information using a different pattern, we could reveal the kinds of inconsistencies that give us an opportunity to reverse-engineer an undiscovered bias, thereby gaining knowledge of it.

For instance:

We observe ourselves behaving with people of two differ... (read more)

Idea: Revive a cryogenized dog and test if it retains trained behavior within ten years.

0ChristianKl11y
We don't have the technology necessary to revive cryogenized beings yet.
0bogdanb11y
Sorry for sounding snarky, I mean this in a friendly way: which part of "Impossible Project Ideas" don't you understand? For that matter, contrast the last three words of my comment with the last word of yours ;-) Also, if you want to get all technical, we do have technology necessary to revive cryogenized beings, just not for any kind of being. (Hint: for some kinds of cryogenized being the level of technology necessary for reviving is just "drop the ice cube in a puddle of rain water".)
0ChristianKl11y
You can do it with organisms that don't need anti-ice-crystal chemicals. Once you inject that highly poisonous stuff, however, we don't yet have any way to remove it.
0bogdanb11y
Correct, as far as I know. Note that the "idea" did not mention that the dog was to be cryogenized using the poisonous stuff we have right now. (I.e., one "solution", or part of it, might be to invent a non-poisonous anti-crystal substance or procedure.)

Anyway, the point of my "suggestion" was not that this would be some kind of "least impossible" idea, nor necessarily the most useful one. I'm not sure if you noticed, but I entered several proposals, all of which are very unoriginal. Although they are serious suggestions in the spirit of the post, I picked those in particular as a kind of humorous comment on the five-step outline of the post: all three "ideas" have in fact been discussed... often... around here, and are pretty popular in theory.

The "comment" is that finding worthy impossible things to do is trivial, and putting that as the first two steps in a five-step list of "how to do the impossible" is somewhat silly, kind of an excuse not to reach the hard steps. (Humorously enough, one of the ideas is "Do everything on the final list that gets more than 10 upvotes"; as far as I can tell, nothing got 10 votes.)

Idea: Solve Violent Crime

As a lot of you are probably already aware, testosterone level is considered a top predictor of violent crime. There are prescriptions that lower testosterone, so why do we still have violent crime?

I've been told there are two obstacles to treatment:

One, people with such excessively high testosterone that it causes them to commit crimes (most of them are men) feel strongly that reducing testosterone would make them less manly.

Two, our legal and ethical systems are such that forcing people, even convicted criminals, to undergo me... (read more)

1KrisC12y
Would you reconsider your idea if you found out that the most effective trauma surgeons were found to have unusually high levels of testosterone? Have you considered what other possible side effects might occur if this was carried out on a societal level? Would there be incentives for individuals to circumvent these restrictions?
0Epiphany12y
Several problems. Correlation is not causation: I'd have to see evidence that high testosterone was needed for trauma surgeons to be effective before I'd accept that it was a necessity. Also, what percentage of traumas are caused by violence? If excess testosterone were treated, would the number of traumas decrease as well, making it unnecessary to have as many trauma surgeons?

As for whether I've considered what side effects would occur: no, actually, I haven't. That was a good reminder. This isn't an idea I've thought about a lot yet, so I haven't gotten very far. Up until this point, I'd been thinking about it like a disease - you don't justify failing to treat a disease by worrying about what society will be missing when those people are healthier. Though, you could still wonder what might happen; sometimes consequences are unexpected. I don't know that much about testosterone. Do you have suggestions?
0KrisC12y
I don't think hormone tweaking is a humane cure for violence. Honestly, I don't think I would do anything about violence directly on a patient level. The incidence of homicide has been steadily falling for centuries. This is a desirable trajectory.

Instead I would seek to improve the socio-economic conditions that I believe precipitate violent behavior. If poor people commit more violent crimes, then we should look for what factor of their condition contributes most to this behavior. I suspect it is the exaggerated boom-bust cycle engendered by living paycheck to paycheck and the disproportionate value of status goods in low-income communities.

I promote a post-scarcity society as the solution to violent crime. If this proves too distant a solution for your concern, then I would suggest a reform of social services to establish guaranteed housing, food, education, and healthcare through a non-monetary system. I would fund this through taxes and provide the services even to those who do not currently need them. I would attempt to establish these as universal rights that every government should provide, on the risk of international sanction.
-2ChristianKl11y
Testosterone alone doesn't put you into a state of rage. It makes you want to dominate others. High testosterone levels help guys get laid. Convincing males to do something that gets them laid less will be hard. Most of the youth will have lower testosterone levels than professional bodybuilders like Arnold Schwarzenegger once you correct for age. I think your proposal is likely to tell the youth that they will have to raise their levels to keep up with the stars.

Idea: Understand the human psychology that leads to the stability of the concept of currency/money.

http://www.economist.com/node/21560554

0Epiphany12y
A little more of a description would be a good thing. I read part of the article, but it's just not showing me your vision. I think we need you to describe that vision.

I started with the problem: improve the economy consisting of people with visual impairments.

I ended with a far-from-exhaustive list of problems related to visual impairment. More specifically, there are 14 entries. Probably only half of them fit the criterion "impossible (but not really)". The rest are just difficult. These seem like subproblems of the economy problem.

Should I post these? Or, more accurately, is posting likely to get useful feedback with a better-than-0.001 probability of leading to a viable solution to one or more of the problem... (read more)
