Please PM me a draft of your fighting aging article if you want to - I can read it and offer feedback
Neural network training seems like it could tolerate high-latency clusters. If you divide the nodes into 100 clusters during training and you have ten layers, it might take each cluster 0.001s to process a single sample. The compute time per sample is then maybe 100-1000 times less than the network latency, but that's acceptable if you have 10,000,000 samples and can allow some weight updates to arrive a bit out of order, since the latency gets amortized over the many samples in flight. And if you only need the forward pass of the network, that's the ideal case, since there are no state updates at all.
In general, long computations tend to be either stateless or have slowly changing state relative to the latency, so parallelism can work.
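A back-of-envelope sketch of the argument above. The per-sample compute time and sample count are the comment's own numbers; the latency figure is an assumption I've picked to match the "100-1000x" ratio claimed:

```python
# Back-of-envelope check: can stale updates hide network latency?
clusters = 100
per_sample_ms = 1            # compute time per sample per cluster (0.001s)
latency_ms = 100             # ASSUMED round-trip network latency
samples = 10_000_000

# Samples a cluster can process per round trip. If this is large, slightly
# stale (out-of-order) weight updates cover the latency, and with 10M
# samples there is always more work to keep the pipeline full.
samples_per_round_trip = latency_ms // per_sample_ms
print(samples_per_round_trip)  # → 100
```

With a 1s latency instead, the ratio becomes 1000, the upper end of the comment's range; either way each round trip is paid for by many samples of useful work.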
Good point, though there should be value on the other end at least. For example, if 100 people on a network each need more than their laptop's computing power 1% of the time, then in the ideal case the average person would get a 100x speedup for that 1% of the time without providing a credit card. So they could train an image classifier in 6 minutes instead of 10 hours.
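The ideal-case arithmetic behind that, spelled out. All numbers come from the comment itself; "ideal case" means every idle machine serves the one user who currently needs compute:

```python
# Ideal-case speedup for a 100-person compute-sharing network.
peers = 100
speedup = peers                   # everyone else's machine is idle and helps
training_hours = 10               # single-laptop training time
minutes = training_hours * 60 / speedup
print(minutes)                    # → 6.0 (minutes, instead of 10 hours)
```

In practice contention (two people needing compute at once) and the latency issue from the earlier comment would eat into this, so 100x is an upper bound.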
Also I should admit that I'm only poor in the relative sense - I need rice, beans, and a few dozen square feet, and I have those things covered.
Hmm it probably is more lucrative to convert my time to money, though I think it's better to invest my time in increasing my future earnings, which would probably be way better than what I could make as a part-time-working college student.
Actually, my biggest gripe about my life right now is that college is inefficient in so many ways (500 person lectures, required classes that are mostly wastes of time, absurd tuition), yet I don't know how I could get the things I like about it (flexible schedule, great peers, some extremely good teachers, excuse to be a student) somewhere else.
Like most college students, I am annoyed that I am poor. I would like a way to sell the spare computing power of my laptop over the Internet to people who would pay for it, like deep learning folks. I would be willing to share 50% of the profits with anyone who can figure out how to do this.
Being a billionaire is evidence more of determination than of luck. I also don't think billionaires believe they are the smartest people in the world. But like everyone else, they have too much faith in their own opinions when it comes to areas in which they're not experts. They just get listened to more.
You can tell pretty easily how good research in math or physics is. But in AI safety research, you can fund people working on the wrong things for years and never know, which is exactly the problem MIRI is currently crippled by. I think OpenAI plans to get around this problem by avoiding AI safety research altogether and just building AIs instead. That initial approach seems like the best option. Even if they contribute nothing to AI safety in the near-term, they can produce enough solid, measurable results to keep the organization alive and attract the best researchers, which is half the battle.
What troubles me is that OpenAI could set a precedent for AI safety as a political issue, like global warming. You just have to read the comments on the HN article to find that people don't think they need any expertise in AI safety to have strong opinions about it. In particular, if Sam Altman and Elon Musk have some false belief about AI safety, who is going to prove it to them? You can't just do an experiment like you can in physics. That may explain why they have gotten this far without being able to give well-thought-out answers on some important questions. What MIRI got right is that AI safety is a research problem, so only the opinions of the experts matter. While OpenAI is still working on ML/AI and producing measurable results, it might work to have the people who happened to be wealthy and influential in charge. But if they hope to contribute to AI safety, they will have to hand over control to the people with the correct opinions, and they can't tell who those people are.
It depends what level of fact checking is needed. Watson is well-suited for answering questions like "What year was Obama born?", because the answer is unambiguous and also fairly likely to be found in a database. I would be very surprised if Watson could fact check a statement like "Putin has absolutely no respect for President Obama", because the context needed to evaluate such a statement is not so easy to search for and interpret.
I'm still fairly skeptical that algorithmically fact-checking anything complex is tractable today. The Google article states that "this is 100 percent theoretical: It’s a research paper, not a product announcement or anything equally exciting." Also, no real insights into NLP are presented; the article only suggests that an algorithm could fact-check relatively simple statements that have clear truth values by checking a large database of information. So if the database has nothing to say about the statement, the algorithm is useless. In particular, such an approach would be unable to fact-check the Fiorina quote you used as an example.
Proposition: how much you should prioritize currently available life extension methods depends heavily on how much you value arbitrary life extension. The exponential progress of technology means that on the small chance a healthier lifestyle* nontrivially increases your lifespan, there is a fairly good chance you get arbitrary life extension as a result. So the outcome is pretty binary: live forever or get an extra few months. If you're content with current lifespans, as most people seem to be, the chance at immortality is probably still small enough to ignore.
*healthier than the obvious (exercise, don't smoke, etc.)
In general there should be a way to outsource forum moderation tasks like these, rather than everyone in charge of a community having to do it themselves.