throws a Bayes point at you
Hi Nate, great respect. Forgive a rambling stream-of-consciousness comment.
Without the advantages of maxed-out physically feasible intelligence (and the tech unlocked by such intelligence), I think we would inevitably be overpowered.
I think you move to the conclusion "if humans don't have AI, aliens with AI will stomp humans" a little too quickly.
Hanson's estimate of when we'll meet aliens is 500 million years. I know very little about how Hanson estimated that & how credible the method is, and you don't appear to either: that might be worth investigating...
This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.)
Unrelatedly, but from the same advert: I had not realized it was that expensive. This rings some alarm bells for me, but maybe it is fine; it is, in fact, a medical service. I have been waffling back and forth and will conclude I don't know enough of the details.
Regardless, the alarm bells still made me want...
Note: I didn't read the HPMOR advert, I read the one here on LW which is different. It starts like this:
In a world where 85% of doctors can't solve simple Bayesian word problems...
In a world where only 20.9% of reported results that a pharmaceutical company tries to investigate for development purposes fully replicate...
In a world where "p-values" are anything the author wants them to be...
...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...
...there's also MetaMed. Instead of
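The "simple Bayesian word problem" the advert alludes to is usually the classic medical-screening example (the specific numbers below are the commonly cited ones, used here as an assumption, not something stated in the advert). The arithmetic is short enough to sketch:

```python
# Classic screening example often used to test doctors on Bayes' rule:
# 1% base rate of disease, a test with 80% sensitivity that also
# false-alarms on 9.6% of healthy patients. What is P(disease | positive)?
prevalence = 0.01
sensitivity = 0.80
false_positive_rate = 0.096

# Total probability of a positive test, over sick and healthy patients.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = sensitivity * prevalence / p_positive

print(f"{posterior:.1%}")  # about 7.8%, far below the ~80% many doctors guess
```

The punchline is that the posterior is dominated by the low base rate, which is exactly the step the surveyed doctors tend to skip.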
I think Constantin's postmortem is solid and I appreciate it. She says this:
But there was a mindset of “we want to give people the best thing, not the thing they want, and if there’s a discrepancy, it’s because the customer is dumb.” I learned from experience that this is just not true -- when we got complaints from customers, it was often a very reasonable complaint, due to an error I was mortified that we’d missed.
As she says in the linked thread, Zvi's postmortem is "quite different." Constantin discusses the faults of their business...
That is a fair point! I don't think Zvi et al. are obligated, and I'm not like, going to call them fraudster hacks if they're not interested.
I said this more with the hopes that people frustrated with unaccountable governance would want to seize the mantle of personal responsibility, to show everyone that they are pure and incorruptible and it can be done. My post came across as more of a demand than I meant it to, which I apologize for.
Organizations can distribute their money how they want. My concern here is more "can pillars of the rat community get fund...
EY endorsed it strongly (which I believe counts as a ding on his track record if anyone is maintaining that anywhere)
I don't think it's a ding on his track record.
This is a basic prosocial act. You haven't made an argument that the product was low-quality; the failure of the company only shows that there wasn't enough of a market for that particular product to sustain the company. For the most part I'm glad Eliezer advertised it while I could still buy it, it seems...
I'm sorry, this may come across as very rude, but:
MetaMed, a startup both you and Vance were on, failed abjectly and then received precious little coverage or updating from the broader rat community (as far as I've seen).
I am happy to believe your skills have improved or that the cause area is better (though this one is so nebulously ambitious that I can't help but feel a cold churn of pessimism). Certainly, demanding that every project a person attempts must meet with success is too high a bar.
But this time I would like to see you and your cofounders hold yourselves accountable to keep the communities funding you informed...
If you haven't seen it, there's a thread here with links to Sarah Constantin's postmortem and Zvi's semi-postmortem, plus another comment from each of them.
I'll excerpt Zvi's comment from that thread:
Most start-ups fail. Failing at a start-up doesn't even mean that you, personally are bad at start-ups. If anything the SV-style wisdom is that it means you have experience and showed you will give it your all, and should try again! You don't blow your credibility by taking investor money, having a team that gives it their all for several years, and coming up
> But this time I would like to see you and your cofounders hold yourselves accountable to keep the communities funding you informed
I love postmortems, but community accountability seems like a weird frame here. Presumably the people funding this org have asked some questions and were satisfied, and that's pretty separate from a public disclosure.
That is a cute idea but they'd do it right away [>95%]. Even if you just gave it to like five moderators. They are largely conflict theorists who believe rationalists are [insert the strongest politically derogatory terms imaginable] and LW being down is morally good.
Maybe if there were real stakes they would consider it, like an independent party making a donation to both MIRI and an organization of SC's choice — except on second thought, I think they would find this too objectionable: "wow, you'll donate to charity but only if you get to humiliate me...
Perhaps ironically/terrifyingly, I think the LW/Sneerclub Petrov Day experiment is most interesting if it actually destroys the whole site forever, rather than symbolically taking down one page for a day. This is more analogous to the US/Soviets and their goals + level of hostility.
(Although I expect that deal to still be heavily lopsided in favor of SneerClub, given that SneerClub's main goal seems more like "fuck LW" than "have a functioning nice community")
I tend to have trouble evaluating this sort of thing due to cherry-picking.
Sam Altman made a twitter post; you can see 20 user-submitted prompts and their output at https://twitter.com/i/events/1511763146746212353, which might help a little if you want to build a model of the thing's strength.
The surviving worlds look like people who lived inside their awful reality and tried to shape up their impossible chances; until somehow, somewhere, a miracle appeared - the model broke in a positive direction, for once, as does not usually occur when you are trying to do something very difficult and hard to understand, but might still be so - and they were positioned with the resources and the sanity to take advantage of that positive miracle, because they went on living inside uncomfortable reality.
Can you talk more about this? I'm not sure what actions you want people to take based on this text.
What is the difference between a strategy that is dignified and one that is a clever scheme?
I may be misunderstanding, but I interpreted Eliezer as drawing this contrast: