Old man 1: Life is one trouble after another. I'd be better off dead, better yet, I wish I was never born
Old man 2: True, true, but who has such luck?.. Maybe one in a thousand.
My blog: https://cerebralab.com
I'm also building an open source generic ML library: https://github.com/mindsdb/mindsdb & https://github.com/mindsdb/lightwood .... which I guess might be of interest to some people here
Would the stakes be high enough to get participants in the market? m&m always seemed fairly unreliable to me: hype is required to generate answers, and answers can be heavily biased due to the lack of a real incentive.
Still, if you'd be down for creating the markets yourself, or know someone who would, I'm pretty sure the author would be OK sharing more specific predictions around TOGETHER.
Given that this was my claim and not the author's I'd rather not discuss it since I'd detract from the point.
But tl;dr, strength aside: if you count all trials, ivermectin's efficacy is arguably higher than Paxlovid's; once you start eliminating and de-biasing, the story changes, and the direction and magnitude of that change is the whole story that generated e.g. Scott's ivermectin post and this reply.
Awesome, I myself will be 25-30 minutes late, but will catch you guys there :)
Will you have a sign on the table or some such? Also, it might be worth emailing everyone else, since LW is horrible at notifications about these events.
Given that rain is forecast, are we moving the meeting to an indoor space, or to another day with a better chance of sun?
There are several things at the extreme of non-quantifiable:
I myself am pretty convinced there are a lot of things falling under <1> and <2> that are practically impossible to quantify (not fundamentally or theoretically impossible), even given 1000x better cameras, piezos, and other sensors, and even given 0.x nm transistors making perfect use of all 3 dimensions in their packing (so, something like 1000x better GPUs).
I think <3> is false and mainly make fun of the people who believe in it (I've taken enough psychedelics not to be able to say this conclusively, but still). However, I still think it will be a generator of disagreement with AI alignment for the vast majority of people.
I can see very good arguments that both <1> and <2> are non-critical and not that hard to quantify, and obviously that <3> is a giant hoax. Alas, my positions on those have remained unchanged, hence why I said a discussion around them may be unproductive.
To address your later point, I doubt I fall into that particular fallacy. Rather, I'd say I'm at the opposite end of the spectrum, where I'd consider most people and institutions to be beyond incompetent.
Hence why I've reached the conclusion that improving on rationally legible metrics seems low-ROI: otherwise rationalandia would have arisen and ushered in prosperity and unimaginable power in a seemingly dumb world.
But I think that's neither here nor there; as I said, I'm really not trying to argue that my view here is correct, I'm trying to figure out why wide differences in view exist in both directions.
A negative utilitarian could easily judge that something with the side effect of making people infertile would cause far less suffering than not doing it, despite causing immense real-world suffering among the people who wanted to have kids, and ending civilizations. If they were competent enough, or the problem slightly easier than expected, they could use a disease that did this without obvious symptoms, and end humanity.
But you're thinking of people completely dedicated to an ideology.
That's why I'm saying a "negative utilitarian charter" rather than "a government formed of people autistically following a philosophy"... much like, e.g. the US government has a "liberal democratic" charter, or the USSR had a "communist" charter of sorts.
In practice these things don't come about because members of the organization disagree, secrets leak, conspiracies are throttled by lack of consensus, politicians get voted out, and engineered solutions are imperfect (and good engineers and scientists are aware of as much).
For gwern's specific story, I agree it's somewhat implausible that one engineer (though with access to corporate compute) trains Clippy and there aren't lots of specialized models;
I think the broader argument of "can language models become gods" is a separate one.
My sole objective there was to point out flaws in this particular narrative (which hopefully I stated clearly in the beginning).
I think the "can language models become gods" debate is broader, and I didn't care much to engage with it. Superficially, it seems that some of the same wrong abstractions that lead to this kind of narrative also back up that premise, but I'm in no position to make a definitive argument for that.
The rest of your points I will try to answer later. I don't particularly disagree with any, stated that way, except on the margins (e.g. GLUE is a meaningless benchmark that everyone should stop using; a weak but readable take on this: https://www.techtarget.com/searchenterpriseai/feature/What-do-NLP-benchmarks-like-GLUE-and-SQuAD-mean-for-developers ), but I don't think the disagreements are particularly relevant(?)
Last time I checked, that wouldn't work for sizeable amounts. Maybe I'm wrong? I claim no expertise in crypto and, as I said, I think that's my weakest point. In principle, I can see smart-contract-based swapping with a large liquidity pool plus an ETH-VM sidechain being sufficient to do this.
It wouldn't fit the exact description in the story, but it would serve roughly the same purpose, and be sufficient if you assume the ETH-VM-optimized sidechain has enough volume (or a similar setup, with whatever overthrows ETH in 20xx).
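For concreteness, the kind of smart-contract-based swapping against a liquidity pool I have in mind can be sketched as a toy constant-product AMM (the x*y=k mechanism behind e.g. Uniswap v2). This is purely illustrative; the class, numbers, and fee here are made up and nothing from the story:

```python
# Toy constant-product AMM: the pool maintains reserve_a * reserve_b >= k
# across swaps, so prices adjust automatically and no order book is needed.
class LiquidityPool:
    def __init__(self, reserve_a: float, reserve_b: float, fee: float = 0.003):
        self.reserve_a = reserve_a  # e.g. ETH held by the pool
        self.reserve_b = reserve_b  # e.g. a stablecoin held by the pool
        self.fee = fee              # 0.3% swap fee (Uniswap-v2 style)

    def swap_a_for_b(self, amount_a: float) -> float:
        """Deposit amount_a of token A, receive token B per x*y=k."""
        amount_in = amount_a * (1 - self.fee)          # fee stays in the pool
        k = self.reserve_a * self.reserve_b            # invariant before swap
        amount_out = self.reserve_b - k / (self.reserve_a + amount_in)
        self.reserve_a += amount_a
        self.reserve_b -= amount_out
        return amount_out

# A deep pool means low slippage: swapping 10 A (1% of reserves) moves
# the price only slightly, so the trader gets close to the spot rate.
pool = LiquidityPool(1_000.0, 2_000_000.0)
out = pool.swap_a_for_b(10.0)
print(round(out, 2))
```

The "large liquidity pool" condition in my comment is exactly what keeps slippage tolerable here: the price impact of a swap scales with its size relative to the reserves.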
See the cases above: even if you assume asymmetry (how does using banks square with that belief?), you're still left with the adversarial problem that all easy-to-claim exploits are taken, and new exploits are usually found in the same (insecure, old) software and hardware.
So all exploitable niches are close to saturation at any given time, given that an incentive exists (and it does) to find them.