George3d6

Old man 1: Life is one trouble after another. I'd be better off dead; better yet, I wish I had never been born.

Old man 2: True, true, but who has such luck?.. Maybe one in a thousand.

My blog: https://cerebralab.com

I'm also building an open-source generic ML library: https://github.com/mindsdb/mindsdb & https://github.com/mindsdb/lightwood, which I guess might be of interest to some people here.


Comments

Would the stakes be high enough to get participants in the market? m&m always seemed fairly unreliable to me: hype is required to generate answers, and answers can be heavily biased due to the lack of a real incentive.

Still, if you'd be down to create the markets yourself, or know someone who would, I'm pretty sure the author would be OK with sharing more specific predictions around TOGETHER.

Given that this was my claim and not the author's, I'd rather not discuss it, since it would detract from the point.

But tl;dr, strength aside: if you count all trials, ivermectin's efficacy is arguably higher than Paxlovid's; once you start eliminating and de-biasing trials, the story changes, and the direction and magnitude of that change is the whole story that generated e.g. Scott's ivermectin post and this reply.

Awesome, I myself will be 25-30 minutes late, but will catch you guys there :)

Will you have a sign on the table or some such? Also, it might be worth emailing everyone else, since LW is horrible at notifications about these events.

Given that rain is forecasted, are we moving the meeting to an indoor space or to another day with a better chance of sun?

There are several things at the extreme of non-quantifiable:

  1. There's "data" which can be examined in so much detail by human senses (which are intertwined with our thinking) that it would be inefficient to extract even with SF-level machinery. I gave as an example being able to feel another person's muscles and the tension within (hence the massage chair, but I agree smart massage chairs aren't that advanced, so it's a poor analogy). Maybe a better example is "what you can tell from looking into someone's eyes".
  2. There's data that is interwoven with our internal experience. So, for example, I can't tell you the complex matrix of muscular tension I feel, but I can analyze my body and almost subconsciously decide "I need to stretch my left leg". Similarly, I might not be able to tell you what the perfect sauce is for me, or what patterns of activity it triggers in my brain, or how its molecules bind to my taste buds, but I can keep tasting the sauce and adding stuff and conclude "voila, this is perfect".
  3. There are things beyond data that one can never quantify, like revelations from god or querying the global consciousness or whatever

I myself am pretty convinced there are a lot of things falling under <1> and <2> that are practically impossible to quantify (not fundamentally or theoretically impossible), even provided 1000x better camera, piezo, etc. sensors, and even provided 0.x nm transistors making perfect use of all 3 dimensions in their packing (so, something like 1000x better GPUs).

I think <3> is false and I mainly make fun of the people who believe in it (I've taken enough psychedelics not to be able to say this conclusively, but still). However, I still think it will be a generator of disagreement with AI alignment for the vast majority of people.

I can see very good arguments that both <1> and <2> are not critical and not that hard to quantify, and obviously that <3> is a giant hoax. Alas, my positions on those have remained unchanged, hence why I said a discussion around them may be unproductive.

  • I'm not arguing you should take any position on those axes; I am just suggesting them as potential axes.
  • I think that falling on one extreme of the spectrum is equivalent to thinking the spectrum doesn't exist -- so yes, I guess people that are very aligned with a MIRI style position on AI wouldn't even find the spectrum valid or useful. Much like, say, an atheist wouldn't find a "how much you believe in the power of prayer" spectrum insightful or useful. This was not something I considered while originally writing this, but even with it in mind now, I can't think of any way I could address it.
  • As for your object-level arguments against the spectrums I present being valid, and/or for one extreme being nonsensical: I can't say that, right now, I could add anything of much value on those topics that you haven't probably already considered yourself.

To address your later point, I doubt I fall into that particular fallacy. Rather, I'd say I'm at the opposite end of the spectrum, where I'd consider most people and institutions to be beyond incompetent.

Hence why I've reached the conclusion that improving on rationally legible metrics seems low ROI, because otherwise rationlandia would have arisen and ushered in prosperity and unimaginable power in a seemingly dumb world.

But I think that's neither here nor there; as I said, I'm really not trying to argue that my view here is correct, I'm trying to figure out why wide differences in view exist in both directions.

A negative utilitarian could easily judge that something with the side effect of making people infertile would cause far less suffering than not doing it, even while causing immense real-world suffering amongst the people who wanted to have kids and ending civilizations. If they were competent enough, or the problem slightly easier than expected, they could use a disease that did that without obvious symptoms, and end humanity.


But you're thinking of people completely dedicated to an ideology.

That's why I'm saying a "negative utilitarian charter" rather than "a government formed of people autistically following a philosophy"... much like, e.g. the US government has a "liberal democratic" charter, or the USSR had a "communist" charter of sorts.

In practice these things don't come about because members of the organization disagree, secrets leak, conspiracies are throttled by lack of consensus, politicians are voted out, and engineered solutions are imperfect (and good engineers and scientists are aware of as much).

For gwern's specific story, I agree it's somewhat implausible that one engineer (though with access to corporate compute) trains Clippy and there aren't lots of specialized models;

I think the broader argument of "can language models become gods" is a separate one.

My sole objective there was to point out flaws in this particular narrative (which hopefully I stated clearly in the beginning).

I think the "can language models become gods" debate is broader and I didn't care much to engage with it, superficially it seems that some of the same wrong abstractions that lead to this kind of narrative also back up that premise, but I'm in no position to make a hands-down argument for that.


The rest of your points I will try to answer later; I don't particularly disagree with any of them stated that way, except on the margins (e.g. GLUE is a meaningless benchmark that everyone should stop using -- a weak but readable take on this: https://www.techtarget.com/searchenterpriseai/feature/What-do-NLP-benchmarks-like-GLUE-and-SQuAD-mean-for-developers), but I don't think the disagreements are particularly relevant(?)

Last time I checked that wouldn't work for a sizeable amount. Maybe I'm wrong? I claim no expertise in crypto and, as I said, I think that's my weakest point. In principle, I can see smart-contract-based swapping with a large liquidity pool + an ETH VM sidechain being sufficient to do this.

It wouldn't fit the exact description in the story, but it would serve roughly the same purpose and be sufficient if you assume the ETH VM-optimized sidechain has enough volume (or a similar thing, with whatever would dethrone ETH in 20xx).
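For concreteness, here's a minimal, purely illustrative sketch of the constant-product (x·y = k) pricing rule that Uniswap-style liquidity pools use, which is the mechanism I'm gesturing at with "smart-contract-based swapping with a large liquidity pool". The function name, reserve sizes, and fee are made up for illustration, not taken from any specific protocol:

```python
# Illustrative constant-product AMM swap (x * y = k), as used by
# Uniswap-style pools. Reserves and fee below are hypothetical.

def swap_output(reserve_in: float, reserve_out: float,
                amount_in: float, fee: float = 0.003) -> float:
    """Amount of the output asset received, keeping the product of
    reserves constant after the fee-adjusted trade."""
    amount_in_after_fee = amount_in * (1 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee
    new_reserve_out = (reserve_in * reserve_out) / new_reserve_in
    return reserve_out - new_reserve_out

# A deep enough pool barely moves the price even for a sizeable trade,
# which is the "enough volume / liquidity" condition mentioned above.
print(swap_output(reserve_in=50_000_000, reserve_out=25_000, amount_in=100_000))
```

The point being that whether this suffices depends almost entirely on pool depth: with small reserves the same trade would incur heavy slippage, which is why I hedge on "enough volume".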

See the cases above: even if you assume asymmetry (how does using banks square with that belief?), you're still left with the adversarial problem that all easy-to-claim exploits are taken, and new exploits are usually found on the same (insecure, old) software and hardware.

So all exploitable niches are close to saturation at any given time if an incentive exists (and it does) to find them.
