Depending on the kind of support they're looking for, https://ceealar.org could be an option. At any one time there are a handful of people staying there, working independently on AI Safety stuff.

Wholly agree with the 'if it works it works' perspective.

Two minor niggles are worth mentioning:

  1. As I understand it, eating any amount will signal the body to stop fasting. The overnight fast is the only one most people have, and it seems to be quite important for long-term metabolic health.
  2. Your body has several inputs to its internal clock, and the two most significant are light and food. So there's a pathway where this 'solution' might also be reinforcing the problem: eating in the middle of the night tells your internal clock that it's daytime.

Niggles aside, if it works it works. And nothing is more important than sleep for health. If you are currently chronically sleep-deprived and the usual things aren't helping - absolutely try this.

Thanks Elizabeth for sharing another potential tool for helping people.

So I'm basically the target audience for the OP - I read a lot, of all kinds of stuff, and almost zero papers. I'm an autodidact with no academic background.

I appreciated the post. I usually need a few reminders that 'this thing has value' before I finally get around to exploring it :)

I would say, as the target audience, I'm probably representative when I say that a big part of the reason we don't read papers is a lack of access and a lack of discovery tools. I signed up for Elicit a while back, but as above - haven't gotten around to using it yet :D

In my experience the highest epistemic standard is achieved in the context of 'nerds arguing on the internet'. If everyone is agreeing, all you have is an echo chamber.

I would argue that good faith, high effort contributions to any debate are something we should always be grateful for if we are seeking the truth.

I think the people who would be most concerned with 'anti-doom' arguments are those who believe it is existentially important to 'win the argument/support the narrative/spread the meme' - that truthseeking isn't as important as trying to embed a cultural norm of deep deep precaution around AI.

To those people, I would say: I appreciate you, but there are better strategies to pursue. Collective intelligence has the potential to completely change the game you're trying to play, and you're pretty well positioned to leverage it.

Re: EMH is false, long GOOG

I wish you'd picked a better example.

... but wait, it gets worse

tl;dr: LLMs make search cost much more to serve (generating an answer takes far more compute per query than returning ranked links), and thus significantly threaten GOOG's bottom line.
MSFT knows this, and is explicitly using Bing Sydney as an attack on GOOG.

I'm not questioning the capabilities of GOOG's AI department; I'm sure DeepMind have the shiniest toys.

But it's hardly bullish for their share price if their core revenue stream is about to be decapitated, or perhaps even entirely destroyed. Ad-based revenue has been on shaky ground for a while now, and I don't think it's inconceivable that one day the bottom will fall out.

Re: EMH in general

EMH gets weaker the less attention an asset has, the further out in time the relevant information is (with significant drops around 1yr, 2yr, 5yr), and the more antimemetic that relevant information is (e.g. sin stocks are consistently undervalued because they make people feel bad to think about; most recently we saw this in coal, and I'm kicking myself for not getting in on that trade).

Will GOOG go up? Maybe.
Is GOOG undervalued? Extremely unlikely.

AI Therapy isn't the first domino to fall, AI Customer Service is (it's already falling).

95% of customer service humans can be replaced by a combination of Whisper+GPT; they (the humans) are already barely agentic, just following complex scripts. It's likely that AI customer service will provide a superior experience most of the time (shorter wait times and better audio quality at a minimum, often more competent and knowledgeable too, and plausibly capable of supporting many languages).
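
For concreteness, here's a minimal sketch of that combination, assuming the OpenAI Python SDK; `handle_call`, the Acme Corp script prompt, and the model names are illustrative stand-ins, not any company's actual deployment. (A production system would add text-to-speech on the way back out, plus tool calls into the company's systems.)

```python
# Minimal sketch of the Whisper+GPT customer-service pipeline described above.
# Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set;
# the company name, script prompt, and model choices are all illustrative.
from openai import OpenAI

client = OpenAI()

SCRIPT_PROMPT = (
    "You are a customer service agent for Acme Corp (hypothetical). "
    "Follow the support script: greet the caller, identify the issue, "
    "then resolve it or escalate to a human."
)

def handle_call(audio_path: str) -> str:
    # Step 1 (Whisper): transcribe the caller's audio to text.
    with open(audio_path, "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio,
        )

    # Step 2 (GPT): generate a scripted-but-flexible reply to the transcript.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system", "content": SCRIPT_PROMPT},
            {"role": "user", "content": transcript.text},
        ],
    )
    return completion.choices[0].message.content

print(handle_call("caller_question.wav"))  # hypothetical recording
```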

There are obviously huge cost savings here, so there's a massive incentive for companies to replace humans (which is why it's already started, even with weak chatbots).

Investing in it is tricky - same problem you mentioned at the start, picking which horse is going to win this race; the winners most probably either don't exist yet or aren't publicly tradeable.

Zoom is a potential frontrunner; they acquired Solvvy last year, which suggests some strategic awareness of this trend/potential market.

Thanks for your post, just wanted to contribute by deconfusing ADHD a little (hopefully). I agree that you and OP seem to be agreeing more than disagreeing.

> So speaking from a pretty thorough ignorance of the topic itself, my guess based on my priors is that the problem-ness of ADHD has more to do with the combo of (a) taking in the culture's demand that you be functional in a very particular way combined with (b) a built-in incapability of functioning that way.

Correct. However, that problem-ness is often a matter of survival/highly non-optional. ADHD can be an economic (and thus kinda literal) death sentence; if it weren't for the support of my family, I'd be homeless.

I think what the OP is referring to, why they raised ADHD specifically in this context, is that this habitual conscious forcing/manipulation of our internal state (i.e. dopamine) is a crutch we can't afford to relinquish - without it we fall down, and we don't get back up.

I'm speaking as someone only recently (last year) diagnosed with (and medicated for) ADHD. I am easily twice as functional now as I was before I had medication (and I am still nowhere near as functional as the average person, let alone most of this crowd xD)

And, quite tidily, ADHD is one of the primary reasons I learned to develop slack - why I'm capable of grokking your position. ADHD is a never-ending lesson in the necessity of slack, in learning to let go.

ADHD is basically an extreme version of slack philosophy hardwired into your brain - it's great from a certain perspective, but it also gives us a healthy appreciation for the value of being able to force outcomes, in a 'you don't know what you've got 'til it's gone' sense.

Thanks for this post, it was insightful and perfectly timed; I've been intermittently returning to the problem of trust for a while now and it was on my mind this morning when I found your post.

I think shared reality isn't just a 'warm fuzzies' thing, it's a vital component of cooperation.

I think it's connected with the trust problem; your ability to trust someone is dependent to some degree on a shared reality.

I think these problems have been severely exacerbated by our current technologies and the social landscape they've shaped, but I'm also highly intrigued by the possibility that we can throw this in reverse: that there is an achievable engineering solution, and that with the right technologies we can not only 'fix' trust but empower it far beyond 'baseline'.

I'm interested in talking with anyone who's exploring the trust problem in some way. I think even a 20% effective solution to this problem would be world changing; the trust problem is at (or near) the root of many of the dysfunctional aspects of our civilization.

I'm especially interested in anyone who strongly disagrees with me - about either the importance of the problem or the feasibility of finding a solution.

To start with, I agree.

I really agree: about timescales, about the risks of misalignment, about the risks of alignment. In fact I think I'll go further and say that in a hypothetical world where an aligned AGI is controlled by a 99th-percentile Awesome Human Being, it'll still end in disaster; Homo sapiens just isn't capable of handling this kind of power.[1]

That's why the only kind of alignment I'm interested in is the kind that results in the AGI in control: we 'align' an AGI with some minimum values that anchor it in a vaguely anthropocentric meme-space (e.g. paperclips boring, unicorns exciting) and ensure some kind of attachment/bond to us (e.g. how babies/dogs get their hooks into us), and then just let it fly; GPT-Jesus, take the wheel.

(So yes, the Minds from the Culture)

No other solution works. No other path has a happy ending.[2]

This is why I support alignment research: I don't believe the odds are good, and I don't believe the odds are good even if the technical problem gets solved, but I don't see a future in which Homo sapiens flourishes without benevolent GPT-Jesus watching over us.

Because the human alignment problem you correctly identify as the root of our wider problems - that isn't going away by itself.

  1. ^

    Not a 'power corrupts' argument, just stating the obvious: godlike power directed by monkeylike intelligence doesn't end well, no matter how awesome the individual monkey.

  2. ^

    Maaaaaybe genetic engineering; if we somehow figured out how to create Homo Sapiens 2.0, and they figured out 3.0, and so on.

    This pathway has a greater margin for error, and far fewer dead ends where we accidentally destroy everything. It can go slow, we can do it incrementally, and we can try multiple approaches in parallel; we can have redundancy, backups, etc.

    I think if we could somehow nuke AI capabilities, this path would be preferable. But as it is, AI capabilities are going to reach the finish line before genetics has even left the lab.
