All of 25Hour's Comments + Replies

It's a beautiful dream, but I dunno, man.  Have you ever seen Timnit engage charitably and in good faith with anyone she's ever publicly disagreed with?

And absent such charity and good faith, what good could come of any interaction whatsoever?

This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with, since they consider themselves diametrically opposed to techies/rationalists/etc. and will not engage with them in good faith.  They are also probably a single-digit number of people, albeit a group really good at getting under techies' skin.

1 · Christopher King · 2mo
I'm mostly thinking about cost-benefit. I think even a tiny effort towards expressing empathy would have a moderately beneficial effect, even if only for the people we're showing empathy towards.
2 · the gears to ascension · 2mo
I'd further add: they appropriate the language of anti-appropriation, but are not themselves skilled at recognizing the seeking of equity in social systems. They seem socially disoriented by a threat they see, in a similar way to how I see Yudkowsky crashing communicatively due to a threat. It doesn't surprise me to see them upset at Yudkowsky; both they and Yudkowsky strike me as instantiating the waluigi of their own resistance to a thing, as partly containing the thing they are afraid of. The things they claim to care about are things worth caring about, but I cannot endorse their strategy. Care for workers, but some of the elements of their acronym very much do intend to prioritize that, and it's possible to simply ignore them and just keep on doing the right thing. Nobody can make you be a good person, and if someone is trying to, the only thing you can do is let their emotive words pass over you and treat their thoughtful words as a claim about their own perspective. Like Yudkowsky, their perspectives on the threat are useful. But there's no need for either to dismiss the other, in my view; they see the same threat, and each feels the other side can't see it. Just keep trying to make the world better and it'll solve both their problems. So: anyone have any ideas for how to drastically improve the memetic resistance to confusion attacks of all beings, computer or chemical, and strengthen and broaden caring between circles of concern?

Re: blameless postmortems, I think the primary reason for blamelessness is that if you have blameful postmortems, they will rapidly transform (at least in perception) into punishments, and consequently will not often occur except when management is really cheesed off at someone. This was how the postmortem system ended up at Amazon while I was there.

Blameful postmortems also result in workers who are very motivated to hide issues they have caused, which is obviously unproductive.

2 · the gears to ascension · 2mo
+1; to expand on a related thought: it also seems to me like it connects well to most of the suggested changes that have been sitting as unhandled pull requests for the justice system for the past 50+ years. Various types of restorative, rehabilitative, etc. justice also focus on reducing or, for some offenses, even removing the "blame", even from some of the most blameful of human situations.

Reasonable points, all! I agree that the conflation of legality and morality has warped the discourse around this; in particular, the idea of Stable Diffusion and such regurgitating copyrighted imagery strikes me as a red herring, since the ability to do this is as old as the photocopier and legally quite well understood.

It actually does seem to me, then, that style copying is a bigger problem than straightforward regurgitation, since new images in a style are the thing that you would ordinarily need to go to an artist for; but the biggest problem of all i... (read more)

2 · Brendan Long · 4mo
I'm confused about how style copying is a new problem. You can trivially find people willing and capable of drawing convincing Disney or specific-anime-studio art, and there's an entire town in China [https://en.m.wikipedia.org/wiki/Dafen_Village] dedicated to making paintings in famous styles. This has existed for a long time and the moral panic is just because now scary computers are doing it.
2 · abramdemski · 4mo
I don't think this is true in the short term. Artists are currently dealing with issues like scam social media accounts which copy their style and claim to be the artist. (Not sure how big this is, I only heard about this as a rumor -- but it's something that is now possible, where before you'd only be able to do something like this by re-posting existing works.)

Interestingly, I believe this is a limitation that one of the newest (as-yet-unreleased) diffusion models, called DeepFloyd, has overcome; a number of examples have been teased already, such as the following corgi sitting in a sushi doghouse:

https://twitter.com/EMostaque/status/1615884867304054785?t=jmvO8rvQOD1YJ56JxiWQKQ&s=19

As such, the quoted paragraphs surprised me as an instance of a straightforwardly falsifiable claim in the documents.

I think that your son is incorrectly analogizing heroin/other opiate cravings to be similar to "desire for sugar" or "desire to use X social media app" or whatever.  These are not comparable.  People do not get checked into sugar rehab clinics (which they subsequently break out of); they do not burn down each one of their social connections to get to use an hour of TikTok or whatever; they do not break their own arms in order to get to go to the ER which then pumps them full of Twitter likes.  They do routinely do these things, and worse, to... (read more)

I actually think you can get an acceptable picture of whether something is priced in by reading stock analysts on the topic, since one useful thing you can get from them is a holistic perspective of what is on/off the radar of finance types, and what they perceive as important.

Having done this for various stocks, I actually do not think LLM-based advances are on anyone's radar, and I do not believe they are priced in meaningfully.

I don't think I ever heard about Tesla doing LLM stuff, which seems like the most relevant paradigm for TAI purposes. Can you elaborate?

One possible options play is puts on Shutterstock, since as of about two weeks ago Midjourney got up to a level where you can, for a pittance, replicate the most common and popular stock image varieties at an extremely high level of quality (e.g. a girl holding a credit card and smiling).

I think the most likely way this shakes out is that Adobe integrates image generation with Figma and its other products, leaving "buying a stock image" as an increasingly niche and limited option for people who want an image to decorate a thing and aren't all that particular about what the image is.

The primary question to me is on what time scale the SSTK business model dissolves, since these changes take time.

Answer by 25Hour · Sep 27, 2022 · 92

Having a Ph.D. confers relatively few benefits outside of academia. The writing style and skills taught in academia are very different from those of industry, and the opportunity cost of pursuing a Ph.D. vs. going into software engineering (or something similarly remunerative) is in the hundreds of thousands of dollars.

I would suggest that if you don't know exactly what you want to do with your life, you would be well-suited to doing something that earns you a bunch of money. This money can later be used to finance grander ambitions when you have figu... (read more)

My response comes in two parts.

First part!  Even if, by chance, we successfully detect and turn off the first AGI (say, Deepmind's), that just means we're "safe" until Facebook releases its new AGI.  Without an alignment solution, this is a game we play more or less forever until either (A) we figure out alignment, (B) we die, or (C) we collectively, every nation, shutter all AI development forever.  (C) seems deeply unlikely given the world's demonstrated capabilities around collective action.

Second part:

I like Bitcoin as a proof-of-concept... (read more)

2 · mukashi · 1y
First part: it seems we agree! I just consider that A is more likely, because you are already in a world where you can use those AGIs to produce results. This is what a pivotal act would look like. EY et al. would argue this is not going to happen because the first machine will already kill you. What I am criticizing is the position in the community where it is taken for granted that AGI = doom.

Second part: I also like that scenario! I don't consider it especially unlikely that an AGI would try to survive like that. But watch out: you can't really derive from here that the machine will have the capacity to kill humanity, only that a machine might try to survive like this. If you want to continue with the Bitcoin analogy, nothing prevents me from forking the code to create Litecoin and tuning the utility function to make it work for me.

"If you think this is a simplistic or distorted version of what EY is saying, you are not paying attention. If you think that EY is merely saying that an AGI can kill a big fraction of humans in accident and so on but there will be survivors, you are not paying attention."

Not sure why this functions as a rebuttal to anything I'm saying.

3 · mukashi · 1y
Sorry, it is true that I wasn't clear enough and that I misread part of your comment. I would love to give you a properly detailed answer right now but I need to go, will come back to this later

You ask elsewhere for commenters to sit down and think for five minutes about why an AGI might fail. This seems beside the point, since averting human extinction doesn't require averting one possible attack from an AGI; it requires averting every single one of them, because if even one succeeds, everyone dies.

In this it's similar to human security-- "why might a hacker fail" is not an interesting question to system designers, because the hacker gets as many attempts as he wants. For what attempts might look like, I think other posts have provided some reas... (read more)

2 · mukashi · 1y
Three things.

1. "averting human extinction doesn't require averting one possible attack from an AGI; it requires averting every single one of them, because if even one succeeds, everyone dies." Why do you think that humans won't retaliate? Why do you think that an AGI, knowing that humans will retaliate, will attack in the first place? Why do you think that this won't give us a long enough time window to force the machine to work on specific plans?

2. "In this it's similar to human security-- 'why might a hacker fail' is not an interesting question to system designers, because the hacker gets as many attempts as he wants. For what attempts might look like, I think other posts have provided some reasonable guesses." I guess that in human security you assume that the hacker can succeed at stealing your password and take countermeasures to avoid that. You don't assume that the hacker will break into your house and eat your babies while you are sleeping. This might sound like a strange point, but hear me out for a second: if you have that unrealistic frame to begin with, you might spend time not only protecting your computer, but also building a 7 m wall around your house and hiring a professional bodyguard team. Having false beliefs about the world has a cost. In this community specifically, I see people falling into despair because doom is getting close, and failing to see potential solutions to the alignment problem because they have unrealistic expectations.

3. Imagine that an AGI distributes itself among human computer systems in the same way as bitcoin mining software does today. That IS a possibility, and I lack the knowledge myself to evaluate the likelihood of such a scenario. Which leaves me more or less as I was before: maybe it is possible to do that, but maybe not. The little I know suggests that a model like that would be pretty heavy and not easily distributable across the internet.
-1 · mukashi · 1y
Besides the point? That is very convenient for people who don't want to find out that they are wrong. Did you read what I am arguing against? I don't think I said at any point that an AGI won't be dangerous. Can you read the last paragraph of the article, please?

But it's such a good pun!

I'm not sure whether the unspoken context of this comment is "We tried to hire Terry Tao and he declined, citing lack of interest in AI alignment" vs "we assume, based on not having been contacted by Terry Tao, that he is not interested in AI alignment."

If the latter: the implicit assumption seems to be that if Terry Tao would find AI alignment to be an interesting project, we should strongly expect him to both know about it and have approached MIRI regarding it, neither of which seems particularly likely given the low public profile of both AI alignment in general and MIRI in particular.

If the former: bummer.

From a rando outsider's perspective, MIRI has not made any public indication that they are funding-constrained, particularly given that their donation page says explicitly that:

We’re not running a formal fundraiser this year but are participating in end-of-year matching events, including Giving Tuesday.

Which more or less sounds like "we don't need any more money, but if you want to give us some, that's cool".

It might be worth doing some goal-factoring on why you want the PhD in the first place.

If you just want to advance human knowledge, one plausible option is to get a fancy tech job, save up enough money to fund the project you're interested in, then commission someone to do the project.  Feasibility naturally depends on the specifics of the project.

PhDs can involve dealing with a lot of financial insecurity and oftentimes personal hardship to get through (with six years of opportunity cost and no guarantee of getting funding for your research interests at the end), so it's probably worth verifying that a PhD is actually your best option for whatever your personal goals are.

>  Give Terrence Tao 500 000$ to work on AI alignement six months a year, letting him free to research crazy Navier-Stokes/Halting problem links the rest of his time... If money really isn't a problem, this kind of thing should be easy to do.

That idea has literally been proposed multiple times that I know of, and probably many more times in years past, before I was around.


What was the response?  (I mean, obviously it was "not interested", otherwise it would've happened by now, but why?)

8 · Daniel Kokotajlo · 1y
IDK, I would love to know! Like I said, it's a mystery to me why this is happening. I do remember some OpenPhil people telling me once that they tried paying high-caliber / excellent-CV academics a wad of cash to work on research topics relevant to OpenPhil... and they succeeded in getting them to do the research... but the research was useless.
6 · Yitz · 1y
Seconding this, considering we’ve got plenty of millionaires (and maybe a few billionaires?) in our community, I don’t see any good reason not to try something along these lines.

I think you're totally right that, to the extent the stock market is a zero-sum game, retail traders will lose almost every time, since the big players on the other end will always have more information, and more power to leverage that information, than retail.

I think a lot of the relevance of this comment depends on your view of stock-market-as-casino vs. stock-market-as-generator-of-wealth-at-several-steps-removed.  I take the view that it's mostly the latter; a widget maker IPOs, accepts money from a big institutional IPO investor, and buys capital with it i... (read more)

2 · Nicole Dieker · 1y
agreed agreed agreed but hey guess what the market rebounded today so yay for that?

I'd like to see the intuition expanded upon here:

And yet when I write that, I start asking myself “but what is a dollar if not an investment that is only worth what someone else is willing to trade for it” and then “wait, what if a stock is a better investment than a dollar” and then “no no no no no investing on top of investing is like double risk” 

Is it double risk?  We're going from a situation where we're talking to a widget producer and saying "yes I would like to exchange a dollar for a widget" to a situation where we're saying "I would lik... (read more)

2 · Nicole Dieker · 1y
That paragraph was meant to be less intuitive and more "wait if you really follow this line of thought it takes you to some nonsensical arenas..." But we don't get to say "I'd like to exchange a fractional share of Microsoft for a widget." You can only exchange a fractional share of Microsoft for A) cash or B) shares in something else, and you can only do so if someone else is willing to make the trade. There are situations in which you could have an asset you want to sell and nobody wants to buy it, [https://www.investopedia.com/ask/answers/selling-bear-market-does-your-broker-buy-your-shares/] which is also true for other assets like houses (and, if you own a business, whatever your business produces [and, if you are a worker with specific skills, the value those skills could bring to an employer]). As to your last point, there's a non-trivial reason why some people suggest stockpiling a year's worth of food...

I look forward to Thursdays specifically for these updates.

This is an interesting argument!  I certainly acknowledge that if you can become non-obese via purely dietary means, that is best.

I wonder whether your analogy holds in the circumstance where dietary means have been attempted and have failed, as often happens judging by the truly staggering number of posts online on this very topic: whether becoming non-obese via medication constitutes a short-term win outweighed by long-term detriments, and whether the effects of the pills turn out to be more harmful than the original obesity they were meant to treat.

But it's not totally clear to me that you have attempted to make an affirmative case for this being true, as opposed to suggesting it as a pure hypothetical.

Oh, Wellbutrin (bupropion) is totally a thing you can use for weight loss, and is even found in Contrave (one of the drugs I listed) for that reason.  Lesser effect on its own, though, since its weight-loss effects are additive with naltrexone's.

Berberine is one I hadn't heard of before; unfortunately I can't find any articles discussing its use in weight loss.

2 · romeostevensit · 2y
whoops, meant metformin. Always confuse those two.

I suppose that's reasonable, though I will point out that this is a fully general argument against taking any drugs long-term at all.

0 · ChristianKl · 2y
Yes, you should generally minimize the amount of drugs you take long-term. 

Oh, I'm guessing based on purely correlational studies, with all the uncertainty and fuzziness that implies. Added a disclaimer to the relevant section to this effect, since it's worth calling out.  

That said, I'd be shocked if the whole effect were due to confounders, since there are so many negative conditions comorbid with obesity, and some animal studies also point in the direction of improved lifespan with caloric restriction.

Unfortunately, we don't have the ability to run controlled studies over a human lifespan, so we ... (read more)

1 · ChristianKl · 2y
Pretending you know something that you don't is not something you want to do in the presence of incomplete information. In this case you would want to look for studies that actually look at the lifespan effects of successful weight loss. Estimating confidence intervals to be more explicit about one's uncertainty is also helpful.
2 · gilch · 2y
Is that really how all of them work? In the case of ECA, I thought it was due to increased metabolism. But it might also have an effect on appetite. And even when it is, is that good enough? It's possible for dietary changes to promote weight loss, but still be unhealthy. If you just eat junk food, and then the drugs reduce your appetite so you eat less food, but it's still junk food, then technically that's "dietary changes", but you're still not getting the micronutrients, fiber, prebiotics, and possibly bacteria that you would from fruits and vegetables. To the extent that the poor health is caused by excess Calories, it helps. But to the extent that poor health is caused by eating the wrong things, then simply eating less of them can only go so far. Of course, I expect that using the prescription drugs as directed would be a last resort after dietary improvements prove insufficient, but doctors can only do so much to influence behavior.

Yup!  It's branded as "Topamax", but I've heard that some users refer to it as "Stupamax" because of the brain fog effect.  It doesn't sound awesome.

Also, it sounded like it increases the probability of getting a kidney stone by a lot, though I'd need to track down the reference.  All told, it feels like one of the worse options out there.

As far as I understand it, "combination" drugs don't really do anything together that each component doesn't do alone.  For example, bupropion causes weight loss if you take it alone; it just causes more when you pair it with naltrexone, which also causes weight loss.

1 · noggin-scratcher · 2y
Good to know, thanks.  I could have imagined a case where the weight-loss effects were coming solely from the phentermine, with topiramate added to the combination for other reasons; but it having its own independent/added effect makes sense. For the person I know taking it, her appetite does seem quite reduced since switching from amitriptyline (I think that one increases appetite, so there will have been a double effect from switching off that and onto something that suppresses it). The brain fog is unfortunate, but also still less bad overall than the migraines were.

Also, good point about highlighting the uncertainty; I've added a disclaimer to that effect at the beginning of the section.

Can you give any examples of that happening, where a drug reduces lifespan but not by causing any specific fatal effect?

6 · ChristianKl · 2y
There's very little money in investigating causation in cases like this, but sleeping pills [https://sethroberts.net/2013/01/11/dangerous-sleeping-pills/] would be one example. In general, many drugs force the liver to do extra work and put stress on it.

All fair points!  That said, I think extended lifespan is a very reasonable thing to expect, since IIRC from longevity research, caloric restriction extends lifespan in animal studies; this seems like a very natural extrapolation from that.

2 · gilch · 2y
I think metformin was supposed to have effects similar to caloric restriction, and does appear to reduce all-cause mortality, even though most users are diabetic.

I'd be concerned that our instincts toward vengeance in particular would result in extremely poor outcomes if you give humans near-unlimited power (which is mostly granted by being put in charge of an otherwise-sovereign AGI); one potential example is the AGI controller sending a murderer to an artificial, semi-eternal version of Hell as punishment for his crimes.  I believe there's a Black Mirror episode exploring this.  In a hypothetical AGI good outcome, this cannot occur.

The idea of a committee of ordinary humans, ems, and semi-aligned AI whi... (read more)

I'd definitely agree with this.  Human institutions are very bad at making a lot of extremely crucial decisions; the Stanislav Petrov story, the Holocaust, and the prison system are all pretty good examples of cases where the institutions humans have created have (1) been invested with a ton of power, human and technological, and (2) made really terrible decisions with that power that either could have caused or did cause untold suffering.

Which I guess is mostly a longer way of saying +1.