I was hoping to understand why people who are concerned about the climate ignore greentech/SRM (solar radiation management).
One effect is that people who want to raise awareness about the severity of an issue have an incentive to avoid acknowledging solutions to it, because acknowledging solutions diminishes its severity. But this is an egregore-level phenomenon; as far as I can tell, there is no individual negative cognitive disposition driving it.
Mostly, in the case of climate, it seems to be driven by a craving for belonging in a political scene.
There's a sense in which negativity bias is just rationality: you focus on the things you can improve, because that's where the work is. These things are sometimes called "problems". The thing is, the healthy form of this is aware that the work can actually be done, so it should be very interested in, and aware of, technologies of existential safety, and that is where I am and have been for a long time.
I notice it also ensures that if the participants know anything at all about the research, they know it's supposed to be voluntary. Even if they're still forced to sign it, they learn that the law is supposed to be on their side and that there is, in theory, someone they could call for help.
Probably has something to do with the fact that a catastrophe is an event, and safety is an absence of something. It's just inherently harder to point at a thing and say that it caused fewer catastrophes to happen. Show me the non-catastrophes. Bring them to me, put them on my table. You can't do it.
https://magnusvinding.com/2023/06/11/what-credible-ufo-evidence/ is a good roundup of the reports taken most seriously.
I think what's so crushing about it is that it reminds me that the wrong people are designing things, and that they won't allow them to be fixed, and I can only find solace in thinking that the inefficiency of their designs is also a sign that they can be defeated.
There's something very creepy to me about the part of research consent forms where it says "my participation was entirely voluntary."
Yes, if this has truly never reached Congress. I'm kinda under the impression that some members of Congress (and especially presidents) have probably seen it, and that for whatever reason, once they knew, all of these people decided not to publicize it.
And that could just keep happening.
I don't see possible next steps for simultaneous disclosure of superpower military R&D from all sides.
This might not be possible until the danger of arms-races in agentic AI has become more obvious. I'm not familiar enough with the nuclear situation to say whether it's feasible today, but it probably will be at some point in the near future.
It seems probable to me that monitoring has, over the past 40 years, become a lot cheaper and more feasible than our geopolitical institutions recognize.
Increases in mutual transparency may have to come in train wit...
I haven't seen any civilians, including on LW, actually weigh the cost of asymmetric disclosure of US military R&D, which this would probably require. So I think you're really going to have to systematize this framework a bit more before you can justify the "let's see them aliens" stance.
I'd instead call for a simultaneous disclosure of superpower military R&D from all sides, primarily to halt the omnicidal arms-racing dynamics we're all currently living under, and I really mean that, but by the way, this policy would also justify seeing the aliens.
It's a story that's been around for a while, but previous tellings of it (Bob Lazar's, Majestic 12, Steven Greer's stories) had major blemishes.
Bob Lazar seems to have lied about his educational background and the extent of his contract with LANSCE, Majestic 12 was apparently a clear hoax, and I've heard Steven Greer sells very cringey phony CE5 meditation tours and engages in a lot of clear wishful thinking.
This one, in contrast, comes from a very qualified staff member who seems to represent, and have the endorsement of, many even more qualified staff members.
Does the incidence rate of schizophrenia actually tell you anything about the incidence rate of the much higher-functioning delusional disorders that might be involved here?
Do you mean how placing long-term bets in prediction markets ties up capital that could otherwise be put to use?
Yes, so canny traders have an incentive to ignore long-term questions, even though that makes the market somewhat useless. There's an adverse selection effect: the more opportunities a trader has, the more they'll focus on markets with short closing dates.
(But I realize that long-term bets are still worth something to them, this doesn't necessarily prevent them from weighing in at all. If a market consisted solely of very good forecasters, I'd expect the predict...
Derivatives solve prediction market shorttermism? I didn't realize that. Has that been written up anywhere?
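To make the capital-cost point concrete, here's a back-of-the-envelope sketch (all numbers made up): even a large mispricing on a question that won't resolve for years can imply a worse annualized return than repeatedly trading small mispricings on near-term questions.

```python
def annualized_return(buy_price: float, payout: float, years: float) -> float:
    """Compound annual growth rate of buying a contract and holding to resolution."""
    return (payout / buy_price) ** (1 / years) - 1

# Hypothetical: a contract you believe is worth 1.00 trades at 0.80,
# but won't resolve for 5 years. A 20-point edge, yet...
long_bet = annualized_return(0.80, 1.00, 5)      # ~4.6% per year

# ...a mere 3-point edge on a 3-month question, rolled over repeatedly,
# compounds to a much better annual rate.
short_bet = annualized_return(0.97, 1.00, 0.25)  # ~13% per year

print(f"long-dated: {long_bet:.1%}/yr, short-dated: {short_bet:.1%}/yr")
```

So a canny trader with limited capital rationally routes it to the short-dated markets, and the long-dated prices are left to everyone else.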
Yes, if I did B, I might even say something like "I'm not sure what you mean" (though not in those words; others might), but what I mean is that I'm not sure what your intentions are in asking, and I'm far more interested in that than in guessing about baguettes. The mismatch in interest is so acute that if you don't answer my question, I don't think it would be especially mean of me to decline to answer yours.
The incredible drop in crypto financing seems like it might not be credible https://twitter.com/panekkkk/status/1665550869494890496
Regarding Play to Earn games, I wonder if we could see a situation where some "players" (performers) are paid to play less popular (less fun) roles in multiplayer games. This wouldn't destroy the game by making it entirely about earning money, because it would also be about spending money in exchange for whatever services players decide they enjoy.
But these roles would very quickly be filled by full-time workers from poor countries, and then some gamers will become uncomfortable and leave, but some won't, and then the artform will start to evolve. We talk abo...
I'm wondering whether people within or on the peripheries of these recovery and reverse engineering programs have decided that convincing people that they're UFO recovery programs is beneficial on net. I've seen some people dismiss this possibility, but it seems like they're presuming to know a lot about the strategic landscape that they're not players in.
That phrasing makes it clear what is meant, but I think the phrase "updating your priors" is still carrying the confused terminology, and we need to stop using it. Again: priors don't change in response to observations (though in physically constrained/embedded cognition [that is, the profane/imperfect/physically possible version of Bayesian reasoning that physical human beings can do] there has to be some sense in which a prior can change in response to arguments/reasoning/reflection, which complicates the issue a lot).
The updated probability, ...
I'm going to put an end to it.
I'm not sure anyone else has the "notice when a word has implicit parameters ('prior' practically always does, btw) and notice when the binding is ambiguous" method yet, so I feel it's my responsibility.
I don't think you know what "prior" means. Prior to what update? "the priors against this all being untrue seem at least a little bit lower considering the above" Priors don't change.
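For anyone who wants the distinction made concrete, here's a minimal Bayes'-rule sketch: the prior is a fixed input chosen before the evidence, and what observing evidence produces is a posterior. Nothing about the update changes the prior itself.

```python
def posterior(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis: P(H | E)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

prior = 0.01                    # chosen before seeing any evidence
p = posterior(prior, 0.9, 0.1)  # observing evidence yields a posterior...
print(prior)                    # ...while the prior itself is unchanged: 0.01
print(round(p, 3))              # the posterior, ~0.083
```

"My prior went up" is, strictly, a type error; what went up is the posterior (which may then serve as the prior for the next, separate update).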
There needs to be a little more emphasis on the fact that, as a result of their deal, some measure of the grasshopper will get to sing again along with the other minds who blossom in the summer. The paragraph simply doesn't convey that. The impression I got was more like: as a result of their deal, the grasshopper was consumed more quickly and less painfully, and then merely remembered. I don't think that was your intention?
I'm not sure what job "unexpected" is doing here. Any self-improvement is going to be incomprehensible to humans (humans can't even understand the human brain, nor current AI connectomes, and we definitely won't understand superhuman improvements). Comprehensible self-improvement seems fake to me.
Are people really going around thinking they understand how any of the improvements of the past 5 years really work, or what their limits or ramifications are? These things weren't understood before being implemented. People just tried them, the number went up, and then they made up principles and explanations many years after the fact.
Possible exception: If we can separate and discretize self-improvement cycles from regular operation, it could allow for any given number of them before shutdown.
I.e., make a machine that wants to make a very competent X, and then shut itself down.
X = a machine that wants to make a very competent X2, and then shut itself down.
X2 = a machine that wants overwhelmingly to shut itself down, but failing that, to give some very good advice to humans about how to optimize eudaimonia
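A toy sketch of that chain (purely illustrative, not a safety proposal; `improve` is a stand-in for whatever the real, opaque step would be): each generation performs exactly one improvement cycle, hands off to its successor, and shuts down unconditionally, so the total number of cycles is fixed before the first one starts.

```python
def improve(capability: float) -> float:
    """Placeholder for one opaque self-improvement step."""
    return capability * 1.5

def run_chain(initial_capability: float, cycles: int) -> float:
    """Run a fixed number of discrete improvement cycles, then stop."""
    capability = initial_capability
    for generation in range(cycles):
        # This generation builds a more competent successor...
        capability = improve(capability)
        # ...and then shuts itself down; only the successor continues.
    # The final successor's sole remaining goal is to halt
    # (or, failing that, hand advice to humans).
    return capability

print(run_chain(1.0, 3))  # exactly 3 cycles, then nothing
```

The point of the discretization is that the number of cycles is a parameter set in advance, rather than something the system decides for itself mid-run.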
We don’t want AIs to be able to engage in unexpected recursive self-improvement
Not true, lots of people do want that, and they probably should. Human-level generality probably isn't possible without some degree of self-improvement (the ability to notice and fix its blind spots, to notice missing capabilities and implement them). And without self-improvement, it's probably not going to be possible to provide security against future systems that have it.
And as soon as a system is able to alter its mechanism in any way, it's going to be able to shut itself down, and so what you'll have is a very expensive brick.
Does anyone see any hardware names?
What is it about hardware? I've never seen anyone from there express concern.
I wonder if it's that, for anyone else in AI, their research is either fairly neutral (not accelerating towards AGI), or, if it is aimed at AGI, it could be repurposed towards alignment. But if your identity is rooted in hardware, if you admit to any amount of extinction risk, there's no way for you to keep your job and stay sane?
How sure are you that we're not going to end up building AGI with cognitive architectures that consist of multiple pseudo-agent specialists coordinating and competing in an evolutionary economic process that, at some point, constitutionalises, as an end goal, its own perpetuation, and the perpetuation of this multipolar character?
Because, that's not an implausible ontogeny, and if it is the simplest way to build AGI, then I think cosmopolitanism basically is free after all.
And ime cosmopolitanism-for-free often does distantly tacitly assume that this archi...
A statement of concern signed by all (famous) major players and many other respected technologists https://www.safe.ai/statement-on-ai-risk
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Signatories:
The first one would work, but the details are too fine. Second works okay I guess. I also like this one https://thenounproject.com/icon/annoyed-4979582/
"Concrete" reaction should be a concrete truck imo https://thenounproject.com/icon/concrete-truck-1791966/ the current one is a brick wall, which is not really suggestive of concrete at all, and a wall in this context is far more suggestive of an accusation of obtuseness.
A cement truck is an entity who brings concreteness, so it would be like saying "you have the virtue of a concrete truck" and I find it delightful.
I think the "no easy way to respond to a reaction" is an important point. Maybe there should be a way to respond to a reaction!
I think in a very complete, robust social media system, reactions might just end up being very short comments that the site lays out in a more compact way (and, if a flag is checked, omits immediate mention of the author and aggregates identical reacts with a number by default). The point would be that you could link directly to an interesting reaction, or reply to it, and if you reply to it, it can be laid out as a comment the usual way.
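Here's a sketch of the data model I have in mind (all names hypothetical): a reaction is just a comment with a compact-display flag, so it keeps an id that can be linked to or replied to, and identical anonymous reacts can be collapsed into a count for display.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Comment:
    id: int
    body: str                 # for a reaction this is just the react name, e.g. "Concrete"
    author: str
    compact: bool = False     # render inline alongside other reactions
    anonymous: bool = False   # omit immediate mention of the author
    replies: list = field(default_factory=list)  # a react can be replied to like any comment

def aggregate_reactions(reactions):
    """Identical anonymous reacts collapse into (body, count) pairs for display."""
    return Counter(r.body for r in reactions if r.anonymous)

reacts = [
    Comment(1, "Concrete", "alice", compact=True, anonymous=True),
    Comment(2, "Concrete", "bob", compact=True, anonymous=True),
    Comment(3, "Insightful", "carol", compact=True, anonymous=True),
]
print(aggregate_reactions(reacts))  # Counter({'Concrete': 2, 'Insightful': 1})
```

The design choice is that nothing new is invented: reactions inherit linking, threading, and moderation from comments, and only the rendering differs.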
Feedback on "I saw this": I think that's the wrong icon, the eyes are open too wide, that emoji ends up being used to mean like "holy shit, omfg, WOW", which is different. It's worth having that as a reaction, but you already have "changed my mind", which is a more appropriate way to express that in the LW context.
I'd recommend one of the single-eye logos like this one instead: https://thenounproject.com/icon/eye-100409/
Or a single eye that's looking up and to the left, at the comment, would be better. So I drew one like that, uploaded to the noun project. It's awaiting moderation.
This would be a good time to streamline requesting explanations from voters.
It's very common that the person issuing the reaction doesn't know whether the recipient wants or needs a full explanation. It's also very common for the recipient to really, really want an explanation for the reaction after all, while being too worried about being rude, or annoying, or looking dumb, or wasting space by asking for one. So I think it would be worth making that request a private, two-click action.
Suggestion: 'true but unhelpful' react with the icon being a bored/tired face. I currently express this with karma downvote and agreement upvote, but the sorts of people who write these sorts of comments will usually have difficulty interpreting that feedback signal, given that the behavior often correlates with not understanding how agreement and discursive productivity are very different things.
Reason we need this: It's a good way of pointing out choir-preaching and succinctly explaining how it can make the site worse (it's boring and unproductive).
Would you say that China knows the bankruptcy of heroic-sacrifice culture (collectivist duty), and westerners have not really experienced that, and they know that westerners just romanticize it from a distance without ever really living it?
EA is funny in that it consists mostly of people who know about incentives and consequences and they try to pay their people well and tell them not to burn themselves out and to keep room for themselves, but it is still named "altruism", and it still does a bit of hero-eating on the side from time to time.
Can't you make human food production a lot more efficient with biotech? Algae, for instance? Spirulina maybe? Tastes bitter, grows fast, highly nutritious. (Are plants or algae as efficient at generating sugars from sunlight as new forms of life evolved to directly use electricity from a solar panel would be?)
Even if that wasn't practical for humans, if such an organism would be very easily imaginable I think that still gives us some weird biopunk menu options for the medium-term future of intelligence?
If this does continue (and I hope it does), please check in on the effects of creatine and choline supplementation if possible. Here's Dr. Greger admitting that creatine seems to have cognitive benefits for vegans (i.e., they may be deficient in it), then dismissing supplementing it due to concerns about contaminants: https://nutritionfacts.org/video/creatine-brain-fuel-supplementation/
And for the sake of keeping relevant conversations linked, here's a mediocre question post I made on the EA forum about this whole question.
I've never known how to make sense of the lack of humility people have about word definitions, especially when it comes from children.
Since words don't have objective meanings, I wonder if they're kind of reporting an experience that they're having, on which they really could speak authoritatively, something like: "Trust me, from the perspective of a learner, the distinction between cats and dogs is simply too subtle, and insufficiently important, for the English language to continue to demarcate it. History will come to agree with me."
I notice that this developmental process makes categories 'differentiable', or iteratively improvable, which is often really important. It shows a preference for starting broad, then narrowing in later.
I wonder if we should consider teaching children the word "sibling" before we try to teach them the names of their siblings. We do the equivalent with parents, though we do it because (and despite the fact that) they'll usually only have one mama or papa, so it doesn't really count. Except it may make it easier to have the insight that other people have their own parents, including their parents.
Some of these problems vanish if you use an asynchronous textual format for debate. I'd really like to see decision-making processes that formalize the use of some sort of text forum, so I'll propose some systems such a thing would use:
My guess was that some people involved in foreign craft recovery and reverse engineering programs (we know that these exist) want their programs to be exposed to more oversight, because they're currently too closed off to be cost-effective or useful, or because they're being badly mismanaged in some other way.
So they're telling congress, and maybe Grusch, that it's an alien craft recovery and reverse engineering program, because in the current climate that's going to get it done quicker. I think there might be some new legal protection for UAP reports too.... (read more)