All of mako yass's Comments + Replies

My guess was that some people involved in foreign craft recovery and reverse engineering programs (we know that these exist) want their programs to be exposed to more oversight, because they're currently too closed off to be cost-effective or useful, or because they're being badly mismanaged in some other way.

So they're telling congress, and maybe Grusch, that it's an alien craft recovery and reverse engineering program, because in the current climate that's going to get it done quicker. I think there might be some new legal protection for UAP reports too.... (read more)

I was hoping to understand why people who are concerned about the climate ignore greentech/SRM (solar radiation management).

One effect is that people who want to raise awareness about the severity of an issue have an incentive to avoid acknowledging solutions to it, because that diminishes its severity. But this is an egregore-level phenomenon; there is no individual negative cognitive disposition that's driving it as far as I can tell.
Mostly, in the case of climate, it seems to be driven by a craving for belonging in a political scene.

1 · Noosphere89 · 3d
The point I was trying to make is that we click on and read negative news, and this skews our perceptions of what's happening. Critically, the negativity bias operates regardless of the actual reality of the problem; it doesn't distinguish between things that are very bad, things that are merely bad but solvable, and things that are not bad at all. In essence, I'm positing a selection effect, where we keep hearing more about the bad things, and hear less or nothing about the good things, so we are biased to believe that our world is more negative than it actually is. And to connect it to the first comment, the reason you keep noticing precursors to existentially risky technology but not precursors to existentially safe technology is essentially an aspect of negativity bias: your information sources emphasize the negative over the positive news, no matter what reality looks like. The link where I got this idea is below: https://archive.is/lc0aY

What is a negative frame?

1 · Noosphere89 · 3d
It's essentially a frame that views things in a negative light, or equivalently a frame that views a certain issue as negative by default unless action is taken. For example, climate change can be viewed in a negative frame, where we have to solve the problem or we all die, or in a positive frame, where we can solve the problem with green tech.

There's a sense in which negativity bias is just rationality; you focus on the things you can improve, because that's where the work is. These things are sometimes called "problems". The thing is, the healthy form of this is aware that the work can actually be done, so it should be very interested in, and aware of, technologies of existential safety, and that is where I am and have been for a long time.

1 · Noosphere89 · 3d
The problem is that focusing on a negative frame enabled by negativity bias will blind you to solutions, and is in general a great way to get depressed fast, which kills your ability to solve problems. Even more importantly, the problems might be imaginary, created by negativity biases.

I notice it also makes sure that if the participants know anything at all about the research, they know it's supposed to be voluntary. Even if they're still forced to sign it, they learn that the law is supposed to be on their side and that there is, in theory, someone they could call for help.

Probably has something to do with the fact that a catastrophe is an event, and safety is an absence of something. It's just inherently harder to point at a thing and say that it caused fewer catastrophes to happen. Show me the non-catastrophes. Bring them to me, put them on my table. You can't do it.

-1 · Noosphere89 · 3d
I'd say it's an aspect of negativity bias, where we focus more on the bad things than on the good things. It's already happening in AI safety, and AI in general, so your bias is essentially a facet of negativity bias.

I think what's so crushing about it is that it reminds me that the wrong people are designing things, and that they won't allow them to be fixed, and I can only find solace in thinking that the inefficiency of their designs is also a sign that they can be defeated.

There's something very creepy to me about the part of research consent forms where it says "my participation was entirely voluntary."

  1. Do they really think an involuntary participant wouldn't sign that? If they understand that they would, what purpose could this possibly serve, other than, as is commonly the purpose of contracts, absolving themselves of blame and moving blame to the participant? Which would be downright monstrous. Probably they just aren't fucking consequentialists, but this is all they end up doing.
  2. This is a minor thing, but it adds an addi
... (read more)
2 · ChristianKl · 2d
If someone explicitly writes into their consent forms "my participation was entirely voluntary" and the participation isn't voluntary, it might be easier to attack the person running the trial later.
1 · rodeo_flagellum · 3d
  Important to remember and stand by the Nuremberg Code [https://history.nih.gov/download/attachments/1016866/nuremberg.pdf?version=1&modificationDate=1589152811742&api=v2] in these contexts. 
3 · frontier64 · 3d
The reason is to prevent the voluntary participant from later claiming that their participation was involuntary and telling that to the IRB. 'Well if your participation was involuntary, why did you sign this document?' It kind of limits the arguments someone could make attacking the ethics of the study. The attacker would have to allege coercion on the order of people being forced to lie on forms under threat.
6 · Viliam · 4d
Maybe it's some legal hack, like maybe in some situations you can't dismiss unethical research, but you can dismiss fraudulent research... and a research where people were forced to falsely write that their participation was voluntary, is technically fraudulent.

I'm learning that presidents are explicitly excluded from the category of congressmembers.

Does he say he thought they had crafts? There's a line where he says he was never sure.

4 · ChristianKl · 5d
He said that people told him where the craft were, but he had no direct proof of them. I don't see a reason to assume that there were Congressmen better informed than him.

Yes, if this has truly never reached congress. I'm kinda under the impression that some congress members (especially presidents) have probably seen it, and for whatever reason, once they knew, all of these people decided not to publicize it.
And that could just keep happening.

4 · ChristianKl · 6d
We do have the account from Harry Reid [https://www.newyorker.com/magazine/2021/05/10/how-the-pentagon-started-taking-ufos-seriously], who did decide to publicize that he thinks Lockheed Martin had the crafts and the military didn't want to give him the clearance to see them. It's worthwhile to note that Harry Reid did not share that information this way before he retired. There's a massive stigma against taking UFOs seriously, and sharing such information was bad politics.

I don't see possible next steps for simultaneous disclosure of superpower military R&D from all sides.

This might not be possible until the danger of arms-races in agentic AI has become more obvious. I'm not familiar enough with the nuclear situation to say whether it's feasible today, but it probably will be at some point in the near future.

It seems probable to me that monitoring has, over the past 40 years, become a lot cheaper and more feasible than our geopolitical institutions recognize.

Increases in mutual transparency may have to come in train wit... (read more)

4 · ChristianKl · 6d
In recent news: China rejects nuclear talks with the U.S. as it looks to strengthen its own arsenal [https://www.semafor.com/article/06/08/2023/china-rejects-nuclear-talks-us]. There's currently that war going on in Ukraine.

I haven't seen any civilians, including on LW, actually weigh the cost of asymmetric disclosure of US military R&D, which this would probably require? So I think you're really going to have to systematize this framework a bit more before you can justify the "let's see them aliens" stance.

I'd instead call for a simultaneous disclosure of superpower military R&D from all sides, primarily to halt the omnicidal arms-racing dynamics we're all currently living under, and I really mean that, but by the way, this policy would also justify seeing the aliens.

2 · ChristianKl · 6d
When calling for action it's worth thinking about possible next steps. I don't see possible next steps for simultaneous disclosure of superpower military R&D from all sides. I do believe that it's good to empower whistleblowers. If certain secrets are very important to a country then it should be able to convince all the people who hold the secrets to keep them secret. In this case, we seem to have information that's illegally withheld from Congress and multiple people speaking to the ICIG who think that's a problem. The Above the Law [https://abovethelaw.com/2023/06/serious-fed-unafraid-of-legal-jeopardy-in-claim-of-recovered-alien-craft-government-ufo-coverup/] article suggests that illegal withholding takes place. In a democracy, the military has to share its secrets with Congress (or at least the committees of Congress that relate to it).

It's a story that's been around for a while, but previous tellings of it, Bob Lazar, Majestic 12, or Steven Greer's stories, had major blemishes.

Bob Lazar seems to have lied about his educational background and the extent of his contract with LANSCE, Majestic 12 was apparently a clear hoax, and I've heard Steven Greer sells very cringey phony CE5 meditation tours and engages in a lot of clear wishful thinking.

While this one is a very qualified staff member who seems to represent and have the endorsement of many even more qualified staff members.

Does the incidence rate of schizophrenia actually tell you anything about the incidence rate of the much higher functioning delusion disorders that might be involved here?

Do you mean how placing long-term bets in prediction markets ties up capital that could otherwise be put to use?

Yes, so canny traders have an incentive to ignore long-term questions even though that makes the market somewhat useless. There's an adverse selection effect where the more someone has going on the more they'll focus on short closing dates.

(But I realize that long-term bets are still worth something to them, this doesn't necessarily prevent them from weighing in at all. If a market consisted solely of very good forecasters, I'd expect the predict... (read more)

4 · lsusr · 7d
Here's the short version: Suppose you think a prediction is mispriced but it's distant in the future. Instead of buying credits that pay out on resolution, you buy futures instead. You don't have to tie up capital, since payment is due on resolution instead of upfront. Your asset equals your liability. There is no beta [https://www.lesswrong.com/posts/oimYpPnjyCeCwv3K6/alpha-a-and-beta-v]. Financial derivatives solve the shorttermism problem in traditional securities markets. If you use them in prediction markets, then they will (theoretically) do the exact same thing, by (theoretically) operating the exact same way. In practice [https://www.lesswrong.com/posts/D5Dq9AyXqYfhLiaC3/prediction-markets-are-for-outcomes-beyond-our-control], things [https://www.lesswrong.com/posts/LsyuX5rLsApZFdQGp/bet-on-rare-undesirable-outcomes-when-seeding-a-prediction] get [https://www.lesswrong.com/posts/RzDehSs7KQcBpa2bf/your-enemies-can-use-your-prediction-markets-against-you] causal [https://www.lesswrong.com/posts/3cmqgjmKgXgxn24KZ/using-prediction-markets-to-guide-government-policy], and that's before you add leverage and derivatives. Doesn't matter. It just means that prediction markets have to be sufficiently capitalized to work. A hedge fund isn't going to throw its smartest brains at a market with only $100 of total market capitalization.
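A minimal sketch of the capital-efficiency point being made here (all numbers are hypothetical and the function names are invented for illustration, not any real market's API):

```python
# Toy comparison: buying prediction-market credits upfront vs. holding a future.
# A "credit" pays 100 cents if the event resolves YES. Amounts are in integer cents.

def capital_locked_buying_credits(price_cents: int, n_credits: int) -> int:
    # Buying credits outright: the full purchase price is tied up until resolution.
    return price_cents * n_credits

def capital_locked_holding_future(margin_cents: int, n_credits: int) -> int:
    # A future: only posted margin (possibly zero) is tied up now;
    # payment is due at resolution instead of upfront.
    return margin_cents * n_credits

# A trader thinks a far-future question priced at 30c is really worth 60c.
upfront = capital_locked_buying_credits(30, 1000)  # 30,000 cents locked until resolution
futures = capital_locked_holding_future(0, 1000)   # 0 cents locked now

print(upfront, futures)  # 30000 0
```

The toy shows why long-dated questions stop being dead capital under a futures structure: the trader's exposure is the same, but nothing is locked up in the meantime.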

Derivatives solve prediction market shorttermism? I didn't realize that. Has that been written up anywhere?

8 · lsusr · 7d
"Predictive market derivatives" is on my list of things I should write about. What, precisely, do you mean by shorttermism? Do you mean how placing long-term bets in prediction markets ties up capital that could otherwise be put to use? (I think I understand you, but I want to be certain first.) Financial derivatives work the same way in prediction markets as they do on existing securities markets. I already wrote a little bit about derivatives in existing securities markets [https://www.lesswrong.com/s/nJzqnsZCfg2k4s2BP/p/r7PBDGWf9Jbdj4rnx], but am not sure that post fully answers your question.

Yes, if I did B, I might even say something like "I'm not sure what you mean" (though not in those words, but others might), but what I mean is I'm not sure what your intentions are in asking, and I am way more interested in that than guessing about baguettes. The mismatch in interest is so acute that if you don't answer my question I don't think it would be especially mean of me to decline to answer yours.

The incredible drop in crypto financing seems like it might not be credible https://twitter.com/panekkkk/status/1665550869494890496

4 · Richard_Kennaway · 9d
It wouldn't be, would it?

Regarding Play to Earn games, I wonder if we could see a situation where some "players" (performers) are paid to play less popular (less fun) roles in multiplayer games. This wouldn't destroy the game by making it entirely about earning money, because it will also be about spending money in exchange for whatever services players decide they enjoy.

But these roles will very quickly be filled by full-time workers from poor countries, and then some gamers will become uncomfortable and leave, but some won't, and then the artform will start to evolve. We talk abo... (read more)

I'm wondering whether people within or on the peripheries of these recovery and reverse engineering programs have decided that convincing people that they're UFO recovery programs is beneficial on net. I've seen some people dismiss this possibility, but it seems like they're presuming to know a lot about the strategic landscape that they're not players in.

That phrasing makes it clear what is meant, but I think the phrase "updating your priors" is still carrying the confused terminology and we need to stop using it, again, priors don't change in response to observations (though in physically constrained/embedded cognition [that is, the profane/imperfect/physically possible version of bayesian reasoning that physical human beings can do] there has to be some sense in which a prior can change in response to arguments/reasoning/reflection, which complicates the issue a lot).

The updated probability, ... (read more)

I'm going to put an end to it.

I'm not sure anyone else has the "notice when a word has implicit parameters ('prior' practically always does, btw) and notice when the binding is ambiguous" method yet, so I feel it's my responsibility.

I don't think you know what "prior" means. Prior to what update? "the priors against this all being untrue seem at least a little bit lower considering the above" Priors don't change.

3 · awg · 10d
Sorry, definitely not the most proficient with the lingo. I believe I should have said: "That said, considering the above, I would suspect updating your priors in the direction of this all being true, at least a little bit, seems reasonable." I think? Is that closer?
-1 · lc · 10d
Misusing the term prior is a long-held LessWrong tradition

A more specific one, also with an unreasonably soon closing date.

 

2 · ChristianKl · 11d
It doesn't seem very specific to me. In particular, it doesn't seem to think about how the likely future progress of this will go. If the claim that he gave evidence to Congress is true, the thing that might reasonably happen this year is another congressional inquiry. In it, the authorities might say "Grusch's claim that the Unidentified Aerial Phenomena Task Force has gathered physical evidence under program XYZ that we didn't tell Congress about before is true. The evidence is highly classified and we didn't think it matched what you asked for when you asked for our monitoring of Unidentified Aerial Phenomena because we think program XYZ is not within the sphere of what you asked for. We can give you a few classified documents about that physical evidence but it's important that you don't share anything more specific because of the necessity to keep national secrets." I would expect that there's some substance to Grusch's claims but that what will be revealed about that this year won't be very convincing on the larger question of whether non-human tech really exists.

The rotating one seems well enough explained as the rotation of a lens artifact to me?

Typo: "Find something you wish you didn't but don't", should be "did", right?

2 · Screwtape · 11d
Yep, you are correct, should be fixed now. Thank you!

There needs to be a little more emphasis of the fact that as a result of their deal, some measure of the grasshopper will get to sing again along with the other minds who blossom in the summer. The paragraph simply doesn't convey that. The impression I got was more like, as a result of their deal, the grasshopper was consumed quicker and less painfully and then merely remembered. I don't think that was your intention?

4 · Richard_Ngo · 12d
I intended to convey it via "The grasshopper’s mind is ... waiting to be born again in a fragment of a fragment of a supercomputer made of stars", but there's a lot in between those two phrases so it's reasonable to miss that implication. Have edited to fix.

I'm not sure what job "unexpected" is doing here. Any self-improvement is going to be incomprehensible to humans (humans can't even understand the human brain, nor current AI connectomes, and we definitely won't understand superhuman improvements). Comprehensible self-improvement seems fake to me.
Are people really going around thinking they understand how any of the improvements of the past 5 years really work, or what their limits or ramifications are? These things weren't understood before being implemented. They just tried them, and then the number went up, and then they made up principles and explanations many years after the fact.

Possible exception: If we can separate and discretize self-improvement cycles from regular operation, it could allow for any given number of them before shutdown.

IE, Make a machine that wants to make a very competent X, and then shut itself down.

X = a machine that wants to make a very competent X2, and then shut itself down.

X2 = a machine that wants overwhelmingly to shut itself down, but failing that, to give some very good advice to humans about how to optimize eudaimonia

We don’t want AIs to be able to engage in unexpected recursive self-improvement

Not true, lots of people do want that, and they probably should. Human-level generality probably isn't possible without some degree of self-improvement (the ability to notice and fix its blindspots, to notice missing capabilities and implement them). And without self-improvement it's probably not going to be possible to provide security against future systems that have it.
And as soon as a system is able to alter its mechanism in any way, it's going to be able to shut itself down, and so what you'll have is a very expensive brick.

1 · Tetraspace · 15d
Not unexpected! I think we should want AGI to, at least until it has some nice coherent CEV target, explain at each self-improvement step exactly what it's doing, to ask for permission for each part of it, to avoid doing anything in the process that's weird, to stop when asked, and to preserve these properties. 

Does anyone see any hardware names?

What is it about hardware? I've never seen anyone from there express concern.

I wonder if it's that, for anyone else in AI, their research is either fairly neutral - not accelerating towards AGI, or if it is in AGI, it could be repurposed towards alignment. But if your identity is rooted in hardware, if you admit to any amount of extinction risk, there's no way for you to keep your job and stay sane?

How sure are you that we're not going to end up building AGI with cognitive architectures that consist of multiple pseudo-agent specialists coordinating and competing in an evolutionary economic process that, at some point, constitutionalises, as an end goal, its own perpetuation, and the perpetuation of this multipolar character?

Because, that's not an implausible ontogeny, and if it is the simplest way to build AGI, then I think cosmopolitanism basically is free after all.
And ime cosmopolitanism-for-free often does distantly tacitly assume that this archi... (read more)

A statement of concern signed by all (famous) major players and many other respected technologists https://www.safe.ai/statement-on-ai-risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


The first one would work, but the details are too fine. Second works okay I guess. I also like this one https://thenounproject.com/icon/annoyed-4979582/

Reactions seem like they're going to be far more likely to be useful for that purpose.

"Concrete" reaction should be a concrete truck imo https://thenounproject.com/icon/concrete-truck-1791966/ the current one is a brick wall, which is not really suggestive of concrete at all, and a wall in this context is far more suggestive of an accusation of obtuseness.
A cement truck is an entity who brings concreteness, so it would be like saying "you have the virtue of a concrete truck" and I find it delightful.

1 · Nate Showell · 20d
And since there's a "concrete" reaction, it seems like there should also be an "abstract" reaction, although I don't know what symbol should be used for it.

I think the "no easy way to respond to a reaction" is an important point. Maybe there should be a way to respond to a reaction!

I think in a very complete, robust social media system, reactions might just end up being very short comments that the site lays out in a more compact way (and if a flag is checked, omits immediate mention of the author and aggregates identical reacts with a number by default). The point would be that you could link directly to an interesting reaction, or reply to it, and if you reply to it it can be laid out as a comment the usual way.

Feedback on "I saw this": I think that's the wrong icon, the eyes are open too wide, that emoji ends up being used to mean like "holy shit, omfg, WOW", which is different. It's worth having that as a reaction, but you already have "changed my mind", which is a more appropriate way to express that in the LW context.

I'd recommend one of the single-eye icons like this instead: https://thenounproject.com/icon/eye-100409/

Or a single eye that's looking up and to the left, at the comment, would be better. So I drew one like that, uploaded to the noun project. It's awaiting moderation.

1 · jam_brand · 21d
I agree and also I wanted to leave a thanks-react for making that submission, but apparently am short of the requisite karma threshold, so... thanks! :)

So, are you having similar thoughts about votes, too?

2 · jimrandomh · 23d
It's probably a thing with regular vote-score, but I think it's worth the tradeoff of having scores at the top because when there are too many comments to read everything, the score feeds into the decision of what to read vs what to skim.
4 · Dagon · 23d
Moving the whole "action bar" to the end of the comment makes a lot of sense to me.  Voting, agreement, and reactions should all happen AFTER reading, like reply does.   You MAY want to repeat the totals, non-interactively, at the top, if the intended use is filtering as a reader.  IMO, most comments are short enough that this isn't necessary.

This would be a good time to streamline requesting explanations from voters.

It's very common that the person issuing the reaction doesn't know whether the recipient wants or needs a full explanation, and it's very common for the recipient to really really want an explanation for the reaction after all, and for the recipient to be too worried about being rude or annoying or looking dumb or wasting space by asking for one, so I think it would be worth making that a private, two-click action.

Suggestion: 'true but unhelpful' react with the icon being a bored/tired face. I currently express this with karma downvote and agreement upvote, but the sorts of people who write these sorts of comments will usually have difficulty interpreting that feedback signal, given that the behavior often correlates with not understanding how agreement and discursive productivity are very different things.

Reason we need this: It's a good way of pointing out choir-preaching and succinctly explaining how it can make the site worse (it's boring and unproductive).

1 · Archimedes · 21d
Something like one of these? https://thenounproject.com/icon/bored-251902/ https://thenounproject.com/icon/annoyed-4979573/

Would you say that China knows the bankruptcy of heroic-sacrifice culture (collectivist duty), and westerners have not really experienced that, and they know that westerners just romanticize it from a distance without ever really living it?

EA is funny in that it consists mostly of people who know about incentives and consequences and they try to pay their people well and tell them not to burn themselves out and to keep room for themselves, but it is still named "altruism", and it still does a bit of hero-eating on the side from time to time.

Can't you make human food production a lot more efficient with biotech? Algae, for instance? Spirulina maybe? Tastes bitter, grows fast, highly nutritious. (Are plants or algae as efficient at generating sugars from sunlight as new forms of life evolved to directly use electricity from a solar panel would be?)

Even if that wasn't practical for humans, if such an organism would be very easily imaginable I think that still gives us some weird biopunk menu options for the medium-term future of intelligence?

7 · Maxwell Clarke · 24d
I saw some numbers for algae being 1-2% efficient, but that was for biomass rather than dietary energy. Even if you put the brain in the same organism, you wouldn't expect efficiency as good as that. The difference is that creating biomass (which is mostly long chains of glucose) is the first step, and then the brain must use the glucose, which is a second lossy step. But I mean there are definitely far-future biopunk options, e.g. I'd guess it's easy to create some kind of solar-panel organism which grows silicon crystals instead of using chlorophyll.

If this does continue - and I hope it does - please check in on the effects of creatine and choline supplementation if possible. Here's Dr. Greger admitting creatine seems to have cognitive benefits for vegans (IE, may be deficient) (then dismissing supplementing it due to concerns about contaminants): https://nutritionfacts.org/video/creatine-brain-fuel-supplementation/

And for the sake of keeping relevant conversations linked, here's a mediocre question post I made on the EA forum about this whole question.

I've never known how to make sense of the lack of humility people have about word definitions, especially when it's children.

Since words don't have objective meanings, I wonder if they're kind of reporting an experience that they're having, on which they really could speak authoritatively, something like: "Trust me, from the perspective of a learner, the distinction between cats and dogs is simply too subtle, and insufficiently important, for the English language to continue to demarcate it. History will come to agree with me."

I notice that this developmental process makes categories 'differentiable', or iteratively improvable, which is often really important. It shows a preference for starting broad, then narrowing in later.

I wonder if we should consider teaching children the word "sibling" before we try to teach them the names of their siblings. We do that with parents; however, we do it because, and despite the fact that, they'll usually only have one mama or papa, so it doesn't really count. Except, it may make it easier to have the insight that other people have their own parents, including their parents.

5 · Andrew Currall · 1mo
Our daughter went through a fairly long period of calling cats "dog", and would aggressively correct us if we tried to correct her. Possibly something of the same thing. 

Some of these problems vanish if you use an asynchronous textual format for debate. I'd really like to see decisionmaking processes that formalize the use of some sort of text forum, so I'll propose some systems such a thing would use:

  • Flags indicating whether a claim has been refuted by deeper investigations, and who ran those investigations.
  • Comments from the general public, but subjectively filtered.
  • Methods for determining which authors are causing changes in peoples' views, IE, which authors are doing the best work.
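A rough sketch of how those three mechanisms could be represented (all names here are invented for illustration; this is just one way the data model might look):

```python
# Hypothetical data model for an asynchronous debate forum with refutation
# flags, filtered public comments, and author-influence tracking.
from dataclasses import dataclass, field

@dataclass
class Investigation:
    investigator: str
    refutes_claim: bool  # did this deeper investigation refute the claim?

@dataclass
class Claim:
    author: str
    text: str
    investigations: list = field(default_factory=list)

    def refuted(self) -> bool:
        # Flag indicating whether any deeper investigation refuted the claim.
        return any(i.refutes_claim for i in self.investigations)

    def refuters(self) -> list:
        # Who ran the investigations that refuted it.
        return [i.investigator for i in self.investigations if i.refutes_claim]

def author_influence(view_changes: list) -> dict:
    # Crude proxy for "which authors are causing changes in people's views":
    # count recorded view-changes attributed to each author.
    influence: dict = {}
    for author in view_changes:
        influence[author] = influence.get(author, 0) + 1
    return influence

claim = Claim("alice", "Policy X lowers costs.")
claim.investigations.append(Investigation("bob", refutes_claim=True))
print(claim.refuted(), claim.refuters())  # True ['bob']
print(author_influence(["alice", "carol", "alice"]))  # {'alice': 2, 'carol': 1}
```

Subjective filtering of public comments would sit on top of this as a per-reader view rather than a property of the data itself.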