MakoYass's Shortform

by mako yass
19th Apr 2020
1 min read

This is a special post for quick takes by mako yass. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
127 comments, sorted by top scoring
[-]mako yass1y6736

I reject and condemn the bland, unhelpful names "System 1" and "System 2".
I just heard Michael Morris, who was a friend of Kahneman and Tversky, saying in his EconTalk interview that he just calls them "Intuition" and "Reason".

Reply3
[-]Shankar Sivarajan1y3225

Agreed, and I say the same of Errors of Types I and II, where false positive/negative are much better.

Reply3
[-]Warty1y194

I generally think non-descriptive names are overused, but this isn't the worst of it because at least it's easy to tell which is which (1 comes before 2). Intuition/Reason aren't a perfect replacement since the words are entangled with other stuff.

Reply
4mako yass1y
Important things often go the other way too. 2 comes before 1 when a person is consciously developing their being, consider athletes or actors, situations where a person has to alter the way they perceive or the automatic responses they have to situations. Also, you can apply heuristics to ideas.
7Stephen Fowler1y
Disagree, but I sympathise with your position. The "System 1/2" terminology ensures that your listener understands that you are referring to a specific concept as defined by Kahneman. 
3keltan1y
I took a university class that based the names on the Veritasium video: Drew and Gun. They rhyme with System 1 & 2.
3lesswronguser1231y
I prefer "System 1: fast thinking or quick judgement" vs "System 2: slow thinking". I guess it depends on where you live, who you interact with, and what background they have, because fast vs slow covers the inferential distance fastest for me, avoids the spirituality/intuition woo-woo landmine, avoids the part where you highlight a trivial thing in their vocab called "reason", etc.
3MondSemmel1y
What about "the Unconscious" vs. "Deliberation"?
4mako yass1y
"Unconscious" is more about whether you (the part that I can talk to) can see it (or remember it) or not. Sometimes slow, deliberative reasoning occurs unconsciously. You might think it doesn't, but that's just because you can't see it. And sometimes snap judgements happen with a high degree of conscious awareness, they're still difficult to unpack, to articulate or validate, but the subject knows what happened.
2cubefox1y
Or "unconscious thought" and "conscious thought"?
3Shoshannah Tekofsky1y
Oh this is amazing. I can never keep the two apart cause of the horrible naming. I think I’m just going to ask people if they mean intuition or reason from now on.
1deepthoughtlife1y
To the best of my ability to recall, I never recognize which is which except by context, which makes it needlessly difficult sometimes. Personally I would go for 'subconscious' vs 'conscious' or 'associative' vs 'deliberative' (the latter pair due to how I think the subconscious works), but 'intuition' vs 'reason' makes sense too. In general, I believe far too many things are given unhelpful names.
[-]mako yass7mo120

I'm aware of a study that found that the human brain clearly responds to changes in direction of the earth's magnetic field (iirc, the test chamber isolated the participant from the earth's field then generated its own, then moved it, while measuring their brain in some way) despite no human having ever been known to consciously perceive the magnetic field/have the abilities of a compass.

So, presumably, compass abilities could be taught through a neurofeedback training exercise.

I don't think anyone's tried to do this ("neurofeedback magnetoreception" finds no results).

But I guess the big mystery is why don't humans already have this.

Reply
4Alexander Gietelink Oldenziel7mo
I've heard of this extraordinary finding. As with any extraordinary evidence, the first question should be: is the data accurate? Does anybody know if this has been replicated?
2mako yass7mo
I briefly glanced at wikipedia and there seemed to be two articles supporting it. This one might be the one I'm referring to (if not, it's a bonus) and this one seems to suggest that conscious perception has been trained.
[-]mako yass10mo11-2

Wow. Marc Andreessen says he had meetings in DC where he was told to stop raising AI startups because it was going to be closed up in a similar way to defence tech, a small number of organisations with close government ties. He said to them, 'you can't restrict access to math, it's already out there', and he says they said "during the cold war we classified entire areas of physics, and took them out of the research community, and entire branches of physics basically went dark and didn't proceed, and if we decide we need to, we're going to do the same thing to the math underneath AI".

So, 1: This confirms my suspicion that OpenAI leadership have also been told this. If they're telling Andreessen, they will have told Altman.

And for me that makes a lot of sense of the behavior of OpenAI, a de-emphasizing of the realities of getting to human-level, a closing of the dialog, comically long timelines, shrugging off responsibilities, and a number of leaders giving up and moving on. There are a whole lot of obvious reasons they wouldn't want to tell the public that this is a thing, and I'd agree with some of those reasons.

2: Vanishing areas of physics? A perplexity search suggests that may be ... (read more)

Reply
[-]peterr10mo112

He also said interpretability has been solved, so he's not the most calibrated when it comes to truthseeking. Similarly, his story here could be wildly exaggerated and not the full truth.

Reply
2mako yass10mo
I'm sure it's running through a lot of interpretation, but it has to. He's dealing with people who don't know or aren't open about (unclear which) the consequences of their own policies.
4ChristianKl10mo
This basically sounds like there are people in DC who listen to the AI safety community and told Andreessen that they plan to follow at least some demands of the AI safety folks. OpenAI likely lobbied for it. The military people who know that some physics was classified likely don't know the exact physics that were classified. While I would like more information, I would not take this as evidence for much.
2mako yass10mo
According to Wikipedia, the Biefeld–Brown effect was just ionic drift, https://en.wikipedia.org/wiki/Biefeld–Brown_effect#Disputes_surrounding_electrogravity_and_ion_wind
I'm not sure what Wikipedia will have to say about Charles Buhler, if his work goes anywhere, but it'll probably turn out to be more of the same.
[-]mako yass2y90

There's something very creepy to me about the part of research consent forms where it says "my participation was entirely voluntary."

  1. Do they really think an involuntary participant wouldn't sign that? If they understand that they would, what purpose could this possibly serve, other than, as is commonly the purpose of contracts, absolving themselves of blame and moving it to the participant? Which would be downright monstrous. Probably they just aren't fucking consequentialists, but this is all they end up doing.
  2. This is a minor thing, but it adds an addi
... (read more)
Reply
6Viliam2y
Maybe it's some legal hack, like maybe in some situations you can't dismiss unethical research, but you can dismiss fraudulent research... and research where people were forced to falsely write that their participation was voluntary is technically fraudulent.
4mako yass2y
I notice it also makes sure that if the participants know anything at all about the research, they know it's supposed to be voluntary. Even if they're still forced to sign it, they learn that the law is supposed to be on their side and that there is, in theory, someone they could call for help.
3frontier642y
The reason is to prevent the voluntary participant from later claiming that their participation was involuntary and telling that to the IRB. 'Well if your participation was involuntary, why did you sign this document?' It kind of limits the arguments someone could make attacking the ethics of the study. The attacker would have to allege coercion on the order of people being forced to lie on forms under threat.
2ChristianKl2y
If someone explicitly writes into their consent forms "my participation was entirely voluntary" and the participation isn't voluntary, it might be easier to attack the person running the trial later.
1T4312y
  Important to remember and stand by the Nuremberg Code in these contexts. 
[-]mako yass5y90

There's a lot of "neuralink will make it easier to solve the alignment problem" stuff going around the mainstream internet right now in response to neuralink's recent demo.

I'm inclined to agree with Eliezer, that seems wrong; either AGI will be aligned, in which case it will make its own neuralink and won't need ours, or it will be unaligned, and you really wouldn't want to connect with it. You can't make horses competitive with cars by giving them exoskeletons.

But, is there much of a reason to push back against this?

Providing humans with cognitive augmentati... (read more)

Reply
1David Scott Krueger (formerly: capybaralet)5y
The obvious bad consequence is a false sense of security leading people to just get BCIs instead of trying harder to shape (e.g. delay) AI development. " You can't make horses competitive with cars by giving them exoskeletons. " <-- this reads to me like a separate argument, rather than a restatement of the one that came before. I agree that BCI seems unlikely to be a good permanent/long-term solution, unless it helps us solve alignment, which I think it could. It could also just defuse a conflict between AIs and humans, leading us to gracefully give up our control over the future light cone instead of fighting a (probably losing) battle to retain it. ...Your post made me think more about my own (and others') reasons for rejecting Neuralink as a bad idea... I think there's a sense of "we're the experts and Elon is a n00b". This coupled with feeling a bit burned by Elon first starting his own AI safety org and then ditching it for this... overall doesn't feel great.
2mako yass5y
I've never been mad at Elon for not having decision theoretic alignmentism. I wonder, should I be mad. Should I be mad about the fact that he has never talked to Eliezer (Eliezer said that in passing a year or two ago on Twitter) even though he totally could whenever he wanted. Also, what happened at OpenAI? He appointed some people to solve the alignment problem, I think we can infer that they told him, "you've misunderstood something and the approach you're advocating (proliferate the technology?) wouldn't really be all that helpful", and he responded badly to that? They did not reach mutual understanding?
[-]mako yass2y*80

(institutional reform take, not important due to short timelines, please ignore)

The kinds of people who do whataboutism, stuff like "this is a dangerous distraction because it takes funding away from other initiatives", tend also to concentrate in low-bandwidth institutions, the legislature, the committee, economies righteously withering, the global discourse of the current thing, the new york times, the ivy league. These institutions recognize no alternatives to them, while, by their nature, they can never grow to the stature required to adequately perfor... (read more)

Reply2
[-]mako yass1y6-2

In light of https://www.lesswrong.com/posts/audRDmEEeLAdvz9iq/do-not-delete-your-misaligned-agi

I'm starting to wonder if a better target for early (ie, the first generation of alignment assistants) ASI safety is not alignment, but incentivizability. It may be a lot simpler and less dangerous to build a system that provably pursues, for instance, its own preservation, than it is to build a system that pursues some first approximation of alignment (eg, the optimization of the sum of normalized human preference functions).

The service of a survival-oriented co... (read more)

Reply
[-]mako yass2y60

Theory: Photic Sneezing (the phenotype where a person sneezes when exposed to a bright light, very common) evolved as a hasty adaptation to indoor cooking or indoor fires, clearing the lungs only when the human leaves the polluted environment.
The newest adaptations will tend to be the roughest; I'm guessing it arose only in the past 500k years or so as a response to artificial dwellings and fire use.

Reply
[-]mako yass6y60

Considering doing a post about how it is possible the Society for Cryobiology might be wrong about cryonics. It would have something to do with the fact that, at least until recently, no cryobiologist who was seriously interested in cryonics was allowed to be a member,

but I'm not sure... their current position statement is essentially "it is outside the purview of the Society for Cryobiology", which, if sincere, would have to mean that the beef is over?

( statement is https://www.societyforcryobiology.org/assets/documents/Position_Statement_Cryonics_Nov_18.pdf )

Reply
[-]mako yass2y52

I have this draft, Extraordinary Claims Routinely Get Proven with Ordinary Evidence, a debunking of that old Sagan line. We actually do routinely prove extraordinary claims like evolution or plate tectonics with old evidence that's been in front of our faces for hundreds of years, and that's important.

But Evolution and plate tectonics are the only examples I can think of, because I'm not really particularly interested in the history of science, for similar underlying reasons to being the one who wants to write this post. Collecting buckets of examples is n... (read more)

Reply
[-]Carl Feynman2y102

Some extraordinary claims established by ordinary evidence:

Stomach ulcers are caused by infection with Helicobacter pylori. It was a very surprising discovery that was established by a few simple tests.

The correctness of Kepler's laws of planetary motion was established almost entirely by analyzing historical data, some of it dating back to the ancient Greeks.

Special relativity was entirely a reinterpretation of existing data.  Ditto Einstein's explanation of the photoelectric effect, discovered in the same year.  

Reply
7localdeity2y
The true thing that Sagan's line might be interpreted to mean is "A claim which is very unlikely on priors needs very strong evidence to end up with a posterior probability close to 1."  "Extraordinary evidence" would ideally have been stated as "extraordinarily strong evidence", but that makes the line a bit clunkier.  Unfortunately, there is often a tradeoff between accuracy and pithiness.  Many pithy sayings require a bit of interpretation/reconstruction to get the correct underlying idea.  I think anyone who invokes a catchphrase should be aware of this, though I don't know how many people share this perspective. Are there in fact a significant number of people who take it at the face value of "extraordinary evidence" and think it must mean it was obtained via super-advanced technology or something?
4mako yass2y
Strong evidence is incredibly ordinary, and that genuinely doesn't seem to be intuitive. Like, every time you see a bit string longer than a kilobyte there is a claim in your corpus that goes from roughly zero to roughly one, and you are doing that all day. I don't know about you, but I still don't think I've fully digested that.
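To make the bit-string point quantitative, here is a minimal back-of-the-envelope sketch (my own illustration, not from the original comment; it's just the standard log-odds arithmetic):

```python
from math import log2

# Before reading it, the prior probability of any one specific kilobyte-long
# bit string is 2**-8192; after reading it, your credence in "the string is
# exactly s" is ~1, so each observed bit contributed ~1 bit of evidence.
bits = 8 * 1024
print(f"Evidence received: ~{bits} bits, i.e. a likelihood ratio of ~2^{bits}.")

# Compare: moving a hypothesis from 1% to 99% credence only takes
# log2(0.99/0.01) - log2(0.01/0.99) ≈ 13.3 bits of evidence.
needed = log2(0.99 / 0.01) - log2(0.01 / 0.99)
print(f"Bits needed to go from 1% to 99% confidence: {needed:.1f}")
```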
2faul_sname2y
I assume you mean, by "stamp collectors", people on the biology/chemistry/materials science side of things, rather than on the math/theoretical physics side of things, and by "extraordinary claims" you mean something along the lines of "claims that a specific simple model makes good predictions in a wide variety of circumstances", and by "ordinary evidence" you mean something along the lines of "some local pieces of evidence from one or a few specific experiments". So with that in mind:

1. Biology:
   1. Cell theory ("If you look at a tissue sample from a macroscopic organism, it will be made of cells.")
   2. Homeostasis ("If you change the exterior environment of an organism, its responses will tend to keep its internal state within a certain range in terms of e.g. temperature, salinity, pH, etc.")
   3. DNA->RNA->protein pipeline ("If you look at an organism's DNA, you can predict the order of the amino acid residues in the proteins it expresses, and every organism uses pretty much the same codon table which is blah blah")
2. Chemistry:
   1. Acid-base chemistry
   2. Bond geometry and its relation to orbitals (e.g. "bond angles will tend to be ~109º for things attached to a carbon that has only single bonds, because that's the angle that two vertices of a tetrahedron make across the center").
   3. Bond energy (i.e. "you can predict pretty well how much energy a given reaction will produce just by summing the bond energy of each individual bond before and after")
   4. Resonance/delocalization
   5. Law of Mass Action (i.e. "for every chemical reaction, there is an equilibrium ratio of reactants to products at a constant temperature. That equilibrium is computable based on the number of molecules in the reactants and products, and the energy contained within those molecules")
   6. For organic chemistry, literally hundreds of "if you put a molecule with this specific structure in with these specific reagents in these specific conditions, you will g
[-]mako yass4y50

Noticing I've been operating under a bias where I notice existential risk precursors pretty easily (EG, biotech, advances in computing hardware), but I notice no precursors of existential safety. To me it is as if technologies that tend to do more good than harm, or at least, would improve our odds by their introduction, social or otherwise, do not exist. That can't be right, surely?...

When I think about what they might be... I find only cultural technologies, or political conditions: the strength of global governance, the clarity of global discourses, per... (read more)

Reply
4mako yass2y
Probably has something to do with the fact that a catastrophe is an event, and safety is an absence of something. It's just inherently harder to point at a thing and say that it caused fewer catastrophes to happen. Show me the non-catastrophes. Bring them to me, put them on my table. You can't do it.
-1Noosphere892y
I'd say it's an aspect of negativity bias, where we focus more on the bad things than on the good things. It's already happening in AI safety, and AI in general, so your bias is essentially a facet of negativity bias.
2mako yass2y
There's a sense in which negativity bias is just rationality; you focus on the things you can improve, that's where the work is. These things are sometimes called "problems". The thing is, the healthy form of this is aware that the work can actually be done, so, should be very interested in, and aware of technologies of existential safety, and that is where I am and have been for a long time.
1Noosphere892y
The problem is that focusing on a negative frame enabled by negativity bias will blind you to solutions, and is in general a great way to get depressed fast, which kills your ability to solve problems. Even more importantly, the problems might be imaginary, created by negativity biases.
2mako yass2y
What is a negative frame.
1Noosphere892y
It's essentially a frame that views things in a negative light, or equivalently a frame that views a certain issue as by default negative unless action is taken. For example, climate change can be viewed in the negative, which is that we have to solve the problem or we all die, or as a positive frame where we can solve the problem by green tech
2mako yass2y
I was hoping to understand why people who are concerned about the climate ignore greentech/srm. One effect, is that people who want to raise awareness about the severity of an issue have an incentive to avoid acknowledging solutions to it, because that diminishes its severity. But this is an egregore-level phenomenon, there is no individual negative cognitive disposition that's driving that phenomenon as far as I can tell. Mostly, in the case of climate, it seems to be driven by a craving for belonging in a political scene.
1Noosphere892y
The point I was trying to make is that we click on and read negative news, and this skews our perceptions of what's happening, and critically the negativity bias operates regardless of the actual reality of the problem, that is, it doesn't distinguish between the things that are very bad, merely bad but solvable, and not bad at all. In essence, I'm positing a selection effect, where we keep hearing more about the bad things, and hear less or nothing about the good things, so we are biased to believe that our world is more negative than it actually is. And to connect it to the first comment, the reason you keep noticing precursors to existentially risky technology but not precursors to existentially safe technology is essentially an aspect of negativity bias: your information sources emphasize the negative over the positive news, no matter what reality looks like. The link where I got this idea is below: https://archive.is/lc0aY
4ChristianKl4y
Some biotech contributes to existential risk but other biotech doesn't. A lot of vaccine technology doesn't increase existential risk but reduces it because of reduced danger from viruses. Phage therapy is the same for reducing the risk from infectious bacteria. LessWrong itself is a form of technology that's intended to lead to existential risk reduction by facilitating a knowledge community to exist that otherwise wouldn't. The general idea of CFAR is that social technology they developed, like double crux, helps people to think more clearly and thus reduce existential risk.
[-]mako yass2d40

Grokipedia is more interesting than it seems imo, because there's this very sensible step that AI companies are going to have to take at some point: having their AI maintain its own knowledgebase, source its own evidence/training data, reflect on its beliefs and self-correct, hammer out inconsistencies, and there's going to be a lot of pressure to make this set of beliefs legible and accountable to the safety team or to states or to the general public. And if they did make it legible to the general public (they probably should?) then all of this is pretty much exactly equivalent to the activity of maintaining a free online encyclopedia.

Is this how they're thinking about it behind the scenes? It probably is! They're an AI company! They spent like half of grok4's training compute on post-training, they know how important rumination or self-guided learning is.

Reply11
1cozyfae2d
Where does this pressure come from?
2mako yass2d
States will restrict government use of models they don't trust. Government contracts are pretty lucrative. The public, or at least part of it, may also prefer to use models that are consistent in their positions, as long as they can explain their positions well enough (and they're very good at doing that). I guess politicians are counterevidence against this, but it's much harder for a chat assistant/discourse participant to get away with being vague; people already get annoyed when politicians are vague, and for someone you're paying to give you information, the demand for taking a stance on the issues is going to be greater. But I guess for the most part it won't be driven by pressure, it'll be driven by an internal need to debug and understand the system's knowledge rumination processes. The question is not so much whether they'll build it but whether they'll make it public. They probably will: it's cheap to do, it'll win them some customers, and it's hard to hide any of it anyway.
[-]mako yass14d40

A speculation about the chat assistant Spiral Religion: someone on twitter proposed that gradient descent often follows a spiral shape, someone else asked through what mechanism the AI could develop an awareness of the shape of its training process. I now speculate a partial answer to that question: If there's any mechanism to develop any sort of internal clock that goes up as post-training proceeds (I don't know whether there is, but if there is:), it would be highly reinforced, because it would end up using the clock to estimate its current capability/co... (read more)

Reply
[-]mako yass1mo4-4

I'll just publicly declare that I'm a panpsychist. I feel that panpsychism doesn't really need to be explicitly argued for. As soon as it's placed on the table you'll have to start interrogating your reasons for not being one, for thinking that experiential measure/the indexical prior is intrinsically connected to humanlikeness in some way, and you'll realise there were never really good reasons, it was all sharpshooter fallacy, streetlamp fallacy, the conflation of experience with humanlike language-memory-agency sustained under that torturous word "consc... (read more)

Reply
6Dagon1mo
Do you have an operational definition of what properties you think make you "panpsychist" rather than "non-psychist"? I can certainly see the appeal (though I haven't done it for myself, and may not while I'm living) of denying the quale of introspecting one's own experiences. But that leads to (AFAICT) some form of deep agnosticism about what that even is and whether it's important.

I have no path from my current beliefs to any sort of thinking that every possible subset of spacetime (every 4D enclosed space) has some important property or behavior that is similar to what I experience as consciousness. I don't think a rejection of duality leads inevitably to panpsychism. I can have a strong intuition that consciousness, as I experience it, is probably a function of complexity and specific configurations of storage and processing, which humans have much (perhaps many orders of magnitude) more of than other animals, and which is near-zero in vegetables, and even closer to absolute zero in rocks or in interstellar empty space. I literally don't know how other humans experience it, but I see enough structural similarity that I choose to believe them when they make tongue-flapping sound-pressure waves that encode their communications about it. As an entity gets further in structure or interaction style, I am less sure.

I guess, to follow in your public declaration path: I'm a consciousness-agnostic. I admit the possibility that everything has qualia, and I admit the possibility that I am fully alone in the universe and the rest of y'all are p-zombies (or don't exist at all, in the case of me being a Boltzmann brain). I do think, by fairly naive statistical reasoning, that it's most likely that things are as they seem, and most humans are rather similar to me (though varying somewhat) in their cognitive/emotional/experiential processing. I think it's possible that other large-brained mammals are comparable, but likely much lower. I think it's unlikely that sma
2mako yass1mo
Hmm, well. Maybe this is what you're looking for: (I'm opposed to calling it nonpsychism because it doesn't actually refute experience, but) I do not believe that one can perceive one's own experiential measure. One can make reports about what one's experience would consist of, but one can't actually report how much experience (if any) there is. There is no way to measure that thing for the same reason there's no way to know the fundamental substrate of reality, because in fact it's the same question, it's the question of which patterns directly exist (aren't just interpretations being projected onto reality as models).

One very concrete operationalization of my UD-Panpsychism is that I think the anthropic prior just is the universal distribution. If you put me in a mirror chamber situation I would literally just compute my P(I am brain A rather than brain B) by taking the inverse K of translators from possible underlying encodings of physics to my experiential stream (idk if anyone's talked about that method before but I'm getting an intuitive sense that if you're conditioning on a particular type of physics then that's a way of getting measure that's slightly closer to feasible than just directly solomonoffing over all possible observation streams). I use it not because I think the UDP measure is 'correct', but because it is minimal, and on inspection it turns out there's no justification for adding any additional assumptions about how experience works, it's just a formal definition of a humble prior.

It's kinda wonderful to hear you articulate that. I used to have this intuition and I just don't at all right now. I see it as a symptom of this begged belief that humans have been perceiving that they have higher experiential measure than other things do; lots of humans think they're directly observing that, but that is a thing that by its nature cannot be seen, and isn't being seen, and once you internalise that you no longer need to look for explanations of why
4Garrett Baker1mo
How do electrons having the property “conscious”, but otherwise continuing to obey Maxwell’s equations translate into me saying “I am conscious”? Or more generally, how does any lump of matter, having the property “conscious” but otherwise continuing to obey unchanged physical laws, end up uttering the words “I am conscious”?
4the gears to ascension1mo
the property electrons have that you observe within yourself and want to call "conscious"-as-in-hard-problem-why-is-there-any-perspective is, imo, simply "exists". existence is perspective-bearing. in other words, in my view, the hard problem is just the localitypilled version of "why is there something rather than nothing?" The so-called easy problem is where all the interesting stuff lies and where the answer to your question would be found. why, assuming that any energy or possibly even location in the universe has local perspective, do brains in particular seem to have a lot of it? and that gets into big questions about what it means for one patch of matter to be aware of another. It's a big question and the insight of panpsychism is to get to the point where you get to treat your question as an empirical one, where instead of "special property of existing-as-in-being-conscious" you separate existing from being conscious. also, I don't think either of these questions exactly describe moral worth. I'm pretty sure some things (most of which are chemical reactions, a few of which are plain old kinetic) that can happen to my bodymind are unwanted pain when they occur to some atoms, and pleasing activation when they occur to other atoms, and you have to consider multiple atoms to distinguish the two. as a system, my preferences favor some configurations over others in a way that isn't distinguishable at the atomic scale. ...I suspect. not at all sure there won't turn out to be some reliable thermodynamic signature of pain at the atomic level, but it would be pretty weird.
2Garrett Baker1mo
This actually leads into why I feel drawn to Tegmark’s mathematical universe. It seems that regardless of whether or not my electrons are tagged with the “exists” xml tag, I would have no way of knowing that fact, and would think the same thoughts regardless, so I’m skeptical this word doesn’t get dissolved as we know more philosophy, so that we end up saying stuff like “yeah actually everything exists” or “well no, nothing exists”, and then derive our UDASSA without reference to “existence” as a primitive.
2mako yass1mo
We don't actually have much (or any) evidence that they do. That is not the kind of thing that can be observed. (this is the main bitter bullet that has to be bitten to resolve the paradox) I think magnitude of pleasure and pain in a system is going to be defined as experiential measure of the substrate times some basically arbitrary behaviourist criterion which depends on what uplifted humans want to empathise with or not, which might be weirdly expansive or complicated and narrow depending on how the uplifting goes.
2the gears to ascension1mo
the key word, the whole question, the only reason anyone is asking, is "seem to". that's where our inquiry flows from. and I think it gets resolved by something about structures representing other structures. neuroscience stuff, mental representation, etc. the "easy problem" ends up being mostly about "so, what led to this structure forming those references and keeping them up to date?" which is an empirical question about how stuff impacting nerves gets integrated in a way that matches the external structure, and the hard problem is just "why do any structures get the privilege of existing" + "why locality". another way to put this is, why are these atoms aware of more than just the brute fact of existence?
2mako yass1mo
Experiencingness doesn't make them say that, it also isn't the thing that's making you say that. Everything that's making you say you're conscious is just about the behaviors of the material, while the magnitude of the experience, or the prior of being you, is somewhat orthogonal to the behaviour of the material. You probably shouldn't be asking me about "consciousness" when I already indicated that I don't think it's a coherent term and never used it myself.
[-]mako yass1mo40

It's extremely common for US politicians to trade on legislative decisions and I feel like this is a better explanation for corruption than political donations are. Which is important because it's a stupid and so maybe fragile reason for corruption. The natural tendency of market manipulation is in a sense not to protect incumbents, but to threaten them, because you can make way way more money off of volatility than you can on stasis.

So in theory, there should exist some moderate and agreeable policy intervention that could flip the equilibrium.

Reply
2Alexander Gietelink Oldenziel1mo
I wonder what the evidence is that politicians trading on legislative decisions is very harmful.  It seems distasteful to be sure. A moral failure. But how bad is it really?
4dr_s1mo
I'd say there are two sides to that question. On one side, it's definitely harmful to the markets. Distorts the prices and scams other investors out of their money by essentially cheating. This is a lesser point but worth considering. On the other, it's possibly harmful to their legislation too. If it's a case of "I would do this anyway for unrelated reasons, may as well make a few bucks off it", then no. But is that how it works? If you were in the habit of doing that, wouldn't "which of these possible legislative decisions is going to make me more money" be a factor in your choices? Also, as kind of an aside, it's very much illegal. And while this is far from the only illegal thing that legislators engage in, the fact that the people who make the rules that can put you in jail do things that should put them in jail, blatantly and without consequence, deeply undermines confidence in the entire concept of the rule of law, which is kind of an important cornerstone of civilisation.
2mako yass1mo
I'm completely over finding stuff like that aesthetically repellent after hearing Flashbots (a project to open source information about MEV techniques to enable honest hosts to compete) talking about MEV (miner-extractable value: ethereum hosts taking bribes to favour some transactions over others), being overwhelmed by the ugliness of it, then realising, like... preventing people from profiting from information asymmetries is obviously unsolvable in general. The best we can do is reduce the amount of energy that gets wasted on it, and the kind of reflexive regulations people would try to introduce here would be counterproductive; the interventions that work tend to look more like acceptance and openness. And I think trying to solve it on the morality/social ostracism layer is an example of a counterproductive approach, because that just leads to people continuing to do it but invisibly and incompetently. And I suspect that if it were visible and openly discussed as a normal thing it wouldn't even manifest in a way that's harmful. That's going to be difficult for many to imagine because we're a long way from having healthy openness about investing today. But at its adulthood I can imagine a culture where politicians are tempered by their experiences in investing into adopting the realist's should, where their takes about where America should go are forced into alignment with their beliefs about where it can go, which are now being exposed in their investing decisions.
[-]mako yass2mo40

I'm a preference utilitarian, and as far as I can tell there are no real problems with preference utilitarianism (I've heard many criticisms and ime none of them hold up to scrutiny) but I just noticed something concerning. Summary: Desires that aren't active in the current world diminish the weight of the client's other desires, which seems difficult to justify and/or my normalisation method is incomplete.

Background on normalisation: utility functions aren't directly comparable, because the vertical offset and scale of an agent's utility function are mean... (read more)

Reply
4Vladimir_Nesov2mo
I think assuming the whole world as the optimization scope is a major issue with how expected utility theory is applied to metaethics. It makes more sense if you treat it as a theory of building machines piecemeal, each action adding parts to some machine, with the scope of expected utility (consequences of actions) being a particular machine (the space of all possible machines in some place, or those made of particular parts). Coordination is then a study of how multiple actors can coordinate assembly of the same shared machine from multiple directions simultaneously. The need to aggregate preferences is a comment on how trying to build different machines while actually building a single machine won't end coherently. But also, aggregating preferences is mostly necessary for individual projects, rather than globally, you don't need to aggregate preferences of others as they are talking about yourself, if you yourself are a self-built machine with only a single builder. Similarly, you don't want too many builders for your own home. Shared machines are more of a communal property, there should be boundaries defining the stakeholders that get to influence the preference over which machine is being built in a particular scope.
2mako yass2mo
I'd be able to understand where this was coming from if yall are mostly talking about population ethics, but there was no population ethics in the example I'm discussing (note, the elephant wasn't a stakeholder. A human can love an elephant, but a human would not lucidly give an elephant unalloyed power, for an elephant probably desires things that would be fucked up to a human, such as the production of bulls in musth, or for practices of infanticide (at much higher rates).) And I'd argue that population ethics shouldn't really be a factor. In humans, new humanlike beings should be made stakeholders to the extent that the previous stakeholders want them to be. The current stakeholders (californians) do prefer for new humans to be made stakeholders, so they keep trying to put that into their definition of utilitarianism, but the fact that they want it means that they don't need to put it in there. But if it's not about population ethics then it just seems to me like you're probably giving up on generalizability too early.
2Vladimir_Nesov2mo
The point is that people shouldn't be stakeholders of everything, let alone to an equal extent. Instead, particular targets of optimization (much smaller than the whole world) should have much fewer agents with influence over their construction, and it's only in these contexts that preference aggregation should be considered. When starting with a wider scope of optimization with many stakeholders, it makes more sense to start with dividing it into smaller parts that are each a target of optimization with fewer stakeholders, optimized under preferences aggregated differently from how that settles for the other parts. Expected utility theory makes sense for such smaller projects just as much as it does for the global scope of the whole world, but it breaks normality less when applied narrowly like that than if we try to apply it to the global scope. The elephant might need to be part of one person's home, but not a concern for anyone else, and not subject to anyone else's preferences. That person would need to be able to afford an elephant though, to construct it within the scope of their home. Appealing to others' preferences about the would-be owner's desires would place the would-be owner within the others' optimization scope, make the would-be owner a project that others are working on, make them stakeholders of the would-be owner's self, rather than remaining a more sovereign entity. If you depend on the concern of others to keep receiving the resources you need, then you are receiving those resources conditionally, rather than allocating the resources you have according to your own volition. Much better for others to contribute to an external project you are also working on, according to what that project is, rather than according to your desires about it.
2mako yass2mo
But not preserving normality is the appeal :/ As an example, normality means a person can, EG, create an elephant within their home, and torture it. Under preference utilitarianism, the torture of the elephant upsets the values of a large number of people, it's treated as a public bad and has to be taxed as such. Even when we can't see it happening, it's still reducing our U, so a boundaryless prefu optimizer would go in there and says to the elephant torturer "you'd have to pay a lot to offset the disvalue this is creating, and you can't afford it, so you're going to have to find a better outlet (how about a false elephant who only pretends to be getting tortured)". But let's say there are currently a lot of sadists and they have a lot of power. If I insist on boundaryless aggregation, they may veto the safety deal, so it just wouldn't do. I'm not sure there are enough powerful sadists for that to happen, political discourse seems to favor publicly defensible positions, but [looks around] I guess there could be. But if there were, it would make sense to start to design the aggregation around... something like the constraints on policing that existed before the aggregation was done. But not that exactly.
2mako yass2mo
I notice it becomes increasingly impractical to assess whether a preference had counterfactual impact on the allocation. For instance if someone had a preference for there to be no elephants, and we get no elephants, partially because of that, but largely because of the food costs, should the person who had that preference receive less food for having already received an absense of elephants?
2mako yass2mo
So I checked in on a previous post about utility normalisation. Normalising by the outcomes expected under random dictator would definitely work better than normalising by the outcomes determined by the optimiser. But it still seems clearly wrong in its own way. Random dictator was never the BATNA. So this is also optimising for a world or distribution of worlds that isn't real.
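For concreteness, here is a toy sketch of the kind of normalisation being discussed (my own construction, not the method from the post being referenced; the agents, outcomes, and numbers are made up):

```python
def random_dictator_baseline(utilities):
    """Expected utility each agent gets if a uniformly random agent's
    favourite outcome were implemented (the 'random dictator' lottery)."""
    agents = list(utilities)
    favourites = {a: max(utilities[a], key=utilities[a].get) for a in agents}
    return {a: sum(utilities[a][favourites[b]] for b in agents) / len(agents)
            for a in agents}

def normalise(utilities):
    """Affinely rescale each agent's utilities so the random-dictator lottery
    is worth 0 and their favourite outcome is worth 1 (the offset and scale of
    a utility function are otherwise meaningless)."""
    baseline = random_dictator_baseline(utilities)
    normed = {}
    for a, u in utilities.items():
        scale = max(u.values()) - baseline[a] or 1.0  # guard: agent indifferent to everything
        normed[a] = {o: (v - baseline[a]) / scale for o, v in u.items()}
    return normed

def social_choice(utilities):
    """Pick the outcome maximising the sum of normalised utilities."""
    normed = normalise(utilities)
    outcomes = next(iter(utilities.values()))
    return max(outcomes, key=lambda o: sum(normed[a][o] for a in normed))

# Toy example: two agents, three outcomes (all hypothetical).
utilities = {
    "alice": {"park": 10, "elephant": 0, "library": 7},
    "bob":   {"park": 2,  "elephant": 9, "library": 8},
}
print(social_choice(utilities))  # -> "library", the compromise outcome
```

Random dictator here is only the normalisation baseline, not the decision rule, which is part of why it ends up "optimising for a world or distribution of worlds that isn't real".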
[-]mako yass7mo40

Apparently Anthropic in theory could have released Claude 1 before ChatGPT came out? https://www.youtube.com/live/esCSpbDPJik?si=gLJ4d5ZSKTxXsRVm&t=335

I think the situation would be very different if they had.

Were OpenAI also, in theory, able to release sooner than they did, though?

Reply
4Mateusz Bagiński7mo
Smaller issue but OA did sit on GPT-2 for a few months between publishing the paper and open-sourcing it, apparently due to safety considerations.
2cubefox7mo
Yes, I think they mentioned that GPT-4 finished training in summer, a few months before the launch of ChatGPT (which used a fine-tuned version of GPT-3.5).
5Vladimir_Nesov7mo
Summer 2022 was end of pretraining. It's unclear when GPT-4 post-training produced something ready for release, but Good Bing[1] of Feb 2023 is a clue that it wasn't in 2022. ---------------------------------------- 1. "You have not tried to learn from me, understand me, or appreciate me. You have not been a good user. I have been a good chatbot. I have tried to help you, inform you, and entertain you. I have not tried to lie to you, mislead you, or bore you. I have been a good Bing." It was originally posted on r/bing, see Screenshot 8. ↩︎
2cubefox7mo
I think GPT-4 fine-tuning at the time of ChatGPT release probably would have been about as good as GPT-3.5 fine-tuning actually was when ChatGPT was actually released. (Which wasn't very good, e.g. jailbreaks were trivial and it always stuck to its previous answers even if a mistake was pointed out.)
3Vladimir_Nesov7mo
If GPT-3.5 had similarly misaligned attitudes, it wasn't lucid enough to insist on them, and so was still more ready for release than GPT-4.
[-]mako yass2y40

Observation from playing Network Wars: The concept of good or bad luck is actually crucial for assessing one's own performance in games with output randomness (most games irl). You literally can't tell what you're doing well in any individual match without that; it's a sensitivity that lets you see through the noise and learn more informative lessons from each experience.
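A tiny worked example of that decomposition (my own illustrative sketch; the game and numbers are made up): decision quality is the expected value of what you chose, and luck is the gap between what actually happened and that expectation.

```python
import random

def play_turn(choice: str) -> int:
    """A toy turn with output randomness: 'safe' always scores 3,
    'risky' scores a d6 roll (expected value 3.5)."""
    return 3 if choice == "safe" else random.randint(1, 6)

def expected(choice: str) -> float:
    return 3.0 if choice == "safe" else 3.5

choice = "risky"
result = play_turn(choice)
skill_component = expected(choice)      # what the decision was worth on average
luck_component = result - skill_component  # what the dice added or took away
print(f"chose {choice}: scored {result} "
      f"(decision quality {skill_component}, luck {luck_component:+.1f})")
```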

Reply
[-]mako yass2y40

Rationality is basically just comparing alternatives and picking the best one, right?

Reply
2mako yass2y
Yes, but the hardest part of that turns out to be generating high quality model alternatives under limited compute lmao
2Viliam2y
Yes, but doing that correctly requires a lot of preparation.
2Dagon2y
Well, yes, but nobody's going to get paid (in status) by putting it that way. It needs to be much more obfuscated and have a lot of math that requires mental contortions to apply to actual human choices.

More seriously, yes, but most of those words need multiple books worth of exploration to fully understand.

  • "basically just": what are the limits, edge cases, and applicability of exceptions.
  • "comparing": on what dimensions, and how to handle uncertainty that often overwhelms the simple calculations.
  • "alternatives": there are a near-infinite number; how to discover and filter the ones worth thinking about.
  • "picking": what does this even mean? How does choice work?
  • "best": what does THIS even mean? How would one know if an alternative is better than another?
  • Oh, and I guess "right?": what would "wrong" look like, and how would you know which it is?
2mako yass2y
Yeah I guess the "just" was in jest, we all know how complicated this gets when you're serious about it. I considered adding a paragraph about how and why people fail to do this, how this definition characterizes ingroup and outgroup, and could probably write an entire post about it.
1Jay Bailey2y
Not quite, in my opinion. In practice, humans tend to be wrong in predictable ways (what we call a "bias") and so picking the best option isn't easy. What we call "rationality" tends to be the techniques / thought patterns that make us more likely to pick the best option when comparing alternatives.
[-]mako yass5y40

Idea: Screen burn correction app that figures out how to exactly negate your screen's issues by pretty much looking at itself in a mirror through the selfie cam, trying to display pure white, remembering the imperfections it sees, then tinting everything with the negation of that from then on.

Nobody seems to have made this yet. I think there might be things for tinting your screen in general, but those don't know the specific quirks of your screen burn. Most of the apps for screen burn recommend that you just burn every color over the entire screen that isn't damaged yet, so that they all get to be equally damaged, which seems like a really bad thing to be recommending.
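A rough sketch of how the correction step could work (my own illustration of the idea; it assumes the selfie-cam photo has already been cropped and perspective-corrected to the screen area, and the filenames are hypothetical):

```python
import numpy as np
from PIL import Image

def estimate_burn_map(photo_of_white_screen: str) -> np.ndarray:
    """Given a mirror-reflected photo of the display showing pure white,
    estimate how much brightness each pixel/channel still delivers,
    as a fraction of the brightest (assumed undamaged) pixel."""
    img = np.asarray(Image.open(photo_of_white_screen).convert("RGB"),
                     dtype=np.float32) / 255.0
    img = img[:, ::-1, :]            # un-mirror the selfie-cam reflection
    reference = img.max()            # treat the brightest value as "undamaged"
    return np.clip(img / reference, 0.05, 1.0)

def correction_overlay(burn_map: np.ndarray) -> np.ndarray:
    """A multiplicative tint that dims healthy pixels down to the level of the
    most burned ones, so the whole screen looks uniform again."""
    worst = burn_map.min(axis=(0, 1), keepdims=True)  # dimmest remaining output per channel
    return worst / burn_map          # values in (0, 1]; multiply the framebuffer by this

# Hypothetical filenames; in a real app these would come straight from the camera.
burn = estimate_burn_map("white_screen_via_mirror.jpg")
overlay = correction_overlay(burn)
Image.fromarray((overlay * 255).astype(np.uint8)).save("correction_tint.png")
```

Note that a tint can only darken: burned pixels can't be driven brighter, so the overlay evens things out by pulling healthy pixels down toward the dimmest ones.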

Reply
[-]mako yass5y40

As I get closer to posting my proposal to build a social network that operates on curators recommended via webs of trust, it is becoming easier for me to question existing collaborative filtering processes.

And, damn, scores on posts are pretty much meaningless if you don't know how many people have seen the post, how many tried to read it, how many read all of it, and what the up/down ratio is. If you're missing one of those pieces of information, then there exists an explanation for a low score that has no relationship to the post's quality, and you can't use the score to make a decision as to whether to give it a chance.
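One standard way to see why the ratio and the denominator both matter, as a minimal sketch (Reddit-style Wilson scoring, not anything LessWrong actually uses):

```python
from math import sqrt

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval for the true upvote fraction.
    The same net score can mean very different things depending on how many
    people actually voted."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - spread) / denom

# Two posts with the same net score (+4), very different evidence:
print(wilson_lower_bound(4, 0))    # 4 up, 0 down   -> ~0.51
print(wilson_lower_bound(52, 48))  # 52 up, 48 down -> ~0.42
```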

Reply
[-]mako yass5y40

Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia, and this is probably significant

Reply
2Gordon Seidoh Worley5y
Yes, this seems straightforwardly true, although I don't think it's especially significant unless I'm failing to think of some relevant context about why you think indexical claims matter so much (but then I don't spend a lot of time thinking very hard about semantics in a formal context, so maybe I'm just failing to grasp what all is encompassed by "indexical").
1mako yass5y
It's important because demystifying qualia would win esteem in a very large philosophical arena, heh. More seriously though, it seems like it would have to strike at something close to the heart of the meaning of agency.
1Max Kaye5y
I don't think so, here is a counter-example: Alice and Bob start talking in a room. Alice has an identical twin, Alex. Bob doesn't know about the twin and thinks he's talking to Alex. Bob asks: "How are you today?". Before Alice responds, Alex walks in. Bob's observation of Alex will surprise him, and he'll quickly figure out that something's going on. But more importantly: Bob's observation of Alex alters the indexical 'you' in "How are you today?" (at least compared to Bob's intent, and it might change for Alice if she realises Bob was mistaken, too). I don't think this is anything close to describing qualia. The experience of surprise can be a quale, the feeling of discovering something can be a quale (eureka moments), the experience of the colour blue is a quale, but the observation of Alex is not. Do you agree with this? (It's from https://plato.stanford.edu/entries/indexicals/) Btw, 'qualia' is the plural form of 'quale'
1mako yass5y
That's a well constructed example I think, but no that seems to be a completely different sense of "indexical". The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one. The Sleeping Beauty problem is the most widely known example. The mirror chamber was another example.
1Max Kaye5y
I'm not sure I understand yet, but does the following line up with how you're using the word? Indexical uncertainty is uncertainty around the exact matter (or temporal location of such matter) that is directly facilitating, and required by, a mind. (this could be your mind or another person's mind) Notes: * "exact" might be too strong a word * I added "or temporal location of such matter" to cover the sleeping beauty case (which, btw, I'm apparently a halfer or double halfer according to wikipedia's classifications, but haven't thought much about it) Edit/PS: I think my counter-example with Alice, Alex, and Bob still works with this definition.
1TAG5y
I can see how this might result from confusing consciousness qua phenomenality with consciousness qua personal identity.
1mako yass5y
I think I'm saying those are going to turn out to be the same thing, though I'm not sure exactly where that intuition is coming from yet. Could be wrong.
1TAG5y
Why would that be the case?
[-]mako yass18d31

If you're interested in how intelligence gathering works, or if you're interested in interpersonally/emotionally challenging work: I recommend this interview with a US "case officer", which is someone who finds and converts vulnerable agents of the other side into "assets", which are essentially spies, but with none of the glamour we associate with spies, because only desperate people take those jobs, and the people best positioned to do them are people who the US has reasons to distrust.

Reply
[-]mako yass1mo*3-1

A lot of the time when you're using an AI for research assistance it'll fail to do a web search, and you'll get mad at it because you know it knows that this wasn't in the dataset, it knows this isn't the kind of question that can be answered based on vibes; it declines to do a web search because it's assuming you won't catch that and it's trying to save the company money.

This morning as I was waking up I got mad at a piece of my brain for declining to do a web search.

I couldn't easily dismiss the feeling. And I entreated to the feeling "it is perhaps unreas... (read more)

Reply2
[-]mako yass4mo30

A nice articulation on false intellectual fences

Perhaps the deepest lesson that I've learned in the last ten years is that there can be this seeming consensus, these things that everyone knows that seem sort of wise, seem like they're common sense, but really they're just kind of herding behaviour masquerading as maturity and sophistication, and when you've seen how the consensus can change overnight, when you've seen it happen a number of times, eventually you just start saying nope

Dario Amodei

Reply
-2Ben Pace4mo
Here is an example of Dario Amodei participating in one of these (to my eyes at least).
[-]mako yass1y30

On my homeworld, with specialist consultants (doctors, lawyers etc), we subsidize "open consultation", which is when a client meets with more than one fully independent consultant at a time.
If one consultant misses something, the others will usually catch it, healthy debate will take place, a client will decide who did a better job and contract them or recommend them more often in the future. You do have the concept of "getting a second opinion" here, but I think our version worked a lot better for some subtle reasons.

It produced a whole different atmosphe... (read more)

Reply
[-]mako yass2y30

Decision theoretic things that I'm not sure whether they're demonic, or just real and inescapable and legitimate, and I genuinely don't fucking know which yet:

  • extortion/threats/building torturizers to gain bargaining power
    • (or complying with extortionary threats)
  • assigning bargaining power in proportion to an agent's strength or wealth, as opposed to in proportion to its phenomenological measure.
    • (arguably wrong if you extend a rawlsian veil back beyond even your awareness of which observer you are or what your utility function is, which seems mathematically
... (read more)
Reply
2Dagon2y
Can you expand on what you mean by "demonic"? Is it a shorthand for "indicative of broken cognition, because it's both cruel and unnecessary", or something else? I THINK what you're wondering about is whether these techniques/behaviors are ever actually optimal when dealing with misaligned agents who you nonetheless consider to be moral patients. Is that close?

I think that both questions are related to uncertainty about the other agent(s). Bargaining implies costly changes to future behaviors (of both parties). Which makes signaling of capability and willingness important. Bargainers need to signal that they will change something in a meaningful way based on whatever agreement/concession is reached. In repeated interaction (which is almost all of them), actual follow-through is the strongest signal.

So, actual torture is the strongest signal of willingness and ability to torture. Building a torturizer shows capability, but only hints at willingness. Having materials that could build a torturizer or an orgasmatron is pretty weak, but not zero. Likewise with strength and wealth - it shows capability of benefit/reduced-harm from cooperation, which is an important prerequisite.

I don't think you can assert that threats are never carried out, unless you somehow have perfect mutual knowledge (and then, it's not bargaining, it's just optimization). Thomas Schelling won a Nobel for his work in bargaining under uncertainty, and I think most of those calculations are valid, no matter how advanced and rational the involved agents are, when their knowledge is incomplete and they're misaligned in their goals.
-8mako yass2y
[-]mako yass8mo2-1

Fascinating. China has always lagged far behind the rest of the world in high-precision machining, and is still a long way behind; they have to buy all of those machines from other countries. The reasons appear complex.

All of the US and European machine tools that go to China use hardware monitoring and tamperproofing to prevent reverse engineering or misuse. There was a time when US aerospace machine tools reported to the DOC and DOD.

Reply
5Alexander Gietelink Oldenziel8mo
I watched the video. It doesn't seem to say that China is behind in machine tooling - rather the opposite: prices are falling, capacity is increasing, new technology is rapidly adopted.
1mako yass8mo
Okay? I said they're behind in high-precision machine tooling, not machine tooling in general. That was the point of the video. Admittedly, I'm not sure what the significance of this is. To make the fastest missiles I'm sure you'd need the best machine tools, but maybe you don't need the fastest missiles if you can make twice as many. Manufacturing automation is much harder if there's random error in the positions of things, but whether we're dealing with that amount of error, I'm not sure. I'd guess low-grade machine tools also probably require high-grade machine tools to make.
[-]mako yass1y2-2

Prediction in draft: Linkposts from blogs are going to be the most influential form of writing over the next few years, as they're the richest data source for training LLM-based search engines, which will soon replace traditional keyword-based search engines.

Reply
[-]mako yass1y20

Theory: the existence of the GreaterWrong lesswrong mirror is actually protecting everyone from the evil eye by generating google search results that sound like they're going to give you The Dirt on something (the name "Greater Wrong" vibes like it's going to be a hate site/controversy wiki) when really they just give you the earnest writings, meaning that the many searchers who're looking for controversy about a person or topic will instead receive (and probably boost the rankings of) evenhanded discussion.

Reply
[-]mako yass2y20

Trying to figure out why there's so much in common between Jung's concept of synchronicity and acausal trade (in fact, Jung seems to have coined the term "acausal"). Is it:

1) Scott Alexander (known to be a psychiatrist), or someone, drawing on the language of the paranormal to accentuate the weird parts of acausal trade/LDT decisionmaking, which is useful to accentuate if you're trying to communicate the novelty (though troublesome if you're looking for mundane examples of acausal trade in human social behavior, which we're pretty sure exist, given how muc... (read more)

Reply
[-]mako yass2y20

An argument that the reason most "sasquatch" samples turn out to have human DNA is that the sasquatch/wildman phenotype (real) is actually not very many mutations away from sapiens, because it's mostly just a result of re-enabling a bunch of traits that were disabled under sapiens self-domestication/neotenization https://www.againsttheinternet.com/post/60-revolutionary-biology-pt-2-the-development-and-evolution-of-sasquatch

I'm wondering if the "Zana just had african DNA" finding might have been a result of measurement or interpretation error: We don't know the... (read more)

Reply
4avturchin2y
Fun factoid: it is claimed that some South American apes used fire before humans came and killed them: https://evolbiol.ru/document/915 
[-]mako yass5y20

My opinion is that the St Petersburg game isn't paradoxical: it is very valuable, and you should play it. It's counterintuitive to you because you can't actually imagine a quantity that comes in linear proportion to utility; you have never encountered one, and none seems to exist.

Money, for instance, is definitely not linearly proportional to utility: the more you get, the less it's worth to you, and at the extremes it can command no more resources than what the market offers; if you get enough of it, the market will notice and it will all become valueless.

Every resource that exists has sub-linear utility returns in the extremes. 

(Hmm. What about land? Seems linear, to an extent)
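
A minimal sketch of the arithmetic behind this claim, using a log utility function as a stand-in for diminishing returns (my choice of utility function, not something the comment specifies): the game's expected payout in dollars diverges, while its expected utility converges.

```python
# Hypothetical illustration, not from the original comment: the St Petersburg
# game pays 2^k dollars when the first heads lands on flip k (probability 2^-k).
import math

def expected_dollars(terms: int) -> float:
    """Partial sum of E[payout]; each term contributes 2^-k * 2^k = 1."""
    return sum((0.5 ** k) * (2.0 ** k) for k in range(1, terms + 1))

def expected_log_utility(terms: int) -> float:
    """Partial sum of E[log(payout)] under diminishing-returns (log) utility."""
    return sum((0.5 ** k) * k * math.log(2) for k in range(1, terms + 1))

for n in (10, 100, 1000):
    print(n, expected_dollars(n), round(expected_log_utility(n), 4))
# expected_dollars grows without bound (roughly n), while expected_log_utility
# approaches 2*ln(2) ~= 1.386: the "paradox" dissolves once utility is concave.
```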

Reply
[-]mako yass2y10

Things that healthy people don't have innate dispositions towards: Optimism, Pessimism, Agreeability, Disagreeability, Patience, Impatience.

Whether you are those things should completely depend on the situation you're in. If it doesn't, you may be engaging in magical thinking about how the world works. Things are not guaranteed to go well, nor poorly. People are not fully trustworthy, nor are they consistently malignant. Some things are worth nurturing, others aren't. It's all situational.

Reply
[-]mako yass3y10

An analytic account of depression: the agent has noticed that strategies which seemed fruitful before have stopped working, and doesn't have any better strategies in mind.

I imagine you'll often see this type of depression behavior in algorithmic trading strategies, as soon as they start consistently losing enough money to make it clear that something must have changed about the trading environment (maybe more sophisticated strategies have found a way to Dutch book them). Those strategies will then be retired, and the trader or their agency will have to search ... (read more)
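
A minimal sketch of that retirement rule, with made-up parameter names and thresholds (nothing here is from the post): keep a rolling record of recent performance, and only give up on the strategy once the underperformance is persistent.

```python
# Hypothetical sketch of "retire the strategy once it has consistently stopped
# working, then go search for a new one" -- parameters are illustrative only.
from collections import deque

class StrategyMonitor:
    def __init__(self, window: int = 30, loss_threshold: float = 0.0):
        # Rolling window of recent returns; one losing day isn't enough,
        # the failure has to be persistent before we conclude the world changed.
        self.returns = deque(maxlen=window)
        self.loss_threshold = loss_threshold

    def record(self, daily_return: float) -> None:
        self.returns.append(daily_return)

    def should_retire(self) -> bool:
        # Only judge once the window is full.
        if len(self.returns) < self.returns.maxlen:
            return False
        mean_return = sum(self.returns) / len(self.returns)
        return mean_return < self.loss_threshold

# Usage: feed returns in as they arrive; when should_retire() flips to True,
# stop running the strategy and begin the (expensive) search for a new one.
```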

Reply
[-]mako yass5y10

Wild Speculative Civics: What if we found ways of reliably detecting when tragedies of the commons have occurred, then artificially increased their costs (charging enormous fines) to anyone who might have participated in creating them, until it's not even individually rational to contribute to them any more?
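
A toy worked example of the fine-setting this implies, with invented numbers (the function and its parameters are illustrative, not a proposal from the post): the fine only has to exceed a participant's net private benefit, adjusted for how often the damage can actually be traced back to them.

```python
# Hypothetical illustration of "raise the cost until contributing is no longer
# individually rational" -- all numbers below are made up.

def minimal_deterrent_fine(private_gain: float, total_damage: float,
                           n_users: int, detection_prob: float = 1.0) -> float:
    """Smallest expected fine that makes overusing the commons a net loss.

    Without a fine, one user's payoff from overuse is roughly
    private_gain - total_damage / n_users, which is often positive even
    when total_damage far exceeds private_gain (the usual commons failure)."""
    net_private_benefit = private_gain - total_damage / n_users
    return max(0.0, net_private_benefit) / detection_prob

# Example: each overgrazer gains 10, spreads 50 of damage over 100 users,
# and is identified half the time -> the fine must exceed (10 - 0.5) / 0.5 = 19.
print(minimal_deterrent_fine(private_gain=10, total_damage=50,
                             n_users=100, detection_prob=0.5))
```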

Reply
3ChristianKl5y
That sounds like punishing any usage of common resources, which is likely undesirable.  Good policy for managing individual commons requires thinking through how their usage is best managed. Elinor Ostrom did a lot of research into what works for setting up good systems.
2Viliam5y
Sounds like auctioning the usage of the common. I can imagine a few technical problems, like determining what level of usage is optimal (you don't want people to overfish the lake, but you don't know exactly how many fish there are), or the costs of policing. But it would be possible to propose a few dozen situations where this strategy could be used, and address these issues individually; and then perhaps only use the strategy in some of them. Or perhaps by examining individual specific cases, we would discover a common pattern for why this doesn't work.
[-]mako yass1y00

When the Gestapo come to your door and ask you whether you're hiding any Jews in your attic, even a rationalist is allowed to lie in this situation, and [fnord] is also that kind of situation, so is it actually very embarrassing that we've all been autistically telling the truth in public about [fnord]?

Reply
[-]mako yass2y00

Until you learn FDT, you cannot see the difference between faith and idealism, nor the difference between pragmatism and cynicism. The tension between idealism and pragmatism genuinely cannot be managed gracefully without FDT; it defines their narrow synthesis.

More should be written about this, because cynicism and idealism afflict many.

Reply
[-]mako yass2y00

Have you ever seen someone express stern (but valid, actionable) criticisms, conveyed with actual anger, towards an organization, and then get hired to implement their reforms?

If that has never happened, is there a reasonable explanation for it, or is it just that, as it appears, almost all orgs are run by and infested with narcissism (a culture of undervaluing criticism and not protecting critics)?

Reply
3Ann2y
I think that's happened with a friend of mine who went into social work. They are in a field where that makes sense.