Recent Discussion

This is a point of confusion I still have with the simulation argument: upon learning that we are in an ancestor simulation, should we be any less surprised? It would be odd for a future civilization to dedicate a large fraction of its computational resources to simulating early 21st-century humans instead of happy transhumans living in base reality; shouldn't we therefore be equally perplexed that we aren't transhumans?

I guess the question boils down to the choice of reference classes, so what makes the reference class "early 21st century humans" so special? Why not ... (Read more)
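One way to make the reference-class dependence concrete (a sketch in the self-sampling style of reasoning the simulation argument itself uses; the notation is mine, not the poster's): pick a reference class R of observers, and let N_sim(R) and N_real(R) be the numbers of simulated and unsimulated observers in R. The indifference principle behind the argument then assigns

$$P(\text{simulated}) = \frac{N_{\text{sim}}(R)}{N_{\text{sim}}(R) + N_{\text{real}}(R)},$$

and both counts change depending on whether R is "early 21st-century humans", "all human-like experiences", or something broader, which is exactly the ambiguity the question points at.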

[Event] Rationality Vienna Meetup, September 2019
2 · Sep 7th · Kaisermühlenstraße 24, Vienna, Austria

The meetup is in the large building across the street from the Vienna Stadlau train station: ground floor, on the side of the building away from the station.

Schedule:

  • 15:00–15:30: arrival & social time with tea and coffee
  • 15:30: round of introductions
  • ~15:45: talks or open microphone – moderated discussions (topics TBD)
  • ~18:00: cleaning up and heading for dinner in the city, with discussions until late evening.

Directions to the location from U2 Stadlau: Common room of Kaisermuehlenstrasse 24

I'm assuming a high level of credence in classical utilitarianism, that AI x-risk is significant (e.g. roughly >10%), and that timelines are not long (e.g. >50% chance of ASI in <100 years).

Here's my current list (off the top of my head):

  • not your comparative advantage
  • consider other x-risks more threatening (top contenders: bio / nuclear)
  • infinite ethics (and maybe other fundamental ethical questions, e.g. to do with moral uncertainty)
  • S-risks
  • simulation hypothesis

Also, does anyone want to say why they think none of these should change the picture? Or point to a good reference discussing this ... (Read more)

1 · Answer by maximkazhenkov · 1h
I'll add one more:
  • Doomsday Argument as overwhelming evidence against futures with large numbers of minds
Also works against any other x-risk-related effort and condones a carpe-diem sort of attitude on the civilizational level.
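For readers who haven't seen it spelled out, here is a minimal sketch of the standard (Carter-Leslie) form of that argument; the notation is mine, not the commenter's. Under the self-sampling assumption, treat your birth rank $r$ as if it were drawn uniformly from $\{1, \dots, N\}$, where $N$ is the total number of humans who will ever exist. Comparing a small future $N_1$ against a large future $N_2 \gg N_1$, both consistent with your observed rank ($r \le N_1$):

$$\frac{P(r \mid N_1)}{P(r \mid N_2)} = \frac{1/N_1}{1/N_2} = \frac{N_2}{N_1} \gg 1,$$

so on this reasoning an early birth rank is strong evidence for the smaller total population, which is the sense in which it is claimed to count against futures with large numbers of minds.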
2 · Answer by Ninety-Three · 6h
Without rejecting any of the premises in your question I can come up with:
  • Low tractability: you assign almost all of the probability mass to one or both of "alignment will be easily solved" and "alignment is basically impossible".
  • Currently low tractability: if your timeline is closer to 100 years than 10, it is possible that the best use of resources for AI risk is "sit on them until the field develops further", in the same sense that someone in the 1990s wanting good facial recognition might have been best served by waiting for modern ML.
  • Refusing to prioritize highly uncertain causes in order to avoid the Winner's Curse outcome of your highest priority ending up as something with low true value and high noise (see the sketch just below).
  • Flavours of utilitarianism that don't value the unborn and would not see it as an enormous tragedy if we failed to create trillions of happy post-Singularity people (depending on the details, human extinction might not even be negative, so long as the deaths aren't painful).
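The Winner's Curse point is essentially the optimizer's curse: if you pick whichever cause has the highest estimated value, that estimate is biased upward, and more so the noisier your estimates are. A minimal simulation sketch (the setup and numbers are illustrative assumptions of mine, not anything from the answer above):

```python
import random

random.seed(0)

def selection_bias(n_causes=20, noise=2.0, trials=10_000):
    """Pick the cause with the highest noisy estimate and report how much,
    on average, the winning estimate overstates that cause's true value."""
    gap = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_causes)]
        estimates = [v + random.gauss(0, noise) for v in true_values]
        best = max(range(n_causes), key=lambda i: estimates[i])
        gap += estimates[best] - true_values[best]
    return gap / trials

print(selection_bias(noise=0.5))  # modest overestimate of the chosen cause
print(selection_bias(noise=2.0))  # much larger overestimate when estimates are noisy
```

The point is not that uncertain causes are bad, only that naively ranking by noisy point estimates systematically flatters whichever option happens to have the most favourable noise.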
7 · Kaj_Sotala · 17h
"S-risks" – Not necessarily a reason to deprioritize AI x-risk work, given that unaligned AI could be bad [http://www.informatica.si/index.php/informatica/article/download/1877/1098] from an s-risk perspective as well:
Pain seems to have evolved because it has a functional purpose in guiding behavior: evolution having found it suggests that pain might be the simplest solution for achieving its purpose. A superintelligence which was building subagents, such as worker robots or disembodied cognitive agents, might then also construct them in such a way that they were capable of feeling pain - and thus possibly suffering (Metzinger 2015) - if that was the most efficient way of making them behave in a way that achieved the superintelligence’s goals. Humans have also evolved to experience empathy towards each other, but the evolutionary reasons which cause humans to have empathy (Singer 1981) may not be relevant for a superintelligent singleton which had no game-theoretical reason to empathize with others. In such a case, a superintelligence which had no disincentive to create suffering but did have an incentive to create whatever furthered its goals, could create vast populations of agents which sometimes suffered while carrying out the superintelligence’s goals. Because of the ruling superintelligence’s indifference towards suffering, the amount of suffering experienced by this population could be vastly higher than it would be in e.g. an advanced human civilization, where humans had an interest in helping out their fellow humans. [...] If attempts to align the superintelligence with human values failed, it might not put any intrinsic value on avoiding suffering, so it may create large numbers of suffering subroutines.

A counter-argument to this would be the classical s-risk example of a cosmic ray particle flipping the sign on the utility function of an otherwise Friendly AI, causing it to maximize suffering that would dwarf any accidental suffering caused by a paperclip maximizer.

Two senses of “optimizer”Ω
25 · 18h · 3 min read · Ω 10

The word “optimizer” can be used in at least two different ways.

First, a system can be an “optimizer” in the sense that it is solving a computational optimization problem. A computer running a linear program solver, a SAT-solver, or gradient descent, would be an example of a system that is an “optimizer” in this sense. That is, it runs an optimization algorithm. Let “optimizer_1” denote this concept.

Second, a system can be an “optimizer” in the sense that it optimizes its environment. A human is an optimizer in this sense, because we robustly take actions that push our environment in a cert... (Read more)
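As a concrete illustration of the first sense, here is a minimal sketch of an optimizer_1 (the example is mine, not the post's): it runs an optimization algorithm over a formal search space and does nothing beyond computing.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function, given its gradient.
    An 'optimizer_1': it searches a formal space of candidate
    solutions; it does not act on the world outside the computation."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ~3.0
```

By contrast, a human is an optimizer_2 because of what it does to its environment, not because of which algorithm it runs; whether and when the first kind turns into the second is what the comments below take up.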

3 · evhub · 3h
The implicit assumption seems to be that an optimizer_1 could turn into an optimizer_2 unexpectedly if it becomes sufficiently powerful. It is not at all clear to me that this is the case – I have not seen any good argument to support this, nor can I think of any myself.
I think that this is one of the major ways in which old discussions of optimization daemons would often get confused. I think the confusion was coming from the fact that, while it is true in isolation that an optimizer_1 won't generally self-modify into an optimizer_2, there is a pretty common case in which this is a possibility: the presence of a training procedure (e.g. gradient descent) which can perform the modification from the outside. In particular, it seems very likely to me that there will be many cases where you'll get an optimizer_1 early in training and then an optimizer_2 later in training.
That being said, while having an optimizer_2 seems likely to be necessary for deceptive alignment, I think you only need an optimizer_1 for pseudo-alignment: every search procedure has an objective, and if that objective is misaligned, it raises the possibility of capability generalization without objective generalization.
Also, as a terminological note, I've taken to using "optimizer" for optimizer_1 and "agent" for something closer to optimizer_2, where I've been defining an agent as an optimizer that is performing a search over what its own action should be. I prefer that definition to your definition of optimizer_2, as I prefer mechanistic definitions over behavioral definitions since I generally find them more useful, though I think your notion of optimizer_2 is also a useful concept.
3 · ofer · 4h
It seems useful to have a quick way of saying: "The quarks in this box implement a Turing Machine that [performs well on the formal optimization problem P and does not do any other interesting stuff]. And the quarks do not do any other interesting stuff." (which of course does not imply that the box is safe)

Sure. Not making the distinction seems important, though, because this post seems to be leaning towards rejecting arguments that depend on noticing that the distinction is leaky. Making it is okay so long as you understand it as "optimizer_1 is a way of looking at things that screens off many messy details of the world so I can focus on only the details I care about right now", but if it becomes conflated with "and if something is an optimizer_1 I don't have to worry about the way it is also an optimizer_2" then that's dangero... (Read more)

1 · Pattern · 6h: The jump between the two is not explained. Generalizing from Evolution.
Matthew Barnett's Shortform
4 · 13d · 1 min read

I intend to use my shortform feed for two purposes:

1. To post thoughts that I think are worth sharing that I can then reference in the future in order to explain some belief or opinion I have.

2. To post half-finished thoughts about the math or computer science thing I'm learning at the moment. These might be slightly boring and for that I apologize.

And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.

That's an argument against dualism, not an argument against qualia. If mind-brain identity is true, neural activity is causing reports, and qualia, along with the rest of consciousness, are identical to neural activity, so qualia are also causing reports.

4 · Matthew Barnett · 8h
I generally agree with the heuristic that we should "live on the mainline", meaning that we should mostly plan for events which capture the dominant share of our probability. This heuristic causes me to have a tendency to do some of the following things:
  • Work on projects that I think have a medium-to-high chance of succeeding and quickly abandon things that seem like they are failing.
  • Plan my career trajectory based on where I think I can plausibly maximize my long term values.
  • Study subjects only if I think that I will need to understand them at some point in order to grasp an important concept. See more details here [https://www.lesswrong.com/posts/MnrQMLuEg5wZ7f4bn/matthew-barnett-s-shortform#tdTgyEf2Giy6SAZ7n].
  • Avoid doing work that leverages small probabilities of exceptionally bad outcomes. For example, I don't focus my studying on worst-case AI safety risk (although I do think that analyzing worst-case failure modes is useful from the standpoint of a security mindset [https://arbital.com/p/AI_safety_mindset/]).
I see a few problems with this heuristic, however, and I'm not sure quite how to resolve them. More specifically, I tend to float freely between different projects because I am quick to abandon things if I feel like they aren't working out (compare this to the mindset that some game developers have when they realize their latest game idea isn't very good). One case where this shows up is when I change my beliefs about the most effective ways to spend my time as far as long-term future scenarios are concerned. I will sometimes read an argument about how some line of inquiry is promising and for an entire day believe that this would be a good thing to work on, only for the next day to bring another argument. And things like my AI timeline predictions vary erratically, much more than I expect most people's: I sometimes wake up and think that AI might be just 10 years away and other days I wake up and wond…
5 · Raemon · 8h
Some random thoughts:
  • Startups and pivots. Startups require lots of commitment even when things feel like they're collapsing – only by persevering through those times can you possibly make it. Still, startups are willing to pivot – take their existing infrastructure but change key strategic approaches.
  • Escalating commitment. Early on (in most domains), you should pick shorter term projects, because the focus is on learning. Code a website in a week. Code another website in 2 months. Don't stress too much on multi-year plans until you're reasonably confident you sorta know what you're doing. (Relatedly, relationships: early on it makes sense to date a lot to get some sense of who/what you're looking for in a romantic partner. But eventually, a lot of the good stuff comes when you actually commit to longterm relationships that are capable of weathering periods of strife and doubt.)
  • Alternately: Givewell (or maybe OpenPhil?) did mixtures of shallow dives, deep dives and medium dives into cause areas because they learned different sorts of things from each kind of research.
  • Commitment mindset. Sort of how Nate Soares recommends separating the feeling of conviction from the epistemic belief of high success [http://mindingourway.com/conviction-without-s/]... you can separate "I'm going to stick with this project for a year or two because it's likely to work" from "I'm going to stick to this project for a year or two because sticking to projects for a year or two is how you learn how projects work on the 1-2 year timescale, including the part where you shift gears and learn from mistakes and become more robust about them."
Please use real names, especially for Alignment Forum?
36 · 5mo · 1 min read

As the number of AI alignment researchers increases over time, it's getting hard for me to keep track of everyone's names. (I'm probably worse than average in this regard.) It seems the fact that some people don't use their real names as their LW/AF usernames makes it harder than it needs to be. So I wonder if we could officially encourage people to use their real first name and last name as their username, especially if they regularly participate on AF, unless they're deliberately trying to keep their physical identities secret? (Alternatively, at least put their real first name and last name in

... (Read more)

I got too annoyed with not remembering the mapping between people's usernames and full names, and wrote a userscript to always show full names of authors on GreaterWrong.com when available (with username in parentheses). (Unfortunately a similar userscript for LessWrong.com doesn't seem possible.) To use it, please install the Tampermonkey add-on for your browser (I know Firefox and Chrome on desktop and Firefox on Android support it at least), then click here.

Davis_Kingsley's Shortform
6 · 5d
2 · mr-hire · 6h: Do you think this is relevant to more real-world strategy situations?

I remember seeing a deal from a bank where the bank got to choose whether to pay a fixed interest rate or a variable one; either path looked like a good deal, but the fact that the bank would choose meant you would always end up on the wrong side of the market. I wish I could remember the exact promotion.

3 · Elizabeth · 13h
1. I have the opposite observation on abuse in poly vs mono relationships. I'm interested in discussing further but I think that requires naming names and I don't want to do so in a public forum, so maybe we should discuss offline.
2. Davis said harmful and habryka said abusive, which aren't synonymous. It's entirely possible for poly to lead to a lower chance that any particular relationship is abusive, and yet raise the total amount of harm-done-by-relationships in a person or community.
2 · habryka · 12h
1. Sure, happy to chat.
2. Yeah, I didn't mean to imply that it's in direct contradiction, just that I have the most data about actually abusive relationships, and some broad implication that I do think that's where most of the variance comes from, though definitely not all of it.
Open & Welcome Thread August 2019
13 · 19d · 1 min read
  • If it’s worth saying, but not worth its own post, here's a place to put it.
  • And, if you are new to LessWrong, here's the place to introduce yourself.
    • Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

1 · Matthew Barnett · 7h: Will LessWrong at some point have curated shortform posts? Furthermore, is such a feature desirable? I will leave this question here for discussion.
4 · Raemon · 7h: What I was thinking about doing was something like Scott Alexander's "comment highlight" posts, where it might make sense to specifically say "here were particularly good shortform comments from the past week". Mostly haven't done it because it was a fair bit of activation effort.

There was a trend on the old LW where people were pushed to post in Discussion instead of Main, and later in Open Threads instead of Discussion, because they wanted lower expected effort.

Adding features like curated shortform might mean more content that could have been posted as a proper post ends up in shortform instead.

It seems to me it would make more sense for people who write valuable, well-received shortform content to then write a proper post about it, rather than for the shortform version to simply get more exposure.

Markets are Universal for Logical Induction
8 · 3h · 4 min read

Background

Logical Induction is the best framework currently available for thinking about logical uncertainty - i.e. the “probability” that the twin primes conjecture is true, or that the ...
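For readers new to the framework, a rough paraphrase of the underlying criterion (my summary of Garrabrant et al., "Logical Induction", 2016, not part of this post's excerpt): a computable sequence of belief states $\mathbb{P}_1, \mathbb{P}_2, \dots$ over logical sentences satisfies the logical induction criterion if, treating $\mathbb{P}_n(\phi)$ as the price of a share that pays out 1 when $\phi$ is proven true, no efficiently computable trader can exploit the market, i.e. make unboundedly large gains trading against those prices while risking only a bounded loss.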

Announcement: Writing Day Today (Thursday)Ω
13 · 5h · 1 min read · Ω 1

The MIRI Summer Fellows Program is having a writing day, where the participants are given a whole day to write whatever LessWrong / AI Alignment Forum posts that they like. This is to practice the skills needed for forum participation as a research strategy.

On an average day LessWrong gets about 5-6 posts, but last year this generated 28 posts. It's likely this will happen again, starting mid-to-late afternoon PDT.

It's pretty overwhelming to try to read-and-comment on 28 posts in a day, so we're gonna make sure this isn't the only chance you'll have to interact with th... (Read more)

Power Buys You Distance From The Crime
114 · 20d · 7 min read

Introduction

Taxes are typically meant to be proportional to money (or negative externalities, but that's not what I'm focusing on). But one thing money buys you is flexibility, which can be used to avoid taxes. Because of this, taxes aimed at the wealthy tend to end up hitting the well-off-or-rich-but-not-truly-wealthy harder, and tax cuts aimed at the poor end up helping the middle class. Examples (feel free to stop reading these when you get the idea, this is just the analogy section of the essay):

  • Computer programmers typically have the option to work remotely in a low-tax state; t
... (Read more)

By the way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about the comment to which it's replying).

1 · Kenny · 9h
enacting conflict in the course of discussing conflict ... seems to be exactly why it's so difficult to discuss a conflict theory with someone already convinced that it's true – any discussion is necessarily an attack in that conflict, as it in effect presupposes that the theory might be false. But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly claim that one's unconvinced of the truth of the corresponding conflict theory, or to explicitly claim that one's decoupling [https://everythingstudies.com/2018/04/26/a-deep-dive-into-the-harris-klein-controversy/] the current discussion from a (or any) conflict theory.
1 · Kenny · 12h
I don't think it's useful to talk about 'conflict theory', i.e. as a general theory of disagreement. It's more useful in a form like 'Marxism is a conflict theory'. And then a 'conflict theorist' is someone who, in some context, believes a conflict theory, but not that disagreements generally are due to conflict (let alone in all contexts). So, from the perspective of a 'working class versus capital class' conflict theory, public choice theory is obviously a weapon used by the capital class against the working class. But other possible conflict theories might be neutral about public choice theory. Maybe what makes 'conflict theory' seem like a single thing is the prevalence of Marxism-like political philosophies.
Biases: An Introduction
73 · 4y · 4 min read

Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls.

Perhaps three of the ten balls will be red, and you’ll correctly guess how many red balls total were in the urn. Or perhaps you’ll happen to grab four red balls, or some other number. Then you’ll probably get the total number wrong.

This random error is the cost of incomplete knowledge, and as errors go, it’s not so bad. Your estimates won’t be incorrect on average, and the more you learn, the smaller your error will tend to be.

... (Read more)
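A quick simulation of the urn example (a sketch of mine; the 70/30 split and the sample size of ten are the numbers from the excerpt):

```python
import random

random.seed(0)
urn = ["red"] * 30 + ["white"] * 70

def estimate_red_fraction(sample_size=10):
    """Draw a sample without replacement and estimate the urn's
    fraction of red balls from that sample alone."""
    sample = random.sample(urn, sample_size)
    return sample.count("red") / sample_size

estimates = [estimate_red_fraction() for _ in range(100_000)]
print(sum(estimates) / len(estimates))  # ~0.30: correct on average
print(min(estimates), max(estimates))   # but any single draw can be far off
```

The average over many draws sits at the true 30%, matching the point that this kind of random error is unbiased, even though any individual estimate can miss badly.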

Considering the fact that salespeople are seventy-five times more common than librarians, your estimates will give 7.5 times more shy salespeople than shy librarians. You fell prey to the base rate neglect bias right after you read about it, which is a very good manifestation of bias blindness.

My math was wrong

[This comment is no longer endorsed by its author]

(cross-posted from my personal blog)

Since middle school I've generally thought that I'm pretty good at dealing with my emotions, and a handful of close friends and family have made similar comments. Now I can see that though I was particularly good at never flipping out, I was decidedly not good at "healthy emotional processing". I'll explain later what I think "healthy emotional processing" is; right now I'm using quotes to indicate "the thing that's good to do with emotions". Here it goes...

Relevant context

When I was a kid I adopted a stron... (Read more)

2 · Harlan · 9h
The psychologist Lisa Feldman Barrett makes a compelling case that emotions are actually stories that our minds construct to explain the bodily sensations that we experience in different situations. She says that there are no "emotion circuits" in the brain and that you can train yourself to associate the same bodily sensations and situations with more positive emotions. I find this idea liberating and I want it to be true, but I worry that if it's not true, or if I'm applying the idea incorrectly, I will be doing something like ignoring my emotions in a bad way. I'm not sure how to resolve the tension between "don't repress your emotions" and "don't let a constructed narrative about your negative emotions run out of control and make you suffer."
4 · Hazard · 8h
I see the apparent tension you mention. My only interaction with Lisa Feldman Barrett's model is a summary of her book here [https://praxis.fortelabs.co/how-emotions-are-made/], so I'll try and speak from that, but let me know if you feel like I'm misrepresenting her ideas. Her theory is framed in terms that on first glance make me suspect she's talking about something that feels entirely at odds with how I think about my own emotions, but looking more carefully, I don't think there's any contradiction.
My one-paragraph summary of her idea is "stuff happens in the world, your brain makes predictions, this results in the body doing certain things, and what we call 'emotions' are the experience of the brain interpreting what those bodily sensations mean." The key point (in regards to my/your take-away) is the "re-trainability". The summary says "Of course you can't snap your fingers and instantly change what you're feeling, but you have more control over your emotions than you think." Which I'm cool with. To me, this was always a discussion about exactly how much and in what ways you can "re-train" yourself.
My current model is that "re-training" looks like deeply understanding how an emotional response came to be, getting a feel for what predictions it's based on, and then "actually convincing" yourself/the sub-agent of another reality. I bolded "actually convincing" because that's where all the difficulty lies. Let me set up an example: the topic of social justice comes up (mentioned because this is personally a bit triggering for me), my brain predicts danger of getting yelled at by someone, this results in bodily tension, my brain interprets that as "You are scared". I used to "re-train" my emotions by saying "Being scared doesn't fit our self-concept, so... you just aren't scared." It really helps to imagine a literal sub-agent with a face looking at me, completely unimpressed by such incredibly unconvincing reasoning. Now I go, "Okay, what would actually de

Thanks. Thinking about it in terms of convincing a sub-agent does help.

Breathing happens automatically, but you can manually control it as soon as you notice it. I think that sometimes I've expected changing my internal state to be more like breathing than it realistically can be.

[Question] Has Moore's Law actually slowed down?
9 · 2d · 1 min read

Moore's Law has been notorious for spurring a bunch of separate observations that are all covered under the umbrella of "Moore's law." But as far as I can tell, the real Moore's law is a fairly narrow prediction, which is that the transistor count on CPU chips will double approximately every two years.

Many people have told me that in recent years Moore's law has slowed down. Some have even told me they think it's stopped entirely. For example, the AI and compute article from OpenAI uses the past tense when talking about Moore's Law, "by comparison, ... (Read more)

Consumer CPU price/performance, as well as standstill GPU price/performance from 2016 to 2019, is probably what contributed massively to the public perception of Moore's law's death.

In the 90s and early 2000s, after about 8 years you could get a CPU roughly 50 times faster for nearly the same money.

But from 2009 up until 2018 we got maybe a 50-80% performance boost, give or take, for the same price. Now with 3rd-gen Ryzen everything is changing, so after 10 disappointing years things look interesting ahead.

1 · Ilverin · 14h: How slow does it have to get before a quantitative slowing becomes a qualitative difference? AI Impacts [https://aiimpacts.org/price-performance-moores-law-seems-slow/] estimates price/performance used to improve by an order of magnitude (base 10) every 4 years, but it now takes 12 years.
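To put the numbers in this thread on a common scale, here is a small sketch converting them into equivalent annual improvement factors (the inputs are just the figures quoted above, taken at face value):

```python
def annual_factor(total_factor, years):
    """Convert an overall improvement factor over a period into the
    equivalent annual improvement factor."""
    return total_factor ** (1 / years)

# "~50x faster for nearly the same money in about 8 years" (90s / early 2000s)
print(annual_factor(50, 8))    # ~1.63x per year
# "maybe a 50-80% boost for the same price over 2009-2018"
print(annual_factor(1.8, 9))   # ~1.07x per year
# AI Impacts: 10x per 4 years historically, versus 10x per 12 years now
print(annual_factor(10, 4))    # ~1.78x per year
print(annual_factor(10, 12))   # ~1.21x per year
```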

(Or, is coordination easier in a long timeline?)

It seems like it would be good if the world could coordinate to not build AGI. That is, at some point in the future, when some number of teams will have the technical ability to build and deploy an AGI, but they all agree to voluntarily delay (perhaps on penalty of sanctions) until they're confident that humanity knows how to align such a system.

Currently, this kind of coordination seems like a pretty implausible state of affairs. But I want to know if it seems like it becomes more or less plausible as time passes.

The following is my initial thi... (Read more)

I'm curating this question.

I think I'd thought about each of the considerations Eli lists here, but I had not seen them listed out all at once and framed as a part of a single question before. I also had some sort of implicit background belief that longer timelines were better from a coordination standpoint. But as soon as I saw these concerns listed together, I realized that was not at all obvious.

So far none of the answers here seem that compelling to me. I'd be very interested in more comprehensive answers that try to weigh the various co... (Read more)

"Rationalizing" and "Sitting Bolt Upright in Alarm."
31 · 1mo · 3 min read

This is (sort of) a response to Blatant lies are the best kind!, although I'd been working on this prior to that post getting published. This post explores similar issues through my own frame, which seems at least somewhat different from Benquo's.


I've noticed a tendency for people to use the word "lie", when they want to communicate that a statement is deceptive or misleading, and that this is important.

And I think this is (often) technically wrong. I'm not sure everyone defines lie quite the same way, but in most cases where I hear it unqualified, I usually assum... (Read more)

Initially I replied to this with "yeah, that seems straightforwardly true", then something about that felt off, and it took me a while to figure out why.

This:

It seems like any AI built by multiple humans coordinating is going to reflect the optimization target of the coordination process building it

...seems straightforwardly true.

This:

..., so we had better figure out how to make this so. [where "this" is "humans are friendly if you scale them up"]

Could unpack a few different ways. I still agree with the general sentiment yo... (Read more)

[Event] Western Massachusetts SSC meetup #15
1 · Aug 31st · Northampton

We are an established meetup that has been going since the 2018 "Meetups Everywhere" thread. We've often been listed on the open thread by SamChevre, but this is only the second time we've listed ourselves here. If you email me, I can add you to the email thread where we plan future meetings—or feel free to just show up!

We have a pretty healthy crowd of attendees for the size of the area, with about 4-7 folks typically turning up out of a pool of about 10 that has been slowly growing as people find us through the open thread. Most people are local, but folks come from as fa... (Read more)

G Gordon Worley III's Shortform
7 · 16d

I agree with Kaj Sotala and Viliam that episteme is underweighted in Buddhism, but thanks for explicating that worldview.

4 · Viliam · 11h
"the most important thing in Buddhist thinking is seeing reality just as it is, unmediated by the "thinking" mind, by which we really mean the acts of discrimination, judgement, categorization, and ontology. To be sure, this "reality" is not external reality, which we never get to see directly, but rather our unmediated contact with it via the senses."
The "unmediated contact via the senses" can only give you sensory inputs. Everything else contains interpretation. That means you can only have "gnosis" about things like [red], [warm], etc. Including a lot of interesting stuff about your inner state, of course, but still fundamentally of the type [feeling this], [thinking that], and perhaps some usually-unknown-to-non-Buddhists [X-ing Y], etc. Poetically speaking, these are the "atoms of experience". (Some people would probably say "qualia".) But some interpretation needs to come in to build molecules out of these atoms. Without interpretation, you could barely distinguish between a cat and a warm pillow... which IMHO is a bit insufficient for a supposedly supreme knowledge.
20 · Kaj_Sotala · 15h
"So when we talk about the dharma or justify our actions on it, it's worth noting that it is not really trying to provide consistent episteme. [...] Thus it's a strange inversion to ask the dharma for episteme-based proofs. It can't give them, nor does it try, because its episteme is not consistent and cannot be because it chooses completeness instead."
In my view, this seems like a clear failing. The fact that the dharma comes from a tradition where this has usually been the case is not an excuse for not trying to fix it. Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with. This is not just a question of "the dharma has to be able to justify itself"; it's also that leaving out the episteme component leaves the system impoverished, as noted e.g. here [https://parletre.wordpress.com/2011/06/07/test-post/]:
Recurrent training to attend to the sensate experience moment-by-moment can undermine the capacity to make meaning of experience. (The psychoanalyst Wilfred Bion described this as an ‘attack on linking’, that is, on the meaning-making function of the mind.) When I ask these patients how they are feeling, or what they are thinking, or what’s on their mind, they tend to answer in terms of their sensate experience, which makes it difficult for them to engage in a transformative process of psychological self-understanding.
and here [https://parletre.wordpress.com/2019/06/20/critique-of-pragmatic-dharma-2/]:
In important ways, it is not possible to encounter our unconscious – at least in the sense implied by this perspective – through moment-to-moment awareness of our sensate experience. Yes, in meditation we can have the experience of our thoughts bubbling just beneath the surface – what Shinzen Young calls the brain’s pre-
9 · G Gordon Worley III · 14h
Hmm, I feel like there's multiple things going on here, but I think it hinges on this:
"Yes, the method requires temporarily suspending episteme-based reasoning and engaging with less conceptual forms of seeing. But it can still be justified and explained using episteme-based models; if it could not, there would be little reason to expect that it would be worth engaging with."
Different traditions vary on how much to emphasize models and episteme. None of them completely ignore it, though, only seek to keep it within its proper place. It's not that episteme is useless, only that it is not primary. You of course should include it because it's part of the world, and to deny it would lead to confusion and suffering. As you note with your first example especially, some people learn to turn off the discriminating mind rather than hold it as object, and they are worse for it because then they can't engage with it anymore. Turning it off is only something you could safely do if you really had become so enlightened that you had no shadow and would never accumulate any additional shadow, and even then it seems strange from where I stand to do that although maybe it would make sense to me if I were in the position that it were a reasonable and safe option.
So to me this reads like an objection to a position I didn't mean to take. I mean to say episteme has a place and is useful, it is not taken as primary to understanding, at some points Buddhist episteme will say contradictory things, that's fine and expected because dharma episteme is normally post hoc rather than ante hoc (though is still expected to be rational right up until it is forced to hit a contradiction), and ante hoc is okay so long as it is then later verified via gnosis or techne.
Alignment Newsletter #24Ω
10 · 1y · 12 min read · Ω 1

Starting from this week, Richard Ngo will join me in writing summaries. His summaries are marked as such; I'm reviewing some of them now but expect to review less over time.

Highlights

Introducing the Unrestricted Adversarial Examples Challenge (Tom B. Brown et al): There's a new adversarial examples contest, after the one from NIPS 2017. The goal of this contest is to figure out how to create a model that never confidently makes a mistake on a very simple task, even in the presence of a powerful adversary. This leads to many differences from the previous contest. The task is a lot sim... (Read more)

Update: A reader suggested that in the open-source implementation of PopArt, the PopArt normalization happens after the reward clipping, counter to my assumption. I no longer understand why PopArt is helping, beyond "it's good for things to be normalized".

Chris_Leong's Shortform
7 · 1d
2 · Chris_Leong · 10h
I'm going to start writing up short book reviews, as I know from past experience that it's very easy to read a book and then come out a few years later with absolutely no knowledge of what was learned.
Book Review: Everything is F*cked: A Book About Hope
To be honest, the main reason why I read this book was because I had enjoyed his first and second books (Models and The Subtle Art of Not Giving A F*ck), and so I was willing to take a risk. There were definitely some interesting ideas here, but I'd already received many of these through other sources: Harari, Buddhism, talks on Nietzsche, summaries of The True Believer; so I didn't gain as much from this as I'd hoped.
It's fascinating how a number of thinkers have recently converged on the lack of meaning within modern society. Yuval Harari argues that modernity has essentially been a deal sacrificing meaning for power. He believes that the lack of meaning could eventually lead to societal breakdown, and for this reason he argued that we need to embrace shared narratives that aren't strictly true (religion without gods, if you will; he personally follows Buddhism). Jordan Peterson also worries about a lack of meaning, but seeks to "revive God" as some kind of metaphorical entity. Mark Manson is much more skeptical, but his book does start along similar lines. He tells the story of gaining meaning from his grandfather's death by trying to make him proud, although this was kind of silly as they hadn't been particularly close or even talked recently. Nonetheless, he felt that this sense of purpose had made him a better person and improved his ability to achieve his goals. Mark argues that we can't draw motivation from our thinking brain and that we need these kinds of narratives to reach our emotional brain instead. However, he argues that there's also a downside to hope. People who are dissatisfied with their lives can easily fall prey to ideological movements which promise a better future, especially when they
7 · Raemon · 12h
I pushed a bit for the name 'scratchpad' [https://www.lesswrong.com/posts/9FNHsvcqQjxcCoJMJ/shortform-vs-scratchpad-or-other-names] so that this use case was a bit clearer (or at least not subtly implied as "wrong"). Shortform had enough momentum as a name that it was a bit hard to change tho. (Meanwhile, I settled for "shortform means either the writing is short, or it took a (relatively) short amount of time to write.")

“I’m sorry, I didn’t have the time to write you a short email, so I wrote you a long one instead.”
