If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

I asked GPT-3 to describe LessWrong. (The prompt is in bold, the rest is its output)


Item #: SCP-47000 

Object Class: Euclid 

Alias: LessWrong

Special Containment Procedures: SCP-47000 is to be contained within a standard containment cell at Site-19.

Description: SCP-47000 is an entity resembling a humanoid composed of a green, translucent material. It is approximately 1.8 meters in height. SCP-47000 is sapient, and is capable of human speech. It also possesses the ability to levitate and travel through solid objects. 

The entity has stated that it is a "rationalist", and is interested in "the pursuit of truth and knowledge". It has also claimed to have been created by an entity known as "Yudkowsky" and that its purpose is to "spread rationality". 

When interacting with Foundation personnel, SCP-47000 will often attempt to engage them in philosophical discussion. It has also been known to offer advice on various topics, including but not limited to: 

• The scientific method 

• The philosophy of science 

• The nature of reality 

• The existence of God 

• The meaning of life 

• The importance of critical thinking 

SCP-47000 has also been known to give lectures on various topics. These lectures typically last for several hours, and are often followed by a Q&A session. 

SCP-47000 has shown a deep understanding of human psychology, and is able to exploit this knowledge to its advantage. It has been known to use this knowledge to manipulate Foundation personnel into releasing it from containment, or into revealing sensitive information.

SCP-47000 is to be considered extremely dangerous, and is not to be released from containment under any circumstances.
 

It also possesses the ability to levitate and travel through solid objects. 

How is it contained?

My “better self” set karma notifications to just be once a day, so that I wouldn’t get addicted to refreshing LW.

However, it seems that my “worse self” has found a loophole in that plan, namely that I can still see whether my karma is going up on a minute-by-minute basis by looking at (1) the karma total on my user page and (2) the karma on my individual recent posts and comments.

So, I am willing to pay a fair price for a tampermonkey script (or any other method) that does the following:

  • Hide the karma total on my user page
  • Hide the karma on (only) my own comments and posts that I posted within the last 24 hours.

Anyone interested?

[Yes I have heard of GW anti-kibitzer mode but I don't like it for other reasons. Yes I am aware that karma = fake internet points and this whole thing is incredibly stupid. I (exclusively) use Chrome browser on a Windows desktop, if that’s relevant.]

My preferred adblocker, uBlock Origin, lets you right-click on any element on a page and block it, with a nice UI that lets you set the specificity and scope of the block. Takes about 10 seconds, much easier than mucking with JS yourself. I've done this to hide like & follower counts on twitter, just tried and it works great for LessWrong karma. It can't do "hide karma only for your comments within last 24 hours" but thought this might be useful for others who want to hide karma more broadly.

Nice!! Well, it took me 20 minutes rather than 10 seconds, mostly spent figuring out what the filters are and how they work, so that they only apply on my user page and not site-wide. (The trick is the :matches-path() filter, e.g. www.lesswrong.com##:matches-path(/steve2152) span.UsersProfile-userMetaInfo:nth-of-type(1).)

This isn't 100% what I wanted, but better than before, hopefully good enough.

As a bonus, now I have ad-blocking ;-)

It looks like you can remove the total karma score from your user page with document.querySelector(".UsersProfile-userMetaInfo").remove();, and that you can remove the karma scores from your comments with

document.querySelectorAll('.UsersNameDisplay-userName[href="/users/steve2152"]').forEach(function(el) {
    // For each of your own comments, find the enclosing metadata row and blank out its vote score.
    el.closest('.CommentsItem-meta').querySelector('.OverallVoteAxis-voteScore').innerHTML = '';
})

I did this in the Firefox developer console, but it's just JavaScript and should work in Tampermonkey?
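
If anyone wants to try that route, here's a rough sketch of what it might look like as a Tampermonkey userscript. To be clear, this is guesswork rather than a tested solution: the class names (UsersProfile-userMetaInfo, UsersNameDisplay-userName, CommentsItem-meta, OverallVoteAxis-voteScore) are just the ones from the snippets above and will break if the site's CSS changes, the username is a placeholder, and it hides the karma on all of your own comments rather than only those from the last 24 hours (that would need timestamp parsing on top of this):

// ==UserScript==
// @name         Hide my LessWrong karma
// @match        https://www.lesswrong.com/*
// @grant        none
// ==/UserScript==

(function () {
  'use strict';
  const USERNAME = 'steve2152'; // placeholder: change to your own username

  function hideKarma() {
    // Total karma on the user profile page.
    document.querySelectorAll('.UsersProfile-userMetaInfo')
      .forEach(function (el) { el.style.display = 'none'; });

    // Vote scores on your own comments.
    document.querySelectorAll('.UsersNameDisplay-userName[href="/users/' + USERNAME + '"]')
      .forEach(function (nameEl) {
        const meta = nameEl.closest('.CommentsItem-meta');
        const score = meta && meta.querySelector('.OverallVoteAxis-voteScore');
        if (score) { score.style.visibility = 'hidden'; }
      });
  }

  // LessWrong is a single-page app, so re-run whenever the page content changes.
  hideKarma();
  new MutationObserver(hideKarma).observe(document.body, { childList: true, subtree: true });
})();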

Hi! It's time to convert my lurking into an active account. I'm interested in all things related to making the long-term future go well, and I currently run Future Forum.

Welcome! 


Hi, I'm new here (or at least this is my first comment). I was reading the Sequences, but sometimes there are parts that I don't understand very well. What do I do when that happens? I would feel very stupid if I were the only one in the comments of the Sequences asking what some paragraph in each article means. It occurred to me to leave the article for later, so that next time it might be clearer to me, but then I risk losing interest in the Sequences by the time I come back. Speaking of interest, how can I sustain it long enough to finish the Sequences? I've already made several attempts and haven't gotten very far.

I think leaving comments about things you're confused about is good practice.

I think that is a flaw of comments relative to Google Docs. In a long document where comments aren't anchored to the passages they reference, it can be hard to find other people asking the same question you did, even if someone wondered about the same section. (And the difficulty of ascertaining that quickly seems unfortunate.)

The Sequences are very long, but worth it. I would recommend reading the Highlights, and then reading more of the sections that spark your curiosity. 

(I only found out about that today, and I've been lurking here for a little bit. Is there a way for the Highlights to be seen next to the Rationality: A - Z page?) 

We just curated and released the sequence highlights yesterday, so you were not missing anything. :)

Over the next few days we'll be adding more nice-to-have features to help people find and read them.

Aha! I was wondering how long those had been on the side of the page, unbeknownst to me! Thank you for that! I've already sent it to someone (who really liked the first few articles).

How did you decide which posts to include?

Link to my notes/summary of "The Dictator's Handbook".  

Probably of interest to people here thinking about the dynamics that govern political behavior in nation states, companies, etc. 

https://digitalsauna.wordpress.com/2022/07/13/the-dictators-handbook-by-bruce-bueno-de-mesquita-and-alastair-smith-2011/

There's also a deep dive LessWrong post on the topic:

https://www.lesswrong.com/posts/N6jeLwEzGpE45ucuS/building-blocks-of-politics-an-overview-of-selectorate

Hello! I've been here lurking for a bit, but never quite introduced myself. I found myself commenting for the first time and figured I should go ahead and write up my story.

I don't quite remember how I first stumbled upon this site, but I was astonished. I skimmed a few of the front page articles and read some of the comments. I was impressed by the level of dialogue and clear thought. I thought it was interesting but I should check it out when I had some more time.

One day I found myself trying to explain something to a friend that I had read here, but I couldn't do it justice. I hadn't internalized the knowledge, it wasn't a part of me. That bothered me. I felt like I should have been able to understand better what I read, or explain as I remembered reading it.

So I decided to dig in, I wanted to understand things, to be able to explain the concepts, to know them well enough to write about them and be understood. I like reading fantasy, so I decided to start with HPMOR.

I devoured that book. I found myself stunned with how much I thought like Harry. It was like reading what I had always felt but never been able to put into words. The more I read, the more impressed I was, I had to keep reading. I finished the book, and immediately started on the Sequences. I felt like this was a great project I could only have wished for, and yet here it was.

I started trying to apply the things I learned to myself, and found it very difficult. Rationality was not as easy as reading up on how it all works; I had to actually change my mind. For me, the first great test of my rationality was religious. I had had questions about my faith for a long time. Reading the Sequences gave me the courage I needed to finally face the scariest questions. I finally had tools that could apply to the foundational questions I had.

The answers I came to were not pretty. Facing the questions had changed me. In finding answers to my questions, I had lost my belief in the claims of religion. I found myself with a clarity that I hadn't thought possible. I had some troubling issues to confront, now that my religious conception of the world had fallen away.

I found myself confident in ways I had never been before. I could at least roughly explain where the evidence for my beliefs was, instead of having no answer at all. I have all kinds of mental models and names for concepts now that I wish I had found earlier. I had found a path that would take me where I wanted to go. I'm not very far along that path, but I found it.

Of course, I'm still learning. And I'm still not all that good at practicing my rationality. But I'm getting better, a little bit at a time. My priorities have changed. I've got money on the line now for some of my goals, thanks to Beeminder. I've been writing more, trying to get better at communicating. I can't thank enough all the people who contribute and maintain this site. It's a wonderful place of sanity in a mad world, and I have become better, and less wrong, because of it. 

Hello 

Came here from a link in "The Browser" newsletter. My first impression is that I can learn a lot from this site. Thanks to all who contribute.
 

Lib Sacul

Meta: This seems like a 101-level question, so I ask it in the Open Thread rather than using the questions feature on LW.

Suppose you are designing an experiment for a blood pressure drug. I've heard that it is important to declare which metrics you are interested in up front, before collecting the data, not afterwards. I.e., you'd declare up front that you only care about measuring systolic blood pressure, diastolic blood pressure, HDL cholesterol, and LDL cholesterol. And then if you happen to see an effect on heart rate, you are supposed to ignore it.

Why is this? Surely just for social reasons, right? If we do the experiment and happen to see data on heart rate, The Way of Bayes says we are forced to update our beliefs. Why would we ignore those updated beliefs?

Maybe this is getting at the thing (I don't know what it's called) where, if you flip a coin 100 times every day for 10 years, some of those days are going to give extreme results, but that doesn't mean the coin is weighted; since you're doing it so many times, you'd expect that to happen. Or something like that. I don't think my example is quite the same thing, since it's the same coin. A better example is probably a blood test: if you test 1000 different metrics on someone, a few of them are bound to give "statistically significant" results just by chance. But this just seems like a phenomenon that you need to adjust for, not a reason to ignore data.
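
To make the "bound to happen by chance" part concrete, here is a toy simulation (my own sketch, nothing to do with any real trial): generate 1000 metrics on which the drug truly has no effect, run a simple z-test on each, and count how many come out "statistically significant" at p < 0.05 anyway. You'd expect roughly 50 false positives.

// Toy multiple-comparisons simulation: 1000 metrics with no true effect.
function randNormal() {
  // Box-Muller transform: a standard normal from two uniforms.
  const u = 1 - Math.random(), v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function zStatisticNoEffect(nPerGroup) {
  // Treatment and control are drawn from the same distribution,
  // so any apparent difference is pure noise.
  let sumT = 0, sumC = 0;
  for (let i = 0; i < nPerGroup; i++) { sumT += randNormal(); sumC += randNormal(); }
  const meanDiff = (sumT - sumC) / nPerGroup;
  const se = Math.sqrt(2 / nPerGroup); // known unit variance in this toy model
  return meanDiff / se;
}

const numMetrics = 1000;
let falsePositives = 0;
for (let m = 0; m < numMetrics; m++) {
  if (Math.abs(zStatisticNoEffect(50)) > 1.96) { falsePositives++; } // |z| > 1.96, i.e. p < 0.05
}
console.log(falsePositives + ' of ' + numMetrics + ' null metrics look "significant"'); // ~50 expected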

FWIW, I think this is well above 101-level and gets into some pretty deep issues, and sparked some pretty acrimonious debates back during the Sequences when Eliezer blogged about stopping rules & frequentism. It's related to the question "why Bayesians don't need, in theory, to randomize", which is something Andrew Gelman mentioned for years before I even began to understand what he was getting at.

Oh that stuff is good to know. Thanks for those clarifications. I actually don't see how it's related to randomization though, so add that to the list of things I'm confused about. My question feels like a question of what to do with the data you got, regardless of whether you utilized randomization in order to get that data.

It's the same question because it screens off the data-generating process. A researcher who is biased, p-hacking, or outcome-switching is like a world which generates imbalanced/confounded experimental vs 'control' groups, in that a Bayesian needs to model the data-generating process (like the stopping rule) in order to learn correctly from the data, while pre-registration and explicit randomization make the results independent of those, so a simple generative model is correct.

(So this is why you can get a decision-theoretic justification for Bayesians doing those things even if they are sure they are correctly modeling all confounding etc.: because it is a 'nothing up my sleeve'-esque design which allows sharing information with other agents who have non-shared priors. By committing to a randomization or pre-registration, they can simply take your data at face value and do an update, whereas if they had to model you as a non-randomized generating process producing arbitrarily biased data in unknown ways, the data would be uninformative and lose almost all of its possible value.)

Bayes has nothing to do with the concept of statistical significance. Statistical significance is a concept out of frequentist statistics, and one that comes with a lot of problems.

Nobody really argues that you should ignore it. If you wanted drug approval, you would likely even have to list it as a potential side effect; that's why the increased lightning-strike risk of the Moderna vaccine was disclosed. It's just that your study doesn't provide good evidence for the effect existing. If you want that evidence, you can run another study to look for it.

Hey, I'm new here. I'm looking for help on dealing with akrasia. My most common failure mode is as follows: when I have many different tasks to do, I'm not able to start any one of them.

I'm planning on working through the Hammertime sequence: I've asked for 9 days off work, for a total of 13 days free. Will this be achievable / helpful? What other resources are available?

Specs:
DC area. Have read MoR, Sig. Digits, Inadequate Equilibria, and half of the Sequences. Heavy background in Math/CS/Physics.

I believe it should be possible at every LessWrong post to make "low quality" comments that would be automatically hidden at the bottom of the comment section, underneath the "serious" comments, so you would have to click on them to make them visible. Such comments would be automatically given -100 points, but in a way that doesn't count against the poster's account karma. The only requirement would be that the commenter genuinely believes they're making a true statement. Replies to such comments would be similarly hidden. Also certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.

Also certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.

By which mechanism do you expect to improve discussion by introducing censorship?

I'm completely opposed to any type of censorship whatsoever, but this site might have two restrictions:

  • Descriptions of disruptive or dangerous new technology that might threaten mankind
  • Politically or socially controversial speech considered beyond the pale by the majority of members or administrators

I know there are posts on LW that mention a behaviour and/or productivity tip of the form "if/when X happens, do Y". I don't know what this is called, so I am not able to find any. Could anybody point me in the right direction, please?

Trigger-Action Planning ("implementation intentions" in the academic literature)

Awesome, thanks Kaj!

Note that Duncan just posted the relevant chapter from the CFAR Handbook as a standalone LW essay.

Cool, thanks. I'll read it!

Is there a way to alter the structure of a futarchy to make it follow a decision theory other than EDT?

I've found a new website bug: copy & pasting bullet points from LW essays into the comment field fails with a weird behavior. I've created a corresponding Github issue.

I am trying to improve my forecasting skills, and I was looking for a tool that would allow me to design a graph/network where I could place a statement as a node with an attached probability (confidence level), and then link the nodes so that I can automatically compute the joint or disjoint probability, etc.

It seems such a tool could be quite useful for a forecast with many inputs.

I am not sure if Bayesian networks or influence diagrams are what I am looking for, or whether they could be used for this purpose. Nevertheless, I haven't found a particularly user-friendly tool for either of them.
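
For what it's worth, the arithmetic I imagine such a tool doing is something like the toy sketch below. The statements and numbers are made up for illustration, and it only handles statements treated as independent; a real Bayesian network or influence diagram would generalize this to conditional dependencies between nodes.

// Toy sketch: combine independent statement nodes, each with a probability.
const nodes = [
  { statement: 'Project gets funded', p: 0.7 },    // made-up example numbers
  { statement: 'Key hire accepts offer', p: 0.8 },
];

// Probability that all statements hold (joint, assuming independence).
const pAll = nodes.reduce((acc, n) => acc * n.p, 1);

// Probability that at least one holds (again assuming independence).
const pAtLeastOne = 1 - nodes.reduce((acc, n) => acc * (1 - n.p), 1);

console.log(pAll.toFixed(2), pAtLeastOne.toFixed(2)); // 0.56 0.94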

What's up with editing groups? One of the organizers and I both get an error like

app.operation_not_allowed Error submitting form: Localgroup.update

If this is intentional, can the error message show that better? I have no idea if I'm disallowed from taking this action or if there is a bug!

This is a real question! We do not know how to edit our group!

Anyone else shown DALL-E 2 to others and gotten surprisingly muted responses? I've noticed some people react to seeing its work with a lot less fascination than I'd expect for a technology with the power to revolutionize art. I stumbled on a dalle2 subreddit post describing a similar anecdote, so maybe there's something to this.

Hello, is there a reason why we can't bookmark comments?

Also, what is the point of bookmarks supposed to be? Are they meant to be a read-it-later tool or a way to favorite things?

Normal browser bookmarks do work. Use the link icon between the date and karma to get the URL for one.

There's not a particular reason you can't, except for "we haven't prioritized it", and "it's not obviously worth the UI-complexity-creep". 

I'll note that there are some really neat comments on here, and that the button could be hidden in the three-dot menu at the top left of comments (though I've never really ached to bookmark a comment before).

Different people use it for different purposes (just like real bookmarks!). I think the most common use-case is to mark something as to be read for later.

Some days (weeks?) ago I saw a tweet from somebody saying something like: "What would you guys think of paying AI developers to change careers (to slow AI development)?" I am now not able to find it anymore.

Has anyone seen it as well and could link it to me here, please?

Thanks!

Is this kind of thinking gaining momentum?

My review of Four Thousand Weeks: Time Management for Mortals.  

I think there's productivity and life-hack content on LessWrong, and this book is a good addition to that type of thinking; it might be a useful counterpoint to existing lenses or approaches.

https://digitalsauna.wordpress.com/2022/07/24/four-thousand-weeks-by-oliver-burkeman-2021-second-review/ 

I notice one of the big reasons that people view misalignment of AIs as not a threat is that they view the human-AI gap like the gap between humans and corporations, where existential risk is low or nonexistent.

The hardness of the alignment problem comes from the fact that even a single order-of-magnitude difference in intelligence is essentially the difference between humans and the smarter animals like horses, not the difference between human beings and corporations. And very rarely does the more intelligent entity not hurt or kill the less intelligent one; power differentials this large go very badly for the less intelligent side. Thus the default outcome of misaligned AI is catastrophe or extinction.

power differentials this large go very badly for the less intelligent side

With a bit of sympathy/compassion and cosmic wealth, this doesn't seem so inevitable. The question is probability of settling on that bit of sympathy, and whether it sparks before or after some less well-considered disaster.

Best of Econtwitter and Guzey's Best of Twitter are two Twitter newsletters I enjoy, does anyone know of similar roundup newsletters?

Can anyone recommend good reading material on the economic calculation problem?

It's been a while since I read it, and it's a blog post rather than anything formal, but I recall liking https://crookedtimber.org/2012/05/30/in-soviet-union-optimization-problem-solves-you/ .

Thanks. I had read it years ago, but didn't remember that he makes many more points than the O(n^3.5 log(1/h)) scaling argument, and that he provides useful references (other than Red Plenty).

(I initially thought it would be better not to mention the context of the question, as it might bias the responses. OTOH the context could make the marginal LW poster more interested in providing answers, so here it is:)

It came up in an argument that the difficulty of the economic calculation problem could be a difficulty for a hypothetical singleton, insomuch as a singleton agent needs a certain amount of compute relative to the economy in question. My intuition consists of two related hypotheses. First, during any transition period where an agent participates in a global economy in which most other participants are humans ("economy" could be interpreted widely to include many human transactions), can the problem of economic calculation provide some limits on how much computation would be needed for the agent to become able to manipulate or dominate the economy? (Is it enough for the agent to be marginally more capable than any other participant, or does that advantage get swamped if the sheer size of the economy is large enough?)

Secondly, if a Mises/Hayek-style answer is correct and the economic calculation problem is solved most efficiently by distributed calculation, it could imply that a single agent in charge of a number of processes on a "global economy" scale could be out-competed by a community of coordinating agents. [1]

However, I would like to read more to judge whether my intuitions are correct. Maybe all of this is already rendered moot by results I simply do not know how to find.

([1] Related but tangential: Can one provide a definition of when a distributed computation is no longer a singleton but rather a more-or-less aligned community of individual agents? My hunch is that there could be a characterization related to the speed of communication between the agents / processes within a singleton. Ultimately the speed of light is bound to impose some limitations.)

I think there was a post/short story on LessWrong a few months ago about a future language model becoming an ASI because someone asked it to pretend it was an ASI agent and it correctly predicted the next tokens, or something like that. Anyone know what that post was?


Is there a way to hide the curated sequences from the frontpage?

Not yet but I’ll likely be adding it soon. (I’m also going to be overhauling how curated sequences work and may do that first. I might actually end up re-disabling recommended sequences for older accounts until I’ve iterated on it more, then re-enable it, this time with a dismiss button)

Hi,

Got here through a recent "Browser" link. Looks interesting. 

Doron