Open and Welcome Thread - April 2021

by Raemon · 1 min read · 7th Apr 2021 · 33 comments


Open Threads
Personal Blog

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Hey LessWrong, I found you years ago but made an account only now.  After reading the HPMOR series I bought the Feynman Lectures on Physics, but never made much headway. I am giving it a proper go again though, and feel like I am making more steady progress than the last time I tried.

One thing I am running into time and again is that while Feynman is amazing at guiding the reader through discovering the physical laws on their own, it is still a static textbook. Being a huge fan of everything Bret Victor, I wondered: has anyone attempted to make these lectures interactive in some way? What would the lectures look like in an age where simple simulations can visualise how the parameters in physical laws are related? In an age where readers can use their 3D printers to print an experiment setup at home and follow along? I'd love to discuss with people currently going through the lectures what tools they are creating to help their own learning process.

I am currently just taking notes while reading, but at some point I plan to at least create some interactive simulations of the experiments described. I find that even if the simulations themselves do not add any value, the process of making them helps immensely in understanding the concepts.

I'm not going through the lectures myself (at least not in a systematic way), but I do spend a lot of time thinking about physics concepts and trying to imagine them in more geometric, conceptual ways. I am interested in making visualizations based on my insights, however I haven't had time to make them yet. I'd love to talk about ideas on how to do this, though!

Lovely, thanks for replying! 

I'd say my understanding of physics is really just high-school+ level, so I am actively learning as I go (I studied AI and have some maths background from it, but that's about it). I have collected a few references of maths/physics visualisations as a starting point, though most of them touch on programming more than physics per se. What would be a good format for talking about these ideas, i.e. what works best for you?

> however I haven't had time to make them yet
Yes, this will probably be an issue for me too, but reaching out here on LW is the first step towards actually committing to it :)

Welcome! 

I think the problem with visualization content is that it is very time-intensive to make (to say nothing of the difficulty). You should look at the manim library, written in Python, which 3blue1brown used to make his linear algebra videos.

Hey eigen, nice of you to say "hallo"

Looking more into manim is on my list; I have been following 3b1b for years (he had the best explanations of quaternions and partial differential equations by far). There's actually a community split-off, established recently, which should be more user-friendly. Thanks for reconfirming that it's a valuable resource.

And yes, making visualisations is time-consuming. I think the effort put into writing your own tools-for-learning pays off in big ways, however. Going through the Lectures without them is also time-intensive, and I don't need to work my way through them fast anyway ;) My goal in making them would be to aid my learning first and to share my findings second (+ iterate with feedback), I guess.

Here's another one of the list, perhaps a bit more doable: https://seeing-theory.brown.edu/

As a side thought: one of the things I have always sensed from this forum is a deep affinity for different ways of understanding things. So, not surprisingly, many converge on and are enthralled by Bret Victor, though there are many others (Nielsen, Matuschak, the site you linked, 3blue1brown, Jonathan Blow).

So I think that exploring different mediums can be an end in itself, rather than just a means of making visualizations to understand a given subject (I get a sense of that from your comment, and I hope you explore it further!)

Cheers for the encouragement! I share your intuition; it is what prompted me to post here. A quick sitewide search showed that Bret Victor's name has come up before in discussions on LW, but not as much as I would expect. Anyway, I had totally missed Matuschak from that list, so onto the growing list of references he goes :)

You can just DM me :)

I have been having some thoughts on supererogation (that there are morally praiseworthy actions that are not morally obligatory) and scrupulosity (a dysfunctional obsession with doing good), but not to any conclusion. In the course of it I was inspired to make a few memes. Here is The Insanity Wolf Sanity Check. Take the content warning at the top seriously. There is some strong stuff here.

It is a page of anti-advice, rahom, as one would call it in Láadan, in sections relating to EA, morality, AGI, and a few other topics. The device of placing these in the mouth of Insanity Wolf is inspired by Eliezer's Cognitive Trope Therapy.


If you are scrupulous, this page is like "100 reasons to hate yourself forever".

Opposite memes:

Self-sacrifice is not a virtue.

(Might still have the same problems though.)

Added (reversed).

"SELF-SACRIFICE

IS A VIRTUE"

And a few others. There are now 127 Insanity Wolf memes there, and one Sean Bean.

If you're going to push “take the content warning at the top seriously”, then may I suggest being careful about what you subsequently quote from the same page directly in the comment thread without it‽

Oh my, that's a brilliant gem. Let's follow these. 

U.S. pauses J&J COVID-19 vaccine over rare blood clots

Zvi argued pretty persuasively that it was a massively bad idea when most of Europe suspended the AZ vaccine for similar reasons.

Are we making the same mistake now, or is it different this time for some reason?

We are making the same mistake now. The rate of serious adverse side effects is minimal. However, just as we engage in security theatre at the airports, so must we engage in public health theatre in our COVID response. With any adverse reactions occurring at all, and with our media system's need to spin every story far out of control and milk the pandemic for every advertising dollar it can yield, the public health authorities have to make a show of investigating this incredibly rare side effect, only to predictably determine that the rewards far outweigh the risks.

Hello, 

I don't really know how to introduce myself, since this is one of the first times that I have posted.
I found the website by pure chance, and I think this would be a good place to try to interact with people who seem to think in a similar way, insofar as it seems that LessWrong and its community are fundamentally devoted to truth: how to access it through reasoning and how to act upon it.

On a more personal and practical level, I am currently studying Fluid Mechanics in France, but my interest here is not directly related to science; it is rather in philosophy and how one should act.

I don't plan on posting often, at least at the start, as it seems that the barrier to entry for putting forth compelling points is quite high here, and the discourse is based on a complex vocabulary that I will need to make my own, which will take time.

I look forward to interacting with you, and I hope not to make too much of a fool of myself.

Welcome Max :) I hope you find deeply worthwhile things to read.

Thank you! I already have :)

Maybe some day, who knows, I'll be able to add something to the conversation that has been going on here for years. 

I think Open Threads are kind of meant as being somewhat lower stakes. If you've got an idea and want to gauge interest, or get some feedback, you can try posting here.* (Though keep in mind, some times get less traffic than others, and a lack of response might mean you hit a low traffic time.)


*If you don't know how to do Shortform posts, or find something**, someone here will know how to do it.

**Like the wiki, or Arbital, or posts that are referenced but aren't on the website.

April 2021 is a special month - it has two open threads :D

Oh, huh. I'll merge the comments from the other one into this one.

I have now done so.

Hey LessWrong: I stumbled across you after coming across references to rationalists and the Grey Tribe on Twitter, I think, along with the post-rats. Anyhow, you caught my curiosity and I've been "dabbling" around the edges to understand your "hypostatical basis" of the world while trying not to get too lost in the weeds. My background is as an AIDS/Oncology CNS before crashing out with health stuff. Part of my nursing background involved learning Heidegger from Bert Dreyfus and Kierkegaard from Jane Rubin as part of learning phenomenological methodologies for clinical research. However, I ended up going down the rabbit hole with Levinas and Blanchot and then Derrida, and ditching research for clinical practice. Oddly enough, it took me into medical QA/QM stuff.

Cutting to the chase, I'd read some Timothy Snyder stuff which piqued my phenomenological interests enough to decide I would attempt to re-read all those texts and see if I really understood any of it. To that end, I recently re-read Derrida's The Gift and am starting to reread both T&I and OTB by Levinas, along with Blanchot's Infinite Conversation, followed by Derrida's Dissemination with luck. Part of my curiosity was seeing how differently folks here were defining simulacrum compared to my understanding. In a nutshell, I've understood a simulacrum as being like a genotype, and its phenotype as its dissimulation. That's pretty packed and there's lots for me to sort out there...

I've been reading a few posts by Scott and sorting out what I've missed on AI; trying to make sense of Bayes' theorem stuff and how it maps onto neuroscience. In particular, I just read his review of "Surfing Uncertainty" and was struck by the similarities between it and Levinasian notions of proximity, substitution, saying/said, etc. I don't know how much I'll have to contribute, but I saw the open/welcome thread and thought it probably best to introduce myself....hopkins (aka heideana)

Please excuse typos & autocorrect strangeness

I'd be curious whether you found any applications for phenomenological methodologies in the area of medical research/clinical practice. 

I'm still really keen for footnotes. They allow people to make more nuanced arguments by addressing objections or misunderstandings that people may have without breaking the flow.

I do definitely agree that proper footnotes would be good for the default editor. I'm not sure whether we'll get to it any time soon, because we continue to have a lot of competing priorities. But meanwhile, my recommendation is to do footnotes the way they were done in this post (i.e. as comments that you can create hover-links to).

I'm trying to decide if I'm going to write up a thought about longtermism I had.

I think there are two schools of thought-- that the graph of a value function over time is continuous or discontinuous. The continuous school of thought suggests that you get near term evidence about long term consequences, and the discontinuous school of thought does not interpret local perturbation in this way at all.

I'm sure this is covered in one of the many posts about longtermism, and the language of continuous functions could either make it clearer or less clear depending on the audience.

I don't think there's enough written about long-termism. You have a reader here if you ever decide to write something. I wonder as to where between those two school of thoughts you fall.

Hi, I'm a European working at a hedge fund in Hong Kong. I've been on the site for many years, but only as a passive reader. However, last week I finally wrote my first post, about what I think is an under-investigated AI scenario: the development of AIs as oracles on a blockchain. It immediately got buried under other posts, so I'd like to repost it here. I would very much appreciate any comments on the idea: https://www.lesswrong.com/posts/p9CSjcCoLqFptogjW/ai-oracles-on-blockchain-1

Why do the old Sequences posts suddenly appear in my RSS feed?

Can you paste the link of the RSS feed? We've recently moved a bunch of old sequences post to the frontpage that we missed when we did our initial pass in 2017, so that seems like the most reasonable cause, if you are subscribed to a feed that filters only on frontpage posts. 

Sure, it's a frontpage feed: https://www.lesswrong.com/feed.xml?view=frontpage-rss&karmaThreshold=45

Yeah, that makes sense. Will be more careful with moving old historical posts to the frontpage for this reason.

I've been working on defining "optimizer", and I'm wondering about what people consider to be or not be an optimizer. I'm planning on talking about it in my own post, but I'd like to ask here first because I'm a scaredy cat.

I know a person or AI refining plans or hypotheses would generally be considered an optimizer.

What about systems that evolve? Would an entire population of a type of creature be its own optimizer? It's optimizing for genetic fitness of the individuals, so I don't see why it wouldn't be. Evolutionary programming just emulates it, and it's definitely an optimizer.
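For what it's worth, the population-as-optimizer intuition is easy to make concrete in code. Here is a minimal evolutionary-programming sketch (the function names, parameters, and the "onemax" fitness are all illustrative choices of mine, not any standard API): no individual plans anything, yet selection plus mutation steadily pushes the population's fitness up.

```python
import random

def evolve(fitness, pop_size=50, genome_len=20, generations=100):
    """A population 'optimizes' fitness with no individual doing any planning:
    selection plus point mutation alone push the population uphill."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children             # elitist: survivors kept intact
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("onemax")
random.seed(0)
best = evolve(fitness=sum)
```

Run on the onemax fitness, this reliably drives the population towards the all-ones genome, which is the sense in which the population as a whole, not any individual, is doing the optimizing.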

How do you draw the line between systems that evolve and systems that don't? Is a sterile rock an optimization process? I suppose there is potential for the rock's contents to evolve. Maybe eventually, through the right collisions, life could evolve in a pile of rocks, and then it would evolve like normal. Are rocks not optimizers, or just really weak, slow optimizers that take a really, really long time to come up with a configuration that isn't equally horrible as everything else in the rock at self-reproduction?

What about systems that tend towards stable configurations? Imagine you have a box with lots of action figures and props and you're bouncing it around. I think such a system would, if feasible, tend towards stable configurations of its contents. For example, initially, the action figures might be all scattered about and bouncing everywhere. But eventually, the system might leave the action figures in secure, stable positions. For example, maybe Spiderman would end up with his arm securely lodged in a prop and his adjustable spider-web accessory securely wrapped around a miniature street light. Is that system an optimizer? What if the toys also come with little motors and a microcontroller to control them, and you change their programs by bouncing them around? If you tried this for a sufficiently long time, you could potentially end up with your action figures producing clever strategies to maintain their configuration despite shakes and to avoid further changes to their programs.

What about annealing? Basically, annealing involves putting a piece of metal in an oven and heating it for a while; it changes the metal's durability and ductility. Normally, people wouldn't think of a piece of metal as an optimizer. However, there's an optimization algorithm called "simulated annealing" that works pretty much the same way as actual annealing. Actual annealing is a process in which the atoms in the metal end up in low-energy states. I don't know how I could justify calling a simulated annealing program an optimizer and not calling actual annealing an optimizer.
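The analogy is tight enough to show in a few lines. Below is a minimal simulated-annealing sketch (names and constants are my own choices): worse states are accepted with probability exp(-delta/T), and the temperature T cools over time, mirroring atoms settling into low-energy states as the metal cools.

```python
import math
import random

def simulated_annealing(f, x0, steps=10000, t0=1.0, cooling=0.999):
    """Minimize f by sometimes accepting worse neighbours, with an
    acceptance probability that shrinks as the 'temperature' cools."""
    x, t = x0, t0
    best_x, best_f = x, f(x)
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)   # random nearby state
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worsenings with prob e^(-delta/t)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < best_f:
            best_x, best_f = x, f(x)
        t *= cooling                                # cool down
    return best_x

# Toy objective with a single minimum at x = 3
random.seed(0)
result = simulated_annealing(lambda x: (x - 3) ** 2, x0=0.0)
```

The only difference from physical annealing is that the "energy" is whatever objective you plug in; the acceptance rule itself is the same Boltzmann-style rule.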

To what extent is people's intuition of "optimizer" well-defined? At first I clearly saw people and AIs in general as optimizers, but I don't know about the above cases.

Am I right that "optimizer" is a fuzzy concept?

And is it well-defined? I imagined so, but I've been thinking about a lot of things that my intuition doesn't say is or isn't an optimizer.

How much should we care about our notion of "optimizer"? It seems like the main point of the concept is that we know that some optimizers have the potential to be super powerfully or dangerously good at something. So what if we just directly focused on how to tell whether a system has the potential to be super dangerously or powerfully good at something?

I'm finding myself wishing for more resources on picking where to live. I'm in an uncommon situation: single, with enough money to not need to work anymore unless I'm in a high cost-of-living place, so I want to take a few years off. The only area where I have lots of friends I already know isn't right for me, due to seasonal depression. Finding the right place to live through my own research will take a long time, since I'd have to visit places for long enough to see what they're like, and there just don't exist super-great resources for researching things ahead of time, unless I'm missing something.


In related news: I hear Atlanta has a decent dance scene.  Anyone live in the Atlanta area have comments?

Have you considered a mobile home? At the very least, it should make trying lots of places before anchoring down much easier.

When forecasting, you can be well-calibrated or badly calibrated (well-calibrated if, e.g., 90% of your 90% forecasts come true). This can also be true on smaller ranges: you can be well-calibrated from 50% to 60% if your 50%/51%/52%/…/60% forecasts are each well-calibrated.

But, for most forecasters, there must be a resolution at which their forecasts are essentially random: if this is, e.g., at the 10% level, then they are pretty much taking random guesses from the specific 10% interval around their stated probability (they forecast 20%, but they could forecast 25% or 15% just as well, because they're just not calibrated any more finely than that).

I assume there is a name for this concept, and that there's a way to compute it from a set of forecasts and resolutions, but I haven't stumbled on it yet. So, what is it?
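I'm not sure of a single standard name either, but the closest concepts I know of are the reliability and resolution terms in the Murphy decomposition of the Brier score, usually estimated with a binned reliability diagram: if the gaps between mean forecast and empirical frequency only vanish at coarse bin widths, the forecaster is effectively guessing within that width. A minimal sketch (the function name and defaults are my own):

```python
from collections import defaultdict

def reliability(forecasts, outcomes, bin_width=0.1):
    """Group forecasts into probability bins and compare each bin's mean
    forecast with its empirical frequency; large gaps at a given bin width
    suggest the forecaster is not calibrated at that resolution."""
    bins = defaultdict(list)
    n_bins = int(1 / bin_width)
    for p, y in zip(forecasts, outcomes):
        bins[min(int(p / bin_width), n_bins - 1)].append((p, y))
    report = {}
    for b, pairs in sorted(bins.items()):
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        report[b] = (mean_p, freq)
    return report

# Toy data: a forecaster whose 20% and 25% forecasts resolve at the same rate
forecasts = [0.20] * 50 + [0.25] * 50
outcomes = [1] * 11 + [0] * 39 + [1] * 11 + [0] * 39
report = reliability(forecasts, outcomes)
```

In this toy data, the 20% and 25% forecasts both resolve at 22%, so the distinction between them carries no information; at a 10%-wide bin they merge into one apparently well-calibrated bin (mean forecast 0.225 vs. frequency 0.22).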