Did you all see this? https://twitter.com/SquishChaos/status/1383435339910418432?s=20

Basically, it's claiming that in the next 12 months Ethereum will undergo a supply shock equivalent to three Bitcoin halving events. Curious whether rationalists see a flaw in the reasoning or are already ahead of this.
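For what it's worth, the arithmetic behind "three halvings" is simple: each Bitcoin halving cuts the rate of new supply by 50%, so three of them compound to 1/8 of the original rate, an 87.5% cut. Here's a minimal sketch of that math; the 4% starting rate is a made-up placeholder, not Ethereum's actual issuance figure:

```python
# Back-of-envelope: what a "three halvings" supply shock would mean.
# One Bitcoin halving cuts new issuance by 50%, so three halvings cut
# it to (1/2)**3 = 12.5% of the original rate.

def issuance_after_halvings(rate: float, halvings: int) -> float:
    """New-supply rate remaining after some number of 50% issuance cuts."""
    return rate * 0.5 ** halvings

# Hypothetical placeholder: an asset that currently inflates 4% per year.
current_annual_issuance = 0.04
equivalent = issuance_after_halvings(current_annual_issuance, halvings=3)
print(f"Issuance after 3 halvings' worth of cuts: {equivalent:.3%}/year")
# -> 0.500%/year, i.e. new supply reduced by 87.5%
```

Whether the changes the thread describes (as I understand them, fee burning plus reduced issuance) actually add up to a cut of that size is exactly the empirical claim worth checking.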

Is this going to become a sequence in the LessWrong content library at some point? I kind of like having things on the library page so I can go back and read the whole thing later, but I noticed it's not there yet.

Another weird takeaway is the timeline. Whenever I hear about a good idea that's currently happening, my intuition is that because it's happening right now, it's probably too late for me to get in on it at all, because everyone already knows about it. I think that intuition is overweighted. If there's a spectrum from ideas being fully saturated to being completely empty of people working on them, good ideas that break in the news are probably closer to the latter than I give them credit for. At least, I need to update in that direction.

Yeah, it's interesting because it was a "so clearly a good idea" idea. We tend to either dismiss ideas as bad because we found the fatal flaw, or think "this idea is so flawless it must've been the lowest-hanging fruit and thus already picked."

Another example that comes to mind is checklists in surgery. Gawande wrote the book "The Checklist Manifesto" back in 2009 with his finding that a simple checklist dramatically improved surgical outcomes. I wonder if the thought "maybe we should try to make some kind of checklist-ish modification to how we approach everything else in medicine" needs similar action.

I keep seeing these articles about the introduction of artificial intelligence/data science to football and basketball strategy. What's crazy to me is that it's happening now instead of much, much earlier. The book Moneyball was published in 2003 (the movie came out in 2011), spreading the story of how the use of statistics changed every aspect of managing a baseball team. After reading it, I and many others thought "this would be cool to do in other sports" - using data would be interesting in every area of every sport (drafting, play calling, better coaching, clock management, etc.). But I guess I assumed - if I thought of it, why wouldn't other people?

It's kind of a wild example of the idea that "if something works a little, you should do more of it and see if it works a lot, and keep doing that until you see evidence that it's running out of incremental benefit." My assumption that the "Moneyball" space was saturated back in 2011 was completely off, given that in the time between 2011 and now one could have trained from scratch in the relevant data science methods and pushed for such jobs (my intuition is that 8 years of training could get you there). So it's not even a "right place, right time" story, given the timeline. It's just: when you saw the obvious trend, did you assume everyone else was already thinking about it, or did you jump in yourself?

I have been watching this video https://www.youtube.com/watch?v=EUjc1WuyPT8 on AI alignment (something I'm very behind on, my apologies), and it occurred to me that one aspect of the problem is finding a concrete, formalized solution to Goodhart's law-style problems. Yudkowsky was talking about ways an AGI optimized toward making smiles could go wrong (namely, the AGI could find smarter and smarter ways to effectively give everyone heroin, quickly creating lasting smiles). One aspect of this problem seems to be that the metric "smiles" is a measurement of the ambiguous target "wellbeing," so when the AGI gives us heroin to make us smile, we say "well no, that isn't what we meant by wellbeing." We're trying to find a way to formally write an algorithm that pursues what we actually mean by wellbeing in a lasting and durable way, rather than an algorithm that gets caught optimizing metrics that measured wellbeing well only before they were optimized so hard. I get that AI alignment has more facets than just that, but finding an effective way to tell an AI what wellbeing is, rather than telling it things that are usually metrics of wellbeing (like smiles), seems like one facet.
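To make that failure mode concrete, here's a toy sketch of the Goodhart-style divergence; both functions are invented stand-ins for illustration, not anything from the video or a real alignment model:

```python
import math

# Toy Goodhart's law: an optimizer that only sees a proxy metric
# ("smiles") keeps scoring better on it even after the true target
# ("wellbeing") has started to fall.

def true_wellbeing(dose: float) -> float:
    # Hypothetical: a moderate intervention helps, an extreme one harms.
    return dose - 0.5 * dose ** 2

def observed_smiles(dose: float) -> float:
    # Proxy: smiles rise monotonically with the intervention (think
    # heroin), so they track wellbeing only at low doses.
    return math.log1p(3 * dose)

for dose in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"dose={dose:4.1f}  smiles={observed_smiles(dose):5.2f}  "
          f"wellbeing={true_wellbeing(dose):6.2f}")
# Smiles climb forever; wellbeing peaks at dose = 1.0, then collapses.
```

The proxy and the target agree at low doses, which is exactly why the proxy got chosen as a metric, and they come apart precisely where the optimizer pushes hardest.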

Is this in fact a part of the AI alignment problem? If so, is anyone trying to solve this facet of it, and where might I go to read more about that? I've been sort of interested in meta-ethics for a while, and solving this facet seems remarkably related to solving important problems in meta-ethics.

woot woot! that long-term thinking paying off!

Yup, it was a quick thought I put to the page, and I will quickly and easily concede that 1) my initial idea wasn't expressed very clearly; 2) the way it was expressed is best interpreted by a reader in a way that makes it nonsensical ("what does it mean to say oxygen is produced?"), and I didn't really tie my initial writing to climate change in the way I wanted to, so what am I even talking about; 3) even the way I clarified my idea later mixed some thoughts that really should be separated out (viable != effective); and 4) I have some learning to do in the area of EA mental models and reasoning about public interventions. Not my best work.

Reflection:

I'm messing around with shortform as a way to kind of throw ideas on a page. This idea didn't work out too well at generating productive discussion; upon reflection, it wasn't super coherent, let alone pointing toward anything true. However, I got a lot more engagement than I expected, which points to something of value in the medium. I think the course forward is probably to 1) keep experimenting with shortform, because I gain something from having my incoherence pointed out to me, and there's a chance I will be more coherent and useful in the future, and 2) take maybe 5 minutes to reread my shortform posts before I post them (just because it's shortform doesn't mean it can be nonsense).

Thanks for helping me get informed. I was under the impression (and this is a separate thread) that planting trees was a viable initiative to fight climate change, and by extension that the survival of the Amazon rainforest was a significant climate change initiative. Along those lines, I guess I'm wondering: if climate change is important on the world stage, wouldn't the health of the rainforest be as well?

Edit: Thanks for correcting me about the oxygen consumption line - that is what I said, and it was misguided.

The fact that the Amazon rainforest produces 20% of atmospheric oxygen (I read this somewhere; hope this isn't fiction) should be a bigger political piece than it seems to be. It seems like Brazil could be leveraging this further on the global stage (having other countries subsidize the cost of maintaining the rainforest and preventing deforestation, since we all benefit from/need the CO2-to-oxygen conversion). Also, would other countries start a tree-planting supply race to eliminate dependence on such a large source of oxygen from any one agent?

Just a strange thought that occurred to me this morning. It obviously doesn't reflect the unfortunate realities of current politics (it isn't much of a political piece if the other agents don't believe it's real), but it occurred to me as an alternative politics that might transpire in a world where everyone took climate change seriously as an existential threat.
