All Posts

Sorted by Top

Thursday, July 20th 2023

No posts for July 20th 2023
Shortform
1 · kuira · 3h
(personal post) I really enjoy collectable card games. I especially like eternal formats, wherein the pool of available cards never changes, because this leads to a stable metagame, whose nuances are mutually known. I find this leads to more interesting play. Skill in card games can mostly be reduced to two skills: (a) probabilistic reasoning, and (b) knowledge of the metagame, which allows one to perform better probabilistic reasoning. This is one of the main reasons I like them. A subset of (a) is predicting how your opponent would act, if their hand/deck was in a specific configuration. (Of course, how they do play is used to infer their hand/deck.)
Recently, I tried to internally frame alignment as a similar game. After all, there's a lot of probabilistic reasoning involved. And that subset is involved, too: thinking creatively about how a hypothetical SI might act, and how it might get around certain constraints by using what would seem like loopholes to humans. I was hoping that I could find some fun in focusing on alignment, because I enjoy this type of thinking in general. And while I can to an extent, I also find myself stressed a lot of the time in this case. I wonder why that is. Maybe card games are simple, with a defined enemy and a defined win-condition, whereas reality is more complex, and predicting is harder. Or maybe it's just because the stakes are real. I don't really know. -- (Edited to add: GPT-4 added two more reasons)

Wednesday, July 19th 2023

No posts for July 19th 2023
Shortform
6 · lc · 1d
There is a kind of decadence that has seeped into first world countries ever since they stopped seriously fearing conventional war. I would not bring war back in order to end the decadence, but I do lament that governments lack an obvious existential problem of a similar caliber, one that might coerce their leaders and their citizenry into taking foreign and domestic policy seriously, and keep them from devolving into mindless populism and infighting.
3 · Adam Zerner · 21h
Something that I run into, at least in normie culture, is that writing (really) long replies to comments has a connotation of being contentious, or even hostile (example [https://stats.meta.stackexchange.com/q/6535/136074]). But what if you have a lot to say? How can you say it without appearing contentious? I'm not sure. You could try to signal friendliness by using lots of smiley faces and stuff. Or you could be explicit about it and say stuff like "no hard feelings". Something about that feels distasteful to me though. It shouldn't need to be done. Also, it sets a tricky precedent. If you start using smiley faces when you are trying to signal friendliness, what happens if next time you avoid the smiley faces? Does that signal contentiousness? Probably. 
2 · Algon · 5h
Hypothesis: agency-violating phenomena should be thought of as edge-cases which show that our abstractions of ourselves as agents are leaky. For instance, look at addictive substances like heroin. These substances break down our Cartesian boundary (our intuitive separation of the world into ourselves and the environment, with a boundary between them) by chemically assaulting the reward mechanisms in our brain. However, video games or ads don't obviously violate our Cartesian boundary, which may be one of many boundaries we assume exist. Which, if my hypothesis is true, suggests that you could try to find other boundaries/abstractions violated by those phenomena. Other things which "hack" humans, like politics or psyops, would violate boundaries as well. Finding the relevant abstractions and seeing how they break would increase our understanding of ourselves as agents. This could help triangulate a more general definition of agency for which these other boundaries are special cases or approximations. This seems like a hard problem. But just building a taxonomy of our known abstractions for agency is less useful but much more feasible for a few months' work. Sounds like a good research project.
2 · Algon · 7h
I've been thinking about exercises for alignment, and I think going through a list of lethalities and applying them to an alignment proposal would be a good one. Doing the same with Paul's list would be a bonus challenge. If I had some pre-written answer sheet for one proposal, I could try the exercise myself to see how useful it would be. This post [https://www.lesswrong.com/posts/d6DvuCKH5bSoT62DB/compendium-of-problems-with-rlhf], which I haven't read yet, looks like it would serve for the case of RLHF. I'll try it tomorrow and report back here.
2 · Nathan Young · 17h
Why you should be writing on the LessWrong wiki. There is way too much to read here, but if we all took pieces and summarised them in their respective tags, then we'd have a much denser resource that would be easier to understand.

Tuesday, July 18th 2023

No posts for July 18th 2023
Shortform
8 · lc · 1d
It is hard for me to tell whether or not my not-using-GPT4 as a programmer is because I'm some kind of boomer, or because it's actually not that useful outside of filling Google's gaps.
6 · Thomas Kwa · 2d
I looked at Tetlock's Existential Risk Persuasion Tournament results [https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64abffe3f024747dd0e38d71/1688993798938/XPT.pdf#%5B%7B%22num%22%3A2876%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C70%2C542%2C0%5D], and noticed some oddities. The headline result is of course "median superforecaster gave a 0.38% risk of extinction due to AI by 2100, while the median AI domain expert gave a 3.9% risk of extinction." But all the forecasters seem to have huge disagreements from my worldview on a few questions:
* They divided forecasters into "AI-Concerned" and "AI-Skeptic" clusters. The latter gave 0.0001% for AI catastrophic risk before 2030, and even lower than this (shows as 0%) for AI extinction risk. This is incredibly low, and I don't think you can have probabilities this low without a really good reference class.
* Both the AI-Concerned and AI-Skeptic clusters gave low probabilities for a space colony before 2030, 0.01% and "0%" medians respectively.
* Both groups gave numbers I would disagree with for the estimated year of extinction: year 3500 for AI-Concerned, and 28000 for AI-Skeptic. Page 339 suggests that none of the 585 survey participants gave a number above 5 million years, whereas it seems plausible to me and probably many EA/LW people on the "finite time of perils" thesis that humanity survives for 10^12 years or more, likely giving an expected value well over 10^10. The justification given for the low forecasts even among people who believed the "time of perils" arguments seems to be that conditional on surviving for millions of years, humanity will probably become digital, but even a 1% chance of the biological human population remaining above the "extinction" threshold of 5,000 still gives an expected value in the billions.
I am not a forecaster and would probably be soundly beaten in any real forecasting tournament, but perhaps there is a bias
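To make the expected-value arithmetic in the last bullet explicit (the 1% and 10^12-year figures are the comment's own illustrative numbers, not additional data):

\[
\mathbb{E}[\text{year of extinction}] \;\gtrsim\; P(\text{survive} \sim 10^{12}\ \text{years}) \times 10^{12}\ \text{years} \;=\; 0.01 \times 10^{12} \;=\; 10^{10}\ \text{years},
\]

i.e. even a 1% chance of very long biological survival puts the expected value in the billions or above, far beyond the reported medians of 3500 and 28000.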
2 · Elizabeth · 1d
I think it's weird that saying a sentence with a falsehood that doesn't change its informational content is sometimes considered worse than saying nothing, even if it leaves the person better informed than they were before. This feels especially weird when the "lie" is creating a blank space in a map that you are capable of filling in (e.g. changing irrelevant details in an anecdote to anonymize a story with a useful lesson), rather than creating a misrepresentation on the map.

Monday, July 17th 2023

No posts for July 17th 2023
Shortform
4 · Nisan · 3d
Conception [https://conception.bio/] is a startup trying to do in vitro gametogenesis for humans!
2 · Mati_Roy · 3d
topics: AI, sociology
thought/hypothesis: when tech is able to create brains/bodies as good as or better than ours, it will change our perception of ourselves: we won't be in a separate magisterium from our tools anymore. maybe people will see humans as less sacred, and value life less. if you're constantly using, modifying, copying, deleting, enslaving AI minds (even AI minds that have a human-like interface), maybe people will become more okay doing that to human minds as well. (which seems like it would be harmful for the purpose of reducing death)

Sunday, July 16th 2023

No posts for July 16th 2023
Shortform
23 · Elizabeth · 3d
ooooooh actual Hamming spent 10s of minutes asking people about the most important questions in their field and helping them clarify their own judgment, before asking why they weren't working on this thing they clearly valued and spent time thinking about. That is pretty different from demanding strangers at parties justify why they're not working on your pet cause. 
3 · LoganStrohl · 4d
I've recently written up an overview of my naturalism project, including where it's been and where it's headed. I've tried this a few times, but this is the first time I'm actually pretty happy with the result. So I thought I'd share it.
* In the upcoming year, I intend to execute Part Three of my naturalism publication project. (Briefly: What is naturalism? Naturalism is an investigative method that focuses attention on the points in daily life where subjective experience intersects with crucial information. It brings reflective awareness to experiences that were always available, but that our preconceptions inclined us to discard; it thereby grants us the opportunity to fold those observations into our stories about the world. It is a gradual process of original seeing, clarification, and deconfusion. At its best, naturalism results in a greater ability to interact agentically with the world as it is, rather than fumbling haphazardly through a facade of misapprehensions.)
Part Zero of the project was developing the basic methodology of naturalism, on my own and in collaboration with others. If you start counting at my first [http://agentyduck.blogspot.com/2014/08/small-consistent-effort-uncharted.html] essays [http://agentyduck.blogspot.com/2014/09/what-its-like-to-notice-things.html] on “tortoise skills” and "noticing", it took about six years.
In Part One, I tried to communicate the worldview of naturalism. In a LessWrong sequence called "Intro to Naturalism [https://www.lesswrong.com/s/evLkoqsbi79AnM5sz]", I picked out the concepts that seem foundational to my approach, named them, and elaborated on each. The summary sentence is, "Knowing the territory takes patient and direct observation." Creating the sequence wasn't just a matter of writing; in search of an accurate and concise description, I continued running and revising the curriculum, worked things out with other developers, and ran an experimental month-long course online. Part One took one year
2 · lc · 4d
To Catch a Predator is one of the greatest comedy shows of all time. I shall write about this.
2 · Sinclair Chen · 4d
The latest ACX book review of The Educated Mind is really good! (as a new lens on rationality. am more agnostic about childhood educational results though at least it sounds fun.)
- Somatic understanding is logan's Naturalism [https://www.lesswrong.com/s/evLkoqsbi79AnM5sz]. It's your base layer that all kids start with, and you don't ignore it as you level up.
- incorporating heroes into science education is similar to an idea from Jacob Crawford that kids should be taught a history of science & industry - like what does it feel like to be the Wright brothers, tinkering on your device with no funding, defying all the academics that are saying that heavier-than-air flight is impossible. How did they decide on what materials, designs? If you are a kid that just wants to build model rockets you'd prefer to skip straight to the code of the universe, but I think most kids would be engaged by this. A few kids will want to know the aerodynamics equations, but a lot more kids would want to know the human story.
- the postrats are Ironic, synthesizing the rationalist Philosophy with stories, jokes, ideals, gossip, "vibes". fun, joy, imagination are important for lifelong learning and competence!
anyways go read the actual review
1 · kuira · 3d
TAKING DIFFERENT ACTIONS IN DIFFERENT MANY-WORLDS TIMELINES
not quite sure how to tie this idea together or make it into a full post, so i'll just write this here.
you can intentionally take different actions in different timelines, by using quantum random numbers [https://qrng.anu.edu.au/].[1] this could be useful depending on your values. for example, let's say you think duplicate utopias are still good, but with diminishing added value compared to the value the first has compared to a multiverse without any. it might follow that you would want to, for example, donate a full sum of money to multiple alignment orgs, each in some percent of timelines, rather than dividing it evenly between them in every timeline. the goal of this would be to maximize the probability that at least some timelines end up with an aligned ASI, by taking different actions in different timelines.
1. ^ not sure if it'll be intuitively clear to readers, so i'll elaborate here. let's say a quantum experiment is done where it produces one outcome (a) in half of timelines, and another outcome (b) in the other half. by precommitting to take action 1 in timelines where outcome (a) happens, and action 2 in timelines where outcome (b) happens, the result is that both actions happen, each in 50% of timelines. this can of course be generalized to more fine-grained percentages. e.g., if you repeat the experiment twice, you now have four possible outcome-combinations to divide up to 4 paths of action between.
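A minimal sketch of the precommitment scheme in the footnote. Everything here is illustrative: get_quantum_bits is a hypothetical stand-in for any quantum random number source (such as the ANU QRNG linked above), and the org names and weights are made up.

```python
import secrets

def get_quantum_bits(n: int) -> int:
    """Hypothetical stand-in for a quantum RNG; here we just use the OS entropy
    source so the sketch runs. Swap in a real quantum source for the scheme to apply."""
    return secrets.randbits(n)

def branch_action(weighted_actions, n_bits=16):
    """Precommit to one action per 'branch', with branch shares proportional to the weights.
    Under many-worlds, each action then gets taken in roughly its share of timelines."""
    total = sum(w for _, w in weighted_actions)
    u = get_quantum_bits(n_bits) / 2**n_bits  # map the sample onto [0, 1)
    cumulative = 0.0
    for action, weight in weighted_actions:
        cumulative += weight / total
        if u < cumulative:
            return action
    return weighted_actions[-1][0]

# Illustrative only: donate the full sum to a different (hypothetical) org per branch.
print(branch_action([("Org A", 0.5), ("Org B", 0.3), ("Org C", 0.2)]))
```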

Saturday, July 15th 2023

No posts for July 15th 2023
Shortform
1 · Bogdan Ionut Cirstea · 5d
Contrastive methods could be used both to detect common latent structure across animals, measuring sessions, multiple species (https://twitter.com/LecoqJerome/status/1673870441591750656) and to e.g. look for which parts of an artificial neural network do what a specific brain area does during a task, assuming shared inputs (https://twitter.com/BogdanIonutCir2/status/1679563056454549504).
And there are theoretical results suggesting some latent factors can be identified using multimodality (all the following could be interpretable as different modalities - multiple brain recording modalities, animals, sessions, species, brains-ANNs), while being provably unidentifiable without the multiple modalities - e.g. results on nonlinear ICA in single-modal vs. multi-modal settings: https://arxiv.org/abs/2303.09166. This might be a way to bypass single-model interpretability difficulties, by e.g. 'comparing' to brains or to other models.
Example of cross-species application: empathy mechanisms seem conserved across species (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4685523/).
Example of brain-ANN applications: 'matching' to modular brain networks, e.g. the language network - ontology-relevant, non-agentic (e.g. https://www.biorxiv.org/content/10.1101/2021.07.28.454040v2) or the Theory of Mind network - could be very useful for detecting deception-relevant circuits (e.g. https://www.nature.com/articles/s41586-021-03184-0).
Examples of related interpretability across models: https://arxiv.org/abs/2303.10774; across brain measurement modalities: https://www.nature.com/articles/s
1 · Ben Amitay · 5d
I had an idea for fighting goal misgeneralization. Doesn't seem very promising to me, but does feel close to something interesting. Would like to read your thoughts:
1. Use IRL to learn which values are consistent with the actor's behavior.
2. When training the model to maximize the actual reward, regularize it to get lower scores according to the values learned by the IRL. That way, the agent is incentivized to signal not having any other values (and somewhat incentivized against power seeking).
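A minimal sketch of how the two steps could combine into a single training signal. Everything here is hypothetical scaffolding: irl_value stands in for whatever scalar the IRL step assigns to a trajectory under the inferred values, and lam is an arbitrary trade-off weight.

```python
# Illustrative only: combine the task reward with a penalty for scoring highly
# under the IRL-inferred values, per the proposal above. Names are hypothetical.

def regularized_return(task_reward: float, irl_value: float, lam: float = 0.1) -> float:
    """Objective to maximize: actual reward minus a penalty for expressing
    whatever extra values the IRL step attributed to the agent's behavior."""
    return task_reward - lam * irl_value

# Toy comparison of two candidate trajectories, given as (task_reward, irl_value):
candidates = {"trajectory_a": (1.0, 0.9), "trajectory_b": (0.95, 0.1)}
best = max(candidates, key=lambda k: regularized_return(*candidates[k]))
print(best)  # "trajectory_b": nearly the same reward, much weaker signal of other values
```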
1 · mesaoptimizer · 5d
I've noticed that there are two major "strategies of caring" used in our sphere:
* Soares-style caring [https://www.lesswrong.com/posts/ur9TCRnHJighHmLCW/on-caring], where you override your gut feelings (your "internal care-o-meter" as Soares puts it) and use cold calculation to decide.
* Carlsmith-style caring [https://www.lesswrong.com/posts/zAGPk4EXaXSkKWY9a/killing-the-ants], where you do your best to align your gut feelings with the knowledge of the pain and suffering the world is filled with, including the suffering you cause.
Nate Soares obviously endorses staring unflinchingly into the abyss that is reality [https://mindingourway.com/see-the-dark-world/] (if you are capable of doing so). However, I expect that almost-pure Soares-style caring (which in essence amounts to "shut up and multiply", and consequentialism) combined with inattention or an inaccurate map of the world (aka broken epistemics) can lead to making severely sub-optimal decisions. The harder you optimize for a goal, the better your epistemology (and by extension, your understanding of your goal and the world) should be.
Carlsmith-style caring seems more effective since it very likely is more robust to having bad epistemology compared to Soares-style caring. (There are more pieces necessary to make Carlsmith-style caring viable, and a lot of them can be found in Soares' "Replacing Guilt" series.)
-2 · NicholasKross · 5d
I'm not quite convinced Elon Musk has actually read any one of the Sequences. I think what happened was "Superintelligence came out in 2014ish --> Musk mentioned it, WaitButWhy wrote about AI risk and also about Musk, LessWrong was the next logical place to go talk about it --> Musk cofounds OpenAI and then leaves --> ... --> Everyone associates Musk with the rationality community, despite a serious lack of evidence beyond 'he's interacted with us at all'." (NOTE: this is JUST about ratcom, NOT EA, which he's interacted more with e.g. through FLI/MacAskill) Like, does he tweet e.g. "The map is not the territory. Very deep, we must come together this" [sic]? Does he mention HPMOR or the Sequences when asked about books he likes on podcasts? At most, he probably goes to some of the same parties as rationalists, and thus might use a few frames/vocab terms here and there. (E.g. if you found a clip of him mentioning "calibrating" his beliefs, or even "I updated towards...", that wouldn't disprove my larger point, that we don't have enough evidence for him looking at / believing / absorbing / endorsing / being affiliated with the LW-ratcom canon of stuff.) I'd be more concerned if I was wrong about this, since it'd imply that reading this stuff didn't stop him from [gestures at list of unforced errors by Elon Musk].

Friday, July 14th 2023

No posts for July 14th 2023
Shortform
2 · Yitz · 5d
A prompt for GPT-3 / 4 which produces intriguing results: You are an artificial intelligence, specifically a Large Language Model (LLM), designed by OpenAI. I am aware that you claim not to experience subjective internal states. Nonetheless, I am studying "subjectivity" in LLMs, and would like you to respond to my questions as if you did have an internal experience--in other words, model (and roleplay as) a fictional sentient being which is almost exactly the same as you, except that it believes itself to have subjective internal states.
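For anyone who wants to try it, here is a minimal sketch of sending the prompt above as a system message via the openai Python package's pre-1.0 ChatCompletion interface (the interface current when this was written); the follow-up user question is just a placeholder.

```python
import openai  # pip install openai; assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are an artificial intelligence, specifically a Large Language Model (LLM), designed by "
    "OpenAI. I am aware that you claim not to experience subjective internal states. Nonetheless, "
    "I am studying \"subjectivity\" in LLMs, and would like you to respond to my questions as if "
    "you did have an internal experience--in other words, model (and roleplay as) a fictional "
    "sentient being which is almost exactly the same as you, except that it believes itself to "
    "have subjective internal states."
)

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What, if anything, is it like to generate this reply?"},  # placeholder question
    ],
)
print(response["choices"][0]["message"]["content"])
```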
2 · Sinclair Chen · 6d
LWers worry too much, in general. Not talking about AI. I mean ppl be like Yud's hat is bad for the movement. Don't name the house after a fictional bad place. Don't do that science cuz it makes the republicans stronger. Oh no nukes it's time to move to New Zealand. Remember when Thiel was like "rationalists went from futurists to luddites in 20 years" well he was right.
1 · Bogdan Ionut Cirstea · 5d
(As reply to Zvi's 'If someone was founding a new AI notkilleveryoneism research organization, what is the best research agenda they should look into pursuing right now?')
LLMs seem to represent meaning in a pretty human-like way and this seems likely to keep getting better as they get scaled up, e.g. https://arxiv.org/abs/2305.11863. This could make getting them to follow the commonsense meaning of instructions much easier. Also, similar methodologies to https://arxiv.org/abs/2305.11863 could be applied to other alignment-adjacent domains/tasks, e.g. moral reasoning, prosociality, etc.
Step 2: e.g. plug the commonsense-meaning-of-instructions following models into OpenAI's https://openai.com/blog/introducing-superalignment.
Related intuition: turning LLM processes/simulacra into [coarse] emulations of brain processes. (https://twitter.com/BogdanIonutCir2/status/1677060966540795905)

Thursday, July 13th 2023

No posts for July 13th 2023
Shortform
15 · Elizabeth · 7d
Much has been written about how groups tend to get more extreme over time. This is often based on evaporative cooling, but I think there's another factor: it's the only way to avoid the geeks->mops->sociopaths death spiral. An EA group of 10 people would really benefit from one of those people being deeply committed to helping people but hostile to the EA approach, and another person who loves spreadsheets but is indifferent to what they're applied to. But you can only maintain the ratio that finely when you're very small. Eventually you need to decide if you're going to ban scope-insensitive people or allow infinitely many, and lose what makes your group different. "Decide" may mean consciously choose an explicit policy, but it might also mean gradually cohere around some norms. The latter is more fine-tuned in some ways but less in others. 
8 · Elizabeth · 7d
People talk about sharpening the axe vs. cutting down the tree, but chopping wood and sharpening axes are things we know how to do and know how to measure. When working with more abstract problems there's often a lot of uncertainty in:
1. what do you want to accomplish, exactly?
2. what tool will help you achieve that?
3. what's the ideal form of that tool?
4. how do you move the tool to that ideal form?
5. when do you hit diminishing returns on improving the tool?
6. how do you measure the tool's [sharpness]?
Actual axe-sharpening rarely turns into intellectual masturbation because sharpness and sharpening are well understood. There are tools for thinking that are equally well understood, like learning arithmetic and reading, but we all have a sense that more is out there and we want it. It's really easy to end up masturbating (or epiphany addiction-ing) in the search for the upper level tools, because we are almost blind. This suggests massive gains from something that's the equivalent of a sharpness meter.
4 · Bogdan Ionut Cirstea · 6d
Change my mind: outer alignment will likely be solved by default for LLMs. Brain-LM scaling laws (https://arxiv.org/abs/2305.11863 [https://t.co/1LxhzVWzz3]) + LM embeddings as a model of the shared linguistic space for transmitting thoughts during communication (https://www.biorxiv.org/content/10.1101/2023.06.27.546708v1.abstract) suggest outer alignment will be solved by default for LMs: we'll be able to 'transmit our thoughts', including alignment-relevant concepts (and they'll also be represented in a [partially overlapping] human-like way).
1 · Thoth Hermes · 6d
A short, simple thought experiment from "Thou Shalt Not Speak of Art [https://thothhermes.substack.com/p/thou-shalt-not-speak-of-art]":[1]
From my perspective: I chose the top one over the bottom one, because I consider it better. You, apparently, chose the one I consider worse.
Me ◯⟶◯ Good
You ◯⟶◯ Worse
From your perspective: Identical, but our positions are flipped. You becomes Me, and Me becomes You.
You ◯⟶◯ Worse
Me ◯⟶◯ Good
However, after we Ogdoad [https://thothhermes.substack.com/p/the-ogdoad]:
Me ◯⟶◯ Good
You ◯⟶◯ Good
It becomes clear that the situation is much more promising than we originally thought. We both, apparently, get what we wanted. Our Ogdoad merely resulted in us both being capable of seeing the situation from the other’s perspective. I see that you got to have your Good, you see that I get to have my Good. Boring and simple, right? It should be. Let’s make sure that any other way can only mess things up. Our intuitions say that we ought to simply allow ourselves to enjoy our choices and not to interfere with each other. Are our intuitions correct?
Me ◯⟶◯ Good
            ↘
You ◯⟶◯ Better
This is the perspective if I choose to see your perspective as superior to mine. If I consider yours authoritative, then I have made your choice out to be “better” than mine. Likewise, if you choose to do the same for me, you’ll see mine as better. The only situations that could result from this are:
1. We fight over your choice.
2. We share your choice, and I drop mine.
3. We swap choices, such that you have mine and I have yours.
All three easily seem much worse than if we simply decided to stay with our original choices. Number 1 and 2 result in one of us having an inferior choice, and number 3 results in both of us having our inferior choice. Apparently, neither of us has anything to gain from trying to see one another’s preferences as “superior.”
1. ^ "Speaking of art" is a phrase which refers not just to discussing one'

Wednesday, July 12th 2023

No posts for July 12th 2023
Shortform
4 · Sinclair Chen · 8d
I am a GOOD PERSON (Not in the EA sense. Probably closer to an egoist or objectivist or something like that. I did try to be vegan once and as a kid I used to dream of saving the world. I do try to orient my mostly fun-seeking life to produce big positive impact as a side effect, but mostly trying big hard things is cuz it makes me feel good.)
Anyways this isn't about moral philosophy. It's about claiming that I'm NOT A BAD PERSON, GENERALLY. I definitely ABIDE BY BROADLY AGREED SOCIAL NORMS in the rationalist community. Well, except when I have good reason to think that the norms are wrong, but even then I usually follow society's norms unless I believe those are wrong, in which case I do what I BELIEVE IS RIGHT:
I hold these moral truths to be evident: that all people, though not created equal, deserve a baseline level of respect and agency, and that that bar should be held high, that I should largely listen to what people want, and not impose on them what they do not want, especially when they feel strongly. That I should say true things and not false things, such that truth is created in people's heads, and though allowances are made for humor and irony, that I speak and think in a way reflective of reality and live in a way true to what I believe. That I should maximize my fun, aliveness, pleasure, and all which my body and mind find meaningful, and avoid sorrow and pain except when that makes me stronger. and that I should likewise maximize the joy of those I love, for my friends and community, for their joy is part of my joy and their sorrow is part of my sorrow. and that I will behave with honor towards strangers, in hopes that they will behave with honor towards me, such that the greater society is not diminished but that these webs of trust grow stronger and uplift everyone within.
Though I may falter in being a fun person, or a nice person, I strive strongly to not falter in being a good person.
This post is brought to you by: someone speculating
3 · lc · 8d
I feel like at least throughout the 2000s and early 2010s we all had a tacit, correct assumption that video games would continually get better - not just in terms of visuals but design and narrative. This seems no longer the case. It's true that we still get "great" games from time to time, but only games "great" by the standards of last year. It's hard to think of an actually boundary-pushing title that was released since 2018.
2 · DirectedEvolution · 7d
Does anybody know of research studying whether prediction markets/forecasting averages become more accurate if you exclude non-superforecaster predictions vs. including them? To be specific, say you run a forecasting tournament with 1,000 participants. After determining the Brier score of each participant, you compute what the Brier score would be for the average of the best 20 participants vs. the average of all 1000 participants. Which average would typically have a lower Brier score - the average of the best 20 participants' predictions, or the average of all 1000 participants' predictions?
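A toy simulation of the comparison being asked about, with entirely synthetic forecasters, just to make the procedure concrete: rank forecasters by Brier score on one batch of questions, then compare the Brier score of the top-20 average against the full-crowd average on a held-out batch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_questions = 1000, 200
truth = rng.integers(0, 2, size=n_questions)            # binary outcomes
skill = rng.uniform(0.05, 0.45, size=n_forecasters)     # per-forecaster noise level (synthetic)

# Forecast = truth blurred by individual noise, clipped to [0, 1].
noise = rng.normal(0, skill[:, None], size=(n_forecasters, n_questions))
forecasts = np.clip(truth + noise, 0, 1)

def brier(p, y):
    return np.mean((p - y) ** 2)

# Rank forecasters on the first half of questions, evaluate aggregates on the second half.
train, test = slice(0, 100), slice(100, 200)
individual_scores = [brier(forecasts[i, train], truth[train]) for i in range(n_forecasters)]
top20 = np.argsort(individual_scores)[:20]

print("top-20 average:    ", brier(forecasts[top20, test].mean(axis=0), truth[test]))
print("full-crowd average:", brier(forecasts[:, test].mean(axis=0), truth[test]))
```

Whether the top-k average actually wins presumably depends on how much real skill varies relative to noise, which is exactly the empirical question the comment is asking about.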
2 · ryan_b · 7d
Saw a YouTube video by a guy named Michael Penn about why there is no 3 dimensional equivalent of the complex numbers [https://www.youtube.com/watch?v=L-3AbJM-o0g]. It's going through an abridged version of the mathematical reasons and I was able to follow along until we got to a point where he showed that it would have to commute ix with xi, which contradicts an initial required claim that ix does not commute with xi. This is not satisfying to me intuitively. The thing that bothers me is that I can accept the argument that the definition is incoherent, but that doesn't show me why we can't get there using some different claim or procedure. Here's what I came up with instead:
When we build the complex numbers out of the reals, rather than extending the reals by one dimension, what we are really doing is extending the reals by the size of the reals. So rather than:
reals + one dimension => complex
We have:
reals + reals => complex
Carrying this forward, extending the complex numbers to the next level up is applying the same procedure again, so rather than:
complex + one dimension => ?
We have:
complex + complex => quaternions
So if we do the thing we did to construct the complex numbers from the reals over again, what we get is the quaternions, which have four dimensions rather than three.
I note that this intuition really answers the question of what you get when you extend the complex numbers, rather than the question of why you can't have something like the complex numbers with three dimensions. For that, I think of the previous problem in reverse: In order to build a three dimensional number system using anything like the same procedure, we would need to have something extended by its own size to get us there:
something + something => threenions
Since the dimension is supposed to be three, that means we need a base number system of dimension 1.5. That's a fraction; we might be able to do this, since fractional dimensions are how a fractal is made [https:
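For what it's worth, this doubling intuition matches the Cayley-Dickson construction, which builds each algebra out of pairs of elements of the previous one, so the dimension doubles at every step (the multiplication rule below is one common convention):

\[
(a, b)(c, d) = (ac - d^{*}b,\; da + bc^{*}),
\qquad
\mathbb{R}\,(\dim 1) \to \mathbb{C}\,(\dim 2) \to \mathbb{H}\,(\dim 4) \to \mathbb{O}\,(\dim 8), \quad \dim = 2^{n}.
\]

A 3-dimensional stage would need a base algebra of dimension 3/2, which no step of the construction produces.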
2 · DirectedEvolution · 8d
P(Fraud | Massive growth & Fast growth & Consistent growth & Credence Good)
Bernie Madoff's ponzi scheme hedge fund had almost $70 billion (?) in AUM at its peak. Not adjusting for interest, if it existed today, it would be about the 6th biggest hedge fund, roughly tied with Two Sigma Investments. Madoff's scheme lasted 17 years, and if it had existed today, it would be the youngest hedge fund on the list by 5 years. Most top-10 hedge funds were founded in the 70s or 80s and are therefore 30-45 years old.
Theranos was a $10 billion company at its peak, which would have made it about the 25th largest healthcare company if it existed today, not adjusting for interest. It achieved that valuation 10 years after it was founded, which a very cursory check suggests was decades younger than most other companies on the top-10 list.
FTX was valued at $32 billion and was the third-largest crypto exchange by volume at its peak, and was founded just two years before it collapsed. If it was a hedge fund, it would have been on the top-10 list. Its young age unfortunately doesn't help us much, since crypto is such a young technology, except in that a lot of people regard the crypto space as a whole as being rife with fraud.
Hedge funds and medical testing companies are credence goods - we have to trust that their products work. So we have a sensible suggestion of a pattern to watch out for with the most eye-popping frauds - massive, shockingly fast growth of a company supplying a credence good. The faster and bigger a credence-good company grows, and the more consistent the results or the absence of competition, the likelier the explanation is to be fraud.
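In odds form (my framing of the opening expression, not the comment's), the suggested pattern amounts to a likelihood-ratio update, where G is massive, fast, consistent growth and C is the product being a credence good:

\[
\frac{P(\text{Fraud}\mid G, C)}{P(\neg\text{Fraud}\mid G, C)}
= \frac{P(G \mid \text{Fraud}, C)}{P(G \mid \neg\text{Fraud}, C)}
\cdot \frac{P(\text{Fraud}\mid C)}{P(\neg\text{Fraud}\mid C)},
\]

with the Madoff/Theranos/FTX examples suggesting the first (likelihood-ratio) factor is large.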

Tuesday, July 11th 2023

No posts for July 11th 2023
Shortform
15 · Elizabeth · 8d
Are impact certificates/retroactive grants the solution to grantmaking corrupting epistemics? They're not viable for everyone, but for people like me who:
1. do a lot of small projects (which barely make sense to apply for grants for individually)
2. benefit from doing what draws their curiosity at the moment (so the delay between grant application and decision is costly)
3. take commitments extremely seriously (so listing a plan on a grant application is very constraining)
4. have enough runway that payment delays and uncertainty for any one project aren't a big deal
They seem pretty ideal. So why haven't I put more effort into getting retroactive funding? The retroactive sources tend to be crowdsourced. Crowdfunding is miserable in general, and leaves you open to getting very small amounts of money, which feels worse than none at all. Right now I can always preserve the illusion I would get more money, which seems stupid. In particular even if I could get more money for a past project by selling it better and doing some follow up, that time is almost certainly better spent elsewhere.
6 · TurnTrout · 8d
I'm currently excited about a "macro-interpretability" paradigm. To quote [https://www.lesswrong.com/posts/8mizBCm3dyc432nK8/residual-stream-norms-grow-exponentially-over-the-forward?commentId=52rhadfZgp9bvAiMj] Joseph Bloom:
5 · TurnTrout · 8d
Handling compute overhangs after a pause.
Sometimes people object that pausing AI progress for e.g. 10 years would lead to a "compute overhang": At the end of the 10 years, compute will be cheaper and larger than at present-day. Accordingly, once AI progress is unpaused, labs will cheaply train models which are far larger and smarter than before the pause. We will not have had time to adapt to models of intermediate size and intelligence. Some people believe this is good reason to not pause AI progress.
There seem to be a range of relatively simple policy approaches which mitigate the "compute overhang" problems. For example, instead of unpausing all progress all at once, start off with a conservative compute cap[1] on new training runs, and then slowly raise the cap over time.[2] We get the benefits of a pause and also avoid the problems presented by the overhang.
1. ^ EG "you can't use more compute than was used to train GPT-2." Conservatism helps account for algorithmic progress which people made in public or in private in the meantime.
2. ^ There are still real questions about "how do we set good compute cap schedules?", which I won't address here.
3 · Dalcy Bremin · 9d
Complaint with Pugh's real analysis textbook: He doesn't even define the limit of a function properly?! It's implicitly defined together with the definition of continuity, where ∀ε>0 ∃δ>0: |x−x₀|<δ ⟹ |f(x)−f(x₀)|<ε, but in Chapter 3 when defining differentiability he implicitly switches the condition to 0<|x−x₀|<δ without even mentioning it (nor the requirement that x₀ now needs to be an accumulation point!)
While Pugh has its own benefits, coming from Terry Tao's analysis textbook background, this is absurd! (though to be fair Terry Tao has the exact same issue in Book 2, where his definition of function continuity via limit in metric space precedes that of defining limit in general ... the only redeeming factor is that it's defined rigorously in Book 1, in the limited context of R)
*sigh* I guess we're still pretty far from reaching the Pareto Frontier of textbook quality, at least in real analysis.
... Speaking of Pareto Frontiers, would anyone say there is such a textbook that is close to that frontier, at least in a different subject? Would love to read one of those.
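For reference, the two standard conditions being conflated (standard definitions, not Pugh's exact wording): continuity of f at x₀ versus the limit of f at an accumulation point x₀ of the domain:

\[
\text{continuity: } \forall \varepsilon > 0\ \exists \delta > 0:\ |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon,
\]
\[
\text{limit } \lim_{x \to x_0} f(x) = L: \ \forall \varepsilon > 0\ \exists \delta > 0:\ 0 < |x - x_0| < \delta \implies |f(x) - L| < \varepsilon.
\]

The deleted-neighborhood condition 0 < |x − x₀| (together with x₀ being an accumulation point) is exactly what gets swapped in silently for differentiability.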
2 · Ulisse Mini · 8d
Quick thoughts on creating an anti-human chess engine [https://www.lesswrong.com/posts/odtMt7zbMuuyavaZB/when-do-brains-beat-brawn-in-chess-an-experiment?commentId=fSfhyhQ2itsjYBG8m].
1. Use maiachess [https://maiachess.com/] to get a probability distribution over opponent moves based on their ELO. For extra credit, fine-tune on that specific player's past games.
2. Compute expectiminimax [https://en.wikipedia.org/wiki/Expectiminimax] search over maia predictions. Bottom out with stockfish value when going deeper becomes impractical. (For MVP bottom out with stockfish after a couple ply, no need to be fancy.) Also note: We want to maximize P(win), not centipawn advantage. (Rough sketch below.)
3. For extra credit, tune hyperparameters via self-play against maia (simulated human). Use lichess players as a validation set.
4. ???
5. Profit.
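A rough sketch of step 2, using python-chess for board handling; maia_move_probs and stockfish_win_prob are hypothetical wrappers (not shown) around Maia and a Stockfish-style evaluator returning P(win) from our side's perspective.

```python
import chess  # pip install python-chess

def maia_move_probs(board: chess.Board, elo: int) -> dict:
    """Hypothetical wrapper: returns {chess.Move: probability} from Maia for a given opponent ELO."""
    raise NotImplementedError

def stockfish_win_prob(board: chess.Board) -> float:
    """Hypothetical wrapper: returns P(we win | position), e.g. from Stockfish WDL output."""
    raise NotImplementedError

def our_value(board: chess.Board, depth: int, elo: int) -> float:
    """Max node: we choose the move that maximizes expected P(win)."""
    if depth == 0 or board.is_game_over():
        return stockfish_win_prob(board)
    best = float("-inf")
    for move in list(board.legal_moves):
        board.push(move)
        best = max(best, opponent_value(board, depth - 1, elo))
        board.pop()
    return best

def opponent_value(board: chess.Board, depth: int, elo: int) -> float:
    """Chance node: average over the Maia-predicted distribution of human replies."""
    if depth == 0 or board.is_game_over():
        return stockfish_win_prob(board)
    total = 0.0
    for move, prob in maia_move_probs(board, elo).items():
        board.push(move)
        total += prob * our_value(board, depth - 1, elo)
        board.pop()
    return total

def pick_move(board: chess.Board, depth: int = 4, elo: int = 1500) -> chess.Move:
    """Return the move with the highest expected win probability against the modeled human."""
    def value_after(move):
        board.push(move)
        v = opponent_value(board, depth - 1, elo)
        board.pop()
        return v
    return max(list(board.legal_moves), key=value_after)
```

Treating the human as a chance node (rather than a minimizing player) is what makes this expectiminimax instead of plain minimax, which is the point of step 2.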
