All of __nobody's Comments + Replies

  1. In case people want to get to know each other better outside the meetup, you might want to mention Reciprocity, the rationalist friend-finder/dating site.

Unfortunately that requires Facebook =/ and most of my friends avoid / don't have Facebook for privacy reasons.


Thanks for the lead on a non-Facebook option! Reciprocity looked very exciting for a moment before I, too, realized that it was FB-only, and I have a nasty habit of wanting privacy more than I want friends when that dichotomy is invoked.

Technology Connections viewers already know this somewhat related bit: Consider switching to loose powder instead of tabs, or having both. The dishwasher runs three cleaning cycles (pre-wash, main, rinse), and the tab only goes in for the second phase. The first phase tries to get all the food and grease off using just water… which isn't ideal. Adding like 1/2 a teaspoon of the loose powder directly onto the door / into the tub at the bottom will greatly support the pre-wash phase and should deal with most things.

Since I started doing that, I don't bother ... (read more)

Also, for people in the United States, consider running the hot water from a nearby faucet until the hot water is hot. Then, turn off the faucet and turn on the dishwasher. As always, check your dishwasher's manual for specific recommendations.

The way I approach situations like that is to write code in Lua and only push stuff that really has to be fast down to C. (Even C+liblua / using a Lua state just as a calling convention is IMHO often nicer than "plain" C. I can't claim the same for Python...) End result is that most of the code is readable, and usually (i.e. unless I stopped keeping them in sync) the "fast" functions still have a Lua version that permits differential testing.

Fundamentally agree with the C not C++/Rust/... theme though, C is great for this because it doesn't have tons of ch... (read more)
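The differential-testing idea above can be sketched in a few lines (Python here for brevity; the original setup keeps a Lua version next to each optimized C function — the names and example function below are purely illustrative): an obviously-correct reference implementation is checked against the optimized one on random inputs.

```python
import random

def popcount_ref(x):
    # Slow but obviously correct: count "1" characters in the binary string.
    return bin(x).count("1")

def popcount_fast(x):
    # Optimized version we'd actually ship: clear the lowest set bit per step.
    c = 0
    while x:
        x &= x - 1
        c += 1
    return c

# Differential testing: both versions must agree on random inputs.
for _ in range(10_000):
    v = random.getrandbits(64)
    assert popcount_ref(v) == popcount_fast(v)
```

The same pattern works across the Lua/C boundary: feed both versions the same inputs and flag any divergence, which catches bugs in whichever side drifted.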

Sabine Hossenfelder's assessment (quickly) summarized (and possibly somewhat distorted by that):

  • Uranium 235 is currently used at about 60K tons per year. World reserves are estimated to be 8M tons. Increasing the number of NPPs of current designs by a factor of ~10 means it's about 15-20 years until it'd no longer be economically viable to mine U235. Combined with the time scales & costs of building & mothballing NPPs, that's pretty useless. So while some new constructions might make sense, it's not good as a central pillar of a strategy.
  • Due to the
... (read more)
That does not count uranium in seawater. While we currently can't extract uranium from seawater at the same cost as mining it elsewhere, extraction from seawater is possible; current estimates suggest that uranium from seawater is about six times as expensive as from other sources. The price of uranium is not very important for the price of nuclear energy, so paying six times as much for it wouldn't be a problem.
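A quick back-of-the-envelope check of that claim (the fuel-cost share below is an illustrative assumption, not a sourced figure):

```python
# ASSUMPTION for illustration: raw uranium is ~5% of the cost of nuclear
# electricity. The real share is debated; the effect scales proportionally.
uranium_share = 0.05
price_multiplier = 6  # seawater uranium at ~6x the conventional price

# Added cost as a fraction of the total electricity price.
extra_cost = uranium_share * (price_multiplier - 1)
print(f"Electricity cost increase: {extra_cost:.0%}")  # → 25%
```

With a smaller assumed share (say 2%), the increase drops to ~10%, which is why the raw uranium price matters much less than construction and financing costs.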

This seems to be another case of "reverse advice" for me. I seem to be too formal instead of too lax with these spatial metaphors. I immediately read the birds example as talking about the relative positions and distances along branches of the Phylogenetic tree, your orthogonality description as referring to actual logical independence / verifiable orthogonality, and it's my job to notice hidden interaction and stuff like weird machines and so I'm usually also very aware of that, just by habits kicking in.

Your post made me realize that instead of people's ... (read more)

Main constraint you're not modeling is how increasing margin size increases total pages and thus cost.

That's why I'm saying it probably won't need that for the footers. There's ~10mm between running footer and text block, if that's reduced to ~8 or 9mm and those 1-2mm go below the footer instead, that's still plenty of space to clearly separate the two, while greatly reducing the "falling off the page" feeling. (And the colored bars that mark chapters are fine, no need to touch those.)

Ben Pace (2y):
I see, I misread, yup that makes sense.

Design feedback: Alignment is hard, even when it's just printing. Consider bumping up the running footer by 1-2mm next time, it ended up uncomfortably close to the bottom edge at times. (Also the chapter end note / references pages were a mess.) More details:

variance: For reference, in the books that I have, the width of the colored bars along the page edge at each chapter (they're easy to measure) varies between ~4.25mm and ~0.75mm, and sometimes there's a ~2mm width difference between top and bottom. (No complaints here. The thin / rotated ones look a bi... (read more)

Ben Pace (2y):
Thanks for the feedback! Agree with your overall paragraph. Main constraint you're not modeling is how increasing margin size increases total pages and thus cost. Seems plausible I should cut one or two essays to accommodate, but I do love all the essays, and actually the real answer is just that the paragraph spacing is way too big. I didn't notice that about the footers, I will go back and take a look. The end notes were time-crunched for a lot of reasons, I wish they had been better.

Sounds great so far, some questions:

  • How does travel work? Do you get to Prague on your own and then there's organized transport for the last leg, or do you have to do the whole journey yourself? (I don't drive / only use public transport. Car-less travel to "a village about 90 mins [by car?] from Prague" could be anywhere between slightly annoying and near-impossible.)
  • How does accommodation & food work?

And (different category)

  • Are some of you at LWCW to chat in person?
Hi, I'm running these workshops with John, so I can provide more information:

  • We will arrange transportation to the venue from Prague, but you can also get there yourself using public transportation. It's quite straightforward.
  • The workshop is at a venue that provides accommodation and food for the participants. Confirmed participants will receive more detailed information about logistics later.
  • It is possible that some of the instructors will be at LWCW, but many are flying in later and some will be in Prague preparing for the workshops.

I also expect that by late August we will have filled most of the spots for CFAR I and CFAR II. There may still be room in CFAR III (and CFAR IV if confirmed). So if you have questions about applying, feel free to message us so we can talk earlier.
Not an organizer, but when they refer to a village "90 mins from Prague" I'd assume they mean by public transport, since it is quite good in the Prague area.

Re solanine poisoning, just based on what's written in Wikipedia:

Solanine Poisoning / Symptoms

[...] One study suggests that doses of 2 to 5 mg/kg of body weight can cause toxic symptoms, and doses of 3 to 6 mg/kg of body weight can be fatal.[5][...]

Safety / Suggested limits on consumption of solanine

The average consumption of potatoes in the U.S. is estimated to be about 167 g of potatoes per day per person.[11] There is variation in glycoalkaloid levels in different types of potatoes, but potato farmers aim to keep solanine levels below 0.2 mg/g.[18] Sign

... (read more)
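Putting the quoted Wikipedia numbers together (a rough estimate; potatoes and body weights vary, and the 70 kg weight is an illustrative assumption):

```python
# All other inputs are the figures quoted above from Wikipedia.
daily_potatoes_g = 167          # average US consumption, g/day
solanine_limit_mg_per_g = 0.2   # farmers aim to keep solanine below this
toxic_dose_mg_per_kg = 2        # low end of the "toxic symptoms" range
body_weight_kg = 70             # ASSUMED adult weight, for illustration

daily_intake_mg = daily_potatoes_g * solanine_limit_mg_per_g  # ≈ 33.4 mg/day
toxic_threshold_mg = toxic_dose_mg_per_kg * body_weight_kg    # 140 mg
safety_factor = toxic_threshold_mg / daily_intake_mg
print(f"{daily_intake_mg:.1f} mg/day vs. {toxic_threshold_mg} mg threshold "
      f"(~{safety_factor:.0f}x headroom)")
```

So an average eater at the farming limit sits roughly 4x below the low end of the toxic range per day, which is comfortable but not an enormous margin if someone eats far more potatoes than average.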
Épiphanie Gédéon (2y):
The more I think about it, the more I wonder if boiling the potatoes infused them with the peels and increased significantly the quantity of solanine I was consuming. An obvious confounder is that whole-boiled potatoes are less fun to eat than in more varied forms, so it doesn't discriminate with the "fun food" theory Thanks a lot for the estimate, I'll look into recent studies of this to see what I find!

My gut feeling (no pun intended) says the mythical "super-donor" is a very good excuse to keep looking / trying without having to present better results, and may never be found. Doing the search directly in the "microbiome composition space" instead of doing it on people (thereby indirectly sampling the space) feels way more efficient, assuming it is tractable at all.

If some people are already looking into synthesis, is there anything happening in the direction of "extrapolating" towards better samples? (I.e. take several good-but-not-great donors that fal... (read more)

Michael Harrop (2y):
I think your comment ignores the plethora of evidence supporting donor-quality hypotheses. Much of it was presented in the OP, and covered the permanent extinction of our host-native microbiomes, along with the exponential rise in chronic disease.

Your suggestion seems to be to “try to find a plethora of plant and wildlife species in a forest that has been burned to the ground”. Whether you can piece it back together is unknown, but I don’t think that’s the best approach to take right now.

Also, one of the major problems is that most people are not even bothering to look for high quality donors, and expecting FMT to get great results with low quality donors. The gut microbiome is incredibly complex, and we are so far from understanding it well enough [1] to be able to replace whole stool with synthetic FMT. Though I’m not discouraging people from trying it, and making headway there.

I would recommend anything by Martin Blaser. I also have a wiki section here on the permanent damage from antibiotics, that extends even beyond their killing of microbes. There is a tremendous amount of antibiotic overuse/abuse in the medical system. The current guidelines are likely far too generous in promoting their use, and there's even 30%+ overuse according to current guidelines.

I had an extremely depressing related event recently. I had a donor applicant that was seemingly perfect in every way. Their physical condition and ability were amazing/perfect. Their mental condition seemed fantastic as well. But someone gave them frequent amounts of antibiotics over their lifetime, which was almost certainly unnecessary. And now they're suffering the consequences of it (in seemingly-subtle ways).

There are research groups that have largely given up on finding ideal microbiomes in modern society, and have thus resorted to visiting remote tribes, such as the Hadza [1][2].

I know from experience (unfortunately only t
Anton Rodenhauser (2y):
Sure, "the reason it doesn't work better is because we need better donors" sounds like a nice excuse. But it is at least suggestive that this is indeed the case. The better the donor criteria, the better the study outcomes. If we extrapolate this to even higher criteria...   Btw. poor donors are not the only (avoidable!) reason FMTs often show poor results. See the post.
Anton Rodenhauser (2y):
My understanding is that as of now we know waaay too little about the gut microbiome to make this "direct search in microbiome composition" viable. For example, we basically have no clue about bacteriophages in the gut. Yet they probably play an important role in gut health, and in the efficacy of FMTs.  Also, even if we knew exactly what composition we wanted, we aren't very good yet to "synthesize it"/grow it in the lab.

I have something in the pipeline, but it'll take a while... if it's trying to be "actually" alien, it's kinda important that it's internally consistent. "Add some arbitrary bytes to [...represent] metadata" is exactly what you don't want to do. Because if you do, sure, it'll be hard, it'll (probably) be eventually solvable, but it'll be... somewhat dissatisfying. Same for using stuff like NTSC, it's just... why would they come up with exactly that? It just doesn't make any sense!

So, in case anyone else wants to also make a good challenge in this style, her... (read more)

I can't recall specific names / specific treatments of this, but I'm also relatively sure that it's kinda familiar, so I suspect that something exists there. Problem is that, in some sense, it falls right between areas.

On the practical side, people don't really care where the problem originates, they just want to fix it. (And people are swimming in abstraction layers and navigate them intuitively, so a "this is one layer up" doesn't really stand out, as it's still the bottom layer of some other abstraction.) So from the perspective of computer security, it... (read more)

Some very quick notes, don't have time for more:

  • It looks to me as if you fail to separate formal models from concrete implementations. Most exploitability results from mismatches between the two. As a concrete example, the PDF spec says (said?) that a PDF starts with a "this is a PDF" comment, e.g. %PDF-1.4. (Right at the start, with nothing before it.) In practice, most readers are happy if that signature sits anywhere in the first 4KiB or something like that. The end result is benign things like polyglot files on the one end – look for Ange Albertini and

... (read more)
Thanks for the insightful comment! I’ll look through the links you provided, but I think we’re in agreement here (though you put it far more succinctly); formal models are not exploitable in the typical sense of the word. That’s why I’m interested in what I’m tentatively calling “Brickability” (though my hope is this concept isn’t new and it already has a name I could google)—a binary string which renders all further input to be irrelevant to the output (for a given Turing machine). For now I’m not really worried about concrete implementations, since I’m not fully sure my formal model is even consistent or non-trivial yet.

On the topic of grayscale: Love it, strongly recommend trying to everyone too.

What I'd really like to see is a way to selectively grayscale applications, or exclude some from the grayscale filter. (So e.g. if I'm running Gimp/PS/..., keep just that one non-grayscale while still desaturating everything else.) If anyone knows anything, all pointers are strongly appreciated!

(I think it ought to be possible with the X server permissions model on Linux, but I'm not sure if a program could even have the permissions to do this on Windows.)

I've done both – asparagus and lettuce – and it works. (Especially for the more bitter kinds of leafy greens, it somewhat reduces bitterness. It can also soften the stems / bottom bits and make them more usable. So e.g. finely chopping the stem and sauteing with some onion and mushrooms can be a good way to use what you'd otherwise discard.)

There's even salad-based soups (both with something like e.g. romaine added just before the end, and also with e. g. iceberg shredded, cooked, and blended), and while it may seem strange initially and mess with your exp... (read more)

Typo: "Prediction markets require liquidity. Suppose you seeded your prediction market with $10,000 of liquidity such that an investor can invest $10,000 into the market without noticeably moving the prices."

That number is wrong, and I'm not entirely sure which one was intended.

The number shouldn't be there at all. I have removed it. Thanks.

They're actually quite different from how our computers work (just on the surface already, the program is unmodifiable and separate from the data, and the whole walking around is also a very important difference[1]), but I agree that they feel "electronics-y" / like a big ball of wires.

Don't forget that most of the complexity theoretic definitions are based on Turing machines, because that happened to become the dominant model for the study of these kinds of things. Similar-but-slightly-different constructions would perhaps be more "natural" on lambda calc... (read more)

These are good points. From my understanding of how processors work, it seems one would get most of the benefits you mention by having addresses/absolute locations (and thus being able to use pointers [by using addresses instead of left/right operations]). Does that ring true to you?

I'll focus on the gears-level elaboration of why all those computational mechanisms are equivalent. In short: If you want to actually get anything done, Turing machines suck! They're painful to use, and that makes it hard to get insight by experimenting with them. Lambda calculus / combinator calculi are way better for actually seeing why this general equivalence is probably true, and then afterwards you can link that back to TMs. (But, if at all possible, don't start with them!)

Sure, in some sense Turing machines are "natural" or "obvious": If you start f... (read more)
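To make the "combinators are easy to play with" point concrete, here's a tiny sketch (Python closures for accessibility; any language with first-class functions works): the S and K combinators alone suffice for the basis, and the identity function falls out as S K K.

```python
# The two combinators of the SK basis, written as curried closures.
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)
K = lambda x: lambda y: x                     # K x y = x

# S K K reduces to the identity: S K K x = K x (K x) = x
I = S(K)(K)
print(I(42))        # → 42
print(I("hello"))   # → hello
```

Being able to run reductions like this interactively is exactly what makes the equivalence arguments easier to internalize than stepping a Turing machine by hand.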

Thanks for your reply! I had never heard of SKI-Calculus! I feel like one advantage Turing machines have, is that they are in some sense low level (actually quite close to how we build our computers)? They do seem to have a nice abstraction for memory. I would have no idea how to characterize NSPACE with lambda calculus, for example (though of course googling reveals someone seems to have figured it out).

Update: Amazon Germany now also has the books listed, for €36 (which is fine.) Since I haven't received the "Notify me when the UK books are available" mail yet, I assume this is further downstream propagation from the Amazon US listing.

If that is accurate, then there should be no need at all to manually ship books to other regions?! I guess that's very good news for future books!

Thanks for writing this; I didn't even think to check Amazon Germany since it wasn't listed here. That said, a quick search for Engines of Cognition on some websites that purportedly compare product prices across all Amazon EU stores listed prices only for UK and DE. Since the product is imported from the US in either case, and the non-US store pages were not created manually, I don't really understand why it wouldn't be offered on all Amazon storefronts.

EDIT: Apparently the product is available on Amazon DE (though probably only within Germany) with "free worldwide shipping" from the US: the checkout page lists a shipping fee of 11.14€, which it then discounts to zero. This is the same procedure they use for nationwide free shipping, which is nominally priced at 3~4€ IIRC.
Ben Pace (2y):
Indeed the books are not yet in-stock in Amazon UK.

Defaults matter: Opt-in may be better than opt-out.

For opt-out, you only know that people who disabled it care enough about not wanting it to explicitly disable it. If it's enabled, that could be either because they're interested or because they don't care at all.

For opt-in, you know that they explicitly expended a tiny bit of effort to manually enable it. And those who don't care sit in the group with those who don't want it. That means it's much more likely that your feedback is actually appreciated and not wasted. Additionally, comments with extended vo... (read more)

"Rationed breaks" could also work and is a bit "rounder". It's less mathematical, but the "ratio" root is still there, plus a hint of scarcity / frugality due to "rationing". Also "to ration one's time" is (I think? - non-native speaker here) a moderately common phrase?

Thanks - yes that's a phrase, and indeed, 'rationed breaks' is already on my list of maybes.

As a one-off, sure. Long term, it may be. I'm currently restructuring my todo list(s) to tag stuff by brain state. (Most of it requires considerable brain capacity, so if I'm exhausted/tired, I tend to scroll Discord or watch Twitch because "I can't do anything in this state anyway", which is neither productive nor particularly relaxing.)

Lots of things like watering plants, cleaning the bathroom walls, throwing some cleaner into the sinks / tub / ..., taking out the trash, properly archiving last quarter's stack of records, making backups, etc. are all ~ze... (read more)

I largely agree with this. Multi-axis voting is probably more annoying than useful for the regulars who have a good model of what is considered "good style" in this place. However, I think it'd be great for newbies. It's rare that your comment is so bad (or good) that someone bothers to reply, so mostly you get no votes at all or occasional down votes, plus the rare comment that gets lots of upvotes. Learning from so little feedback is hard, and this system has the potential to get you much more information.

So I'd suggest yet another mode of use for this: ... (read more)

I broadly agree, but I'd say I consider myself a regular (have been active for nearly 2 years, have deeper involvement with the community beyond LW, have a good bit of total karma), and I still expect this to provide me with useful information.

Berlin's numbers show about 20% Omicron for last week, and about 3% for all of Germany. So at least in Berlin, it's already there (and numbers should be >50% Omicron by New Year's Eve.)

In Hamburg, the numbers are also high, the same as in London, New York, and other dense, well-connected traffic hubs. But even in Hamburg, Omicron hasn't taken over yet, though it can't take much longer.

I just noticed (again) that link previews on (only?) old posts are sometimes broken, e.g. when I open this in a new window/tab. At first I suspected it's something about the old link formats, but more testing left me more and more confused, and now it seems more and more like maybe that is relevant after all?

  1. Opening a link of the form /s/.../p/... in a new window/tab (i.e. not in an existing LW context) always(? - ~10 tests done) breaks the previews on links in the text, and they don't get the ° decoration. (Tags, pingbacks etc. still work.) The same seems t
... (read more)
My comment here links to one post and two sequences, and only the post got a preview. But I don't know if the sequence summary pages ever got previews.

It says the UK Amazon doesn't ship to Germany [at least for the auto-generated listing], and from the US it'd be ~$45 incl. shipping + taxes... =/

And since it's above the magic number of 1kg (around 1.2kg), even a bulk order with local distribution would have to add about €5 for the last leg of shipping, which (adding packaging etc.) makes that just not worth it.

Since Amazon UK is happy to ship other books, I subscribed to the UK availability notification -- maybe it'll work once it's "really" there. I'll update this once the notification comes and I have ... (read more)

A related observation that might help some: I'm fairly nocturnal because I can work better at night. (Less noise, less light, no interruptions from others, etc.) My default strategy to achieve that was to stay up very late and sleep until the early afternoon.

But at some point I noticed that getting up really early (like 1-3am) also gets you the time at night to work, except now you're going to bed around 6pm instead of staying up until 6am. Both work, with different tradeoffs. (And different friend groups being accessible at different times.)

I know now that I'm not forced to stick to the "staying up late" schedule to get the effect that I want.

Well if we've fallen to the level of influencing other people's votes by directly stating what the votes ought to say (ugh =/), then let me argue the opposite: This post – at least in its current state – should not have a positive rating.

I agree that the topic is interesting and important, but – as written – this could well be an example of what an AI with a twisted/incomplete understanding of suffering, entropy, and a bunch of other things has come up with. The text conjures several hells, both explicitly (Billions of years of suffering are the right choi... (read more)

Your writing feels comically-disturbingly wrong to me, I think the most likely cause is that your model of "suffering" is very different from mine. It's possible that you "went off to infinity" in some direction that I can't follow, and over there the landscape really does look like that, but from where I am it just looks like you have very little experience with serious suffering and ignore a whole lot of what looks to me to be essential complexity.

When you say that all types of suffering can be eliminated / reversed, this feels wrong because people chang... (read more)

The latter. If you have 8 or 16 cores, it'd be really sad if only one thing was happening at a time.

Isn't this false nowadays, when everyone has multi-core CPUs?

Nope, still applies. Even if you have more cores than running threads (remember programs are multi-threaded nowadays) and your OS could just hand one or more cores over indefinitely, it'll generally still do a regular context switch to the OS and back several times per second.

And another thing that's not worth its own comment but puts some numbers on the fuzzy "rapidly" from the article:

It's just that [the process switching] happens rapidly.

For Windows, that's traditionally 100 Hz, i.e. ... (read more)
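You can put rough numbers on this yourself with a small sketch (Python; timer resolution and scheduler behavior vary by OS, so treat the result as an order-of-magnitude reading): timestamp a busy loop and look at the largest gap between consecutive iterations. A single loop iteration takes well under a microsecond, so millisecond-scale gaps are moments where the OS preempted us and ran something else.

```python
import time

def largest_gap_ms(duration_s=0.5):
    """Busy-loop for duration_s, timestamping every iteration, and return
    the largest gap (in ms) between consecutive timestamps."""
    deadline = time.perf_counter() + duration_s
    last = time.perf_counter()
    worst = 0.0
    while True:
        now = time.perf_counter()
        if now > deadline:
            break
        gap = now - last
        if gap > worst:
            worst = gap
        last = now
    return worst * 1000.0

print(f"largest scheduling gap: {largest_gap_ms():.3f} ms")
```

On an otherwise idle machine the gaps tend to cluster near the scheduler tick interval; under load they grow, which is the context-switching cost made visible.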

There are no processes that can run independently on every time scale.  There will be many clock cycles where every core is processing, and many where some cores are waiting on some shared resource to be available.  Likewise if you look at parallelization via distinct hosts - they're CLEARLY parallel, but only until they need data or instructions from outside. The question for this post is "how much loss is there to wait times (both context switches and i/o waits), compared with some other way of organizing"?  Primarily, the optimizations that are possible are around ensuring that the units of work are the right size to minimize the overhead of unnecessary synchronization points or wait times.
Rafael Harth (2y):
Interesting. But does this mean "no two tasks are ever executed truly parallel-y" or just "we have true parallel execution but nonetheless have frequent context switches?"

I'd adjust the "breadth over depth" maxim in one particular way: Pick one (maybe two or three, but few) small-ish sub-fields / topics to go through in depth, taking them to an extreme. Past a certain point, something funny tends to happen, where what's normally perceived as boundaries starts to warp and the whole space suddenly looks completely different.

When doing this, the goal is to observe that "funny shift" and the "shape" of that change as well as you can, to identify its signs and get as good a feeling for it as you can. I believe that being a... (read more)

Other failure modes could be to fail to have properties of probabilty distributions. Negative numbers, imaginary amounts? Not an unknown probability distribuiton because its not a probabilty distribution[...]

Not every probability distribution has to result in real numbers. A distribution that gets me complex numbers or letters from the set { A, B, C } is still a distribution. And while some things may be vastly easier to describe when using quasiprobability distributions (involving "negative probabilities"), that is a choice of the specific approach to ... (read more)

I mean negative or imaginary probabilities. Quasiprobability distributions fail to be probability distributions. If I have a "random apple" and somebody asks what proportion of it might be "pear", then that will be 0, as pears are not apples. If I meant to ask about a "random fruit", then pears would be relevant. While you get some analysis, and there is hope to get to a probability-distribution analysis, you would need an entirely dependable way to produce the reformulation. Just because some vehicles are amphibious doesn't mean you can take a boat and drive it on land (because some boats are also cars).

{0,1,2,3,4,5,6...|} = omega is an exact surreal number, and 1/omega = epsilon is an exact surreal number. Yes, the base approach is to make your terms clear, and if ambiguity remains in the core part of the question, it is going to critically confuse you. I didn't provide enough clues to pin down what I was talking about. It is kind of telling that the "default frame" will push into all spaces not specifically specified to be against it, even if that means pushing a square peg through a round hole.

One could easily think that such an "easy" construction, "uniform between 0 and 1", seems easy to understand. I am trying to highlight a situation where the thing is so basic that it seems reasonable to transcend particular formalizations. "Getting a probability" and "throwing it into the reals" can be slightly different operations, when you would need to throw it into something other than the reals to make your calculation work. Here, specifically, you can dance around it if you cast small against small into real numbers, or big against big into real numbers. But when you would need to respect that the things compared belong to different archimedean fields, things break down. When casting into a single archimedean field, everything that fails to be a finite-length line will get rounded to the nearest real with precision 0, and then all zeroes are equal, failing to distinguish single points from infinitely sh

If I really wanted to, I could probably force myself to eat a pack of dates for about 2-3 days before having enough of them.

Actually, I tried that too now. 8 was more than enough, don't really want to eat more. (Wolfram estimates a single dried date to weigh about 16 g and contain roughly 10 g sugar.) So if that's right, this was about 80g of sugar. That's less than half of what I estimated. (Even adding the (tea)spoon of sugar from before as 1-2 extra dates doesn't make much of a difference.)

Approximate amount: 50-60g maybe? I like to add juuust a little to tea and other drinks, about 1-2g / 100ml. Completely unsweetened (and I count nut milks as sweetener too) irritates my stomach for some reason. (And plain unflavored water causes nausea, so teas it is.) There's also often a teaspoon or two in some meals to balance acidity or bring out spices. Rarely some chocolate (80-99% cocoa) or a slice of home-baked cake. (I tend to halve the amount of sugar in recipes.) Fruits (fresh or dried) also contain non-negligible amounts.

How I feel about sugar:... (read more)

I plan to stick to Hy, but I'll make the versioning clearer in the future.

If there's two weeks, that should leave enough time for making & checking alternate implementations, as well as clarifying any unclear parts. (I never fully understood the details of the selection algorithm (and it seems there were bugs in it until quite late), but given a week for focusing just on that, I hope that should work out alright.)

I'm optimizing for features, not speed.

No complaints here, that's the only sane approach for research and other software like this.


... (read more)
I love this forum/community so much.

Feedback on the game so far:

Genome Format: Even though I'm a long time programmer, I vastly preferred this year's version where no one (except you) had to write any code. This was awesome!

Implementation/Spec: I would have preferred a clear spec accompanied by a reference implementation. Hy may be fun to use, but it's incredibly slow. (Also the version differences causing various problems was no fun at all.)
The only big thing to watch out for is to not use the built-in RNG of whatever language you'll end up using, but instead a relatively simple to impleme... (read more)
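For reference, the PCG32 generator mentioned here really is small. A sketch in Python (ported from the public pcg-basic reference code; Python's unbounded ints mean fixed-width arithmetic has to be emulated with masks):

```python
class PCG32:
    """Minimal PCG32 (XSH-RR variant): 64-bit state, 32-bit outputs."""
    MULT = 6364136223846793005
    MASK64 = (1 << 64) - 1

    def __init__(self, seed, seq=0):
        self.state = 0
        self.inc = ((seq << 1) | 1) & self.MASK64  # increment must be odd
        self.next()
        self.state = (self.state + seed) & self.MASK64
        self.next()

    def next(self):
        """Advance the state and return the next 32-bit output."""
        old = self.state
        self.state = (old * self.MULT + self.inc) & self.MASK64
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF
```

Two instances seeded identically produce identical streams regardless of host language, which is the property that matters for reproducible simulations.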

Lots of great feedback! Here are my tentative thoughts. Please don't construe them as promises.

Gene Format: The genome format is easier for me too. This sounds like the genome format is just the right way to go in the future.

Implementation/Spec: I plan to stick to Hy, but I'll make the versioning clearer in the future. I could have done a better job with setup instructions overall. I think the slowness came from implementation choices. I made an extremely inefficient simulation in order to support forward compatibility with some features I never got around to implementing. I'm optimizing for features, not speed.

Using PCG sounds like an easy change. If that was all there was to it, then I wouldn't mind using it for the random numbers. However, there are other bits of the code too (like random selection from sets) which might vary from one implementation to another. Making it possible for everyone's simulator to behave exactly the same is a nice-to-have feature that I'm probably not going to implement. I'd rather put that time into creating more adaptations players get the choice of using.

Time: I can avoid the last/first week of a quarter year. That's no problem.

Duration: Extending from one week to two weeks is no problem either. Using the first week for a spec and a sample environment and the second week for the actual parameters sounds excellent. Leaving an explicit 48-hour window for changes sounds good too.

Submission: It's easy for me to accept CSV files. I think the best way to help everyone is to just accept both CSV files and Google Forms.

In the Markdown editor, surround your text with `:::spoiler` at the beginning, and `:::` at the end.

This (or at least my interpretation of it) seems to not work.

I read it as: anywhere inline (i.e. surrounded by other text), put `:::spoiler` (without the backticks), followed by the text to be spoilered, followed by `:::` (no space required, and again without the backticks).

That ended up producing the unspoilered text surrounded by the `:::spoiler ... :::` construction, making me slightly sad. Here is a :::spoiler not really ::: spoilered example of the failure.
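(For reference: if the editor follows the common markdown-it "container" convention — which is my guess, not something I've confirmed for this site — the markers are block-level, each on its own line, which would explain why the inline attempt above renders as plain text:)

```
:::spoiler
This text would be hidden until clicked.
:::
```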

It... (read more)

Test spoiler:

I didn't trust myself to reimplement the simulator - any subtle change would likely have invalidated all results. So simulations were really slow... I still somehow went through about 0.1% of the search space (25K of about 27M possible different species), and I hope it was the better part of the space / largely excluding "obviously" bad ideas. (Carefully tweaking random generation to bias it towards preferring saner choices, while never making weird things too unlikely.) Of course, the pairings matter a lot so I'm not at all certain that I didn't accidental... (read more)

There's definitely a rock-paper-scissors dynamic where smaller herbivores are more efficient than larger ones while being less resilient to predators. There could also be a strong random element if the total number of species is much higher than what the food sources can support and the RNG has to decide who starves first (not to mention the people who submit identical species).

Seconding this, does 'by Sep 30th' mean start or end of the day? I'm currently assuming 'end of', in some unspecified time zone.

My computer's still crunching numbers and I'm about to head to bed… would be sad to miss the deadline.

End of September 30th, Pacific Time.

I've got it mostly working now... problem is that the default plot size is unusable, the legend overlaps, etc. etc. -- when run interactively, you can just resize the window and it'll redraw, and then you save it once you're happy. So now I'm also setting plot size, font size, legend position, and then it's "just" a (plt.savefig "plot.png") added before the (
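For anyone else stuck on a headless box, here's a minimal sketch of the non-interactive matplotlib setup I mean (the Agg backend plus explicit figure/legend settings; the specific sizes and positions here are illustrative guesses, not the values the simulator's plots actually need):

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # headless backend: render to files, never open a window
import matplotlib.pyplot as plt

def save_plot(xs, ys, path):
    # Since there's no window to resize interactively, set the figure size,
    # legend placement, and font size up front, then save directly.
    fig, ax = plt.subplots(figsize=(12, 8), dpi=100)
    ax.plot(xs, ys, label="population")
    ax.legend(loc="upper left", fontsize=8)
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)

path = os.path.join(tempfile.gettempdir(), "plot.png")
save_plot([0, 1, 2, 3], [10, 20, 15, 30], path)
```

The key detail is calling `matplotlib.use("Agg")` before importing `pyplot`; after that, `savefig` replaces the interactive resize-and-save workflow entirely.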

I might also add a more easily parseable log output, but for now tee and a small Lua script mangling it into CSV is enough.
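The log-mangling step is nothing fancy; a Python equivalent of that kind of script might look like this (the log line format here is entirely made up for illustration — the simulator's real output differs):

```python
import csv
import io
import re

# Hypothetical log format, e.g. "day 3: herbivore-1 population 120".
# The real simulator's output looks different; adjust the regex to match it.
LINE_RE = re.compile(r"day (\d+): (\S+) population (\d+)")

def log_to_csv(log_text):
    """Pull matching lines out of raw log output and emit CSV rows."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["day", "species", "population"])
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            writer.writerow(m.groups())  # non-matching lines are skipped
    return out.getvalue()
```

Paired with `tee`, this turns a scrolling wall of text into something a plotting script can consume directly.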

I'll probably clean up all of that in a couple of hours and send ano... (read more)

Another question: where does the script save the data / graphs when running it? Or does it do that at all?

It looks like it might try to open a plot window, but I'm running it on a headless server... so nothing will happen. Is the (hard-to-parse) text scrolling by all that I'll get at the end of a run?

It doesn't. The project should have a persistent memoization system, but I didn't implement one. The way to save the data is to pipe your output into a file. The way I save the graphs is to click "Save the figure" in the plot window. Yup. If anyone wants to improve the system I am happy to accept merge requests.
On my Windows machine it opens a plot window that has UI to save the image.


Turns out it needs a newer Hy; then it works. (And in case anyone else has a similar problem and is also a Python noob: the package providing pip is probably called python3-pip or something like that. After that, the rest is explained either in the article or by pip itself.)

That just gets me an even longer error message:

Python 3.7.3 (default, Jan 22 2021, 20:04:44)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import hy
>>> import main
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/hy/", line 238, in reader_macroexpand
    reader_macro = _hy_reader[None][char]
KeyError: '*'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module&
... (read more)
The `import main` workaround did solve the error for someone else. In case it helps, I'm using Python 3.8.10 with the following two libraries installed via pip3: hy==0.20.0 and matplotlib==3.4.2.

Do you have a more exact version spec? Because I don't even have pip3 and I don't use Python, so I just installed the hy that comes with my distro... and then I get

  File "./main.hy", line 63, column 1

  (defn initial-population [biome]
    "Creates a list of initial organisms"
    (+ [] #*
       (lfor species +species-list+
             (if (= (. species ["spawning-zone"])
                 (setv organisms [])
                 (setv energy +seed-population-energy+)
                 (while (> energy 0)
... (read more)
On Ubuntu, at least, there's a python3-pip package, separate from the python3 package? (Other distros may be similar.) It's also supposed to be possible to install pip using Python itself.
Here is a workaround:

1. cd into the project directory
2. Run the python3 interpreter
3. In the python3 interpreter, call import hy
4. In the python3 interpreter, call import main

Once things have stabilized and things like inline annotations are there, I'd love to see the following: (1) An easy way to add and remove yourself from a pool of available feedback providers. (Checkbox in settings?) And (2) a way for anyone (or nearly anyone - e.g. non-negative karma) to request brief / "basic feedback" on their posts, by automatically matching people from the pool to posts based on e.g. post tags and front page tag weights.

On (1): I have proofread a couple thousand pages by now, and while I'm usually pretty busy, in a slow week I'd be ha... (read more)

I plan to attempt making inline annotations happen soon. The rest of what you describe sounds very cool. Maybe! If there's the demand and supply for it, we could probably build it.

Could you imagine the feeling of lying on a carpet without a shirt on (i.e. the feeling of a carpet on your torso)?

Somewhat... it's too diffuse. I can imagine the effect at single spots, the whole thing at once doesn't really work. (I get "glitchy partials", brief impressions flickering and jumping around, but it's not forming anything consistent / stable.)

What about a spider crawling across your hand?

Back of the hand is manageable (it's "only" tracking of 9 points - 8 legs plus occasional abdomen contact) and it can even become "independent" and su... (read more)

Also one of my friends struggles with verbal thinking and thinks mostly implicitly, using concepts, if I understood that correctly, and they have a strong preference for non-verbal signs of affection (physical contact, actions, quality time etc.).

Same here. Not thinking in words at all, very strong preference for touch or very simple expressions. Over the years with my SO, we basically formed a language of taps, hugs, noises, licks, sniffs, ... (E.g. shlip tongue noise - Greetings! / I like you. / ... (there are even tonal variations - rising / higher ... (read more)

Some more details on each of the categories in order:

Visual - I don't really see things, I just get some weird topological-ish representation. E.g. if I try to imagine a cube, it's more like the grid of a cube / wireframe instead of a real object, and it's really stretchy-bendy and can sometimes wobble around or deform on its own. And attributes like red / a letter printed on a side etc. are not necessarily part of the face but often just floating "labels" connected by a (different kind of) line that goes "sideways" out of 3-space? o.O Even real objects li... (read more)

My visual imagination matches your whole paragraph exactly. Great description. I think the rest of my responses are typical: reasonable sound imagination, minimal taste&touch&smell imagination. Thinking is a mix of abstract stuff and words and images. Little mind control, no synesthesia.

Strong internal monologue: at the extreme, most everything I think is backed by the monologue in some way, and the monologue is nearly continuous; at the other extreme, if I've been meditating a lot in the past month, there's much less monologue.

My memory is worse than average, I think. I don't remember a whole lot after a year has passed. I get the impression that many people associate many of their long term memories with time (like, what month it was or what season it was). I don't, at all. I'll remember something that happened during undergrad, but have to reason from context about whether it would have been the first year or last year (which is usually easy to figure out, but that knowledge is not attached to the memory).
All of this is super interesting to me! Especially where we differ. I can imagine all of these extremely vividly. Even multiple different types of carpets, and walking on carpets in different shoes. Could you imagine the feeling of lying on a carpet without a shirt on (i.e. the feeling of a carpet on your torso)? What about a spider crawling across your hand?

I am very jealous of your ability to ignore your thoughts and track north. I am terrible with directions, navigating familiar places only by landmarks. Not to get too speculative, but you mention doing mathematical proofs, which I've never done in my life. Even learning syntax for linguistics (expressed as binary branching trees) was very difficult for me. I'm studying French literature and anything to do with words comes very easily to me. I wonder if there's any tangible overlap between brain function and fields of interest.
Maxwell Peterson: Great descriptions!

I suspect the water content of honey/treacle (estimating 15-20%) will lead to more gluten formation, which risks causing a chewy instead of crumbly texture. (If you're not adding any water at all, you're not getting gluten strands.) Butter also contains some water (around 15%), which is why you generally don't knead these kinds of dough for long. (Same goes for shortbread, scones, ...)

Hence, I guess any flour should do if you know how to handle it / are careful not to overwork the dough.

They are meant to be chewy, not crumbly.