__nobody

The New Frontpage Design & Opening Tag Creation!

Some (fairly low-priority) glitches / feature requests surrounding tag filtering, in decreasing order of perceived importance.

Front page filtering / tag order: Request: Currently they're in order of being added, with no way to reorder. I'm already getting confused by that… Manual reordering might be nice, but even just some imposed order (by name or by weight adjustment, most likely) would be better than the current state.

Front page filtering / adjusting tag weights in the popup: Glitch: When changing tag weights via the up/down arrows next to the 'other' field, the size of the element(s) to the left changes when reaching/leaving one of the pre-defined values, which changes where your clicks go. (Worst case, Firefox re-laid out the page such that the second click went to the 'remove tag' button… oops.) Ideally, the buttons would stay where they are, so I don't have to (remember to) wait, or reach for the keyboard and manually compute the adjustment in my head, and can instead just click a few times.

All tags page: Request: Mark newly added / edited tags in some way. (I went through the whole list to add tags that I want to track. New tags are still being created all the time. Currently, it seems the only way to find all those new/updated tags is to go through the whole list again.)

Site Redesign Feedback Requested

I'd actually prefer to have a true dark mode, which probably won't be coming anytime soon… (unless the team steals the dark/light mode stuff from Gwern maybe?)

Luckily, user style sheets are a thing! There's a dark mode style sheet from 2018 on userstyles.org and thanks to its extreme simplicity, I'm happy to report that it still works on the new site design. (Unfortunately, that simplicity also means that it's not very good… c'est la vie. Maybe I'll make a better one when some urgent deadline approaches some day in the following months, maybe not.)

If you prefer white, setting this as your user style sheet might get you 80%-95% of the way there:

.PostsItem2-background {
    background: #f4f4f4;
}
body, .Layout-main {
    background: white;
}
.PostsItem2-bottomBorder {
    border-bottom: solid 2px white;
}

(You may want to fiddle some more with the post / comment borders; other than that, I noticed no problems.)


Of course, I'm not sure how stable those names are… So here's some actual feedback / questions on that side of the design:

  • Some CSS classes do seemingly unnecessary things (e.g. .PostsItem2-isRead overrides the background color of the normal .PostsItem2-background but applies the exact same value; same for .Layout-main / body), which makes customization potentially harder. (Because !important in the user style overrides both classes, it actually still works fine for .PostsItem2-isRead, but .Layout-main needs an explicit extra rule, and just deleting that background-color override in the original CSS doesn't seem to break anything.) Do you want a list of weird spots like that? (If yes, I'll make a crazy style in 1-2 weeks and see what breaks when modding / what can (probably) safely go away.)
  • Stable names for most things would be really useful, but PostsItem2 looks fairly unstable / auto-generated? How stable will those names be?

Other than that, I agree with the "looks good"!

__nobody's Shortform

Observation: It should generally be safe to forbid non-termination when searching for programs/algorithms.

In practice, all useful algorithms terminate: If you know that you're dealing with a semi-decidable thing and doing serious work, you'll either (a) add a hard cutoff, or (b) structure the algorithm into a bounded step function and a controller that decides whether or not to run for another step. That transformation doesn't add significant overhead size-wise, so you're bound to find a terminating algorithm "near" a non-terminating one!

Sure, that slightly changes the interface – it's now allowed to abort with "don't know", but that's a transformation that you likely would have applied anyway. Even if you consider that a drawback, not having to deal with potentially non-terminating programs / being able to use a description format that cannot represent non-terminating forms should more than make up for that.
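For concreteness, here's a minimal sketch of that shape (toy Python, with an invented goal test standing in for whatever the real search checks): the step function does one bounded chunk of work, and the controller spends "fuel" to decide whether to run another step or give up with "don't know".

# Toy sketch: a search that could in principle run forever, restructured
# into a bounded step function plus a fuel-limited controller.

def step(queue):
    # One bounded chunk of work on a breadth-first enumeration of bit-strings.
    # Returns ("done", answer) or ("continue", new_queue).
    s = queue.pop(0)
    if s.count("1") == 5:   # invented goal condition, stands in for the real test
        return ("done", s)
    return ("continue", queue + [s + "0", s + "1"])

def search(fuel=10_000):
    # The controller: the same search, but it now terminates by construction.
    # The only interface change: it may return None, i.e. "don't know".
    queue = [""]
    for _ in range(fuel):
        status, value = step(queue)
        if status == "done":
            return value
        queue = value
    return None

print(search())         # finds "11111" well within the fuel budget
print(search(fuel=3))   # gives up early: None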

(I just noticed this while thinking about how to best write something in Coq (and deciding on termination by "fuel limit"), after AABoyles' shortform on logical causal isolation with its tragically simple bit-flip search had recently made me think about program enumeration again…)

Baking is Not a Ritual

The key point is this: The big difference between baking and cooking is that baking involves much more chemistry, which means that altering the recipe without understanding what you're doing is much more likely to result in failure. (Bad substitutions/ratios in cooking mean the result might taste/look a bit strange, but overall it'll likely be fairly close to the original. Bad substitutions/ratios in baking mean you probably get a brick / dust / other unidentifiable garbage.)

Thus, if people approach baking like cooking, they probably fail. Repeatedly. Hence the ritual thinking.

What are objects that have made your life better?

Erasable gel ink pens in lots of different colors.

Working on paper still beats tablets etc. sometimes, and instead of crossing out stuff and trying again, you erase and overwrite – looks much cleaner, even if this was your very first draft / rough diagram / whatever. Instead of copying / re-writing the whole thing a half-dozen times or more to get a final clean version, you copy maybe once, at most twice, often not at all. (You just erase small mistakes that happen while making the clean version, instead of starting over yet again.)

Beats colored pencils by a wide margin, both in handling when writing/drawing as well as in ease of erasure. (The ink becomes completely invisible when heated, no need to scrape pigments out of the crevices of the paper / abrade the paper surface.)

Muji (the "Japanese Ikea") had great ones, but they got rid of most colors (no more green/cyan/purple/…, only black/red/blue). Luckily, lots of other manufacturers are producing them now, so I can get new ones when mine (and their refills) finally run out.

My only warning: If you're writing double-sided in a notebook with thin paper, don't be too vigorous when erasing. Normal corrections are no problem, but taking out a whole shaded diagram might also erase parts on the back. Other than that, while I'm not sure how long-term stable these inks are, my 5+ year old notes still look fresh. (I still made backup photos just in case…)

Value of building an online "knowledge web"

I haven't created an account on their page, so this is based purely on what I'm seeing in the example collection / demo videos. It looks broadly similar enough to what I've been building/using over the last few years[1] that I think a summary of my experiences with my own tool, and of the features you will need, might be useful:

In short: It looks awesome as long as you have only small amounts of content – and I think it may actually be awesome in those cases: For almost all my projects, I'm creating separate(!) maps/webs and collecting todos and their states to get a visual overview, and these are also useful for getting back into a project when I come back months or years later – so they'd probably help other people too. But… as things grow, it will get very confusing. (Unless you add lots of functionality to the tool and put in extra effort specifically to counter that.) All attempts to collect all my projects / ideas in a single big map have failed so far… That said, I haven't given up yet and am still trying to make it work.


Here's how things have been breaking down for me:

Naming things is hard, and over time you'll pick subtly different names (unless you have a fast way to look up what you called the thing a couple of months ago); then spelling variants will point to different pages and the web breaks down… There will also be drift / shift in meaning or goals – you explore a new sub-topic and suddenly it looks like a good idea to rename / move a page or even a whole category, in sync, across all pages. Without tool support for both of these, things will fail. (p=1.0, N=3; after adding fast search (not just prefix-based like autocomplete!), the 4th map/web died of ontology instead of spelling variants, and the 5th isn't dead yet…) Name changes are made simpler if you have page IDs that are independent of the name – and Roam has that too, so these two shouldn't be a problem. A difference is that they're using auto-generated IDs whereas I'm manually choosing mine and using them to encode hierarchy or other stuff that I want in there… I personally find that useful (as long as you don't over-ontologize), but YMMV…

Next is unweighted linking of everything: If you see all the gazillion uses, sub-topics, … of linear algebra when you look at that page, that's not much better than seeing none of them. (Same for visual graphs.) Automatic sorting (or even just highlighting) by centrality / importance or, failing that, manually curated sub-lists are a necessity. Related: hiding stuff outside the local project/topic. Separate small maps / collections, which can still refer to each other but are largely independent, seem to work better than having everything in one big blob. (Technically, that's basically equivalent to assigning a unique name/ID prefix to each project, but having to manually add that everywhere is draining. It's bad enough that I started working on splitting things up, in spite of all the new problems that brings up… To name just one, things aren't truly separate at the mechanical level, as you'll still have to rename/refactor in sync.) Also, graphs can pick a better layout if they're not constrained by the placement of nodes that you don't want to see anyway.

Beyond that point, I don't know yet… the last two are still only partially implemented. (And I'm not seeing anything like these in Roam… so you'll probably run into problems there eventually.) So far, it looks like that might be enough and from that point on experience in steering / organizing the thing becomes more important.[2]


On graphs: What I'm seeing in Roam is… underwhelming. (Same for the example linked by mr-hire.) Unless I've looked at bad examples, the graphs that Roam gets you are a fixed grid-based unordered (i.e. not ordered to minimize total edge lengths or something like that) mess and the only thing you have is that you can click a node and see its immediate neighbors highlighted? (Or the per-page graphs are essentially a linear list, presented as a very wide 1-deep "graph"?)

If that's what you're working with, then of course that won't be an effective learning tool. When I read you talking about the "clarity" of seeing the connectedness, I thought what you'd get from Roam was much closer to something like this:

[Image: sample graph]

This is a small-ish part of an older map of mine (with node labels censored)… Rectangular nodes and the wide blue arrows show hierarchy, dotted arrows (not sure if there are any…) are references/mentions, solid arrows are dependencies. Green stuff is done, light beige stuff is dead/failed. The bright orange bubbles are temporary(!) todo markers – what do I want to achieve / understand now? The fat / colored bubbles are the response (a gradient from bright red through purple and blue, fading to white) – this is what you can work on next and how important it is. (Thin / gray nodes are off the active paths and can probably be ignored; nodes on the paths with open dependencies also show the max. flow through them in their border color, so you can decide to cut further dependencies and bodge something in place of that sub-tree.) As you can mark multiple things at the same time (also with different weights/priorities), you not only get local "to get X, you'll need A, B, C, …" information (plus information on the branching and where it might make sense to cut…) but also "broad field" information. ("X, Y, Z all indirectly/weakly depend on A", so working on A might simplify work on all of these – something you might not notice if you look at X, then Y, then Z one after another…)
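To make the "broad field" part concrete, here's a stripped-down sketch of the underlying idea (toy Python, not my actual Lua/Graphviz setup; all node names and weights are invented): mark a few goals with weights and push those weights backwards along the dependency edges – prerequisites that many goals (indirectly) rest on accumulate the most weight.

# Toy sketch of backward weight propagation over a dependency graph
# (invented example; the real tool tracks much more state than this).

# deps[x] = the things x directly depends on
deps = {
    "X": ["A", "D"],
    "Y": ["A", "E"],
    "Z": ["A"],
    "D": ["F"],
    "A": [], "E": [], "F": [],
}

goals = {"X": 1.0, "Y": 1.0, "Z": 0.5}   # what I want now, and how much

def propagate(deps, goals):
    # A node's score is the total weight of the goals that (indirectly) rest on it.
    score = {n: 0.0 for n in deps}
    def push(node, weight):
        score[node] += weight
        for d in deps[node]:
            push(d, weight)
    for goal, weight in goals.items():
        push(goal, weight)
    return score

print(propagate(deps, goals))
# "A" ends up with 2.5 – "X, Y, Z all (indirectly) depend on A",
# so working on A first probably helps all of them.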

If you have something like that – where you can ask questions and get responses based on topical dependencies and what you already know, and iterate those questions – I think that can be an effective learning tool. But I don't know for sure yet; this thing doesn't work well enough yet to manage large amounts of nodes/information… (My gut feeling currently says that you want to split things very fine-grained – every concept / theorem is its own node, so that you can mark them as done independently – and then, when you work through, say, linear algebra, that'll be a lot of nodes. Not there yet, still not enough hiding/filtering…)

(Another thing I'm undecided on is whether I want/need "xor nodes" – "to get X, you'll need A or B or C; not A and B and C" – that might allow much fancier optimization but it also takes agency/information away from you and I'm not sure that you'd get the best decisions out of that, especially if map information is partial/wrong/incomplete and all that…)


As for why this isn't a thing yet… I guess it's (a) hard to make something that actually works well, so lots of people try a bit and give up, others see all those bad examples and conclude it can't work, and so fewer people actually really try? Also, even if you finally had the tooling, (b) getting it right would involve a lot of data entry, and you'd need more experience than you'd need to edit Wikipedia (I suspect closer to something like Wikidata) – it'd be a lot less work to just write a linear textbook and be done with it. And (c) machine learning could help with the tagging involved in (b) and simplify that down to Wikipedia level, but the fancy stuff has only happened in the last couple of years (and it'd be even more work on the tooling side)… so it's possible that something exists somewhere that works fairly well, but it's still rather unlikely?


[1] It's a terrible terrible terrible "organically grown" ~1.5KLoC Lua script (about 20% of that in a single function…) that, via Graphviz and Pandoc, generates (static) colorful graphs and HTML pages. Primary focus is tracking / prioritizing todos and projects, but that includes learning new stuff and recording knowledge. (Especially when approaching new areas of math, you tend to get loops (to understand A, you want to understand B, which can be done via C, which relies on A…) and one of the tasks will be to break up those loops… None of the existing tools I found were able to record and work with that. So that's how this started…)

[2] E.g. for my planning thing I now have broad categories like "knowledge" (static, non-actionable, non-directed), "projects" (active, non-actionable, directed / actively carving actionable todos), "farts" (inactive projects, ideas to do stuff, …), "beacons" (fuzzy long-term goals / directions to move in, grouping many of the projects), "spikes" (actionable project carvings) and specific (sub-)projects/tasks are repeatedly moving between these – 'projects.foo' grows a spike somewhere in its text, it moves to 'spikes.foo.how_random_is_enough', if I stop working on it it moves back into the project (and once restarted back to spikes…), and when done it gets a write-up and moves to 'knowledge.foo.random.spike' (for archival purposes), plus extra nodes like 'knowledge.foo.random.lcg_is_not_enough', 'knowledge.foo.random.pcg_works', … (for fast knowledge access). So far, this seems to finally reach the point where it starts to work… for me.

How can we quantify the good done of donating PPEs vs ventilators?

If you can, ask the people over there what they need more urgently! You deciding what's best for them is… typical white people behavior.

If you can't, well…


This is the result of grabbing the first reasonable-looking results off Google and throwing stuff together, where "reasonable-looking" is based on 2 months of intensely tracking the CoV development and ZERO prior experience.

Ventilators (that's the fairly easily quantifiable part)

  • About 40-85% of people put on a ventilator die. (This reddit post has / links a bunch of estimates, but no clear numbers exist…)
  • Besides ventilators, you need trained personnel, and various drugs (for sedation & relaxation – the patient shouldn't fight against the respirator.)
    • Basic training should not be a problem (lots of people/institutions are putting videos online), but lack of experience / risk of errors will probably further lower the survival chances.
    • You still need lots of personnel – ventilation is high-maintenance.
  • Prices seem to be about $500 per ventilator, so the $20K would get you about 40.
  • For time on a ventilator, I'd say an average of about 10 days (some die quicker, some need 3 weeks to recover enough).

So assuming personnel/drugs/etc. are absolutely no problem, 40 ventilators will support about 120 people per month, which will save about 30-70 lives per month. Assuming a peak duration of 3 months (completely made up), that's about 100-200 people saved?
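The same back-of-the-envelope chain as a quick Python check (same made-up numbers; the exact endpoints come out a bit wider than the rounded ranges above, but it's the same ballpark):

budget_usd = 20_000              # the $20K budget
vent_price_usd = 500
vent_mortality = (0.40, 0.85)    # 40-85% of people put on a ventilator die
days_per_patient = 10
peak_months = 3                  # completely made up, as above

ventilators = budget_usd // vent_price_usd                  # 40
patients_per_month = ventilators * 30 // days_per_patient   # 120
lives_per_month = [round(patients_per_month * (1 - m)) for m in reversed(vent_mortality)]
print(ventilators, patients_per_month)              # 40 120
print(lives_per_month)                              # [18, 72]  -> roughly the 30-70/month above
print([n * peak_months for n in lives_per_month])   # [54, 216] -> roughly 100-200 over the peak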

PPE (This is very soft / fuzzy… no clue.)

  • Nitrile gloves seem to cost about $0.20 per pair
  • Simple masks are (were?) also about $0.20 each (if you can get some…)

So if you don't reuse equipment at all, a basic set is about $0.40, which means roughly 50K reasonably safe patient interactions for the $20K.

So how many lives does that save…? That depends on how/where the PPE is used. (Solely to protect medical personnel from known-infected people, or when interacting with people of unknown status / protecting both sides from each other, or also filling in other shortages in the normal "background noise"?) No clue, so let's just say 10% (completely made up number) of those interactions would otherwise have resulted in an infection. That's 5K prevented infections, of which about 5% would have required a ventilator/ICU, so about 250 prevented critical cases that (assuming no ventilators are available) would (probably) have died. (Even with ventilators, that's about 50-180 deaths?)
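And the same quick check for the PPE side (again with the made-up numbers above):

budget_cents = 20_000 * 100
set_cost_cents = 20 + 20          # nitrile gloves + simple mask, ~$0.20 each
prevented_infection_share = 0.10  # completely made up, as above
critical_share = 0.05             # infections ending up needing ventilator/ICU

interactions = budget_cents // set_cost_cents                     # 50_000
prevented_infections = interactions * prevented_infection_share   # 5_000
prevented_critical = prevented_infections * critical_share        # 250
print(interactions, int(prevented_infections), int(prevented_critical))
# 50000 5000 250 -> ~250 deaths prevented, if none of them would have gotten a ventilator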

So from my estimates, they come out at roughly the same level, with PPE probably being better, but variance for PPE is much higher. (Could save a lot more or a lot less, and if not being down with the virus for several weeks also counts as fractional lives it's probably a lot higher.) Still, could go either way – so, if you can, ask.

Programming: Cascading Failure chains

tl;dr: This text would be much better if it were purely an example of a failure cascade & its explanation, and if you left out the last paragraphs where you try to assign blame to specific parts of the system & speculate on how this might have been prevented. (I believe you don't really know what you (are trying to) talk about there.)


Let's work backwards, from the end back to the start.

First: Haskell, not Haskel.

The "this wouldn't have happened with Haskell" bit seems like a regretrospective coming from a narrow tunnel. I strongly suspect you're not actually using Haskell intensively, or you'd be aware of other problems, like the compiler / type system being too stupid to understand your logically correct construction, and then having to use weird workarounds that just plain suck and that you subsequently drown in heaps of syntactic sugar to make them palatable. (I may be wrong on this; things might have improved a fair bit – I ragequit to Coq a long time ago…)

Also, Haskell still allows nonsense like let f x = f x in f "foo" (an endless loop) – you'll want to use Coq, Agda, or Idris (with %default total) – then you'll have to prove that your program will actually terminate. (That still doesn't mean that it'll terminate before the heat death of the universe… only that it can't run forever. The one thing it does mean is that your functions are now safe to use in proofs, because you can no longer "prove" stuff by infinite regress.)

With the extended capabilities of dependent types, you'll be able to construct almost anything that you can come up with… only it'll take years of practice to get fluent, and then for serious constructions it may still take days to explain what you're doing in ways that the compiler can understand. (Simple example: If you define addition on (unary) natural numbers (O, S O, S (S O), …) as O + m = m; (S n) + m = S (n + m), then the compiler will be able to figure out that a list of length 3 + n is S (S (S n)) long, but it won't have a clue about n + 3. This matters if you need a non-empty list (e.g. to get the first element) – in the 3 + n case, it can see that the list has length S <something> and so is non-empty; in the other case, you'll first have to prove that addition is commutative and then show that n+3 = 3+n (by commutativity) = S <something> (by reduction)…[1] and that's for something as simple as getting an element from a list.)

While this is very worthwhile for getting rock-solid software (and things are getting more and more usable / closer to being viable), it's far too slow if you just want to get something up and running that does stuff that's hopefully approximately correct. Nonetheless, I think it's worth spending some time trying to understand & work with this.[2]

Aside: I'm still occasionally using Coq, but >95% of what I'm writing, I write in Lua… In years of Coq, I learned safe "shapes" of execution, and now I'm mostly writing in Lua what I'd write in Coq, just without the proofs, and also with duck typing galore! If a function describes a certain "movement" of data, and the ways in which it interacts with your data are abstract enough, you can use it independently of the types or the particular shape of the inputs. While you should theoretically be able to explain that to e.g. Coq, that'd be really complicated and so, for all intents and purposes, it's practically un-typable… So I don't agree at all with your claim that "[t]he basic problem was that the programming language was dynamically typed[…]". You can use dynamic typing reasonably safely, you just need a few years of training with some hardcore dependently typed language first… ;-)

Your brief interjection on hacking seems rather unmotivated. Your program doesn't provide any interesting computational capacity, so there's not much to hack. The same goes for your claim that "All this means that most long and complex python program[s are] likely to be hackable." – a program can weigh megabytes or even gigabytes, but as long as it doesn't consume any external inputs / doesn't interact with the outside world, there's no way for you to influence its behavior, and so it's 100% hack-proof. (In the other direction, a nc -l -p 2323|bash (don't run this!) or its equivalent in your language of choice is very short and provides full programmability to the outside world.) It's not about length at all (although, sure, in practice there's a correlation); it's about the computational capacity provided to the input (and, by extension, the attacker).

You may be thinking that you are writing the program, and therefore you are in control; but actually you are constructing an abstract machine on which the input (files, packets, …) will later run – the input is programming the abstract machine that your program provides, the branches taken are determined by the bytes read, and the attacker is in control – the only question is how much computational power you provide. The LANGSEC people are on to something very good, but as some people show, it's clearly not enough yet.[3]
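To make that concrete, here's a toy illustration (made-up Python, nothing to do with the post's actual code): a program that reads no input can't be influenced at all, however big it is, while even a ten-line loop that dispatches on input characters is already a small machine that the input programs.

def no_input() -> int:
    # Reads nothing from the outside world: no attacker can change what this
    # computes, however many lines of code it has.
    return sum(i * i for i in range(10_000))

def tiny_machine(commands: str) -> list:
    # A ~10-line "abstract machine": the input string is effectively a program
    # for it. Every operation exposed here is computational power handed to
    # whoever controls the input.
    stack = []
    for c in commands:
        if c.isdigit():
            stack.append(int(c))                     # push a literal
        elif c == "+" and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())  # the input chose this branch
        elif c == "d" and stack:
            stack.append(stack[-1])                  # duplicate the top of the stack
    return stack

print(no_input())
print(tiny_machine("34+d+"))   # the *input* decided that 14 gets computed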

The initial example was entertaining and I learned (or rather: became aware of) something new from that. (I technically knew that parentheses in Python do both grouping / precedence and tuple construction, but I wasn't actively aware of the resulting problem. So thank you for this!)


[1]: Actually, in this particular case it's enough to do a case analysis on n and see that even when n=0, you'll have the 3 items and so the list can't be empty. But in general, you'll rarely be that lucky and often have to do really tricky stuff…

[2]: If you want an intro to this stuff, look at the Software Foundations and then also Certified Programming with Dependent Types on how to not go crazy while writing all those proofs… or look at CompCert for some real-world software and be made aware of Mirage as (what I think is currently) the most likely path for this stuff to become relevant. (Also, if you're serious about working with dependent types, the stuff that Conor McBride does is awesome – understand that shift in perspective and you'll cut out years of pain.)

[3]: Now loop that back into the stuff about dependent types / functional programming, where stuff is often represented as trees with lots of pointers… you'll see that while that may be provably safe, it's also extremely brittle, which is another reason why a program written in Lua or C may be much more robust (and, counter-intuitively, ultimately safer) than one written in Coq…