Software developer and EA with interests including programming language design, international auxiliary languages, rationalism, climate science and the psychology of its denial.
My intuition says that this is qualitatively different. If the agent knows that only one green roomer will be asked the question, then upon waking up in a green room the agent thinks "with 90% probability, there are 18 of me in green rooms and 2 of me in red rooms." But then, if the agent is asked whether to take the bet, this new information ("I am the unique one being asked") changes the probability back to 50-50.
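Both probabilities can be checked directly. A minimal sketch of the arithmetic, assuming the usual setup of this thought experiment (20 copies of the agent, an 18/2 green/red split decided by a fair coin, and one green roomer chosen uniformly at random to be asked):

```python
from fractions import Fraction

# Two equally likely worlds: in A, 18 of the 20 rooms are green; in B, only 2.
prior = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
p_green = {"A": Fraction(18, 20), "B": Fraction(2, 20)}

# Step 1: update on "I woke up in a green room".
joint = {w: prior[w] * p_green[w] for w in prior}
p_A_given_green = joint["A"] / sum(joint.values())  # 9/10

# Step 2: also update on "I am the unique green roomer being asked".
# If one green roomer is picked uniformly, P(it's me) = 1 / (number of greens).
p_asked = {"A": Fraction(1, 18), "B": Fraction(1, 2)}
joint2 = {w: joint[w] * p_asked[w] for w in joint}
p_A_given_green_and_asked = joint2["A"] / sum(joint2.values())  # 1/2
```

The second update exactly cancels the first: being asked is 9 times less likely per green roomer in the 18-green world, which pulls the 90% back down to 50%.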
Let's hope a rationalist wouldn't write a headline about a "100%" effective COVID treatment without qualification and then, when discussing the two studies, not mention the size of the trials, not discuss the methodology, and not show any skepticism (granted, this may well be different from what he posted on Medium).
Also, personally, there's no way I would mention a news report that "of 47,780 people who were discharged from hospital in the first wave, 29.4 per cent were readmitted to hospital within 140 days, and 12.3 per cent of the total died" - and repeat the ambiguous message without at least complaining that it never says whether "the total" refers to 47,780, or 29.4% of 47,780.

Medium clearly overreacted by deleting six years of his writing, which seems like a scarily common tendency in big tech (it costs them nothing except maybe a little reputation here and there; I suppose they avoid reputational damage mainly by reinstating those who manage to generate a certain amount of public backlash after the fact).
So, how is that different from JSON? I could take the elevator pitch at JSON.org and change some words to make it about LES:
LES is built on three structures:
To put it another way: JSON provides data interoperability. It seems like no one has to explain why this is good; it's just understood. So I am puzzled why the same argument for code falls flat, even though I see people gush about things like "homoiconicity" (which LES provides, by the way) without having to explain why that is good.
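To illustrate what code-as-data buys you, here is a generic sketch in Python (not LES syntax, just the general idea): when code is an ordinary data structure, any tool can build, inspect, or transform it as easily as it handles JSON.

```python
# Code represented as plain nested lists: [operator, arg1, arg2, ...].
# This illustrates the generic idea of homoiconicity, not LES's actual format.
expr = ["*", ["+", 1, 2], 4]

def evaluate(node):
    """Evaluate a tiny expression tree."""
    if not isinstance(node, list):
        return node  # a literal
    op, *args = node
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

def double_literals(node):
    """A trivial 'macro': transform the tree before evaluating it."""
    if not isinstance(node, list):
        return node * 2
    return [node[0]] + [double_literals(a) for a in node[1:]]

print(evaluate(expr))                   # (1 + 2) * 4 = 12
print(evaluate(double_literals(expr)))  # (2 + 4) * 8 = 48
```

Because the program is just a list, serializing it for interchange is as easy as dumping JSON, which is roughly the interoperability argument being made for code here.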
P.S. No one disagreed with my arguments at the WebAssembly CG, so don't be too quick to judge them as bad.
P.P.S. And to be clear, I don't expect the average developer to get it at this point in time, but the argument apparently fell flat even among language designers at FoC. Nobody said they didn't understand, but no interest was expressed either.
It has not escaped my notice that hopping on a bandwagon is an easier way to gain attention, but a lot of people have had success by starting their own projects.
How is what I did different from Ruby, Python, Vue, Unison, V, Nim, or any of those projects where people make general-purpose libraries to supplement the standard libraries? And in particular, how is my LeMP front-end for C# different from C++, which began as a C preprocessor called "C with Classes"?
A tempting answer is that I was simply 20 years too late to be able to do something new, but V and Vue are quite recent examples. In any case, if we're talking about a project like LES - I am unaware of anything else like it, so which existing project should I have engaged with in order to make it a success? I did try to engage in 2016 with the WebAssembly CG, but that was a flop, as the other members mostly chose not to participate in the conversation I tried to start.
Speaking of little tragedies, a couple of them got me thinking a long time ago.
My biggest one was the fact that most programming languages (1) aren't compatible... Python doesn't interoperate with C#, which doesn't interoperate with C++... so people keep reinventing the wheel in different languages, and only rarely is a job done well; and (2) popular languages aren't powerful or extensible (or efficient) enough - e.g. I made a prototype unit inference engine for an obscure language in 2006, and still today not a single one of the popular languages has a similar feature. So I set out to fix these problems 13 years ago in my free time... and I'm still stuck on them today. I wished so much that I could spend more time on it that in 2014 I quit my job, which turned out to be a huge mistake, but never mind. (There are web sites for my projects which go unnoticed by pretty much everyone. My progress has been underwhelming, but even when I think I've done a great job with great documentation and I've tried to publicize it, it makes no difference to the popularity. Just one of life's mysteries.)
Anyway, I've come to think that there are actually lots of similar problems in the world: problems that go unsolved mainly because there is just no way to get funding to solve them. (In a few cases maybe it's possible to get funding but the ideas man just isn't a businessman, so it doesn't happen... I don't think this is one of those cases.) Any given problem whose solutions are difficult and don't match up with any capitalist business model is probably just not going to be solved, or will be solved in a very slow and very clumsy way.
I think government funding for "open engineering" is needed, where the work product is a product, service, or code library, not a LaTeX jargonfest in a journal. Conventional science itself seems vaguely messed up, too; I've never seen the sausage get made so I'm unfamiliar with the problems, but they seem rather numerous and so I would qualify the previous statement by saying we need open engineering that doesn't work as badly as science.
UBI might work as an alternative. It would lack the motivating structure of a conventional job, but if I could find one other UBI-funded person who wanted to do the same project, maybe we could keep each other motivated. I noticed a very long time ago that most successful projects have at least two authors, but still I never found a second person who wanted to work on the same project and earn zero income.
If a self-replicating microbot has the same computing power as a 2020 computer chip half its size, and if it can get energy from sugar/oil while transforming soil into copies of itself, modular mobile supercomputers of staggering ability could be built from these machines very quickly at extremely low cost. Due to Amdahl's law and the rise of GP-GPUs, not to mention deep learning, there has already been a lot of research into parallelizing various tasks that were once done serially, and this can be expected to continue.
But also, I would guess that a self-replicating nanofabricator that can build arbitrary molecules at the atomic scale will have the ability to produce computer chips that are much more efficient than today's chips because it will be able to create smaller features. It should also be possible to decrease power consumption by building more efficient transistors. And IIUC quantum physics doesn't put any bound on the amount of computation that can be performed with a unit of energy, so there's lots of room for improvement there too.
Especially as no character has given a reason to suspect any sort of "perception filter" a la Doctor Who. Incidentally, didn't Hogwarts often reconfigure itself in HPMOR? Seems odd, then, that Fred/George believe they've seen it all.
Thank you for this valuable overview, it's worth bookmarking.
The link in section 3 does not support the idea that humans don't suffer from a priming effect (this may not have been what you meant, but that's how it sounds). Rather, the studies are underpowered and there is evidence of positive-result publication bias. This doesn't mean the published results are wrong; it means they deserve a grain of salt and that replication is needed. LWers often reasonably believe things on less evidence than 12 studies.
Yeah, this was a good discussion, though unfortunately I didn't understand your position beyond a simple level like "it's all quarks".
On the question of "where does a virtual grenade explode" - to me, the question just highlights the problem. I see a grenade explosion or a "death" as another bit pattern changing in the computer, which, from the computer's perspective, is of no more significance than the color of the screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act like it cares about "beauty" and "love" and "being in pain", but it seems to me that nothing can really matter to the computer because it can't really feel anything. I once wrote software which actually had a concept that I called "pain", so there were "pain" variables, and of course I am confident this caused no meaningful pain in the computer.
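Since that original code isn't at hand, here is a hypothetical reconstruction of the idea (not the actual software): a "pain" variable is just a number that biases behavior, and nothing more.

```python
class Creature:
    """A simulated agent whose 'pain' is nothing but a float."""

    def __init__(self):
        self.pain = 0.0

    def take_damage(self, amount):
        self.pain += amount  # just arithmetic; nothing is felt

    def choose_action(self, actions):
        # Higher "pain" merely biases the agent toward retreating.
        if self.pain > 5.0 and "retreat" in actions:
            return "retreat"
        return actions[0]

c = Creature()
c.take_damage(7.0)
print(c.choose_action(["attack", "retreat"]))  # "retreat"
```

The agent behaves as if it wants to avoid harm, yet the entire mechanism is a float compared against a threshold.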
I intuit that at least one part* of human brains is different, and if I am wrong, it seems I must be wrong either in the direction of "nothing really matters: suffering is just an illusion" or, less likely, "pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter", though I have no idea how this could be true.
* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word "elephant" comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain's computations: a holistic sense of elephant-ness (and I feel as though I "understand" this output—even though I don't understand what "understanding" is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of "consciousness" that feels high-level outputs from these machines somehow, and has some kind of influence over how the machinery is subsequently used. Having seen the movie "Being John Malkovich", and having recently heard of the "thousand brains theory", I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about) so that many particles might be functionally indistinguishable from one "huge" particle.
I don't know of a good content aggregator. I guess I would like to see a personalized web site which shows me all the posts/articles from all the good blogs and publishers I know about.
RSS readers are a good start, but not every site has a proper feed (with full, formatted article text and images) and usually the UI isn't what I want (e.g. it might be ugly compared to viewing the site in a browser; also I'd like to be able to see a combined feed of everything rather than manually selecting a particular blog). In the past, I needed caching for offline viewing on a phone or laptop, but mobile internet prices have come down so I bit the bullet and pay for it now. I wonder what tools people here like?
I also wish I had a tool that would index all the content I read on the internet. Often I want to find something I have read before, e.g. to show it to someone with whom I'm conversing, but AFAIK there is no tool for this.
Another tool I wish for is a public aggregator: when I find a good article (or video) I want to put it on a public feed that is under my own control. Viewed in a web browser, ideally the feed would look like a news site, or a blog, or a publication on medium.com. And then someone else could add my "publication" to their own RSS reader, and the ideal RSS reader would produce a master feed that deduplicates (but highlights) content that multiple people (to whom I subscribe) have republished (I was on Twitter yesterday and got annoyed when it showed me the same damn video like 15 times retweeted by various people).
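The deduplication step, at least, is easy to sketch. Assuming each subscribed feed has already been parsed into (title, link) entries (the subscriber names and entries below are made up), a master feed that collapses duplicates while recording every sharer might look like:

```python
# Hypothetical parsed feeds: subscriber -> list of (title, link) entries.
feeds = {
    "alice": [("Good article", "https://example.com/a"),
              ("Funny video", "https://example.com/v")],
    "bob":   [("Funny video", "https://example.com/v"),
              ("Deep essay", "https://example.com/e")],
}

def master_feed(feeds):
    """Merge feeds, deduplicating by link but remembering every sharer."""
    merged = {}  # link -> {"title": ..., "shared_by": [...]}
    for subscriber, entries in feeds.items():
        for title, link in entries:
            item = merged.setdefault(link, {"title": title, "shared_by": []})
            item["shared_by"].append(subscriber)
    return merged

for link, item in master_feed(feeds).items():
    sharers = ", ".join(item["shared_by"])
    print(f"{item['title']} ({link}) [shared by: {sharers}]")
```

Here the video appears once in the merged feed, highlighted as shared by both subscribers, instead of once per retweet.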