Sorry for the delayed response, but fyi in this case I don't mean after X days, I mean if you aren't whitelisted in advance the intended recipient never gets the email at all.
Thanks, this nicely encapsulated what I was circling around as I said it. I kept reaching for much more absurd cases, like "Mr. President, you're a liar for not disclosing all the details of the US's latest military hardware when asked."
Even aside from that... I'm a human with finite time and intelligence. I don't actually have the ability to consistently avoid lies of omission even if that were my goal.
Plus, I do think it's relevant that many of our most important social institutions are adversarial. Trials, business decisions, things like that. I expect that there are non-adversarial systems that can outperform these, but today we don't have them, and you need more than a unilateral decision to change such systems.

Professionally, I know a significant amount of information that doesn't belong to me, that I am not at liberty to disclose due to contracts I'm covered under, or that was otherwise told to me in confidence (or found out by me inadvertently when it's someone else's private information). This information colors many other beliefs and expectations. If you ask me a question where the answer depends in part on this non-disclosable information, do I tell you my true belief, knowing you might be able to then deduce the info I wasn't supposed to disclose? Or do I tell you what I would have believed, had I not known that info? Or some third thing? Are any of the available options honest?
For me, one of the key insights for thinking about this kind of situation was reading David Chapman's In the Cells of the Eggplant. In short, even with common knowledge that a group of people believe that Truth and Honesty are important, there will still be (usually less severe) problems of accurate communication (especially of estimations and intentions and memories and beliefs) that at least rhyme with the problems Quakers have in dealing with Actors. Commitment to truth is not always sufficient even for the literal minded.
The nurse/shot example is an interesting one. In an ideal world the thing the nurse would communicate is, "There will be a little bit of pain, but you need to hold still anyway, because that little bit of pain is not important compared to getting the shot, plus you need to get used to little bits of inescapable pain in life, and my time is more valuable than yours." This... is not actually something you can convey to a little kid, at least not quickly, nor would most kids react well if you did. It would plausibly be a disaster for pediatric offices everywhere if you tried. Some kids can learn to hold still while knowing a shot hurts when they're still very young, while some grown adults still can't, so it's not like there's an age at which a nurse who barely knows you can reliably change their approach.
English words (or words in any natural language) genuinely don't have precise enough meanings for use as Parseltongue-analogs. This is tied into A Human's Guide to Words, and also into what Terry Pratchett was pointing at in Hogfather.
You could try to create artificial languages that do, and speak only in them when aiming for truth. Math and code are real-world examples, but there are useful concepts we don't know how to express with them (yet?). Historically every advance in the range of things we can express in math has been very powerful. Raikoth's Kadhamic is a fictional one that would be incredibly useful if it existed.
This is interesting, and I'd like to see more/similar. I think this problem is incredibly important, because Five Words Are Not Enough.
My go-to head-canon example is "You should wear a mask." Starting in 2020, even if we just look at the group that heard and believed this statement, the individual and institutional interpretations developed in some very weird ways, with bizarre whirls and eddies, and yet I still have no better short phrase to express what's intended and no idea how to do better in the future.
One-on-one conversations are easy mode - I can just reply, "I can answer that in a tweet or a library, which are you looking for?" The idea that an LLM can approximate the typical response, or guess at the range and distribution of common responses, seems potentially very valuable.
In the case of the hedonic treadmill, could more inputs possibly have no effect at all? I was wondering this too.
Yes, I seem to remember him writing about it at the time, too. Not big posts, more public comments and short posts, not sure exactly where.
The invention of transformers circa 2017 would be the next time I remember a similar shift.
Sorry, got it. Sometimes it's hard to guess the right level of detail.
First point: The comparison to make is "an area covered with solar panels" vs. "an area covered with a metamaterial film that optically acts like the usual CPV lenses and trackers, focusing the same light on a smaller area." The efficiency benefit exists for the same reasons CPV is more efficient than PV, but without the extra equipment and moving parts. It will only ever make sense if the films can be made cheaply, and right now they can't. The usual additional argument for CPV is that it also makes it viable to use more expensive multi-junction cells, since the cell area needed is much smaller, but we may be moving toward tandem cells within the next decade regardless.

In principle metamaterials can also offer a different option beyond conventional CPV, though this is my own speculation and I don't think anyone is doing it yet even in the lab: separating light by frequency and guiding each range of frequencies to a cell tuned to that range. This would enable much higher conversion efficiencies within each cell, reducing the need for cooling. It would also remove the need for the transparent stacked layers that multi-junction cells require.
Second point: I've talked to the people operating these gasification systems, not just the startups. The numbers are all consistent. Yes, gasification costs energy, and gasifying coal would not make sense (unless you're WW2 Germany). But the process can work with any organic material (including plastics and solvents), not just fossil fuels or dry biomass and the like, as long as the water concentration isn't excessive (I've been told up to ~20%), and consumes a bit less than half the energy content of the fuel. The rest is what you can get from the syngas, most of which is in the hydrogen, and fuel cells are about 60% efficient if you want to use it to make electricity. That's where the 30% number comes from. There are plants doing this using agricultural waste, MSW (municipal solid waste), ASR (auto shredder residue), construction waste, medical waste, hazardous waste, food waste, and other feedstocks that otherwise either have little value or are expensive to dispose of.
For sure, panels and land are cheap, and there's no good reason to increase $/W just to gain efficiency. Except sometimes on rooftops where you want a specific building to gain maximum power from limited space, but you obviously wouldn't use CPV with all the extra equipment in that kind of application.
The point of the metamaterial approach (or, to a lesser degree, even other advanced optics like GRIN lenses) is that you can make thin films that behave, electromagnetically, like non-flat optical elements. Metamaterials can give you properties like anomalously high or low absorption coefficients and refractive indexes, negative refractive indexes, and highly tunable wavelength specificity. In some designs (see: Kymeta, Echodyne, and their sibling companies) you can make electrically reconfigurable flat, unmoving antennas that act like a steerable parabolic dish. The "how" involves sub-wavelength-scale patterning of the surface, and a lot more detail than I can fit in a comment.
And I don't mean IGCC, I agree about that. I have spoken with several dozen companies in the waste gasification space. Their technologies vary quite a bit in design and performance, but at the frontier these companies can extract ~50% of the chemical energy of mixed organic wastes (with up to 20% water content) in the form of syngas (~30% if you have to convert back to electricity and can't use the extra heat), 2-4x what you get from a traditional incinerator+boiler (which are about 10-12% energy recovery).
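A back-of-the-envelope version of the arithmetic above, using the rough figures quoted in these comments (not measured data):

```python
# Rough energy accounting for waste gasification -> electricity,
# using the approximate figures from the discussion above.

feedstock_energy = 1.0        # normalize feedstock chemical energy to 1
syngas_fraction = 0.50        # ~half the energy content survives as syngas
fuel_cell_efficiency = 0.60   # typical fuel cell electrical efficiency

# Fraction of original chemical energy recovered as electricity (~30%)
electric_fraction = feedstock_energy * syngas_fraction * fuel_cell_efficiency

# Traditional incinerator+boiler recovers roughly 10-12%
incinerator_fraction = 0.11

print(f"Syngas energy recovery:      {syngas_fraction:.0%}")
print(f"Electricity via fuel cell:   {electric_fraction:.0%}")
print(f"Advantage over incineration: {electric_fraction / incinerator_fraction:.1f}x")
```

This is just the multiplication made explicit: the ~30% figure is the ~50% syngas recovery times ~60% fuel-cell conversion, and the 2-4x range depends on whether you compare the syngas energy or the final electricity against the incinerator's ~10-12%.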
You should go back and re-examine those arguments. They haven't been true for decades, if ever, and are usually not produced in good faith. It's not even close, unless you exclusively charged your EV from the very least carbon efficient power plants in the world.
If you mean serial hybrids/PHEVs, then I agree, this is something I expected to see a lot more of by now, but instead companies seem to want to jump straight to pure BEVs, which I think is likely to be a worse transition overall.
You may want to be more specific about what you mean by "pollution" and "bigger." Particulates and various fumes other than CO2, per gallon of fuel burned? Sure, makes sense. Anything about aggregate amounts? No. That may be true in some specific geographies, but it's broadly false. I do agree that going electric is often a good idea, as long as your lot isn't too big and you keep up with it. I had a battery electric mower and loved it, except that if I missed a week or two I would drain the batteries several times faster because of the taller grass. Ditto if the grass was at all damp. But it's getting there.