MikkW's Shortform

This is a special post for quick takes by MikkW.

I was going for a walk yesterday night, and when I looked up at the sky, I saw something I had never seen before: a bright orange dot, like a star, but I had never seen a star that bright and so orange before. "No... that can't be"- but it was: I was looking at Mars, that other world I had heard so much about, thought so much about.

I realized only yesterday that I had never before seen Mars with my own two eyes- one of the closest worlds that humans could, with minimal difficulty, one day make into a new home.

It struck me then, in a way I had never felt before, just how far away Mars is. I knew it in an abstract sense, but seeing this little dot in the distance- a dot I knew to be an object larger even than the Moon, yet seeming so small in comparison- made me realize, in my gut, just how far away this other world is. It was like standing on top of a mountain, seeing small buildings on the ground far below, and realizing that those small buildings are actually skyscrapers far away.

And yet, as far away as Mars was that night, it was so bright, so apparent, precisely because it was closer to us than it normally is- most of the time, this world is even further away than it is now.

1MikkW
Correction: here I say that I had never seen Mars before, but that's almost certainly not correct. Mars is usually a tiny dot, nearly indistinguishable from the other stars in the sky (it is slightly more reddish / orange), so what I was seeing was a fairly unusual sight.

In short, I am selling my attention by selling the right to put cards in my Anki deck, starting at the low price of $1 per card.

I will create and add a card (any card that you desire, with the caveat that I can veto any card that seems problematic, and capped to a similar amount of information per card as my usual cards contain) to my Anki deck for $1. After the first ten cards (across all people), the price will rise to $2 per card, and will double every 5 cards from then on. I commit to study the added card(s) like I would any other card in my decks (I will give it a starting interval of 10 days, which is sooner than the 20-day interval I usually use, unless I judge that a shorter interval makes sense. I study Anki every day, and have been clearing my deck at least once every 10 days for the past 5 months, and intend to continue to do so). Since I will be creating the cards myself (unless you know of a high-quality deck that contains cards with the information you desire), an idea for a card is enough even if you don't know how to execute it.
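The price schedule can be sketched as a small function (a hypothetical helper, just to make the doubling rule concrete):

```python
def card_price(n):
    """Price in dollars of the n-th card purchased (1-indexed), per the
    schedule above: cards 1-10 cost $1 each; the price then rises to $2
    and doubles every 5 cards."""
    if n <= 10:
        return 1
    return 2 ** ((n - 11) // 5 + 1)

# Cards 11-15 cost $2, cards 16-20 cost $4, cards 21-25 cost $8, and so on.
print([card_price(n) for n in (1, 10, 11, 15, 16, 21)])  # → [1, 1, 2, 2, 4, 8]
```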

Both question-and-answer and straight text are acceptable forms for cards. Acceptable forms of payment include cash, Venmo, BTC, E... (read more)

5Mati_Roy
That's genius! Can I (or you) create a LessWrong thread inviting others to do the same?
7MikkW
Thanks! I will create a top level post explaining my motivations and inviting others to join.
1MikkW
Done: https://www.lesswrong.com/posts/zg6DAqrHTPkek2GpA/selling-attention-for-money
3MikkW
Proof that I have added cards to my deck (The top 3 cards, the other claimed cards are currently being held in reserve; -"is:new" shows only cards that have been given a date and interval for review)
3Chris_Leong
Interesting offer. If you were someone who regularly commented on decision theory discussions, I would be interested, in order to spread my ideas. But since you aren't, I'd pass.
1MikkW
When I write up the top-level post, I'll mention that you offered this for people who comment on DT discussions, unless you'd prefer I don't
2Chris_Leong
That's fine! (And much appreciated!)
3Mati_Roy
Can I claim cards before choosing their content?
3MikkW
Yes, that is allowed, though I reserve the right to veto any cards that I judge as problematic
3Mati_Roy
if so, I want to claim 7 cards
3MikkW
Messaged
3wunan
I'm curious what cards people have paid to put in your deck so far. Can you share, if the buyers don't mind?
2MikkW
I currently have three cards entered, and the other seven are being held in reserve by the buyer (and have already been paid for). They are: "Jeff's Friendly Snek", "Book:  The Mathematical Theory of Communication by Claude Shannon", and "Maximize Cooperative Information Transfer for {{Learning New Optimization}}", where {{brackets}} indicate cloze deletions; these were all sponsored by jackinthenet, he described his intention as wanting to use me as a vector for propagating memes and maximizing cooperative information transfer (which prompted the card).

Religion isn't about believing false things. Religion is about building bonds between humans, by means including (but not limited to) costly signalling. It happens that a ubiquitous form of costly signalling used by many prominent modern religions is belief taxes (insisting that the ingroup professes a particular, easily disproven belief as a reliable signal of loyalty), but this is not necessary for a religion to successfully build trust and loyalty between members. In particular, costly signalling must be negative-value for an individual (before the second-order benefits from the group dynamic), but need not be negative-value for the group, or for humanity. Indeed, the best costly sacrifices can be positive-value for the group or humanity, while negative-value for the performing individual. (There are some who may argue that positive-value sacrifices have less signalling value than negative-value sacrifices, but I find their logic dubious, and my own observations of religion seem to suggest positive-value sacrifice is abundant in organized religion, albeit intermixed with neutral- and negative-value sacrifice)

The rationalist community is averse to religion because it so often goe... (read more)

2Pattern
That's one way to do things, but I don't think it's necessary. A group which requires (for continued membership) members to exercise, for instance, imposes a cost, but arguably one that should not be (necessarily*) negative-value for the individuals. *Exercise isn't supposed to destroy your body.
3MikkW
If it's not negative value, it's not costly signalling. Groups may very well expect members to do positive-value things, and they do - Mormons are expected to follow strict health guidelines, to the extent that Mormons can recognize other Mormons based on the health of their skin; Jews partake in the Sabbath, which has personal mental benefits. But even though these may seem to be costly sacrifices at first glance, they cannot be considered to be costly signals, since they provide positive value
4Pattern
If a group has standards which provide value, then while it isn't a 'costly signal', it sorts out people who aren't willing to invest effort.* Just because your organization wants to be strong and get things done, doesn't mean it has to spread like cancer*/cocaine**. And something that provides 'positive value' is still a cost. Living under a flat 40% income tax by one government has the same effect as living under 40 governments which each have a flat 1% income tax. You don't have to go straight to 'members of this group must smoke'. (In a different time and place, 'members of this group must not smoke' might have been regarded as an enormous cost, and worked as such!) *bigger isn't necessarily better if you're sacrificing quality for quantity **This might mean that strong and healthy people avoid your group.

If you know someone is rational, honest, and well-read, then you can learn a good bit from the simple fact that they disagree with you.

If you aren't sure someone is rational and honest, their disagreement tells you little.

If you know someone considers you to be rational and honest, the fact that they still disagree with you after hearing what you have to say, tells you something.

But if you don't know that they consider you to be rational and honest, their disagreement tells you nothing.

It's valuable to strive for common knowledge of your and your partners' rationality and honesty, to make the most of your disagreements.

2Dagon
If you know someone is rational, honest, and well-read, then you probably don't know them all that well.   If someone considers you to be rational and honest, and well-read, that indicates they are not.
1MikkW
😅

Does newspeak actually decrease intellectual capacity? (No)

In George Orwell's book 1984, he describes a totalitarian society that, among other initiatives to suppress the population, implements "Newspeak", a heavily simplified version of the English language, designed with the stated intent of limiting the citizens' capacity to think for themselves (thereby ensuring stability for the reigning regime)

In short, the ethos of newspeak can be summarized as: "Minimize vocabulary to minimize range of thought and expression". There are two different, closely related, ideas, both of which the book implies, that are worth separating here.

The first (which I think is to some extent reasonable) is that by removing certain words from the language, which serve as effective handles for pro-democracy, pro-free-speech, pro-market concepts, the regime makes it harder to communicate and verbally think about such ideas (I think in the absence of other techniques used by Orwell's Oceania to suppress independent thought, such subjects can still be meaningfully communicated and pondered, just less easily than with a rich vocabulary provided)

The second idea, which I worry is an incorrect takeaway people m... (read more)

4Viliam
Yes, the important thing is the concepts, not their technical implementation in the language. Like, in Esperanto, you can construct "building for" + "the people who are" + "the opposite of" + "health" = hospital. And the advantage is that people who never heard that specific word can still guess its meaning quite reliably. I think the main disadvantage is that it would exist in parallel, as a lower-status version of the standard English. Which means that less effort would be put into "fixing bugs" or "implementing features", because for people capable of doing so, it would be more profitable to switch to the standard English instead. (Like those software projects that have a free Community version and a paid Professional version, and if you complain about a bug in the free version that is known for years, you are told to deal with it or buy the paid version. In a parallel universe where only the free version exists, the bug would have been fixed there.) How would you get stuff done if people won't join you because you suck at signaling? :( Sometimes you need many people to join you. Sometimes you only need a few specialists, but you still need a large base group to choose from.
1MikkW
As an aside, I think it's worth pointing out that Esperanto's use of the prefix mal- to indicate the opposite of something (akin to Newspeak's un-) is problematic: two words that mean the exact opposite will sound very similar, and in an environment where there's noise, the meaning of a sentence can change drastically based on a few lost bits of information, plus it also slows down communication unnecessarily. In my notes, I once had the idea of a "phonetic inverse": according to simple, well defined rules, each word could be transformed into an opposite word, which sounds as different as possible from the original word, and has the opposite meaning. That rule was intended for an engineered language akin to Sona, so the rules would need to be worked a bit to have something good and similar for English, but I prefer such a system to Esperanto's inversion rules
3Matt Goldenberg
The other problem is that 'opposite' is ill-defined, and requires someone else to know which dimension you're inverting along, as well as what you consider neutral/zero for that dimension.
2MikkW
While this would be an inconvenience for the on-boarding process for a new mode of communication, I actually don't think it's that big of a deal for people who are already used to the dialect (which would probably make up the majority of communication) and have a mutual understanding of what is meant by [inverse(X)] even when X could in principle have more than one inverse.
2Matt Goldenberg
That makes the concept much less useful though. Might as well just have two different words that are unrelated. The point of having the inverse idea is to be able to guess words right?
1MikkW
I'd say the main benefit it provides is making learning easier - instead of learning "foo" means 'good' and "bar" means 'bad', one only needs to learn "foo" = good, and inverse("foo") = bad, which halves the total number of tokens needed to learn a lexicon. One still needs to learn the association between concepts and their canonical inverses, but that information is more easily compressible
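As a toy illustration of how one stored entry can serve two concepts (the swap rules below are entirely made up for the example- a real engineered language would need proper phonetic design):

```python
# Hypothetical symmetric sound-swap table: each letter is paired with a
# "maximally different" counterpart, so a word and its inverse share few
# sounds. Because the table is symmetric, the inverse is an involution.
PAIRS = {'a': 'u', 'u': 'a', 'e': 'o', 'o': 'e',
         'b': 'p', 'p': 'b', 'd': 't', 't': 'd',
         'g': 'k', 'k': 'g', 'f': 'v', 'v': 'f',
         's': 'z', 'z': 's', 'm': 'n', 'n': 'm'}

def phonetic_inverse(word):
    return ''.join(PAIRS.get(c, c) for c in word)

# One lexicon entry covers both a word and its opposite.
lexicon = {'foo': 'good'}

def meaning(word):
    if word in lexicon:
        return lexicon[word]
    inv = phonetic_inverse(word)
    if inv in lexicon:
        return 'opposite of ' + lexicon[inv]
    return None

print(phonetic_inverse('foo'))                 # → vee
print(meaning(phonetic_inverse('foo')))        # → opposite of good
```

Since the pairing is symmetric, `phonetic_inverse(phonetic_inverse(w)) == w`, which is what lets a learner recover both directions from a single memorized token.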

"From AI to Zombies" is a terrible title... when I recommend The Sequences to people, I always feel uncomfortable telling them the name, since the name makes it sound like kooky bull****- in a way that doesn't really indicate what it's about

3Eli Tyre
I agree.  I'm also bothered by the fact that it is leading up to AI alignment and the discussion of Zombies is in the middle! Please change?
2Yoav Ravid
I usually just call it "from A to Z"
2Willa
I think "From AI to Zombies" is supposed to imply "From A to Z", "Everything Under the Sun", etc., but I don't entirely disagree with what you said. Explaining either "Rationality: From AI to Zombies" or "The Sequences" to someone always takes more effort than feels necessary. The title also reminds me of quantum zombies or p-zombies every time I read it...are my eyes glazed over yet? Counterpoint: "The Sequences" sounds a lot more cult-y or religious-text-y. "whispers: I say, you over there, yes you, are you familiar with The Sequences, the ones handed down from the rightful caliph, Yudkowsky himself? We Rationalists and LessWrongians spend most of our time checking whether we have all actually read them, you should read them, have you read them, have you read them twice, have you read them thrice and committed all their lessons to heart?" (dear internet, this is satire. thank you, mumbles in the distance) Suggestion: if there were a very short eli5 post or about page that a genuine 5 year old or 8th grader could read, understand, and get the sense of why The Sequences would actually be valuable to read, this would be a handy resource to share.

Asking people to "taboo [X word]" is bad form, unless you already know that the other person is sufficiently (i.e. very) steeped in LW culture to know what our specific corner of internet culture means by "taboo".

Without context, such a request to taboo a word sounds like you are asking the other person to never use that word, to cleanse it from their vocabulary, to go through the rest of their life with that word permanently off-limits. That's a very high, and quite rude, ask to make of someone. While that's of course not what we mean by "taboo", I have seen requests to taboo made where it's not clear that the other person knows what we mean by taboo, which means it's quite likely the receiving party interpreted the request as being much ruder than was meant.

Instead of saying "Taboo [X word]", say "could you please say what you just said without using [X word]?"- it conveys the same request, without creating the potential to be misunderstood as making a rude and overreaching request.

5Viliam
I see you tabooed "taboo". Indeed, this is the right approach to LW lingo... only, sometimes it expands the words into long descriptions.
2Pattern
Step 1: Play the game taboo. Step 2: Request something like "Can we play a mini-round of taboo with *this word* for 5 minutes?" *[Word X]* Alternatively, 'Could you rephrase that?'/'I looked up what _ means in the dictionary, but I'm still not getting something...'

I'm quite scared by some of the responses I'm seeing to this year's Petrov Day. Yes, it is symbolic. Yes, it is a fun thing we do. But it's not "purely symbolic", it's not "just a game". Taking seriously things that are meant to be serious is important, even if you can't see why they're serious.

As I've said elsewhere, the truly valuable thing a rogue agent destroys by failing to live up to expectations on Petrov day, isn't just whatever has been put at stake for the day's celebrations, but the very valuable chance to build a type of trust that can only be built by playing games with non-trivial outcomes at stake.

Maybe there could be a better job in the future of communicating the essence of what this celebration is intended to achieve, but to my eyes, it was fairly obvious what was going on, and I'm seeing a lot of comments by people (whose other contributions to LW I respect) who seemed to completely fail to see what I thought was obviously the spirit of this exercise

I'm quite baffled by the lack of response to my recent question asking which AI-researching companies are good to invest in (as in, would have good impact, not necessarily be most profitable). It indicates either that A) most LW'ers aren't investing in stocks (which is a stupid thing not to be doing), or B) they are investing in stocks, but aren't trying to think carefully about what impact their actions have on the world, and on their own future happiness (which indicates a massive failure of rationality)

Even putting this aside, the fact that nobody jumped at the chance to potentially shift a non-trivial (for certain definitions of trivial) amount of funding away from bad organizations and towards good ones (I'm investing primarily as a personal financial strategy) seems very worrying to me. While it is (as ChristianKl pointed out) debatable whether the amount of funding I can provide as a single person will make a big difference to a big company, it's bad decision theory to model my actions as only being correlated with myself; and besides, if the funding were redirected, it probably would have gone somewhere without the enormous supply of funds Alphabet has, and could very well have made an important difference, pushing the margins away from failure and towards success.

There's a good chance I may change my mind in the future about this, but currently my response to this information is a substantial shift away from the LW crowd actually being any good at usefully using rationality instrumentally

(For what it's worth, the post made it not at all clear to me that we were talking about a nontrivial amount of funding. I read it as just you thinking a bit through your personal finance allocation. The topic of divesting and impact investing has been analyzed a bunch on LessWrong and the EA Forum, and my current position is mostly that these kinds of differences in investment don't really make much of a difference in total funding allocation, so it doesn't seem worth optimizing much, besides just optimizing for returns and then taking those returns and optimizing those fully for philanthropic impact.)

8Matt Goldenberg
This seems to be the common rationalist position, but it does seem to be at odds with: 1. The common rationalist position to vote on UDT grounds. 2. The common rationalist position to eschew contextualizing because it ruins the commons. I don't see much difference between voting because you want others to also vote the same way, or choosing stocks because you want others to choose stocks the same way. I also think it's pretty orthogonal to talk about telling the truth for long term gains in culture, and only giving money to companies with your values for long term gains in culture.
2mako yass
I don't understand. What do you mean by contextualizing?
2Matt Goldenberg
More here: https://www.lesswrong.com/posts/7cAsBPGh98pGyrhz9/decoupling-vs-contextualising-norms 
6John_Maxwell
For what it's worth, I get frustrated by people not responding to my posts/comments on LW all the time. This post was my attempt at a constructive response to that frustration. I think if LW was a bit livelier I might replace all my social media use with it. I tried to do my part to make it lively by reading and leaving comments a lot for a while, but eventually gave up.
3Viliam
Does LW 2.0 still have the functionality to make polls in comments? (I don't remember seeing any recently.) This seems like the question that could be easily answered by a poll.
2jimrandomh
It doesn't; this feature didn't survive the switchover from old-LW to LW2.0.
2ChristianKl
My point wasn't about the size of the company, but about whether or not the company already has large piles of cash that it doesn't know how to invest. There are companies that want to invest more capital than they have available, and thus have room for funding, and there are companies where that isn't the case. There's a hilarious interview with Peter Thiel and Eric Schmidt where Thiel charges Google with not spending the 50 billion dollars in the bank that it doesn't know what to do with, and Eric Schmidt says "What you discover running these companies is that there are limits that are not cash..." That interview happened back in 2012, but since then Alphabet's cash reserves have more than doubled, despite some stock buybacks. Companies like Tesla or Amazon seem to be willing to invest additional capital to which they have access in a way that Alphabet and Microsoft simply don't. My general model would be that most LW'ers think that the instrumentally rational thing is to invest the money in a low-fee index fund.
3MikkW
Wow, that video makes me really hate Peter Thiel (I don't necessarily disagree with any of the points he makes, but that communication style is really uncool)
2ChristianKl
In most contexts I would also dislike this communication style. In this case I feel that the communication style is necessary to get a straight answer from Eric Schmidt, who would rather avoid the topic.
2Ben Pace
On the contrary, I aspire to the clarity and honesty of Thiel's style. Schmidt seems somewhat unable to speak directly. Of the two of them, Thiel was able to say specifics about how the companies were doing excellently and how they were failing, and Schmidt could say neither.
5MikkW
Thank you for this reply, it motivated me to think deeper about the nature of my reaction to Thiel's statements, and about the conversation between Thiel and Schmidt. I would share my thoughts here, but writing takes time and energy, and I'm not currently in a position to do so.
2Ben Pace
:-)

During today's LW event, I chatted with Ruby and Raemon (separately) about the comparison between human-made photovoltaic systems (i.e. solar panels) and plant-produced chlorophyll. I mentioned that in many ways chlorophyll is inferior to solar panels - consumer grade solar panels operate in the 10% to 20% efficiency range (i.e. for every 100 joules of light energy, 10 - 20 joules are converted into usable energy), while chlorophyll is around 9% efficient, and modern cutting edge solar panels can reach nearly 50% efficiency. Furthermore,... (read more)

3Raemon
Huh, somehow while chatting with you I got the impression that it was the opposite (chlorophyll more effective than solar panels). Might have just misheard.
1MikkW
The big advantage chlorophyll has is that it is much cheaper than photovoltaics, which is why I was saying (in our conversation) we should take inspiration from plants
2Raemon
Gotcha. What's the metric that it's cheaper on?
4mingyuan
Well, money, for one?
2mako yass
It would be interesting to see the efficiency of solar + direct air capture compared to plants. If it wins I will have another thing to yell at hippies (before yelling about there not being enough land area even for solar)
4MikkW
There's plenty of land area for solar. I did a rough calculation once, and my estimate was that it'd take roughly twice the land area of the Benelux to build a solar farm that produced as much energy per annum as the entirety of humanity uses each year (The sun outputs an insane amount of power, and if one steps back to think about it, almost every single joule of energy we've used came indirectly through the sun - often through quite inefficient routes). I didn't take into account day/night cycles, or losses of efficiency due to transmission, but if we assume 4x loss due to nighttime (probably a pessimistic estimate) and 5x loss due to transmission (again, being pessimistic), it still comes out to substantially less than the land we have available to us (About 1/3 the size of the Sahara desert)
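A rough version of that calculation can be reproduced with round numbers (the energy-use, insolation, efficiency, and area figures below are my own ballpark assumptions, not the original estimate's exact inputs):

```python
# Back-of-the-envelope check of the land-area estimate. All inputs are
# assumed round numbers for illustration.
WORLD_ENERGY_J_PER_YEAR = 6e20       # ~600 EJ of primary energy per year (assumed)
PEAK_INSOLATION_W_PER_M2 = 1000      # full midday sun
PANEL_EFFICIENCY = 0.20              # optimistic consumer-grade panel
SECONDS_PER_YEAR = 3.15e7
BENELUX_AREA_KM2 = 75_000            # Belgium + Netherlands + Luxembourg, roughly
SAHARA_AREA_KM2 = 9.2e6

avg_power_needed_w = WORLD_ENERGY_J_PER_YEAR / SECONDS_PER_YEAR   # ~19 TW
output_w_per_m2 = PEAK_INSOLATION_W_PER_M2 * PANEL_EFFICIENCY     # 200 W/m^2
base_area_km2 = avg_power_needed_w / output_w_per_m2 / 1e6        # ~95,000 km^2

# Pessimistic corrections from the comment: 4x for night, 5x for transmission.
corrected_area_km2 = base_area_km2 * 4 * 5

print(round(base_area_km2 / BENELUX_AREA_KM2, 1))      # → 1.3 (Benelux-sized units)
print(round(corrected_area_km2 / SAHARA_AREA_KM2, 2))  # → 0.21 (fraction of Sahara)
```

With these assumptions the uncorrected area comes out around one to two Benelux, and even after the 20x pessimistic correction it stays under a third of the Sahara, consistent with the estimate above.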

Update on my tinkering with using high doses of chocolate as a psychoactive drug:

(Nb: at times I say "caffeine" in this post, in contrast to chocolate, even though chocolate contains caffeine; by this I mean coffee, energy drinks, caffeinated soda, and caffeine pills collectively, all of which were up until recently frequently used by me; recently I haven't been using any sources of caffeine other than chocolate, and even then try to avoid using it on a daily basis)

I still find that consuming high doses of chocolate (usually 3-6 table spoons of dark cocoa ... (read more)

I may have discovered an interesting tool against lethargy and depression [1]: This morning, in place of my usual caffeine pill, I made myself a cup of hot chocolate (using pure cacao powder / baking chocolate from the supermarket), which made me very energetic (much more energetic than usual), which stood in sharp contrast to the past 4 days, which have been marked by lethargy and intense sadness. Let me explain:

Last night, I was reflecting on the fact that one of the main components of chocolate is theobromine, which is very similar in structure to caffe... (read more)

3gilch
I think I want to try this. What was your hot cocoa recipe? Did you just mix it with hot water? Milk? Cream? Salt? No sugar, I gather. How much? Does it taste any better than coffee? I want to get a sense of the dose required.
3MikkW
Just saw this. I used approximately 5 tablespoons of unsweetened cocoa powder, mixed with warm water. No sweetener, no sugar, or anything else. It's bitter, but I do prefer the taste over coffee.
3gilch
I just tried it. I did not enjoy the taste, although it does smell chocolatey. I felt like I had to choke down the second half. If it's going to be bitter, I'd rather it were stronger. Maybe I didn't stir it enough. I think I'll use milk next time. I did find this: https://criobru.com/ apparently people do brew cacao like coffee. They say the "cacao" is similar to cocoa (same plant), but less processed.
3gilch
Milk does take the edge off, even with no added sweeteners. I had no trouble swallowing the whole thing this way.
2gilch
I found this abstract suggesting that theobromine doesn't affect mood or vigilance at reasonable doses. But this one suggests that chocolate does. Subjectively, I feel that my cup of cocoa today might have reduced my usual lethargy and improved my mood a little bit, but not as dramatically as I'd hoped for. I can't be certain this isn't just the placebo effect.
1MikkW
The first linked study tests 100, 200, and 400 mg of theobromine. A rough heuristic based on the toxic doses of the two chemicals suggests that 750 mg, maybe a little more (based on subjective experience), is equivalent to 100 mg of caffeine or a cup of coffee (this is roughly the dose I've been using each day), so I wouldn't expect a particularly strong effect for the first two conditions. The 400 mg condition does surprise me; the sample size of the study is small (n = 24 subjects * 1 trial per condition), so the fact that it failed to find statistical significance shouldn't be too big of an update, though.
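The heuristic can be made explicit: scale the dose by the ratio of the two chemicals' toxic doses. The LD50 figures below are ballpark values I'm assuming for illustration (the result is quite sensitive to which figures you use):

```python
# Assumed ballpark oral LD50 values in mg/kg (rat) - illustration only,
# not the comment author's exact sources.
CAFFEINE_LD50_MG_PER_KG = 190
THEOBROMINE_LD50_MG_PER_KG = 1000

def theobromine_equivalent(caffeine_mg):
    """Theobromine dose representing roughly the same fraction of a
    toxic dose as the given caffeine dose."""
    return caffeine_mg * THEOBROMINE_LD50_MG_PER_KG / CAFFEINE_LD50_MG_PER_KG

# With these figures, 100 mg of caffeine maps to somewhere in the
# ~500-750 mg theobromine range, the same order of magnitude as above.
print(theobromine_equivalent(100))
```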
2gilch
I also noticed that it suppressed my appetite. Again, that's only from trying it once, but it might be useful for weight loss. I'm not sure if that's due to the theobromine, or just due to the fact that cocoa is nutritionally dense.
2Dagon
Can you clarify your Soylent anti-recommendation?  I don't use it as an actual primary nutrition, more as an easy snack for a missed meal, once or twice a week.  I haven't noticed any taste difference recently - my last case was purchased around March, and I pretty much only drink the Chai flavor.  
7MikkW
A] Meal replacements require a large amount of trust in the entity that produces them, since if there are any problems with the nutrition, that will have big impacts on your health. This is less so in your case, where it's not a big part of your nutrition, but in my case, where I ideally use meal replacements as a large portion of my diet, trust is important. B] A few years ago, Rob Rhinehart, the founder and former executive of Soylent, parted ways with the company due to his vision conflicting with the investors' desires (which is never a good sign). I was happy to trust Soylent during the Rhinehart era, since I knew that he relied on his creation for his own sustenance, and seemed generally aligned. During that era, Soylent was very effective at signaling that they really cared about the world in general, and people's nutrition in general. All the material that sent those signals no longer exists, and the implicit signals (e.g. the shape of and branding on the bottles, the new products they are developing [the biggest innovation during the Rhinehart era was caffeinated Soylent; now the main innovations are Bridge and Stacked, products with poor nutritional balance targeted at a naïve general audience, a far cry from the very idea of Complete Food], and the copy on their website) all indicate that the company's main priority is now maximizing profit, without much consideration as to the (perceived) nutritional value of the product. In terms of product, the thing is probably still fine (though I haven't actually looked at the ingredients in the recent new nutritional balance), but in terms of incentives and intentions, the management's intention isn't any better than, say, McDonald's or Jack In The Box. Since A] meal replacements require high trust and B] Soylent is no longer trustworthy: I cannot recommend anyone use Soylent more than a few times a week, but am happy to recommend Huel, Saturo, Sated, and Plenny, which all seem to still be committed to Complete Food.
2Dagon
Thanks for the detail and info!
2Zolmeister
I recommend Ample (lifelong subscriber). It has high quality ingredients (no soy protein), fantastic macro ratios (5/30/65 - Ample K), and an exceptional founder.

In Zvi's most recent Covid-19 post, he puts the probability of a variant escaping mRNA vaccines and causing trouble in the US at most at 10%. I'm not sure I'm so optimistic.

One thing that gives reason to be optimistic, is that we have yet to see any variant that has substantial resistance to the vaccines, which might lead one to think that resistance just isn't something that is likely to come up. However, on the other hand, the virus has had more than a year for more virulent strains to crop up while people were actively sheltering in place, and variants ... (read more)

One thing that is frustrating me right now is that I don't have a good way of outputting ideas while walking. One thing I've tried is talking into voice memos, but it feels awkward to be talking out loud to myself in public, and it's a hassle to transcribe the recordings when I'm done. One idea I don't think I've ever seen is a hand-held keyboard that I could use as I'm walking, and operate mostly by touch, without looking at it; maybe it could provide audio feedback through my headphones.

3DirectedEvolution
If you have bluetooth earbuds, you would just look to most other people like you're having a conversation with somebody on the phone. I don't know if that would alleviate the awkwardness, but I thought it was worth mentioning. I have forgotten that other people can't tell when I'm talking to myself when I have earbuds in.

Epistemic: Intended as a (half-baked) serious proposal

I’ve been thinking about ways to signal truth value in speech- in our modern society, we have no way to readily tell when a person is being 100% honest- we have to trust that a communicator is being honest, or otherwise verify for ourselves if what they are saying is true, and if I want to tell a joke, speak ironically, or communicate things which aren’t-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-l... (read more)

2Dagon
I may be doing just that by replying seriously. If this was intended as a "modest proposal", good on you, but you probably should have included some penalty for being caught, like surgery to remove the truth-register. Humans have been practicing lying for about a million years. We're _VERY_ good at difficult-to-legislate communication and misleading speech that's not unambiguously a lie. Until you can get a simple (simple enough for cheap enforcement) detection of lies, outside enforcement is probably not feasible. And if you CAN detect it, the enforcement isn't necessary. If people really wanted to punish lying, this regime would be unnecessary - just directly punish lying based on context/medium, not caring about tone of voice.
1MikkW
I assure you this is meant seriously. There's plenty of blatant lying out there in the real world, which would be easily detectable by a person with access to reliable sources and their head screwed on straight. I think one important facet of my model of this proposal, which isn't explicitly mentioned in this shortform, is that validating statements is relatively cheap, but expensive enough that it's infeasible for every single person to validate every single sentence they hear. By having a central arbiter of truth that enforces honesty, one person doing the heavy lifting can save a million people from each having to do the same task individually. The point of having a protected register (in the general, not platform-specific, case) is that it would be enforceable even when the audience and platform are happy to accept lies: since the identifiable features of the register would be protected as intellectual property, the organization that owned the IP could prosecute violations of that intellectual property even when there would be no legal basis for enforcing norms of honesty.
2Dagon
Oh, I'd taken that as a fanciful example which didn't need to be taken literally for the main point, which I thought was detecting and prosecuting lies. I don't think that part of your proposal works - "intellectual property" isn't an actual law or single concept, it's an umbrella for trademark, copyright, patent, and a few other regimes, none of which apply to a category of communication as broad as register or accent. You probably _CAN_ trademark a phrase or word, perhaps "This statement is endorsed by TruthDetector(TM)". That has the advantage that it applies in written or spoken media, has no accessibility issues, works for tonal languages, etc. You could then prosecute uses that you don't actually endorse. Endorsing only true statements is left as an exercise, which I suspect is non-trivial on its own.
1MikkW
I suspect there's a difference between what I see in my head when I say "protected register" and the image you receive when you hear it. Hopefully I'll be able to write down a more specific proposal in the future, along with a legal analysis of whether what I envision would actually be enforceable. I'm not a lawyer, but it seems that what I'm thinking of (i.e., the model in my head) shouldn't be dismissed out of hand (although I think you are correct to dismiss what you envision I intended).

Current discourse around AI safety (at least among people who haven't missed the developments) has a pretty dark, pessimistic tone - for good reason, because we're getting closer to technology that could accidentally do massive harm to humanity.

But when people / groups feel pessimistic, it's hard to get good work done - even when that pessimism is grounded in the real-world facts.

I think we need to develop an optimistic, but realistic point of view - acknowledging the difficulty of where we are, but nonetheless being hopeful and full of energy towards finding the solution. AI alignment can be solved; we just actually have to put in the effort to solve it, perhaps a lot faster than we are currently prepared to.

3hobs
Indeed. Good SciFi does both for me - terror of being a passenger in this train wreck and ideas for how heroes can derail the AI commerce train or hack the system to switch tracks for the public transit passenger train. Upgrade and Recursion did that for me this summer.

Somehow I stumbled across this quote from Deuteronomy (from the Torah / Old Testament, which forms the law for religious Jews):

You shall not have in your bag two kinds of weights, large and small. You shall not have in your house two kinds of measures, large and small. You shall have only a full and honest weight; you shall have only a full and honest measure, so that your days may be long in the land that the Lord your God is giving you. For all who do such things, all who act dishonestly, are abhorrent to the Lord your God.

There's of course the bit about... (read more)

This Generative Ink post talks about curating GPT-3's output, creating much better text than it normally would give, turning it from quite often terrible to usually profound and good. I'm testing out doing the same with this post, choosing one of many branches every few dozen words.

For a 4x reduction in speed, I'm getting very nice returns on coherence and brevity. I can actually pretend like I'm not a terrible writer! Selection is a powerful force, but more importantly, continuing a thought in multiple ways forces you to actually make sure you're saying thin... (read more)
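The branch-and-select workflow described above can be sketched in code. This is a minimal illustration, not the author's actual process: `generate` is a hypothetical stand-in for a language-model call, and the curator's choice is simulated by always keeping the final branch (as the author happened to do):

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; here it just
    returns a random canned continuation."""
    fillers = ["and so on. ", "which matters here. ", "for example. "]
    return random.choice(fillers)

def curate(prompt: str, blocks: int = 5, branches: int = 4) -> str:
    """Write `blocks` blocks; for each, generate `branches` candidate
    continuations and keep one. A human curator would pick the best;
    this sketch mimics the post by always taking the last branch."""
    text = prompt
    for _ in range(blocks):
        candidates = [generate(text) for _ in range(branches)]
        text += candidates[-1]  # simulated curation choice
    return text
```

With 4 branches per block, each choice carries log2(4) = 2 bits of human selection pressure, which is where the post's "bits of curation" framing comes from.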

1MikkW
It occurs to me that this is basically Babble & Prune adapted to be a writing method. I like Babble & Prune.
1MikkW
This post was written in 5 blocks, and I wrote 4 (= 2^2) branches for each block, for 5*2 = 10 bits of curation, or 14.5 words per bit of curation. As it happens, I always used the final branch for each block, so the end result of this exercise owes more to revision and consolidation than to selection effects.

URLs (Universal Resource Locators) are universal over space, but they are not universal over time, and this is a problem

4Dagon
According to https://datatracker.ietf.org/doc/html/rfc1738 , they're not intended to be universal, they're actually Uniform Resource Locators.  Expecting them to be immutable or unique can lead to much pain.