Shortform Content [Beta]

Carmex's Shortform

If creating a simulation has acausal trade (superrationality) implications, then so too does the act of creating a child.

niplav's Shortform

The treatment of Penrose's theory of consciousness by rationality(-adjacent) people is quite disgraceful. I've only heard mockery (e.g. on the EleutherAI Discord server, or here in a talk by Joscha Bach), no attempts even to weak-man the theory (let alone understand or refute it!). Nothing like Aaronson's IIT post – just the pure absurdity heuristic (which, if I have to remind you, works rather poorly). Sure, there's Tegmark's paper – but for some reason I highly doubt that anyone who mocks the theory has ever even looked at any of the books or papers written in response.... (read more)

Daniel Kokotajlo's Shortform

Elon Musk is a real-life epic tragic hero, authored by someone trying specifically to impart lessons to EAs/rationalists:

--Young Elon thinks about the future and is worried about x-risk. Decides to devote his life to fighting x-risk. Decides the best way to do this is via developing new technologies, in particular electric vehicles (to fight climate change) and space colonization (to make humanity a multiplanetary species and thus robust to local catastrophes).

--Manages to succeed to a legendary extent; builds two of the world's leading tech giants, each with a... (read more)


I agree with you completely and think this is very important to emphasize. 

I also think the law of equal and opposite advice applies. Most people act too quickly without thinking. EAs tend towards the opposite, where it’s always “more research is needed”. This can also lead to bad outcomes if the results of the status quo are bad. 

I can’t find it, but recently there was a post about the EU policy on AI and the author said something along the lines of “We often want to wait to advise policy until we know what would be good advice. Unfortunately, t... (read more)

Pattern: The second point isn't important; it's an incorrect inference/hypothesis, predicated on the first bit of information being missing. (So it's fixed.)
Pattern: My point was just: how much thinking/researching would have been necessary to avoid the failure? 5 hours? 5 days? 5 years? 50? What does it take to not make a mistake? (Or just that one in particular?) Expanding on what you said: is it a mistake that wouldn't have been solved that way? (Or solved that way easily? Or another way that would have fixed the problem faster?) For research to trivially solve a problem, it has to include someone pointing out that it's a bad idea. (Maybe talking with someone and having them say _ is the fix.)
MikkW's Shortform

I want there to be a way of telling time that is the same no matter where you are. Of course, there's UTC, but it uses the same names as the traditional locality-dependent clocks, so it can only be used unambiguously if you explicitly state you're using UTC, or you're in a context where it's understood that times are always given in UTC (in the military, "Zulu" is a codeword indicating UTC time; I wouldn't mind if people got in the habit of saying "twenty-two thirty Zulu" to refer to times, though I do worry it might seem a little weird to non-familiar peo... (read more)

https://xkcd.com/927/

Telling time by specifying the time zone (3:12pm Pacific Time) or using ISO 8601 is pretty much usable anywhere, and as precise as you need. It's going to be more universal to get competent at time-zone handling than to (try to) convince everyone to use UTC.
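For example, in Python (a sketch of the second approach; `zoneinfo` is in the standard library from 3.9 onward):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

# A timezone-aware timestamp serializes unambiguously via ISO 8601.
now = datetime.now(timezone.utc)
print(now.isoformat())  # e.g. 2021-10-07T22:30:00+00:00

# The same instant expressed in Pacific Time; the offset travels with it,
# so no "which clock?" ambiguity remains.
pacific = now.astimezone(ZoneInfo("America/Los_Angeles"))
print(pacific.isoformat())  # e.g. 2021-10-07T15:30:00-07:00
assert pacific == now  # equality compares instants, not wall-clock labels
```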

Jemist's Shortform

Just realized I'm probably feeling much worse than I ought to on days when I fast, because I've not been taking sodium. I really should have checked this sooner. If you're planning to do long fasts (I do a day, which definitely feels long), take sodium!

Carmex's Shortform

How do I start acausally cooperating with my future self? The spiritualists seem to call this "ascending to 4D energy". How about with my counterfactual selves? The equivalent of "ascending to 5D energy" in spiritualist speak. I need practical and specific instructions.

If you don't know what you expect future / counterfactual versions of you to want, it will be hard to cooperate, so I recommend spending time regularly reflecting on what they might want, especially in relation to things that you have done recently. Reflect on what actions you have taken recently (consider both the most trivial and the most seemingly important), and ask yourself how future and counterfactual versions of you will react to finding out that (past) you had done that. If you don't get a gut feeling that what you did was bad, test it out by trying ... (read more)

Xylitol's Shortform

How long will it take until high-fidelity, AI-generated porn becomes an effective substitute for person-generated porn?

Here are some important factors: Is it ethical? Is it legal? Does the output look genuine? Is it cost-effective?

Possible benefits:

  • More Privacy. If significant markets still exist for porn images, the images taken of porn actors will be used for data rather than as-is, which means that their identity can be protected from the consumer.
  • More Boutique Offerings. If massive volumes of fairly derivative AI-generated pornography can be created
... (read more)
Xylitol: There could be knock-on effects of increasing demand for non-AI-generated analogues, increasing harm.
irarseil: There could also be effects of <b>decreasing<b> demand for non-AI-generated analogues, because of potential consumers of this kind of content being satisfied with these virtual, AI-generated, no-one-was-harmed analogues, hence <b>reducing</b> harm. I can see how sex with real children leads to moral condemnation and to legal punishment. But if no real child is ever involved in this it seems to me that it's an instance of "disgust leads to moral condemnation leads to legal punishment / prohibition of the material".

You can surround text with two asterisks (*) on each side to bold text, at least in the Markdown editor. With the rich-text editor, you can just click on the bold button.
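For example, typing `**decreasing**` produces **decreasing**.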

Carmex's Shortform

Are fantasy worlds possible? I don't mean physically possible, but whether there exists some "fantasy world" economic/technological steady state that doesn't just devolve back into what our world is today: one where staple crops are dirt cheap, the map of the world is complete, advancement can't be stopped, etc. Basically, what are the environmental conditions necessary to stifle development and maintain scarcity of modern comforts? I think this is a Hard Problem. In fact, my intuition is that fantasy worlds don't just not exist, they don't exis... (read more)


Just take out coal/oil, and a stable technological level seems possible. Also, I'm not sure those stable fantasy worlds really exist in literature; most examples I can think of have (sometimes magical) technological growth or decline.

Tolkien's Middle-earth is very young: a few thousand years. This means no coal, no oil, and no possibility of an industrial revolution. Technology would still slowly progress to an 18th-century level, but I can see it happening slowly enough to make the state of technology we see in LOTR acceptable. On the other hand, magical tech... (read more)

JBlack: If the rules of the world preferentially destroy cultures that develop beyond the "standard fantasy technology level" (whatever that is), then I expect that over time, cultures will very strongly disfavour development beyond that level. I'm pretty sure that this will be a stable equilibrium.

If the rules are sufficiently object-level (such as in a computer game), then technological progress based on exploiting finer-grained underlying rules becomes impossible. You can't work out how to crossbreed better crops if crops never crossbreed in the first place, and likewise for other things.

If intelligence itself past some point is a serious survival risk, then it will be selected against. You may get an equilibrium where the knowledge discovered between generations is (on long-term average) equal to knowledge lost.

... and so on.
Yair Halberstadt: But as civilization develops, people might rediscover magic, beginning the whole cycle anew.
ChristianKl's Shortform

"To 'take over the world'? That must be the natural killer application for a secret clone army... All those clone projects were survivalist projects. They all failed, all of them. Because they lacked transparency."

Radical projects need widespread distributed oversight, with peer review and a loyal opposition to test them. They have to be open and testable. Otherwise, you've just got his desperate little closed bubble. And of course that tends to sour very fast.

Bruce Sterling in "The Caryatids"

MikkW's Shortform

An update on my goal of daily writing: there have been a good number of days when I have neither posted a shortform nor worked on an essay. On many (not all) of these days I have been working on an adjacent project which is higher priority for me. Starting from today, those days count towards the daily goal.

I will probably revisit the daily goal at some point; I suspect it's not perfectly tuned for my needs & goals, but that will be a decision for a later time.

romeostevensit's Shortform

It's impossible to come up with a short list of what I truly val- https://imgur.com/a/A26h2JE

I enjoyed this very much.

Ruby: ❤️
Gunnar_Zarncke's Shortform

Team Flow Is a Unique Brain State Associated with Enhanced Information Integration and Interbrain Synchrony

It's also possible to experience 'team flow,' such as when playing music together, competing in a sports team, or perhaps gaming. In such a state, we seem to have an intuitive understanding with others as we jointly complete the task at hand. An international team of neuroscientists now thinks they have uncovered the neural states unique to team flow, and it appears that these differ both from the flow states we experience as individuals, and from the

... (read more)
risedive's Shortform

Which would be better for my level of happiness: living as long as possible, or making the world a better place?

I expect the answer to this question to determine my career. If living as long as possible is more important, then it seems like I should try to make as much money as possible so that I can afford life-extension technology. If making the world a better place is more important, then I should probably aim to work in AI alignment, in which I might have a small but significant impact but (I think) won’t make as much of a difference to my personal lif... (read more)

JBlack: It seems quite likely that living as long as possible will require the world to be a better place. That doesn't mean that it has to be you who helps make the world a better place, but that's more of a coordination problem than a happiness question.

There is also the question of your happiness (or other utility measures) in possible outcomes where you fail to achieve your goal in each case. If expensive life-extension technology isn't available, or you never succeeded in amassing enough wealth to buy it, would you look back and decide that you would have been happier having tried to make the world a better place? Likewise, if the world never gets any better than it is now (and possibly worse) despite your part in trying to improve it, would you have preferred to have tried to amass wealth instead?

This doesn't address the likelihood of these outcomes. It seems much more likely that you'll amass enough wealth to buy expensive life-extension technology than that you'll make a global difference in the state of the world, but I suspect it's likely that you could make a large difference in the state of the world for quite a number of people, depending upon what you do.

“If expensive life-extension technology isn't available, or you never succeeded in amassing enough wealth to buy it, would you look back and decide that you would have been happier having tried to make the world a better place? Likewise, if the world never gets any better than it is now (and possibly worse) despite your part in trying to improve it, would you have preferred to have tried to amass wealth instead?”

Well, I don’t know. That’s what I was trying to figure out by asking this question. For the first question, it’s quite likely, as my wealth wouldn... (read more)

risedive: I think the possibility of living for a googol years vastly outweighs the amount of happiness I'd get directly from any job. And making the world a better place is agreed by everyone I've seen comment on the topic (including Eliezer Yudkowsky in https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny) to be an essential part of happiness, and the window of opportunity for that might well close in a hundred years or so, when AI is able to do everything for us.
Raj Thimmiah's Shortform

I have a hunch that task switching is lowering my productivity by some amount, but I'm not sure how much, because there are multiple possible sources:
-might be coworking around friends who might be talking
-might be reading and task switching to my phone
-might be doing some work, needing to ask a friend a question on Discord, and then getting distracted (even if I really am asking/discussing the thing with them)

How could I test just how bad it is for productivity and optimize it over time?

Alexander's Shortform

I just came across Lenia, which is a modernisation of Conway's Game of Life. There is a video by Neat AI explaining and showcasing Lenia. Pretty cool!

Carmex: What are some automata designs that are robust against chaos? As in, if there's a source of randomness somewhere on the map, are there any automata that can survive/feed off it?

Fascinating question, Carmex. I am interested in the following space configurations:

  1. Conservation: when a lifeform dies, its constituents should not disappear from the system but should dissipate back into the background space.
  2. Chaos: the background space should not be empty. It should have some level of background chaos mimicking our physical environment.

I'd imagine that you'd have to encode a kind of variational free energy minimisation to enable robustness against chaos.

I might play around with the simulation on my local machine when I get the chance.
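As a starting point, here is a minimal sketch of my own (plain Conway rules plus random bit-flips, not Lenia's continuous update): any pattern whose population stays stable under nonzero noise would be a candidate answer to the question above.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(grid: np.ndarray, noise: float = 0.001) -> np.ndarray:
    """One Game-of-Life step on a torus, then flip each cell with
    probability `noise` to mimic a chaotic background."""
    # Count the eight neighbours with periodic (toroidal) boundaries.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    alive = (neighbours == 3) | (grid & (neighbours == 2))
    # Background chaos: XOR with a sparse random mask.
    return alive ^ (rng.random(grid.shape) < noise)

grid = rng.random((64, 64)) < 0.1  # sparse random soup
for _ in range(200):
    grid = step(grid)
print(grid.sum(), "live cells after 200 noisy steps")
```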

Carmex's Shortform

I don't see any discussion in the cryptocurrency space about how Proof of Stake allows 51% of the stake to eventually accumulate into 99% of the stake. Being chosen to receive a coin makes it more likely that you'll be chosen again. This way, the relative distribution of coin ownership will become more and more extreme as time passes.

Proof of Burn-Stake seems to avoid this "issue". Because being chosen to have your coin burned makes it less likely that you'll be chosen again. This way, the relative distribution of coin ownership won't change.

But there'... (read more)
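A minimal urn-style sketch of the claimed dynamic (a toy model only: each block, a minter is drawn with probability proportional to stake and receives a fixed reward; real protocols differ in the details):

```python
import numpy as np

rng = np.random.default_rng(0)

def final_shares(stakes, blocks: int, reward: float = 1.0) -> np.ndarray:
    """Each block, draw a winner with probability proportional to stake
    and credit them `reward` new coins; return the ownership shares."""
    stakes = np.asarray(stakes, dtype=float)
    for _ in range(blocks):
        winner = rng.choice(len(stakes), p=stakes / stakes.sum())
        stakes[winner] += reward
    return stakes / stakes.sum()

# Two stakers starting at 51% / 49% of a 100-coin pool:
for run in range(5):
    print(final_shares([51.0, 49.0], blocks=10_000))
```

In this toy model each staker's expected share is actually constant (the process is a martingale, as in a Pólya urn), but any single run freezes at a random final split, which can be far more lopsided than the starting one when rewards are large relative to the initial pool. How strongly 51% snowballs therefore depends on the parameters.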

There's quite a bit of discussion of this in discussions of various proof-of-stake algorithms and their strengths and weaknesses (or there used to be).

ChristianKl: The way to fight this is to create a fork that invalidates all of the attacker's holdings whenever someone attempts such an attack.
Randomized, Controlled's Shortform

I'm very confused about the situation with Delta in Ontario right now. Looking at covariants.org for Canada as well as other countries, Delta seems to have ~99% market share. But the Public Health Ontario and City of Toronto dashboards both show no Delta.

I'm inclined to think Something Is Wrong with the Ontario/Toronto dashboards.

niplav's Shortform

Let $M$ be the method by which an oracle AI outputs its predictions, and $a$ any answer to a question $q$. Then we'd want it to compute something like $a = \operatorname{argmax}_{a'} P(a' \text{ correct} \mid q, \operatorname{do}(\text{answer never read}))$ so that $P(a \mid \operatorname{do}(a)) = P(a \mid \operatorname{do}(\text{answer never read}))$, right?

If we have a working causal approach, this should prevent self-fulfilling predictions (though obviously not solving embedded agency etc.)

If the possible answers are not very constrained, you'll get a maximally uninformative answer. If they are constrained to a few options, and some of the options are excluded by the no-interference rule, you'll get an arbitrary answer that happens to not be excluded. It's probably more useful to heavily constrain answers and only say anything if no answers were excluded, or add some "anti-regularization" term that rewards answers that are more specific.
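One way to formalize that last suggestion (my notation, not the commenter's): pick, from the constrained answer set $A$, the answer that maximizes counterfactual accuracy plus a specificity bonus,

$$a^* = \operatorname{argmax}_{a \in A} \Big[ \log P\big(a \mid q, \operatorname{do}(\text{answer never read})\big) + \lambda \, \mathrm{Spec}(a) \Big],$$

where $\mathrm{Spec}(a)$ rewards more informative answers and $\lambda > 0$ sets the trade-off between accuracy and informativeness.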

Raj Thimmiah's Shortform

Does anyone have experience with rationality-adjacent hackathons? I'm thinking of hosting one in the Berkeley area aimed at trying to make cool rationality tools. I'm interested in input on what kind of event people would want, and in hearing from anyone with relevant experience and suggestions!
