Isn't -1 inversion?
I think for quaternions, $-1$ corresponds both to inversion and a 180 degree rotation.
When using quaternions to describe rotations in 3D space however, one can still represent rotations with unit-quaternions $q = \cos(\theta/2) + \sin(\theta/2)\,\mathbf{n}$, where $\mathbf{n}$ is a 'unit vector' distributed along the $i, j, k$ directions and indicates the rotation axis, and $\theta$ is the 3D rotation angle. If one wishes to rotate any orientation $\mathbf{v}$ (same type of object as $\mathbf{n}$) by $q$, the result is $q\mathbf{v}q^{-1}$. Here, $q = -1$ corresponds to $\theta = 2\pi$ and is thus a full 360° turn.
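For concreteness, a minimal numpy sketch of this rule (my own illustration): it implements the Hamilton product and the $q\mathbf{v}q^{-1}$ rotation, and shows that $q = -1$ (i.e. $\theta = 2\pi$) leaves every vector unchanged.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate 3D vector v by unit quaternion q via q v q^-1."""
    qv = np.array([0.0, *v])                  # embed v as a pure quaternion
    q_inv = q * np.array([1.0, -1, -1, -1])   # conjugate = inverse for unit q
    return qmul(qmul(q, qv), q_inv)[1:]

theta = 2 * np.pi                             # a full 360 degree turn
n = np.array([0.0, 0.0, 1.0])                 # rotation axis
q = np.concatenate(([np.cos(theta/2)], np.sin(theta/2) * n))  # q == (-1, 0, 0, 0)

v = np.array([1.0, 2.0, 3.0])
print(rotate(q, v))   # ~ [1, 2, 3]: q = -1 leaves every vector unchanged
```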
I have tried to read u...
Thank you.
I really like your framing of home - it seems very close to how John Vervaeke describes it, but somehow your description made something click for me.
I wish to be annealed by this process.
I'd like to share a similar framing of a different concept: beauty. I struggled with what I should call beautiful for a while, as there seemed to be both some objectivity to it, but also loads of seemingly arbitrary subjectiveness which just didn't let me feel comfortable with feeling something to be beautiful. All the criteria I could use to call something b...
I will have to try this, thanks for pointing to a mistake I have made in my previous attempts at scheduling tasks!
One aspect which I have the feeling is also important to you (and is important to me) is that the system also has some beauty to it. I guess this is mostly because using the system should feel more rewarding than the alternative of "happen to forget about it" so that it can become a habit.
I recently read (/listened to) the shard theory of human values and I think that its model of how people decide on actions and especially how hyperbolic disco...
Regarding the transporter:
Why does "the copy is the same consciousness" imply that killing it is okay?
From these theories of consciousness, I do not see why the following would be ruled out:
Oh..., right - it seems I actually drew B instead of C2. Here is the corrected C2 diagram:
Okay, I think I managed to make at least the case C1-C2 intuitive with a Venn-type drawing:
(edit: originally did not use spades for C1)
The left half is C1, the right one is C2. In C1 we actually exclude both some winning 'worlds' and some losing worlds, while C2 only excludes losing worlds.
However, due to symmetry reasons that I find hard to describe in words, but which are obvious in the diagrams, C1 is clearly advantageous and has a much better winning/losing ratio.
(note that the 'true' Venn diagram would need to be higher dimensional so that one c...
Thanks for the attempt at giving an intuition!
I am not sure I follow your reasoning:
Maybe the intuition here is a little clearer, since we can see that winning hands that contain an ace of spades are all reported by C1 but some are not reported by C2, while all losing hands that contain an ace of spades are reported by both C1 and C2 (since there's only one ace for C2 to choose from)
If I am not mistaken, this would at first only say that "in the situations where I have the ace of spades, then being told C1 implies higher chances than being told C2"? Ea...
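To check the numbers here, a minimal Monte Carlo sketch; note that I am assuming the classic two-card version of the puzzle (a hand of two cards 'wins' iff both are aces, C1 = the hand contains the ace of spades, C2 = the hand contains at least one ace):

```python
import random

# Build a 52-card deck; aces are rank 'A' in four suits.
deck = [(rank, suit) for rank in "A23456789TJQK" for suit in "SHDC"]
ACE_OF_SPADES = ("A", "S")

def is_ace(card):
    return card[0] == "A"

wins_c1 = trials_c1 = 0   # C1: hand contains the ace of spades
wins_c2 = trials_c2 = 0   # C2: hand contains at least one ace

for _ in range(1_000_000):
    hand = random.sample(deck, 2)
    win = all(is_ace(c) for c in hand)        # 'winning' hand: two aces
    if ACE_OF_SPADES in hand:
        trials_c1 += 1
        wins_c1 += win
    if any(is_ace(c) for c in hand):
        trials_c2 += 1
        wins_c2 += win

print("P(win | C1) ~", wins_c1 / trials_c1)   # ~ 3/51  = 0.059
print("P(win | C2) ~", wins_c2 / trials_c2)   # ~ 6/198 = 0.030
```

Under these assumed rules, being told C1 roughly doubles the winning probability compared to C2, matching the diagrams.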
Though it's unclear to me if confidence intervals suggest this notation already. If you had less chance of moving your interval, then it would already be a smaller interval, right?
Counterexample: if I estimate the size of a tree, I might come up with CI 80 % [5 m, 6 m] by eyeballing it and expect that some friend will do a more precise measurement tomorrow. In that case, CI 80 % [5 m, 6 m] still seems fine even though I expect the estimate to narrow down soon.
If the tree is instead from some medieval painting, my CI 80 % [5 m, 6 m] could still be true while I ...
I like the idea of your proposal -- communicating how solidified one's credences are should be helpful for quickly communicating on new topics (although I could imagine that one has to be quite good at dealing with probabilities for this to actually provide extra information).
Your particular proposal "CR [ <probability of change>, <min size of change>, <time spent on question> ]" is unintuitive to me:
After reading your sequence today, there is one additional hypothesis which came to my mind, which I would like to make the case for (note that my knowledge about ML is very limited; there is a good chance that I am only confused):
Claim: Noise favours modular solutions compared to non-modular ones.
What makes me think that? You mention in Ten experiments that "We have some theories that predict modular solutions for tasks to be on average broader in the loss function landscape than non-modular solutions" and propose to experimentally test this.
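To make the 'broader in the loss landscape' notion concrete, here is a minimal PyTorch sketch (my own illustration, not from the post): it estimates broadness as the average loss increase under Gaussian weight perturbations, so a broad/flat solution degrades less for the same noise scale. A modular and a non-modular network trained to the same loss could then be compared by this number.

```python
import torch

def broadness(model, loss_fn, x, y, sigma=0.01, n_samples=20):
    """Estimate how broad the current loss basin is: the average loss
    increase when all weights are perturbed by Gaussian noise of scale
    sigma. Smaller values indicate a broader (flatter) solution."""
    params = [p for p in model.parameters() if p.requires_grad]
    with torch.no_grad():
        base_loss = loss_fn(model(x), y).item()
        increases = []
        for _ in range(n_samples):
            noise = [torch.randn_like(p) * sigma for p in params]
            for p, eps in zip(params, noise):
                p.add_(eps)                  # perturb weights
            increases.append(loss_fn(model(x), y).item() - base_loss)
            for p, eps in zip(params, noise):
                p.sub_(eps)                  # restore weights exactly
        return sum(increases) / n_samples
```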
If this...
I just found the link for their summary on job-satisfaction in particular: https://80000hours.org/career-guide/job-satisfaction/
A re-interpretation that makes me perceive the lines as equal length without any problems is to label them as "some kind of magnets that hover above each other"
As this is somewhat realistic, while letting the lines integrate well with the rest of the railroad (maybe this is an artist's impression of Maglev trains?), this does not break immersion for me.
Trends of different quantities:
Generally, I agree with your points :)
I recently stumbled upon this paper "The World’s Technological Capacity to Store, Communicate, and Compute Information", which has some neat overviews regarding data storage, broadcasting and compute trends:
From a quick look at the figures, my impression is that compute and storage look very much like 'just' exponential, while there is a super-exponential figure (Fig. 4) for the total communication bandwidth (1986-2007)[1]
General
...I can see two mechanisms that potentially may make it l
This does not quite match your question, but I want to recommend taking a look at https://80000hours.org/ if you don't already know them.
Their focus is on providing resources for impactful altruistic careers, but they still have lots of nice general advice regarding job satisfaction and how to approach the topic of career choice. There also are examples where people describe their experiences with different career paths or the content of the paths themselves and lots more.
Depending on how much you are interested in an altruistic focus for your career, you'll find a larger or smaller portion of their writing relevant, but there is a lot of good stuff in any case :)
Compute: A very simple attempt at estimating (non-biological) computing power:
If we just naively combine these two, we get two multiplying exponential growths for the total semiconductor compu...
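To spell out the 'two multiplying exponential growths' (my own notation, with hypothetical growth rates $a$ and $b$ for the two trends):

$$C_{\text{total}}(t) = C_0 e^{at} \cdot N_0 e^{bt} = C_0 N_0 \, e^{(a+b)t}$$

So the combined trend is still a single exponential, just with the rates added; genuinely super-exponential growth would need the rates themselves to grow over time.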
If one believes the orthogonality thesis (and we only need a very weak version of it), just knowing that there is an AGI trying to improve the world is not enough to predict how exactly it would reason about the more quirky aspects about human character and values. It seems to me that something that could be called "AGI-humans" is quite possible, but a more alien-to-us "total hedonistic utility maximizing AGI" also seems possible.
From how I understood arguments of Eliezer Yudkowsky here, the way that we are selecting for AI models will favour models with c...
One concept which I think is strongly related is common knowledge (if we extend it to not only refer to beliefs but also to norms).
I think that a good part of the difference between the scenarios for answering Alec is captured by the difference between sharing personal advice and knowledge compared to sharing common norms and knowledge. The latter will give Alec a lot of information about how to work together and cooperate with "the typical member of" the AI safety community, which is important information independently of whether it would be best fo...
A proposal that is related in meaning to Liquid Breaks would be
Organic Breaks - I like some of the connotations (organic growth of e.g. trees has a lot of flexibility, while still following simple rules; the method (work:break ratio) can be organically adapted to the user and one's current capacity)
Earned Breaks - would put the focus on the emotional aspect of coupling the two lengths and would sound non-technical
Great post, thanks! I would not have guessed that brains are this impressive.
Re: algorithms of the brain, I could still imagine that the 'algorithms' we rely on for higher concepts (planning, deciding on moral values, etc.) are more wasteful with regard to resources. But your arguments certainly reshape my expectations for AI.
Some typos/a suggestion:
"The predicted wire energy is 10^−19 W/bit/nm" → J/bit/nm
"[...] a brain-size ANN will cost almost 3kW" → 3kJ
"if we assume only 10% of synapses are that large, the minimal brain synaptic volume is about 10^18 nm...
Interesting comparison!
I intended to write something like the following as a new comment, but you provided a great metaphor:
One aspect that I want to point to is that the jungle can be 'out to get you':
As ads become the norm in public spaces—which comprise a large part of the interaction between an individual and "society at large"—this will affect the trust-level of the people involved. A consequence of this would be that bad and manipulative ads provide a limit on how much individuals can trust that their society is friendly to them and whether behaving ...
Good points!
But what is this supposed to be specifically?
My estimate of what could plausibly be achieved in the near future is my model of "the thing that happens during childhood", maybe with better control by the individual. This would be some phases of heightened learning rate/'priority' for different parts of our mind (fear and recognizing safe/unsafe situations; calibrating and exploring perception and motion; status and norms; sexuality; sense of self and identity; ...). I assume that this 'emotional rejuvenation' could work relatively well 'f...
Thanks for writing!
One trend in longevity-enhanced cultures which I expect to become common, is for "old and worn-out souls" to trigger a kind of mental rejuvenation: Use some type of medicine/gene-activation/... to trigger a softer version of what happens during childhood.
I see several reasons:
Regarding the link: Possibly you did not activate the markdown editor for your profile?
In that case you can create one by marking the link-text: a hover-menu will appear with the option to add a link
Ah, this makes sense thanks!
I wouldn't say that this is what happens with Shoulder Advisors or with the no-self experience of meditation. There are many failure modes of the brain making sense of agency and identity.
This sounds right. Maybe the cases that I am concerned about additionally contain fear responses, and purely having a non-unified or unclear sense of self is more normal/safer than I thought.
It seems I am not as worried about GPT-3 as you, but when listening to the simulated interview with a simulated Elon Musk by Lsusr in the Clearer Thinking podcast episode 073 (starts at minute 102), I was quite concerned.
Parts like
“Dumbledore gave you this?” Hermione gasped.
“Yes… he didn’t really explain why. But then he couldn’t explain the rock either.”
and
But the warm glow of academic pride washed over her face all the same.
are really well-done. It feels as if your writing could have been part of HPMOR.
Thank you!
Thanks for your reply,
I am unsure whether I am correctly understanding your position:
If I am not misunderstanding you, this is a...
My thoughts on this are mostly from introspection. When I try to imagine a shoulder advisor in comparison to my self (note that I do not have shoulder advisors currently), there seem to be some additional properties to my self which a shoulder advisor would not have.
Trying to get at the differences, what comes up is:
Another name proposal: essentialized skill
- sounds impressive (I think. I am not a native speaker)
- the essence-part suggests some kind of deep mastery
- the ized-part suggests that it was once just a regular skill and can in principle be acquired
- the essence-part also suggests that the skill is now part of one's nature and is thus not necessarily under conscious control/supervision
I decided to propose this name before reading the comments, I also like some properties of the other proposals though.
I would guess that there is some additional machinery involved in the ego compared to shoulder advisors (this might not contradict your description of ego as privileged shoulder advisor), as tulpas seem to be quite related to shoulder advisors while being 'closer to ego' in some sense.
Probably this distinction is an important reason why shoulder advisors seem much less problematic from the standpoint of mental health.
After having read a few GPT-3 generated texts, its type of pattern-matching babbling really reminds me of what is here described as the apologist. Maybe the apologist part of the mind just does not do sufficiently model-based thinking to catch mistakes that are obvious to an explicitly model-based way of thinking ("revolutionary")?
It seems very plausible to me that there are both high-level model-based and model-free parts in the human mind. This would also match the seemingly obvious mistakes in the apologist's reasoning and explain why it is effectively impos...
I really liked this question and the breadth of interesting answers.
I want to add a mechanism which might contribute to a weakening of institutions that is related to the 'stronger memes' described by ete (I have not thought this out properly, but I am quite confident that I am pointing at something real even if I might well be mistaken in many of the details):
In myself, and I think this is quite common, while considering my life/career options, I noticed an internal drive (think elephant from elephant in the brain) that made me focus on the highest-...
I am not sure whether my take on this is correct, so I'd be thankful if someone corrects me if I am wrong:
I think that if the goal was only 'predicting' this bit-sequence after knowing the sequence itself, one could just state probability 1 for the known sequence.
In the OP instead, we regard the bit-sequence as stemming from some sequence-generator, of which only this part of the output is known. Here, we only have limited data, so singling out a highly complex model from model-space has to be weighed against the model's fit to the bit-sequence.
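A toy version of this tradeoff (my own numbers): with a description-length prior $P(M) \propto 2^{-K(M)}$, the posterior odds between two models are

$$\frac{P(M_1 \mid x)}{P(M_2 \mid x)} = \frac{2^{-K(M_1)}\, P(x \mid M_1)}{2^{-K(M_2)}\, P(x \mid M_2)}.$$

For a 20-bit sequence, a 'memorizer' model assigns $P(x \mid M_1) = 1$ but must contain the sequence in its own description, so $K(M_1) \approx K(M_2) + 20$ relative to a fair-coin model with $P(x \mid M_2) = 2^{-20}$; the odds come out roughly even, and the perfect fit buys nothing once model complexity is weighed in.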
Thanks for sharing!
There seems to be a typo ('k4rss' compared to 'krss') in the link to your blog-post introducing kindle4rss
I'm glad if this was helpful.
I was also surprised to learn about this formalism at my university, as it wasn't mentioned in either the introductory or the advanced lecture on QM, but turns out to be very helpful for understanding how/when classical mechanics can be a good approximation in a QM universe.
I would need to think about this more to be sure, but from my first read it seems as if your idea can be mapped to decoherence.
The maths you are using looks a bit different than what I am used to, but I am somewhat confident that your uncalibrated experiment is equivalent to a suitably defined decohering quantum channel. The amplitudes that you are calculating would be transition amplitudes from the prepared initial state to the measured final state (Denoting the initial state as |i>, the final state as |f> and the time evolution operator as U, your ...
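A minimal numpy sketch of what I mean by a transition amplitude (a hypothetical two-level example, not the setup from your post): the amplitude from $|i\rangle$ to $|f\rangle$ under evolution $U$ is $\langle f|U|i\rangle$, and the associated probability is its squared magnitude.

```python
import numpy as np

# Hypothetical two-level example: U rotates the qubit basis by angle phi.
phi = np.pi / 3
U = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]], dtype=complex)

i_state = np.array([1, 0], dtype=complex)   # prepared initial state |i>
f_state = np.array([0, 1], dtype=complex)   # measured final state |f>

amplitude = f_state.conj() @ U @ i_state    # <f|U|i>
probability = abs(amplitude) ** 2           # Born rule
print(amplitude, probability)               # sin(phi), sin^2(phi)
```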
I’m just sick of struggling through life. The inefficiencies all around me are staggering and overwhelming.
Your mileage will vary, but a train of thought that helped me change my perspective on this (and I fully endorse this shift) was to realize that my emotions were ill-calibrated:
When I considered the state of the world, my emotional reaction was mostly negative, but when I tried to compare this reaction to a world in which earth is replaced by a lifeless rock I realized that this would clearly not be an improvement. After contemplating this, I d...
Upvoted for thoroughly putting the idea into LessWrong context - I enjoyed being reminded of all the related ideas.
A thought: I am a bit surprised that one can distil a single belief network explaining a whole lot of the variance of beliefs across many people. This makes me take more seriously the idea that a large number of people regularly do have very similar beliefs (down to the argumentative structure). Remembering You Have About Five Words, this surprises me, as I would expect less reliable transmission of beliefs? (It might well be that I am just misunderstanding something)
Now reading the post for the second time, I again find it fascinating – and I think I can pinpoint my confusion more clearly now:
One aspect that sparks confusion when matched against my (mostly introspection + lesswrong-reading generated) model, is the directedness of annealing:
On the one hand, I do not see how the mechanism of free energy creates such a strong directedness as the OP describes with 'aesthetics',
on the other hand, if in my mind I replace the term "high-energy-state" with "currently-active-goal-function(s)"...
I am very much impressed by the exchange in the parent-comments and cannot upvote sufficiently.
With regards to the 'mental motion':
In contrast, the model description you gave made it sound like craving was an active process that one could simply refrain from [...]
As I see it, the perspective of this (sometimes) being an active process makes sense from the global workspace theory perspective: There is a part of one's mind that actually decides on activating craving or not. (Especially if trained through meditation) it is possible to connect ...
I think the meaning behind 'identical particles' is very hard to pin down without directly using mathematical definitions*. The analogy with (secretly numbered) billiard balls gives a strong intuition for non-identical particles. There are also intuitive examples that behave more like identical particles:
For example, the intuition for symbols nicely matches identical symbol/particle behaviour:
If I represent a Helium atom with the symbol "H" and no atom with "_", the balloon's interior might be described by
"H__H_H____H__H__...
That was the first thing I did when I created an account here.
Oops - I didn't notice the 'load more' option for the posts on your profile earlier, I upvoted your post now.
I have not yet written any posts myself and have only skimmed the detailed rules about karma some time ago, but I can easily imagine that the measures against spam can sometimes lead good posts from new accounts to be overlooked.
a) I liked reading your guide: You managed to include many important LW-related concepts while still keeping a hands-on feeling. This makes it a nice reference for people who do not enjoy a more technical/analytical approach. Have you considered creating a link-post on LessWrong?
b) You write:
The good news is that the virtuous cycle here also works: I've found that if one person is consistently unusually virtuous in their conversations and arguments, a little bubble of sanity spreads around that person to everyone in the vicinity over time.
This see...
One could say that there is still a difference between probabilities so high/low that you can use ~1/~0 notation and probable but not THAT probable situations such as 98:2
I don't think that Eliezer would disagree with this.
As I understand it, he generally argues for following the numbers and in this post he tries to bind the reader's emotions to reality: He gives examples that make it emotionally clear that it already is in our interest to follow the numbers ('hot water need not *necessarily* burn you, but you correctly do not count on th...
Thanks for writing this post! Your writing helps me a lot in tying together other's claims and my own experiences into a more coherent model.
As Richard_Kennaway points out in their comment, the goal of insight meditation and 'enlightenment' is not necessarily the same as the goal of rationality (e.g. instrumental rationality/shaping the world's future towards a desired goal seems a part of rationality but not of 'enlightenment' as far as I can tell). I would be very interested in your opinion of how instrumental rationality r...
My original reading was 'there was less arrogance in Einstein's answer than you might think'. After rereading Eliezer's text and the other comments again today, I cannot tell how much arrogance (regarding rationality) we should assume. I think it is worthwhile to compare Einstein not only to a strong Bayesian:
On the one hand, I agree that an impressive-but-still-human Bayesian would probably have accumulated sufficient evidence at the point of having the worked-out theory that a single experimental result against the theory is not enough...
I do not have any experience with tulpas, but my impression of giving one's models the feel of agency is that one should be very careful:
There are many people who perceive the world as being full of ghosts, spirits, demons, ..., while others (and science) do not encounter such entities. I think that perceiving one's mental models themselves as agentic is a large part of this difference (as such models can self-reinforce by triggering strong emotions)
If I model tulpas as a supercharged version of modelling other people (where the tulpa may be experienced ...
Regarding "intuitive moral sense", I would add that one's intuitions can be somewhat shaped by consciously thinking about their implications, noticing inconsistencies and settling on solutions/improvements.
For example, the realisation that I usually care about people more the better I know them made me realize that the only reason I do not care about strangers at all is the fact that I do not know them. As this collided with another intuition that rejects such a reason as arbitrary (I could have easily ended up knowing and thus caring for di...
I do not know about scientific studies (which does not mean much), but at least anecdotally I think the answer is a yes at least for people who are not trained/experienced in making exactly these kinds of decisions.
One thing I have heard anecdotally is that people often significantly increase the price when deciding to build/buy a house/car/vacation because they "are already spending lots of money, so who cares about adding 1% to the price here and there to get neat extras" and thus spend years/months/days of income on things which they would not have boug...