All of sayan's Comments + Replies

Enjoyed reading this. Looking forward to the next posts in the sequence.

Are Dharma traditions that posit 'innate moral perfection of everyone by default' reasoning from the just world fallacy?

2 · Gordon Seidoh Worley · 4y
What Dharma traditions in particular do you have in mind? I can't think of one I would describe as saying everyone has innate "moral" perfection, unless you twist the word "moral" around so much that its use is confusing at best.
3 · Matt Goldenberg · 4y
I wonder if there's a game theoretic and evolutionary argument that could be made here about cooperation being the sane default in the absence of other priors.

Can we have a market with qualitatively different (un-interconvertible) forms of money?

1 · a gently pricked vein · 4y
I'm interested in this. The problem is that if people consider the value provided by the different currencies at all fungible, side markets will pop up that allow their exchange.

An idea I haven't thought about enough (mainly because I lack expertise) is to mark a token as Contaminated if its history indicates that it has passed through "illegal" channels, i.e. has benefited someone in an exchange not considered a true exchange of value, so that purists can refuse to accept those. Purist communities, if large, would allow such non-contaminated tokens to remain stable.

Maybe a better question to ask is "do we have utility functions that are partial orders and thus would benefit from many isolated markets?", because if so, you wouldn't have to worry about enforcing anything: many different currencies would automatically come into existence and be stable. Of course, more generally, you wouldn't quite have complete isolation, but rather different valuations of goods in different currencies, without "true" fungibility. I think it is quite possible that our preference orderings are in fact partial, and the current one-currency valuation of everything might be improved.
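The contamination idea above can be sketched in a few lines. This is a toy illustration only, assuming everything here (the `Token` class, the `ILLEGAL_CHANNELS` set, the channel names) is made up; a real system would need a shared, tamper-resistant transaction history.

```python
# Toy sketch of "Contaminated" tokens: a purist refuses any token whose
# history shows it passed through an "illegal" channel (e.g. a side market
# converting it to ordinary money). All names here are hypothetical.
from dataclasses import dataclass, field

ILLEGAL_CHANNELS = {"side_market", "currency_exchange"}

@dataclass
class Token:
    # channels the token has passed through, oldest first
    history: list = field(default_factory=list)

    def transfer(self, channel: str) -> None:
        self.history.append(channel)

def is_contaminated(token: Token) -> bool:
    # Contamination is permanent: one illegal hop taints the whole history.
    return any(ch in ILLEGAL_CHANNELS for ch in token.history)

t = Token()
t.transfer("care_work")          # a "true" exchange of value
print(is_contaminated(t))        # False
t.transfer("currency_exchange")  # a side market cashes it out
print(is_contaminated(t))        # True
```

Note the design choice that contamination never washes out: this is what lets a large purist community keep its tokens stable, at the cost of shrinking the usable token supply over time.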

How would signalling/countersignalling work in a post-scarcity economy?

Can you define a post-scarcity economy in terms of what you anticipate the world to look like?

What are some effective ways to reset the hedonic baseline?


As far as I understand, this post decomposes 'impact' into value impact and objective impact. VI depends on some agent's ability to reach arbitrary value-driven goals, while OI depends on any agent's ability to reach goals in general.

I'm not sure if there exists a robust distinction between the two - the post doesn't discuss any general demarcation tool.

Maybe I'm wrong, but I think the most important point to note here is that 'objectiveness' of an impact is defined not to be about the 'objective state of the world' - rather about how 'general to all agents' an impact is.

VI depends on the ability to pursue one kind of goal in particular, like human values. OI depends on goals in general. If I understand correctly, this is wondering whether there are some impacts that count for ~50% of all agents, or 10%, or .01% - where do we draw the line? It seems to me that any natural impact (one that doesn't involve something crazy like "if the goal encoding starts with '0', shut them off; otherwise, leave them alone") either affects a very low percentage of agents or a very high percentage of agents. So, I'm not going to draw an exact line, but I think it should be intuitively obvious most of the time. This is exactly it.

I think this post is broadly making two claims -

  1. Impactful things fundamentally feel different.

  2. A good Impact Measure should be designed so that it strongly safeguards against almost any imperfect objective.

It is also (maybe implicitly) claiming that the three properties mentioned completely specify a good impact measure.

I am looking forward to reading the rest of the sequence with arguments supporting these claims.

I don't know that I'd claim that these completely specify a good impact measure, but I'd imagine most impact measures satisfying these properties are good (i.e. natural curves fit to those three points end up pretty good, I think).

What gadgets have improved your productivity?

For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!

I find having a skateboard is a compact way to shave minutes off of the sections of my commute where I would otherwise have to walk. It turns a 15 minute walk to the bus stop into a 5 minute ride, which adds up in the long run.
* Multiple large monitors, for programming.
* Waterproof paper in the shower, for collecting thoughts and making a morning todo list.
* Email filters and Priority Inbox, to prevent spurious interruptions while keeping enough trust that urgent things will generate notifications, so that I don't feel compelled to check too often.
* USB batteries for recharging phones - one to carry around, one at each charging spot for quick-swapping.

I have been thinking about these questions a lot without actually getting anywhere.

What is the nature of non-dual epistemology? What does it mean to 'reason' from the Intentional Stance [], from inside of an agent?

Okay, natural catastrophes might not be a good example. (Edited)

Helping out with disaster/emergency relief efforts might get people out of their comfort zone.

If there is no self, what are we going to upload to the cloud?

[This comment is no longer endorsed by its author]
The brain, I guess.

It is difficult to hear the distinctions of, and articulate, an accent that is not one's native one, because of the brain's predictive processing. Our brains constantly assimilate incoming signals to the closely related ones they already know.

Is there a good mapping between specification gaming and wireheading on the one hand, and the different types of Goodhart's law on the other?

Seems like this has been done already. []

Extremely low probability events are great as intuition pumps, but terrible for real-world decision-making.

Speculation: people never use pro-con lists to actually make decisions; rather, they use them as rationalizations to convince others.

The internet might be lacking multiple kinds of curation and organization tools. How can we improve?

Pathological examples of math are analogous to adversarial examples in ML. Or are they?

What are the possible failure modes of AI-aligned Humans? What are the possible misalignment scenarios? I can think of malevolent uses of AI tech to enforce hegemony, and so on. What else?

What's a good way to force oneself outside one's comfort zone, where most expectations and intuitions routinely fail?

This might become useful for building antifragility around expectation management.

Quick example - living without money in a foreign nation.

Is it possible to design a personal or group retreat for this?

What kills you doesn't make you stronger. You want to get out of your comfort zone, not out of your survival zone.

Would CIRL with many human agents realistically model our world?

What does AI alignment mean with respect to many humans with different goals? Are we implicitly assuming (with all our current agendas) that the final model of AGI is to be corrigible with one human instructor?

How do we synthesize goals of so many human agents into one utility function? Are we assuming solving alignment with one supervisor is easier? Wouldn't having many supervisors restrict the space meaningfully?
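One naive way to make the synthesis question concrete is Harsanyi-style aggregation: build a single utility function as a weighted sum of the individual ones. This is purely illustrative (the utilities, weights, and outcome attributes below are all made up, and choosing the weights is itself the hard political problem the question points at):

```python
# Illustrative sketch: synthesizing many supervisors' utility functions
# into one via a weighted sum (Harsanyi-style aggregation).

def aggregate(utilities, weights):
    """Return a single utility function: the weighted sum of the inputs."""
    def u(outcome):
        return sum(w * ui(outcome) for w, ui in zip(weights, utilities))
    return u

# Two toy supervisors who value different attributes of an outcome.
u_alice = lambda o: o["equality"]
u_bob = lambda o: o["growth"]

u_group = aggregate([u_alice, u_bob], [0.5, 0.5])
print(u_group({"equality": 1.0, "growth": 0.0}))  # 0.5
```

Even this toy version shows how many supervisors restrict the space: an outcome must score reasonably on every weighted term to rank highly, whereas a single supervisor can be satisfied along one axis alone.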

Where is the paradigm for Effective Activism? On first thought, it doesn't even seem difficult to do better than the status quo.

How specifically would you do better than the status quo? I could easily dismiss some charities for causes I don't care about, or where I think they do more harm than good. Now there are still many charities left whose cause I approve of, and that seem to me like they could help. How do I choose among these?

They publish some reports, but are the numbers there the important ones, or just the ones that are easiest to calculate? For example, I don't care if your "administrative overhead" is 40%, if that allows you to spend the remaining 60% ten times more effectively than a comparable charity with smaller overhead. Unfortunately, the administrative overhead will most likely be included in the report, with two decimal places; but the achieved results will be either something nebulous (e.g. "we make the world a better place" or "we help kids become smarter"), or they will describe the costs, not the outcomes (e.g. "we spent 10 million to save the rainforest" or "we spent 5 million to teach kids the importance of critical thinking").

Now, I don't have the time and skills to become a full-time charity researcher. So if I want to donate well, I need someone who does the research for me, and whose integrity and sanity I can trust.

Quick question. Given that now the Conservative Agency paper is available, what am I missing if I just read the paper and not this post? It seems easier to me to follow the notations of the paper. Is there any significant difference between the formalization of this post and the paper?

Read the paper [] for now, and read the upcoming Reframing Impact sequence later this year. There is a significant difference, but this post seems bad at communicating the key paradigm shifts I originally envisioned communicating (hence the sequence).

I read books on multiple devices - GNU/Linux, Android, and Kindle. Last time I checked, Calibre was too feature-rich and heavy, but lacked a simple getting-out-of-my-way workflow for syncing my reading between devices. Is there a better solution now?

Calibre is great for me when syncing epub/mobi books from my computer to my Kindle Paperwhite (I don't think I've ever encountered a major problem in this particular process, only with very old books that it has trouble converting).

Besides that, I use it to convert epub/mobi books into HTML, which is how I like to read on my computer (using the browser, Chromium or Firefox, which means I don't use Calibre's screen reader; this way I can make easy modifications with CSS, and inject CSS to highlight the most important parts with the several different encodings allowed by the extension I talked about in my answer).

It's too feature-rich and heavy and it gets in your way, but it solves many simple problems if you use only some of its features. This is the open-source repository [] and I recommend always being up-to-date with the latest release (that may solve some of your problems). Besides that, I really recommend Calibre; it's an essential tool for my purposes.

I love how you emphasized learning Unix tools. I use other things mentioned here except tmux. Would you be willing to share your tmux workflow in more detail with keybindings?

Here's my .tmux.conf; it mostly covers the in-tmux things like split/tab management (e.g. I open & switch to new tabs with alt-1/2/... instead of the default C-b 1/2/... This mirrors the browser behavior and is one less keypress): []

Tmux allows neat tricks like sending a window between sessions or sending keypresses to a session. E.g. I have a script called "portal" that opens a new window in a target tmux session (the one we're opening a "portal" to) with the current directory, and brings that window to the foreground. Another benefit of tmux is that all of my editor sessions are independent of Xorg and so can survive a restart of X, or be reused from a different X session (e.g. when testing a WM).

Here's sort of a teaser of which tmux / urxvt sessions I have bound in sxhkd (some are still bound from the dwm config): [] The launchers themselves (e.g. "ship", "tower", "girl") are unfortunately not online at this point. What these files do is open (with few exceptions) a floating window with the named tmux session and bring it to the front, or run the args in a new tmux window of the target session. These are the different-purpose knowledge-management sessions I was referring to.

Among those are 2 firefox sessions, which is another thing perhaps worth mentioning. I run 8 thunderbird sessions with RSS feeds and ~20 firefox sessions. Two of those found in the sxhkd config are floating, for quick anonymous / non-anonymous programming-related lookups. I find separating the browser sessions very valuable for honing the suggestion streams that google / youtube / ... throw at us.
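The alt-number binding described above (switching tabs with a single keypress, browser-style, instead of the C-b prefix) can be sketched like this. This is a hypothetical minimal .tmux.conf fragment, not the author's linked config:

```
# Hypothetical ~/.tmux.conf sketch of browser-style window switching.
# -n means "no prefix": Alt+N alone switches windows, saving a keypress.
bind-key -n M-1 select-window -t 1
bind-key -n M-2 select-window -t 2
bind-key -n M-3 select-window -t 3

# Open a new window in the current pane's directory with Alt+Enter.
bind-key -n M-Enter new-window -c "#{pane_current_path}"
```

The `-n` flag is what makes these bindings prefix-free; the trade-off is that Alt+number is then unavailable to programs running inside tmux.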

Just finished reading Yuval Noah Harari's new book 21 Lessons for the 21st Century. Primary reaction: even if you already know all the things being presented in the book, it is worth a read just because of the clarity into the discussion the book offers.

Without saying anything about the content, I don't find this comment valuable.

This is an amazingly comprehensive and useful paper. I wish it were longer, with little summaries of some of the papers it references, rather than just citing them.

I also wish somebody would create a video version of it in the spirit of CGP Grey's video on the classic Bostrom paper, so that I can just redirect people to the video instead of sub-optimally trying to explain all these things myself.

Shared the draft with you. Please let me know your feedback.

Shared the draft with you. Feel free to comment and question.

I found it by typing in this url: The other way people it is shared with can get to it is via the url of the page itself, as noted here [].

I have started to write a series of rigorous introductory blogposts on Reinforcement Learning for people with no background in it. This is totally experimental and I would love to have some feedback on my draft. Please let me know if anyone is interested.

Also interested - programmer with very limited knowledge of RL.
Interested! I'm a programmer who has had no exposure to ML (yet).
Also interested!
I'm interested! I'll be reading from the perspective of, "Technical person, RL was talked about for 2 days at the end of a class, but I don't really know how anything works."