Do you have any thoughts on those UI improvements written down anywhere?
I'll admit to being one of the users who really spams reactions on posts. I like them as a form of highlighting for review and as a form of backchannel communication, and I would be much happier if people used more reacts towards me. So I would be upset by UI modifications that restrict reacts, but I fully support updates that make the UI around viewing reacts cleaner and more useful.
I wrote a longer comment with some feature suggestions. If you have time, it would be nice to hear your thoughts.
Seems like JustDone gives abnormally high AI-content estimates. Plausibly this is to scare you into using their "text humanizer", in which an AI rewrites what you wrote to make it seem less like an AI to an AI... I weep for humanity.
I'd recommend reading and commenting until you have enough karma to submit your post to the LW editor, who can more straightforwardly tell you why your post would or wouldn't be rejected.
PS: I would like to encourage you, as I encourage everyone, to stop focusing on AI capabilities and instead focus on AI interpretability and preference encoding.
Hi!
What sorts of mathematics are you interested in? I'm interested in topology and manifolds which I hope to apply to understanding the semantics of latent spaces within neural networks, especially the residual stream of transformers. I'm also interested in linear algebra for the same reason. I would like to learn more about category theory, because it seems interesting. Finally, I like probability theory and statistics because, like you, I'd like to "correctly understand reality and rationality".
[EDIT: I think the issues stem from different people using reacts in different ways and having different assumptions about their use. I am probably using them in a less common way than other people, but I also find myself believing I am using them in a better way. As such, I am trying to put in the effort to communicate my POV. I would appreciate it if anyone who disagrees with me would do so with a higher-bandwidth signal than just pressing the "Agreement: Downvote" button. Perhaps by using some inline reacts on my comment?]
Haha! Sorry if I'm bothering anyone! ☺♡
I really like reacts and am bothered in essentially the opposite direction from Sodium, in that I think reacts are a very useful backchannel for communication, and I see it as a minor moral failing that most users do not use them more.
I think it's great that many reacts are based on LW ideals for discourse. I don't know exactly how they are managed, but I think they could be even more valuable if there were some team that reviewed how people currently use them and then improved and updated the react descriptions and usage guides based on that. A descriptivist approach.
I think a prescriptive approach would also be good: people should be suggesting concepts for reacts that they think would be valuable for communication, and figuring out how to promote proper use of reacts.
I do agree that relevance may be an issue. I would like it if everyone would drop ~10 reacts while reading a post, but then, if all of them showed in the UI, it would be too noisy to make sense of easily. I think there are a few ways around this:
Another issue (I don't know if this is the case or not): if each react on your comment or post shows up as its own entry in the notifications list, that would be annoying, because it would make it hard to see the more important notifications. So reacts should probably be batched somehow, like karma is. (And really, I think a bunch of improvements could be made to the notifications UI.)
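To make the batching idea concrete, here's a minimal sketch (all names, the event structure, and the windowing scheme are hypothetical; I don't know how the actual LW notification code works) of rolling raw react events up into one summary line per target per time window, the way karma changes get rolled up:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ReactEvent:
    """One raw react someone left (hypothetical structure)."""
    target_id: str   # the post or comment that was reacted to
    react_type: str  # e.g. "Insightful", "Agreed"
    timestamp: float # seconds since some epoch

def batch_reacts(events: list[ReactEvent], window: float = 3600.0) -> list[str]:
    """Collapse raw react events into one summary line per target per
    time window, instead of one notification entry per react."""
    buckets: dict[tuple[str, int], list[ReactEvent]] = defaultdict(list)
    for e in events:
        buckets[(e.target_id, int(e.timestamp // window))].append(e)

    summaries = []
    for (target_id, _), group in sorted(buckets.items()):
        counts: dict[str, int] = defaultdict(int)
        for e in group:
            counts[e.react_type] += 1
        rollup = ", ".join(f"{n}x {kind}" for kind, n in sorted(counts.items()))
        summaries.append(f"New reacts on {target_id}: {rollup}")
    return summaries

# Demo with made-up events:
events = [
    ReactEvent("comment-42", "Insightful", 100.0),
    ReactEvent("comment-42", "Agreed", 200.0),
    ReactEvent("post-7", "Thanks!", 150.0),
]
print(batch_reacts(events))
# ['New reacts on comment-42: 1x Agreed, 1x Insightful',
#  'New reacts on post-7: 1x Thanks!']
```

The point is just: one noisy event stream in, one line per target out. Whatever the real schema is, the rollup itself is cheap.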
All that said, I strongly oppose restricting who can use reacts and how many reacts they can use. Rather, more people should be encouraged to use more reacts more competently, and the UI for viewing/ignoring reacts should be improved.
Thanks : )
Yeah! That was the post that got me to really deeply believe the Orthogonality Thesis. "Naturalistic Awakening" and "A Human's Guide to Words" are my two favourite sequences.
OISs actually have a slightly broader definition than optimization processes, though, for two reasons: (1) OISs have capabilities, not intelligence, and (2) OIS capabilities can have any degree of generality.
(1) The important distinction is that OISs are defined in terms of capabilities, not in terms of intelligence, where capabilities can be broken down into skills, knowledge, and resource access.
This is valuable for breaking skills down into skill domains, which is relevant for risk assessment, whereas intelligence is a kind of generalizable skill that seems to be very poorly defined and is, in my opinion, usually more of a distraction from valuable analysis.
Also, resource access has the same compounding property that knowledge and skill have, which could potentially lead to dangerously compounding capabilities. Making it explicit that "intelligence" is not the only aspect of an OIS with this compounding property seems important.
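As a toy illustration of that compounding point (the decomposition, names, and numbers below are all mine, purely to show the shape of the dynamic): if any component of capability can be reinvested to grow the others, then all three compound geometrically together, not just whatever we'd call "intelligence".

```python
from dataclasses import dataclass

@dataclass
class Capabilities:
    """Toy decomposition of an OIS's capabilities, for illustration only."""
    skills: float
    knowledge: float
    resource_access: float

    def total(self) -> float:
        return self.skills + self.knowledge + self.resource_access

def reinvest(c: Capabilities, rate: float = 0.2) -> Capabilities:
    """One round of reinvestment: a fraction of total capability is spent
    acquiring more of each component, so resource access compounds just
    like knowledge and skill do."""
    gain = rate * c.total() / 3
    return Capabilities(c.skills + gain, c.knowledge + gain, c.resource_access + gain)

c = Capabilities(skills=1.0, knowledge=1.0, resource_access=1.0)
for _ in range(10):
    c = reinvest(c)
print(round(c.total(), 2))  # ~18.58: total grows by 1.2x per round
```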
(2) is less well considered and less important. The example I have for this is a bottle cap. A bottle cap makes it more likely that water will stay in a bottle, but it isn't an optimizer; it is an optimized object. When viewed through the optimizer lens, the bottle cap doesn't want to keep the water in; rather, it was optimized by something that does want to keep the water in, so it is not an optimizer. That is, the cap has extremely fragile capabilities. It keeps the water in when it is screwed on, but if it is unscrewed it has no ability on its own to put itself back on or to keep trying to hold the water in. This must be very nearly the limit of how little it is possible for capabilities to generalize.
However, from the OIS lens, the cap indeed makes water staying in the bottle a more likely outcome, and we can say that in some sense it does want to keep the water in.
I find it a little frustrating how general this makes the definition, and I'm sure other people will as well, but I think it is more useful in this case to cast a very wide net and then try to understand the differences between the kinds of things caught by that net, rather than working with overly limited definitions that fail to reference the objects I am interested in. It also highlights the potential issues with highly optimized, fragile OISs. If we need them to generalize, it is a problem that they won't; and if we are expecting safety because something "isn't actually an optimizer", that may not matter if it is sufficiently well optimized over a sufficiently dangerous domain of capability.
ToW: Response to "6 reasons why alignment-is-hard discourse...". I liked this post. I'd like to write out some of my thoughts on it.
ToW: Exploration of simulacrum levels. It feels to me like the situation should be more of an interconnected web than discrete levels, but I haven't thought it through enough yet.
ToW: "Does demanding rights make problem solvers feel insulted?" informal social commentary exploring my thoughts on the relationship between human rights and the systems we employ to ensure standards of human rights can be met.
I would rather it were done from an experimental and humanitarian perspective than with profit-seeking or ideological goals, although the latter may unfortunately be unavoidable.