Comments

Hi jyby,

I'd be interested in hearing more of your thoughts here. I think you formulated the question and alluded to your current leanings, but I'd like to hear more about what form of authoritarianism you think is required to prevent biodiversity collapse and climate change. Would you be willing to share more?

I agree with the cat-and-mouse metaphor and that we should assume an AI to be hyper-competent.

At the same time, it will be restricted to operating within the constraints of the systems it can influence. My main point, which I admit was poorly made, is that cross-site scripting attacks can be covered with a small investment, which eliminates clever JavaScript as a possible attack vector. I would place lower probability on this being the way an AI escapes.
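To illustrate the kind of small investment I mean (a sketch of mine, not a claim about how any particular service is implemented): escaping untrusted output before it reaches the browser neutralizes injected script. In Python, for instance:

    import html

    def render_comment(untrusted_text):
        # html.escape converts <, >, &, and quotes into HTML entities,
        # so a payload like <script>...</script> is displayed as inert
        # text instead of executing in the visitor's browser.
        return '<div class="comment">' + html.escape(untrusted_text) + '</div>'

    print(render_comment("<script>alert('escaped, not executed')</script>"))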

I would place higher probability on an AI exploiting a memory-buffer error similar to the one you referenced. Furthermore, I would expect it to be in software the AI is running on top of and can easily experiment and iterate on (OS, container, whatever). Whereas browser interactions are limited in iteration by the number of times a user calls the service, one would expect the local software can be manipulated and experimented with constantly, constrained only by the available CPU/IO resources.

That is okay with me; what do you want to discuss?

Disagreements can be resolved!

I see your motivation for writing this up as fundamentally a good one. Ideally, every conversation would end in mutual understanding and closure, if not full agreement.

At the same time, people tend to resent attempts at control, particularly around speech. I think part of living in a free and open society is not attempting to control the way people interact too much.

I hypothesize the best we can do is try and emulate what we see as the ideal behavior and shrug it off when other people don't meet our standards. I try to spend my energy on being a better conversation partner (not to say I accomplish this), instead of trying to make other people better at conversation. If you do the same, and your theory of what people want from a conversation partner accurately models the world, you will have no shortage of people to have engaging discussions with and test your ideas. You will be granted the clarity and closure you seek.

By 'what people want' I don't mean being only super agreeable or flattering. I mean interacting with tact, brevity, respect, receptivity to feedback, attention and other qualities people value. You need to appeal to the other person's interest. Some qualities essential to discussion, like disagreeing, will make certain folks back off, even if you do it in the kindest way possible, but I don't think that's something that can be changed by policy or any other external action. I think it's something they need to solve on their own.

Lots of low-cost ways to prevent this, perhaps already implemented (I don't use GPT-3 or I'd verify). Humans have been doing this for a while, so we have a lot of practice defending against it.

https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html
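That cheat sheet centers on output encoding; another low-cost layer is a Content-Security-Policy header that forbids inline script entirely. A minimal sketch, assuming a Flask-style app purely for illustration:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_csp(response):
        # Only scripts loaded from our own origin may run; inline
        # <script> blocks injected into page content will not execute.
        response.headers["Content-Security-Policy"] = "script-src 'self'"
        return response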

I enjoyed your post.

I am relatively new to LessWrong, but I have also been influenced by Buddhism, and am glad to see it come up here.

The confusion you point at between faith and belief is appreciated; it was an important distinction I did not make for roughly the first 20 years of my life. The foundational axiom I use so as not to fall into the infinite skepticism you mention is the idea that it's okay to try to build, help, learn, and contribute even if you don't understand things completely. I also hold out hope for the universe and life to ultimately make sense, and I try to understand it, but I suspect it will all amount to an absurd Sisyphean act.

What is referred to as faith or trust in the post I refer to as open mindedness. I think it maps without issue to the same concept you are referring to, but I am open to distinctions being drawn.

The other thing I wanted to mention: if anyone found the distinction between belief and faith especially interesting and would like to understand how belief can be detrimental even within religious communities, I recommend the book The Religious Case Against Belief by James P. Carse. It explores this subject in depth and is quite enjoyable.

I think it's fair to say direct democracy would not eliminate lobbying power. And to your final point, I agree that reliable educational resources, or perhaps some other solution, would be needed to make sure whoever is doing the voting is as rational as they can be. It's not sufficient to only give everyone a vote.

Regarding your point around running ads, to make sure I am understanding: do you mean the number of people who actually read the bill will be sufficiently low that a viable strategy to get something passed would be to appeal to the non-reading voters and misinform them?

Thank you for the additional detail. I understand your point about conformity to rules, the way that increases predictability, and how that allows larger groups to coordinate effectively. I think I am getting hung up on the word trust, as I tend to think of it as when I take for granted that someone has good intentions towards me and basic shared values (e.g., they can't think what's best for me is to kill me). I think I am pretty much on board with everything else about the article.

I wonder if another productive way to think about all this would be (continuing to riff on interfaces, and largely restating what you have already said) something like:

  • When people form relationships, they understand how each other will behave, and relationships enable coordination.
  • Humans can handle understanding and coordinating with up to Dunbar's number of people.
  • To work around this limit above roughly 150, we begin grouping people, essentially abstracting them back down to a single person (named, for example, 'Sales' or 'The IT Department').
  • If that group of people follows rules/processes, the group becomes understandable, and we can have a relationship and coordinate with that group.
  • If we all follow shared rules, everyone can understand and coordinate with everyone else without having to know them.

I think I am pretty much agreeing with the point you make about small groups being able to predict each other's behavior, and that being key. Instead of saying one person trusts another person, I'd favor one person understands another person. I think this language is compatible with your examples of sarcasm, lies, and the prisoner's dilemma.
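To make the interface riff concrete, here is a rough sketch; the names are mine and purely illustrative:

    from typing import Protocol

    class Counterpart(Protocol):
        # Anything you can form a working relationship with.
        def respond(self, request: str) -> str: ...

    class Person:
        def respond(self, request: str) -> str:
            return f"personal reply to {request!r}"

    class Department:
        # A group abstracted back down to a single "person": because it
        # follows a shared process, outsiders can coordinate with it
        # through the same interface they use for an individual.
        def __init__(self, name: str, members: list[Person]):
            self.name = name
            self.members = members

        def respond(self, request: str) -> str:
            return f"{self.name} reply to {request!r}, per our process"

    def coordinate(counterpart: Counterpart) -> str:
        # The caller never needs to know whether it is dealing with one
        # person or an abstracted group well past Dunbar's number.
        return counterpart.respond("proposal")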

Anyway, I'll leave it at that. Thank you for the discussion.

I enjoyed your post. Specifically, using programs as an analogy for society seems like something that could generate a few interesting ideas. I have actually done the same and will share one of my own thoughts in this space at the end.

To summarize some of your key points:

  • A person can define their trust of a group by how well they can mentally predict the behavior of that group.
  • We don't seem to have good social interfaces for large groups, perhaps because we cannot simulate large groups.
  • There is a continuum of formality for social rules with very formal written laws on one end and culture on the other. Understanding and enforcement are high in formal rule sets and low in informal ones.
  • Different social interfaces are more stable at different group sizes.
  • Distance between social groups increases friction and reduces trust, accountability, predictability, and consistency.
  • Enforcement of rules implies monopoly of force implies hierarchy implies inequality.

Regarding mental prediction of group behavior as the definition of trust: I am not sure about this one. What about when you reliably predict someone will lie?

Regarding the continuum of formality for social rules, I agree that formality is an important dimension, although I would suggest decoupling enforcement and understanding. Consider people who work at corporations or live in tyrannies: these environments have high enforcement and concentrations of power, but often an opaque ruleset. Karl Popper, in The Open Society and Its Enemies, spends a good amount of time discussing the institutionalization of norms into policies/laws, versus rules that simply give people in a hierarchy discretionary power. You may enjoy it (Chapter 17, Section VII). The overall point, though, is that for rules to be understandable in a meaningful sense (beyond "don't piss off the monarch") they can't delegate discretion to other people.

"Creating interfaces that are consistent means the circumstances of individuals have to be abstracted away."

Is the idea behind this maybe something like everybody in a democracy implements get_vote(issue) -> true|false?
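If so, a minimal sketch of that reading, with hypothetical names:

    from typing import Protocol

    class Voter(Protocol):
        # Every citizen exposes the same one-method interface; everything
        # else about their circumstances is abstracted away.
        def get_vote(self, issue: str) -> bool: ...

    def tally(voters: list[Voter], issue: str) -> bool:
        # Simple majority over identical, interchangeable interfaces.
        yes = sum(1 for v in voters if v.get_vote(issue))
        return yes * 2 > len(voters)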

Is this a problem?


Lastly, to share an idea I am currently trying to research more extensively, which uses the software analogy:

What if someone founded a new political party whose candidates run on the platform that, if elected, they will send every bill voted on to their constituents via an app of sorts and will always vote the way the constituency says, essentially having no opinions of their own? I think of this political party as an adapter that turns a representative democracy into a direct (or liquid, or whatever you implement in the app) democracy.
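In code, the adapter might look something like this sketch (hypothetical names again, reusing the get_vote interface from above):

    class DirectDemocracyAdapter:
        # A representative with no opinions of its own: it exposes the
        # ordinary representative interface, but implements vote() by
        # polling constituents through the app and passing the majority
        # result through.
        def __init__(self, constituents):
            self.constituents = constituents

        def vote(self, bill: str) -> bool:
            ballots = [c.get_vote(bill) for c in self.constituents]
            return sum(ballots) * 2 > len(ballots)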

I think I am troubled by the same situation as you: how to organize a society that uses hierarchy less but still has law, order, and good coordination between people. To me, more direct forms of democracy are the next logical step. Doing the above would erode lobbying power and corruption. I am researching similar concepts for companies as well.

The title is 'A Hierarchy of Abstraction' but the article focuses on levels of intelligence. The article claims that intelligence positively correlates with the ability to handle high-level abstractions, but it does not talk about actual hierarchies of abstraction. For example, I'd expect a hierarchy of abstraction to contain things like: concrete objects, imagined concrete objects, classes of concrete objects, concrete processes, simulated processes, etc. A more accurate title might be 'The Ability to Understand and Use Abstractions in Computer Science as a Measure of Intelligence.'

The article lays out a way of measuring fluid intelligence but does not decouple the crystallized intelligence requirements from the fluid ones. For example, 'Understands Recursion' specifies implementing a specific algorithm recursively as a requirement. There are plenty of people who understand and use recursion regularly but do not know that algorithm (myself included). Let's say you test them and they fail. Did they fail because of their fluid intelligence? Because of a lack of crystallized knowledge related to that specific problem? Because of abstraction-capability requirements in that specific problem, but not recursion in general?
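To make the decoupling concrete: the concept of recursion fits in a few lines, while the article's test bundles it with knowledge of a particular algorithm. A toy example of my own, not the article's:

    def depth(nested) -> int:
        # Recursion as a concept: a function defined in terms of itself,
        # with a base case. No named algorithm required.
        if not isinstance(nested, list):
            return 0
        return 1 + max((depth(item) for item in nested), default=0)

    print(depth([1, [2, [3]]]))  # prints 3

Someone who can write and trace this plausibly 'understands recursion' even if they have never met the specific algorithm the test demands.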

What about recursion as a concept makes it hard for people to understand? I would recommend generalizing the requirements more and exploring possible attributions of failure other than fluid intelligence. If the article examined the components of recursion, it would be more interesting and compelling. What are those components?

Drilling down into the components of any of these tests will reveal a lot of context and crystallized knowledge that the article may be taking for granted (curse-of-knowledge bias). You might be seeing someone struggle with recursion where the problem isn't that they can't understand recursion; it's that they don't have crystallized knowledge of a building block. As someone who understands recursion to a reasonable level, I'd like to see the article point at the key idea behind recursion that people have trouble grasping. Is there a sequence of words the article can specify where someone understands what each word means but finds the overall sentence ineffable? Or perhaps they can parrot it back but can't apply it to a novel problem? A requirement of this hypothesis is that someone has all the prerequisite crystallized knowledge but still cannot solve the problem. Otherwise these are not 'hard' boundaries of fluid intelligence.

I guess you primarily deal with computers and programming. One way to try to generalize this quickly would be to compare notes across disciplines and identify the pattern. Is there a 'cannot learn pointers' equivalent in chemistry, for example?

I understand that you are trying to share the gist of an idea, but I think these are things that should be further examined if you want other people to take on this mental model. Much more needs to be said and examined in an article that lays out 10 specific levels with specific tests.

I'd also be wary of the possibility that this entire framework/system looks good because it positions your tribe (computer programmers) as superior, and possibly places you somewhere comfortably toward the top.

This article triggered me emotionally because I think one of the things that prevents people from learning is the belief that they can't. I wouldn't want anyone to take away from this article that because they didn't understand pointers or recursion at some point in their life, they are dumb and should stop trying.
