Has anybody ever tried to measure the IQ of a group of people? I mean like letting multiple people solve an IQ test together. How does that scale?
It's a great question. I'm sure I've read something about that, possibly in some pop book like Thinking, Fast and Slow. What I read was an evaluation of the
relationship of IQ to wealth, and the takeaway was that your economic success
depends more on the average IQ in your country than it does on your personal IQ.
It may have been an entire book rather than an article.
Google turns up this 2010 study from Science
[https://www.science.org/doi/abs/10.1126/science.1193147]. The summaries you'll
see there are sharply self-contradictory.
First comes an unexplained box called "The Meeting of Minds", which I'm guessing
is an editorial commentary on the article, and it says, "The primary
contributors to c appear to be the g factors of the group members, along with a
propensity toward social sensitivity."
Next is the article's abstract, which says, "This “c factor” is not strongly
correlated with the average or maximum individual intelligence of group members
but is correlated with the average social sensitivity of group members, the
equality in distribution of conversational turn-taking, and the proportion of
females in the group."
These summaries directly contradict each other: Is g a primary contributor, or
not a contributor at all?
I'm guessing the study of group IQ is strongly politically biased, with Hegelians (both "right" and "left") and other communitarians wanting to show that individual IQs are unimportant, and individualists and free-market economists wanting to show that they're important.
I have read (long ago, not sure where) a hypothesis that most people (in the educated professional bubble?) are good at cooperation, but one bad person ruins the entire team. Imagine that for each member of the group you roll a die, but you roll 1d6 for men, and 1d20 for women. A certain value means that the entire team is doomed.
This seems to match my experience, where it is often one specific person (usually male) who changes the group dynamic from cooperation of equals into a kind of dominance contest. And then, even if that person is competent, they have effectively made themselves the bottleneck of the former "hive mind", because now any idea can be accepted only after it has been explained to them in great detail.
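A quick simulation makes this concrete (a sketch only; the 1-in-6 and 1-in-20 "spoiler" chances are just the dice from the hypothesis above, not measured values, and the function names are mine):

```python
import random

# Toy model of the "one bad member dooms the team" hypothesis:
# each man spoils the team with probability 1/6 (the 1d6),
# each woman with probability 1/20 (the 1d20).
def team_doomed(n_men, n_women, p_man=1/6, p_woman=1/20):
    return (any(random.random() < p_man for _ in range(n_men)) or
            any(random.random() < p_woman for _ in range(n_women)))

def doom_rate(n_men, n_women, trials=100_000):
    return sum(team_doomed(n_men, n_women) for _ in range(trials)) / trials

# Exact value for comparison: 1 - (5/6)^men * (19/20)^women.
for size in (3, 5, 7, 12):
    print(f"size {size:2}: all-male {doom_rate(size, 0):.2f}, "
          f"half-female {doom_rate(size - size // 2, size // 2):.2f}")
```

Under these made-up numbers the doom probability climbs quickly with team size and drops with the share of women, which is exactly what the corollaries below would predict.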
That would imply some interesting corollaries:
* The more a team depends on the joint brainpower, the smaller it has to be
(down to the minimum size the complexity of the ideas sought demands - or
rather, with that limit scaled by a term for that complexity).
* We see that in software teams that are usually limited to a size of around
7.
* The highly productive lightcone teams
[https://www.lesswrong.com/posts/6LzKRP88mhL9NKNrS/how-my-team-at-lightcone-sometimes-gets-stuff-done]
seem to be even smaller.
* At equal size, teams with more women should be more stable. To test this, a
domain is needed where there are roughly equal men and women, i.e., not
engineering but maybe science or business administration.
What is the number at the limit of what people can do? I tried to look up the
team size of the people working on the Manhattan project
[https://en.wikipedia.org/wiki/Manhattan_Project#/media/File:Manhttan_Project_Organization_Chart.gif],
but couldn't find details. It seems that individual top scientists were working
closely with teams building stuff (N=1), and there were conferences with
multiple scientists (N>10), e.g., 14 on the initial bomb concept conference
[https://en.wikipedia.org/wiki/Manhattan_Project#Bomb_design_concepts].
Viliam (4mo):
What does it actually mean to do things in a group? Maybe different actions
scale differently. I can quickly think of three types of action: Brainstorming
an idea. Collecting feedback for a proposal. Splitting work among multiple
people who do it separately.
Brainstorming and collecting feedback seem like they could scale almost indefinitely. You can have a thousand people generate ideas and send them to you by e-mail. The difficult part will be reading the ideas. Similarly, you could ask a thousand people to send feedback by e-mail. Perhaps there is a psychological limit somewhere, where people who are aware that they are "one in a hundred" stop spending serious effort on the e-mails, because they assume their contribution will be ignored.
Splitting work, that probably depends a lot on the nature of the project. Also,
it is a specific skill that some people have and some people don't. Perhaps the
advantage of a good team is the ability to select someone with the greatest
skill (as opposed to someone with the greatest ego) to split the work.
More meta, perhaps the advantage of a good team is the ability to decide how
things will be done in general (like, whether there will be a brainstorming at
all, whether to split into multiple teams, etc.). This again depends on the
context: sometimes the team has the freedom to define things, sometimes it must
follow existing rules.
I am just thinking out loud here. Maybe good teamwork requires that (1) someone
has the necessary skills, and (2) the team is able to recognize and accept that,
so that the people who have the skills are actually allowed to use them. Either
of these two is not enough alone. You could have a team of experts whose
decisions are arbitrarily overridden by management, or a team of stubborn experts
who refuse to cooperate at all. On the other hand, if you had a team of perfect
communicators with e.g. zero programming skills, they probably couldn't build a
nontrivial software project. (There is also the possibility o
Gunnar_Zarncke (4mo):
All your thinking out loud makes sense to me. Brainstorming as you suggested probably doesn't scale well, as many ideas will be generated again and again - maybe yielding only logarithmically many distinct results. I once read that husband-and-wife teams do better on joint tasks than randomly paired people of equal skill. This indicates that splitting is possible.
But you seem to go more in the direction of looking for specific mechanisms, while I am more interested in data on scaling laws. Though indeed, what are the scaling parameters? I guess I can be happy if there is any data on this at all and see what parameters are available.
Viliam (4mo):
Yeah.
Well, taking your question completely literally (a group of N people doing an IQ test together), there are essentially two ways to fail at an IQ test. Either you can solve each individual problem given enough time, but you run out of time before the entire test is finished. Or there is a problem that you cannot solve (better than guessing randomly) regardless of how much time you have.
The first case should scale linearly, because N people can simply split the test and each do their own part. The second case would probably scale logarithmically, because it requires a different approach, and many people will keep trying the same thing.
...but this is still about how "the number of solved problems" scales, and we need to convert that value to IQ. And the standard way is "what fraction of the population would do worse than you". But this depends on the nature of the test. If the test is "a zillion simple questions, not enough time", then a dozen random students together will do better than Einstein. But if the test is "a few very hard questions", then perhaps Einstein could do better than a team of a million people, if some wrong answer seems more convincing than the right one to most people.
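A toy model of those two regimes (illustrative only - every per-person number here is invented, and the logarithmic "distinct attempts" term is just one way to formalize "many people keep trying the same thing"):

```python
import math

def speed_limited_score(n_people, n_questions=300, capacity=40):
    # Many easy questions, not enough time: the group splits the test,
    # so the solved count grows linearly in N until the test runs out.
    return min(n_questions, n_people * capacity)

def insight_limited_score(n_people, n_questions=10, p_insight=0.2):
    # Few very hard questions: a problem falls if at least one member finds
    # the right approach. With independent attempts this would be
    # 1 - (1 - p)^N, but if most people try the same wrong thing, the
    # number of *distinct* attempts grows more like 1 + log(N).
    distinct = 1 + math.log(n_people)
    return n_questions * (1 - (1 - p_insight) ** distinct)

for n in (1, 3, 10, 100):
    print(n, speed_limited_score(n), round(insight_limited_score(n), 1))
```

The first score saturates only when the test itself runs out; the second creeps up so slowly with group size that it matches the Einstein-versus-a-crowd intuition above.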
This reminds me of chess; how great chess players play against groups of people,
sometimes against the entire world. Not the same thing that you want, but you
might be able to get more data here: the records of such games, and the ratings
of the chess players.
Gunnar_Zarncke (4mo):
Sure, it depends on the type of task. But I guess we would learn a lot about human performance if we tried such experiments. For example, consider your "many small tasks" task: Even a single person will finish the last one faster than the first one in most cases.
I like your chess against a group example.
jowen (4mo):
I think in your first paragraph, you may be referring to:
https://mason.gmu.edu/~gjonesb/IQandNationalProductivity.pdf
Gunnar_Zarncke (4mo):
My interest is not political - though that might make it harder to study, yes. I
think it's relevant to AI because it could uncover scaling laws. One presumable
advantage of AI is that it scales better, but how does that depend on speed of
communication between parts and capability of parts? I'm not saying that there
is a close relationship but I guess there are potentially surprising results.
Organizations - firms, associations, etc. - are systems that are often not well-aligned with their intended purpose - whether to produce goods, make a profit, or do good. But specifically, they resist being discontinued. That is one of the aspects of organizational dysfunction discussed in Systemantics. I keep coming back to it as I think it should be possible to study at least some aspects in AI Alignment in existing organizations. Not because they are superintelligent but because their elements - sub-agents - are observable, and the misalignment often is too.
I think early AGI may actually end up being about designing organizations that
robustly pursue metrics that their (flawed, unstructured, chaotically evolved)
subagents don't reliably directly care about. Molochean equilibrium fixation and
super-agent alignment may turn out to be the same questions.
Language and concepts are locally explainable.
This means that you do not need a global context to explain new concepts but only precursor concepts or limited physical context.
This is related to Cutting Reality at its Joints, which implicitly claims that reality has joints. But if there are no such joints, local explanations may be all we have. At least, it is all we have until we get to a precision that allows cutting at the joints.
Maybe groups of new concepts can be introduced in a way to require fewer (or an optimum number of) dependencies in ... (read more)
Off-topic: Any idea why African stock markets have been moving sideways for years now, despite continued growth of both populations and technology, and both for struggling as well as more developed nations like Kenya, Nigeria, or even South Africa?
African government officials are often more loyal to their clan than to the
government. As a result, you have very poor governance and a lot of corruption
in most African countries. In South Africa, governance quality changed
post-apartheid.
Gunnar_Zarncke (4mo):
But shouldn't we see some differences between countries in Africa, then? Kenya in particular seems to be much more progressive and have better governance than, e.g., Congo, but growth is rarely above 1% per year.
Dagon (4mo):
The cynical and/or woke answer is "colonialism". The growth is not captured by
companies on those exchanges, but by US, EU, and Asian companies. A more
neutral hypothesis (for which I have no evidence and have no clue about the
truth of it) is that much of the growth is via new companies more than increase
in price of existing companies, so no index will show the increase.
jbash wrote in the context of an AGI secretly trying to kill us:
Powerful nanotech is likely possible. It is likely not possible on the first try
The AGI has the same problem as we have: It has to get it right on the first try.
In the doom scenarios, this shows up as the probability of successfully escaping going from low to 99% to 99.999...%. The AGI must get it right on the first try and wait until it is confident enough.
Usually, the stories involve the AGI cooperating with humans until the treacherous turn.
The AGI can't trust all the information it g... (read more)
One of the worst things about ideology is that it makes people attribute problems to the wrong causes. E.g. plagues are caused by sin. This is easier to see in history, but it still happens all the time. And if you get the cause wrong, you have no hope of fixing the problem.
Scott Alexander wrote about how a truth that can't be said in a society tends to warp it, but I can't find it. Does anybody know the SSC post?
This might have been what you were looking for:
https://www.lesswrong.com/posts/D4hHASaZuLCW92gMy/is-success-the-enemy-of-freedom-full
https://www.lesswrong.com/posts/5wGFS2sZhKAihSg6k/success-buys-freedom
Or Aella's recent substack post, "On Microfame and Staying Tender"
Gunnar_Zarncke (1y):
Yes! I meant the first one. The others are also great. Thank you.
Can we compare utility functions across agents? I.e. do utility functions use
the same “units” across different agents?
Gunnar_Zarncke (1y):
That is an excellent question. Trying to compare utility functions might
uncover building blocks.
Dagon (1y):
For a VNM-agent (one which makes consistent rational decisions), the utility
function is a precise description, not an abstraction. There may be summaries
or aggregations of many utility functions which are more abstract.
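One precise piece of this (a textbook fact about VNM utilities, added for concreteness): a utility function is only determined up to a positive affine transformation. If $u$ represents an agent's preferences, then so does $u'(x) = a\,u(x) + b$ for any $a > 0$, since expected utilities transform as $\mathbb{E}[u'(X)] = a\,\mathbb{E}[u(X)] + b$ and every ranking of lotteries is preserved. Each agent's $a$ and $b$ are arbitrary, so a cross-agent comparison like $u_1(x) > u_2(y)$, or a sum $u_1 + u_2$, is not invariant - the "units" question has a sharp negative answer unless extra structure is added.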
When an agent changes, and has a different utility function, can you be sure
it's really the "same" agent? Perhaps easier to model it being replaced by a
different one.
Gunnar_Zarncke (1y):
Well, I should have been more clear that I meant real-life agents like humans.
There the change is continuous. It would be possible to model this as a
continuous transition to new agents but then the question is still: What does
stay the same?
Dagon (1y):
Humans don't seem to have identifiable near-mode utility functions - they
sometimes espouse words which might map to a far-mode value function, but it's
hard to take them seriously.
THAT is the primary question for a model of individuality, and I have yet to
hear a compelling set of answers. How different is a 5-year old from the
"same" person 20 and 80 years later, and is that more or less different than
from their twin at the same age? Extend to any population - why does
identity-over-time matter in ethical terms?
Team Flow Is a Unique Brain State Associated with Enhanced Information Integration and Interbrain Synchrony
It's also possible to experience 'team flow,' such as when playing music together, competing in a sports team, or perhaps gaming. In such a state, we seem to have an intuitive understanding with others as we jointly complete the task at hand. An international team of neuroscientists now thinks they have uncovered the neural states unique to team flow, and it appears that these differ both from the flow states we experience as individuals, and from the ... (read more)
An Alignment Paradox: Experience from firms shows that higher levels of delegation work better (high level meaning fewer constraints for the agent). This is also very common practical advice for managers. I have also received this advice myself and seen this work in practice. There is even a management card game for it: Delegation Poker. This seems to be especially true in more unpredictable environments. Given that we have intelligent agents, giving them higher degrees of freedom seems to imply more ways to cheat, defect, or ‘escape’. Even more so in envir... (read more)
Most people are naturally pro-social. (No, this can't
[https://www.lesswrong.com/posts/zY4pic7cwQpa9dnyk/detached-lever-fallacy] be
applied to AI.) Given a task, they will try to do it well, especially if they
feel like their results are noticed and appreciated.
A cynical hypothesis is that most of the things managers do are actively harmful
to the project; they are interfering with the employees trying to do their work.
The less the manager does, the better the chances of the project. "Delegation" is simply when the manager stops actively hurting the project and allows others to do their best.
The reason for this is that most of the time, there is no actually useful work
for the manager. The sane thing would be to simply sit down and relax, and wait
for another opportunity for useful intervention to arise. Unfortunately, this is
not an option, because doing this would most likely get the manager fired.
Therefore managers create bullshit work for themselves. Unfortunately, by the
nature of their work, this implies creating bullshit work for others. In
addition to this, we have the corrupted human hardware
[https://www.lesswrong.com/tag/corrupted-hardware], with some managers enjoying
power trips and/or believing they know everything better
[https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect] than people below
them in the hierarchy.
When you create a manager role in your company, it easily becomes a lost purpose
[https://www.lesswrong.com/posts/sP2Hg6uPwpfp3jZJN/lost-purposes] after the
original problems are solved but the manager wants to keep their job.
Gunnar_Zarncke (2y):
Check.
Check.
I don't like cynical views and while I have encountered politics and seen such
cases I don't think that paints a realistic view. But I will run with your
cynical view and you won't like it ;-)
So we have these egotistical managers that only want to keep their job and rise in the ranks. Much closer to non-social AI, right? How come more delegation works better for them too?
Mind you, I might be wrong and it works less and less the further up you go. It
might be that you are right and this works only because people have enough
social behavior hard-wired that makes delegation work.
But I have another theory: Limited processing capacity + Peter Principle.
It makes sense to delegate more - especially in unpredictable environments - because that reduces your processing load of dealing with all the challenging tasks and moves it to your subordinates. This leaves less capacity for them to scheme against you and gives you the capacity to scheme against your superior. And so on up the chain. Capable subordinates that can deal with all the stuff you throw at them have to be promoted so they have more work to do, until they reach capacity too. So sometimes the smart move is to refuse promotion :-)
Viliam (2y):
I guess we agree that limited processing capacity means that interfering with
the work of your underlings -- assuming they are competent and spending enough
of their processing capacity on their tasks -- is probably a bad move. It means
taking the decision away from the person who spends 8 hours a day thinking about
the problem, and assigning it to a person who spent 30 seconds matching the
situation to the nearest cliche, because that's all they had time for between
the meetings.
This might work if the person is such a great expert that their 30 seconds are still extremely valuable. That certainly is possible; someone with lots of experience might immediately recognize a frequently-made mistake. It is also the kind of assumption that Dunning and Kruger would enjoy researching.
That would make sense. When you are a lowest-level manager, if you stop
interfering, it allows the people at the bottom to focus on their object-level
tasks. But if you are a higher-level manager, how you interact with the managers
below you does not have a direct impact on the people at the bottom. Maybe you
manage your underlings less, and they copy your example and give more freedom to
the people at the bottom... or maybe you just gave them more time to interfere.
So you have more time to scheme... but you have to stay low in the pyramid. Not
sure what you scheme about then. (Trying to get to the top in one huge jump?
Sounds unlikely.)
Gunnar_Zarncke (2y):
Have you ever managed or worked closely with great team-leads?
I was a team leader twice. The first time it happened by accident. There was a team leader, three developers (me one of them), and a small project was specified. On the first day, something very urgent happened (I don't remember what), the supposed leader was re-assigned to something else, and we three were left without supervision for an unspecified time period. Being the oldest and most experienced person in the room, I took initiative and asked: "so, guys, as I see it, we use an existing database, so what needs to be done is: back-end code, front-end code, and some stylesheets; anyone has a preference which part he would like to do?" And luckily, each of us wanted to do a different part. So the work was split, we agreed on mutual interfaces, and everyone did his part. It was a nice and relaxed environment: everyone working alone at their own speed, debating work only as needed, and having some friendly work-unrelated chat during breaks.
In three months we had the project completed; everyone was surprised. The company management assumed that we would only "warm up" during those three months, and when the original leader returned, he would lead us to the glorious results. (In a parallel Ev... (read more)
Thank you a lot. Your detailed account really helps me understand your
perspective much better now. I can relate to your experience in teams where
micromanagement slows things down and prevents actually relevant solutions. I
have been in such teams. I can also relate to it being advantageous when a
leader of questionable value is absent. I have been in such a team too - though
it didn't have such big advantages as in your case. That was mostly because this
team was part of a bigger organization and platform where multiple teams had to work together to get something done, e.g. agree on interfaces with other teams. And
in the absence of clear joint goals that didn't happen. Now you could argue that
then the management one level up was not doing its job well and I agree. But the
absence of that management wouldn't have helped either. It could have led to a) each team trying to solve some part of the problem, b) some people from both teams getting together and agreeing on interfaces and joint goals, or c) the teams agreeing on some coordination between both teams. Option a) in most cases leads to some degree of chaos and failure, b) establishes some kind of leadership on the team level (like you did in your first example), and c) results over time in some leadership one level up. I'd argue that some kind of coordination structure is needed. Where did the project you implemented in your first case come from? Somebody figured out that it would provide value to the company. Otherwise, you might have built a beautiful project that didn't actually provide value.
you worked in did have some management that provided value (I hope it was no
moral maze). And I agree that a lot of managers do not add value and sometimes
decrease it. On the other hand, I have worked for great team leads and
professional managers. People who would listen, let us make our own decisions,
give clear goals but also limits, help, and redu
Viliam (2y):
I have seen this happen also in a small team. Two or three guys each started building his own part independently; then it turned out those parts could not be put together. Each of them insisted that the others change their code to fit his API, and refused to make the smallest change in his own API. It became a status fight that took a few days. (I don't remember how it was resolved.)
In another company, there was a department that took care of everyone's servers.
Our test server crashed almost every day and had to be restarted manually; we
had to file a ticket and wait (if it was after 4PM, the server was restarted
only the next morning) because we did not have the permission to reset the
server ourselves. It was driving us crazy; we had a dedicated team of testers,
and half of the time they were just waiting for the server to be restarted; then
the week before delivery we all worked overtime... that is, until the moment the
server crashed again, then we filed the ticket and went home. We begged our
manager to let us pool two hundred bucks and buy a notebook that we could turn
into an alternative testing environment under our control, but of course that
would be completely against company policy. Their manager refused to do anything
about it; from their perspective, it meant they had every day one support ticket
successfully closed by merely clicking a button; wonderful metric! From the
perspective of our manager's manager, it was a word against a word, one word
coming from the team with great metrics and therefore more trustworthy. (The
situation never got solved, as far as I know.)
...I should probably write a book one day. Except that no one would ever hire me
afterwards. So maybe after I get retired...
So, yes, there are situations that need to be solved by a greater power. In the long term it might even make sense to fire a few people, but the problem is that these often seem to be the most productive ones, because other people are slowed down by the problems they caus
Gunnar_Zarncke (2y):
Thank you. I agree with your view. Motte and bailey of management yep. I
especially liked this:
It turns out that the alignment problem has some known solutions in the human case. First, there is an interesting special case namely where there are no decisions (or only a limited number of fully accounted for decisions) for the intelligent agent to be made - basically throwing all decision-making capabilities out of the window and only using object recognition and motion control (to use technical terms). With such an agent (we might call it zero-decision agent or zero-agent) scientific methods could be applied on all details of the work process and hig... (read more)
I wonder how one could outlaw a thing like this. Suppose that most managers
believe that Taylorism works, but it is illegal to use it (under that name).
Wouldn't they simply reintroduce the practices, step by step, under a different
name? I mean, if you use a different name, different keywords, different
rationalization, and introduce it in small steps, it's no longer the same thing,
right? It just becomes "industry standards". (If there happens to be an exact
definition, of course, this only becomes an exercise in how close to the forbidden
thing you can legally get.)
From the Wikipedia article, I got the impression that what was made illegal was
the use of a stop-watch. Okay, so instead of measuring how many seconds you need
to make a widget, I am going to measure how many widgets you make each day --
that is legal, right? The main difference is that you can take a break, assuming
it will allow you to work faster afterwards. Which may be quite an important
difference. Is this what it is about?
Gunnar_Zarncke (2y):
I assume that that's what happened. Some ideas from scientific management were
taken and applied in less extreme ways.
Gordon Seidoh Worley (2y):
I think there's something here, but it's usually thought of the other way
around, i.e. solving AI alignment implies solving human alignment, but the
opposite is not necessarily true because humans are less general intelligences
than AI.
Also, consider that your example of Taylorism is a case study in an alignment
mechanism failing, in that it tried to align the org but failed in that it
spawned the creation of a subagent (the union) that caused it to do something
management might have considered worse than the loss of potential gains given up
by not applying Taylorism.
Anyway, this is a topic that's come up a few times on LessWrong; I don't have links handy, but you should be able to find them via search.
Gunnar_Zarncke (2y):
I'm not trying to prove full alignment from these. It is more like a) a case study of actual efforts to align intelligent agents by formal means and b) the identification of conditions where this does succeed.
Regarding its failure: It seems that a close reading of its history doesn't support that: a) Taylorism didn't fail within the factories, and b) the unions were not founded within these factories (by their workers) but existed before and pursued their own agendas. Clearly real humans have a life outside of factories and can use that to coordinate - something that wouldn't hold for a zero-agent AI.
I tried to find examples on LW and elsewhere. That is what turned up the link at the bottom. I have been on LW for quite a while and have not seen this discussed in this way. I have searched again, and all searches involving combinations of human intelligence, alignment, and misc words for analogy or comparison turn up not much more than this one, which matches just because of its size:
https://www.lesswrong.com/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda
Can you suggest better ones?
Gunnar_Zarncke (2y):
Thank you for your detailed reply. I was already wondering whether anybody saw these shortform posts at all. They were promoted at one time, but currently it seems hard to notice them with the current UI. How did you spot this post?
Gordon Seidoh Worley (2y):
I read LW via /allPosts and they show up there for me. Not sure if that's the
default or not since you can configure the feed, which I'm sure I've done some
of but I can't remember what.
Hi, I have a friend in Kenya who works with gifted children and would like to get ChatGPT accounts for them. Can anybody get me in touch with someone from OpenAI who might be interested in supporting such a project?
I have been thinking about the principle Paul Graham used in Y combinator to improve startup funding:
all the things [VCs] should change about the VC business — essentially the ideas now underlying Y Combinator: investors should be making more, smaller investments, they should be funding hackers instead of suits, they should be willing to fund younger founders, etc. -- http://www.paulgraham.com/ycstart.html
What would it look like if you would take this to its logical conclusion? You would fund even younger people. Students that are still in high ... (read more)
Funny, just saw this tweet from Sam Altman
[https://mobile.twitter.com/sama/status/1505599701864701954]:
Also this Scholarship
[https://www.lesswrong.com/posts/F7RgpHHDpZYBjZGia/high-schoolers-can-apply-to-the-atlas-fellowship-usd50k].
I think these use the startup founding model. But I think scaling would work
better with more but smaller payouts.
DonyChristie (1y):
A related concept: https://twitter.com/mnovendstern/status/1495911334860693507
Gunnar_Zarncke (1y):
I'm not sure what the relation is. That seems to predict revenue from startup
financials.
If you want to give me anonymous feedback, you can do that here: https://www.admonymous.co/gunnar_zarncke
You may have some thoughts about what you liked or didn’t like but didn’t think it worth telling me. This is not so much about me as it is for the people working with me in the future. You can make life easier for everybody I interact with by giving me quick advice. Or you can tell me what you liked about me to make me happy.
Preferences are plastic; they are shaped largely by...
...the society around us.
From Robin Hanson:
From a very early age, we look to see who around us other people are looking at, and we try to copy everything about those high prestige folks, including their values and preferences. Including perception of pleasure and pain.
Worry less about whether future folks will be happy. Even if it seems that future folks will have to do or experience things that we today would find unpleasant, future culture could change people so that they find these new things pleasant instead.
Seems to be a chicken-and-egg problem here: if people only eat chili peppers
because they see high-status people doing so, why did the first high-status
person start eating them? It would make much more sense if unappealing food was
associated with low status (the losers have to eat chili peppers because they
can't get anything else).
Another question, why are small children so picky about food? Do they perhaps
consider their parents too low-status to imitate? Doesn't seem right,
considering that they imitate them on many other things.
Gunnar_Zarncke (1y):
I think small kids are different.
For adults, there are some dynamics but that doesn't invalidate the point that
there is plasticity.
Also some old SSC posts with some theories:
https://slatestarcodex.com/2014/04/22/right-is-the-new-left/
https://slatestarcodex.com/2015/10/21/contra-simler-on-prestige/
Randomized, Controlled (1y):
How come these are spoilers?
Gunnar_Zarncke (1y):
It is supposed to let you check whether you remember the answer or can come up with it yourself. I explained it in this earlier shortform
[https://www.lesswrong.com/posts/8szBqBMqGJApFFsew/gunnar_zarncke-s-shortform?commentId=dGcnhSnHPNraeGrBZ].
Insights about branding, advertising, and marketing.
It is a link that was posted internally by our brand expert and that I found full of insights into human nature and persuasion. It is a summary of the book How Not to Plan: 66 Ways to Screw it Up:
https://thekeypoint.org/2020/03/10/how-not-to-plan-66-ways-to-screw-it-up/
(I'm unaffiliated)
Roles serve many functions in society. In this sequence, I will focus primarily on labor-sharing roles, i.e. roles that serve to split up productive functions, as opposed to imaginary roles, e.g. in theater or play. Examples of these roles are (ordered roughly by how specific they are):
Parent
Engineer (any kind of general type of job)
Battery Electronics Engineer (any kind of specific job description)
Chairman of a society/club
Manager for a certain (type of) project in a company
Roles are important. This shortform is telling you why. An example: The role of a moderator in an online forum. The person (in the following called agent) acting in this role is expected to perform certain tasks - promote content, ban trolls - for the benefit of the forum. Additionally, the agent is also expected to observe limits on these tasks e.g. to refrain from promoting friends or their own content. The owners of the forum and also the community overall effectively delegate powers to the agent and expect alignment with the goals of the forum. This is an alignment problem that has existed forever. How is it usually solved? How do groups of people or single principals use roles to successfully delegate power?
Someone asked an LLM about startups. For example:
Q: Should a founder move into their parents' basement and live off ramen?
A: If a founder is willing to move into their parents' basement and live off ramen in order to save money and grow their business, then yes, they should do so.
Paul Graham got interested and asked:
I'd be interested to hear the answer to "What has Paul Graham been wrong about?"
LLM:
A: Paul Graham has been wrong about a few things, but the most notable is his belief that the best startups are founded by young people. This has been pro... (read more)
Interestingly, the average startup founder does appear to be in their 40s (a quick Google search says 42 for most sources, but I also see 45), and the average unicorn (billion-dollar) startup founder is 34.
https://www.cnbc.com/2021/05/27/super-founders-median-age-of-billion-startup-founders-over-15-years.html
So, I guess it depends on how close to the tail you consider the "best
startups". Google, for instance, had Larry Page and Sergei Brin at 25 when they
formed it. It does seem like, taken literally, younger = better.
However, I imagine most people, if they were to consider this question, wouldn't
particularly care about the odds of being the next Google vs. being the next
Atlassian - both would be considered a major success if they're thinking of
starting a startup! But someone like Paul Graham actually would care about this
distinction. So, in this case, I'd say that the LLM's response is actually
correct-in-spirit for the majority of people who would ask this query, even
though it's factually not well specified.
This implies potentially interesting things about how LLMs answer queries - I
wonder if there are other queries where the technically correct answer isn't the
answer most people would be seeking, and the LLM gives the answer that isn't
maximally accurate, but actually answers most people's questions in the way they
would want.
Ann (4mo):
There's most definitely a category of people who would think a billion-dollar
startup was decidedly not best, and in fact had failed their intention.
Alignment idea: Myopic AI is probably much safer than non-myopic AI. But it can't get complicated things done or anything that requires long-term planning. Would it be possible to create a separate AI that can solve only long-term problems and not act on short timescales?
Then use both together? That way we could inspect each long-term issue without risk of it leading to short-term consequences. And we can iterate on the myopic solutions - or ask the long-term AI about the consequences. There are still risks we might not understand, like johnswentworth's gunpowder example. And the approach is complicated, which also makes it harder to get right.
There was a post or comment that wrong or controversial beliefs can function as a strong signal for in-group membership, but I can't find it. Does anybody know?
From a discussion about self-driving cars and unfriendly AI with my son: For a slow take-off, you could have worse starting points than FSD: The objective of the AI is to keep you safe, get you where you want, and not harm anybody in the process. It is also embedded into the real world. There are still infinitely many ways things can go wrong, esp. with a fast take-off, but we might get lucky with this one slowly. If we have to develop AI then maybe better this one than a social net optimizing algorithm unmoored from human experience.
What is good?
A person who has not yet figured out that collaborating with other people has mutual benefits must think that good is what is good for a single person. This makes it largely a zero-sum game, and such a person will seem selfish - though what can they do?
A person who understands that relationships with other people have mutual benefits but has not figured out that conforming to a common ruleset or identity has benefits for the group must think that what is good for the relationship is good for both participants. This can pit relationships agains... (read more)
From my Anki deck:
Aaronson Oracle is a program that predicts the next key you will type when asked to type randomly and shows how often it is right.
https://roadtolarissa.com/oracle
Here is Scott Aaronson's comment about it:
In a class I taught at Berkeley, I did an experiment where I wrote a simple little program that would let people type either “f” or “d” and would predict which key they were going to push next. It’s actually very easy to write a program that will make the right prediction about 70% of the time. Most people don’t really know how to type ra... (read more)
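A minimal sketch of such a predictor (a reconstruction of the idea, not Aaronson's actual program): count which key followed each recent run of keystrokes and guess the majority continuation.

```python
from collections import defaultdict

def play(k=5):
    """Guess the user's next 'f'/'d' keypress from their last k keys."""
    counts = defaultdict(lambda: {"f": 0, "d": 0})  # context -> next-key counts
    history, correct, total = "", 0, 0
    while True:
        key = input("press f or d (q to quit): ").strip()
        if key == "q":
            break
        if key not in ("f", "d"):
            continue
        ctx = history[-k:]
        guess = max(counts[ctx], key=counts[ctx].get)  # majority vote, ties -> 'f'
        correct += (guess == key)
        total += 1
        counts[ctx][key] += 1  # learn only after committing to the guess
        history += key
        print(f"guessed {guess} - accuracy so far {correct / total:.0%}")

play()
```

Against a human trying to "type randomly", unconscious patterns (alternations, avoiding long runs) show up in the context counts, which is how this kind of predictor reportedly reaches roughly the 70% mentioned in the quote.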
What is a Blame Hole (a term by Robin Hanson)?
Blame holes in blame templates (the social fabric of acceptable behavior) are like plot holes in movies.
Deviations between what blame templates actually target, and what they should target to make a better (local) world, can be seen as “blame holes”. Just as a plot may seem to make sense on a quick first pass, with thought and attention required to notice its holes, blame holes are typically not noticed by most who only work hard enough to try to see if a particular behavior fits a blame template. While many ar
Leadership Ability Determines a Person's Level of...
Effectiveness.
(Something I realized around twelve years ago: I was limited in what I could achieve as a software engineer alone. That was when I became a software architect and worked with bigger and bigger teams.)
From "The 21 Irrefutable Laws of Leadership
By John C. Maxwell":
Factors That Make a Leader
1) Character – Who They Are – true leadership always begins with the inner person. People can sense the depth of a person's character.
2) Relationships – Who They Know – with deep relationships with the right ... (read more)
To achieve objective analysis, analysts do not avoid what?
Analysts do not achieve objective analysis by avoiding preconceptions; that would be ignorance or self-delusion. Objectivity is achieved by making basic assumptions and reasoning as explicit as possible so that they can be challenged by others and analysts can, themselves, examine their validity.
PS. Any idea how to avoid the negation in the question?
I started posting life insights from my Anki deck on Facebook a while ago. Yesterday, I stumbled over the Site Guide and decided that these could very well go into my ShortForm too. Here is the first:
Which people who say that they want to change actually will do?
People who blame a part of themselves for a failure do not change. If someone says, "I've got a terrible temper," he will still hit. If he says, "I hit my girlfriend," he might stop. If someone says, "I have shitty executive function," he will still be late. If he says, "I broke my
Paul Graham on Twitter:
Society tells agents how to move (act). Agents tell society how to curve (by local influence).
Paul Graham:
This is related to the recently discussed (though I can't find where) problem that having a blog and growing audience constrains you.
Utility functions are a nice abstraction over what an agent values. Unfortunately, when an agent changes, so does its utility function.
I'm leaving this here for now. May expand on it later.
From my Anki deck:
Receiving touch (or really anything personal) can be usefully grouped in four ways:
Serve, Take, Allow, and Accept
(see the picture or the links below).
A reminder that there are two sides and many ways for this to go wrong if there is not enough shared understanding of the exchange.
http://bettymartin.org/download-wheel/
From my Anki deck:
Mental play or offline habit training is...
...practicing skills and habits only in your imagination.
Rehearsing motions or recombining them.
Imagine some triggers and plan your reaction to them.
This will apparently improve your real skill.
Links:
https://en.wikipedia.org/wiki/Motor_imagery
http://www.bulletproofmusician.com/does-mental-practice.../
http://expertenough.com/1898/visualization-works
Slices of joy is a habit to...
feel good easily and often.
Trigger Action Plan:
This is a trigger, a routine, and a reward — the three parts necessary to build a habit. The trigger is the pleasant moment, the routine is noticing it, and the reward is the feeling of joy itself.
Try to come up with examples; here are some:
- Drinking water.
- Eating something tasty
- Seeing small children
- Feeling of cold air
- Warmth of sunlight
- Warmth of water, be it bathing, dishwashing, etc.
Refreshing your memory:
What is signaling, and what properties does it have?
- signaling clearly shows resources or power (that is its primary purpose)
- is hard to fake, e.g., because it incurs a loss (expensive Swiss watch) or risk (peacock's tail)
- plausible deniability that it is intended as signaling
- mostly zero-sum on the individual level (if I show that I have more, it implies that others have less in relation)
- signaling burns societal resources
- signaling itself can't be made more efficient, but the resources spent can be used more efficiently in soc
What is the Bem Test or Open Sex Role Inventory?
It is a scientific test that measures gender stereotypes.
The test asks questions about traits that are classified as feminine, masculine, and neutral. Unsurprisingly, women score higher on feminine, and men on masculine traits but Bem thought that strong feminine *and* masculine traits would be most advantageous for both genders.
My result is consistently average femininity, slightly below average masculinity. Yes really. I have done the test 6 times since 2016 and the two online tests mostly agree. And it fits:
My son (15) shared this Instagram version of Newcomb's Problem.
I'm looking for a post on censorship bias (see Wikipedia) that was posted on here on LW or possibly on SSC/ACX but a search for "censorship bias" doesn't turn up anything. Googling for it turns up this:
https://www.theatlantic.com/business/archive/2012/05/when-correlation-is-not-causation-but-something-much-more-screwy/256918/
Anybody can help?
Philosophy with Children - In Other People's Shoes
"Assume you promised your aunt to play with your nieces while she goes shopping and your friend calls and invites you to something you'd really like to do. What do you do?"
This was the first question I asked my two oldest sons this evening as part of the bedtime ritual. I had read about Constructive Development Theory and wondered if and how well they could place themselves in other persons' shoes and what played a role in their decision. How they'd deal with it. A good occasion to have some philosophical t... (read more)
Philosophy with Children - Mental Images
One time my oldest son asked me to test his imagination. Apparently, he had played around with it and wanted some outside input to learn more about what he could do. We had talked about https://en.wikipedia.org/wiki/Mental_image before and I knew that he could picture moving scenes composed of known images. So I suggested
- a five with green and white stripes - diagonally. That took some time - apparently, the green was difficult for some reason, he had to converge there from black via dark-green
- three mice
- three mice,
... (read more)
Origins of Roles
The origin of the word role is in the early 17th century: from French rôle, from obsolete French roule ‘roll’, referring originally to the roll of paper on which the actor's part was written (the same is the case in other languages e.g. German).
The concept of a role you can take on and off might not have existed in general use long before that. I am uncertain about this thesis but from the evidence I have seen so far, I think this role concept could be the result of the adaptations to the increasing division of labor. Before that peop... (read more)
The Cognitive Range of Roles
A role works from a range of abstraction between professions and automation. In a profession one person masters all the mental and physical aspects of trade and can apply them holistically from small details of handling material imperfections to the organization of the guild. At the border to automation, a worker is reduced to an executor of not yet automated tasks. The expectations on a master craftsman are much more complex than on an assembly-line worker.
With more things getting automated this frees the capacity to automate m... (read more)
When trying to get an overview of what is considered a role I made this table:
In any sizable organization, you can find a lot of roles. And a lot of people filling these roles - often multiple ones on the same day. Why do we use so many and fine-grained roles? Why don’t we continue with the coarse-grained and more stable occupations? Because the world got more complicated and everybody got more specialized and roles help with that. Division of labor means breaking down work previously done by one person into smaller parts that are done repeatedly in the same way - and can be assigned to actors: “You are now the widget-maker.” This w... (read more)
What are the common aspects of these labor-sharing roles (in the following called simply roles)?
One common property of a role is that there is common knowledge by the involved persons about the role. Primarily, this shared understanding is about the tasks that can be expected to be performed by the agent acting in the role as well as about the goals to be achieved, and limits to be observed as well other expectations. These expectations are usually already common knowledge long beforehand or they are established when the agent takes on the role.
The s... (read more)