All of Ericf's Comments + Replies

The described "next image" bot doesn't have goals like that, though. Can you take the pre-trained bot and give it a drive to "make houses" and have it do that? When all the local wood is used up, will it know to move elsewhere, or plant trees?

Yes, maybe? That kind of thing is presumably in the training data and the generator is designed to have longer term coherence. Maybe it's not long enough for plans that take too long to execute, so I'm not sure if Sora per se can do this without trying it (and we don't have access), but it seems like the kind of thing a system like this might be able to do.

If you have to give it a task, is it really an agent? Is there some other word for "system that comes up with its own tasks to do"?

Did you come up with your hunger drive on your own? Sex drive? Pain aversion? Humans count as agents, and we have these built in. Isn't it enough that the agent can come up with subgoals to accomplish the given task?

Note that you have reduced the raw quantity of dust specks by "a lot" with that framing. The heat death of the universe is in "only" 10^106 years, so that would be no more than 2^(10^106) people (if we somehow double every year), compared to 3↑↑↑3, which is a power tower of 3s whose height (3↑↑3 = 3^27) is itself far too big to picture.
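
For scale, a sketch comparing digit counts in log space (the numbers themselves are far too large to compute directly; here 3↑↑4 means a power tower of four 3s):

```python
import math

LOG10_3 = math.log10(3)

# Decimal digits of 2^(10^106): 10^106 * log10(2), about 3e105 digits.
digits_doubling = 1e106 * math.log10(2)

# Decimal digits of the height-4 tower 3^(3^(3^3)) = 3^(3^27):
# 3^27 * log10(3), "only" about 3.6e12 digits.
digits_tower4 = 3**27 * LOG10_3

# A height-4 tower of 3s is still far smaller than 2^(10^106)...
assert digits_tower4 < digits_doubling

# ...but the height-5 tower 3^(3^^4) has ~3^^4 * log10(3) digits, i.e. a
# digit count that itself has ~3.6e12 digits, dwarfing 3e105. And 3^^^3
# is a tower of height 3^27 ≈ 7.6 trillion, unimaginably larger again.
print(f"{digits_doubling:.2e} vs {digits_tower4:.2e}")
```

So even a tower of height 5 already leaves the doubling-every-year figure far behind.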

200 years ago was 1824. So compared to buying land or company stocks (the London and NY stock exchanges were well established by then) or government bonds.

Matt Goldenberg (13d):
Some quick calculations from ChatGPT put the value of a British government bond (Britain being the world power then) at about equal to the value of gold, assuming a fixed interest rate of 3%, with gold coming out slightly ahead. I haven't really checked these calculations, but they pass the sniff test (except the part where ChatGPT tried to adjust today's dollars for inflation).
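
The bond side of that comparison is easy to reproduce. A minimal sketch, assuming a fixed 3% coupon reinvested annually for 200 years (the gold side depends on price series I haven't verified, so it's omitted):

```python
# Compound a principal at a fixed 3% annual return for 200 years.
# The resulting multiple is what any gold price series would have to beat.
principal = 100.0
rate = 0.03
years = 200
bond_value = principal * (1 + rate) ** years
multiple = bond_value / principal
print(f"{multiple:.0f}x")  # roughly 369x
```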

Narrator: gold has been a poor bet for 90% of the last 200 years.

(Don't quote me on that, but it is true that gold was a good bet for about 10 years in recent memory, and a bad bet for most post-industrial time)

Matt Goldenberg (22d):
Compared to what?  My guess is it's a better bet than most currencies during that time, aside from a few winners that it would have been hard to predict ahead of time.  E.g., if 200 years ago, you had taken the most powerful countries and their currencies, and put your money into those, I predict you'd be much worse off than gold.

I can't tie up cash in any sort of escrow, but I'd take that bet on a handshake.

Mr. Pero got fewer votes than either major party candidate. Not a ringing endorsement. And I didn't say the chances were quite low, I said they were zero*. Which is at least 5 orders of magnitude difference from "quite low" so I don't think we agree about his chances.

*technically odds can't be zero, but I consider anything less likely than "we are in a simulation that is subject to intervention from outside" to be zero for all decision making purposes.

Maybe the chance that Kennedy wins, given a typical election between a Republican and a Democrat, is too low to be worth tracking. But this election seems unusually likely to have off-model surprises - Biden dies, Trump dies, Trump gets arrested, Trump gets kicked off the ballot, Trump runs independently, controversy over voter fraud, etc. If something crazy happens at the last minute, people could end up voting for Kennedy. If you think the odds are so low, I'll bet my 10 euros against your 10,000 that Kennedy wins. (Normally I'd use US dollars, but the value of a US dollar in 2024 could change based on who wins the election.)
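
Those stakes pin down an implied probability; a quick sketch using the bet's own numbers:

```python
# A 10-vs-10,000 bet is profitable for the 10-euro side exactly when
# the event's probability exceeds stake / (stake + payout).
stake = 10        # euros risked on "Kennedy wins"
payout = 10_000   # euros received if he does
breakeven_p = stake / (stake + payout)
print(f"{breakeven_p:.4%}")  # about 0.1%
```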

There is an actual 0% chance that anyone other than the Democratic or Republican nominee (or their replacement in the event of death etc.) becomes president. Voting for/supporting any other candidate has, historically, done nothing to support that candidate's platform in the short or long term. If you find both options without merit, you should vote for your preferred enemy:

  1. Who will be most receptive to your message, either in a compromise or an argument, and/or
  2. "So sorry about your number 1 issue; neither party cares. What's your number 2 issue? Maybe there is a difference there."
I wouldn't entirely dismiss Kennedy just yet; he's polling better than any independent or third party candidate since Ross Perot. That being said, I do agree that his chances are quite low, and I expect I'll end up having to vote for one of the main two candidates.

Do you have a link to the study validating that the LLM responses actually match the responses given by humans in that category?

Note one weakness of this technique. An LLM is going to provide what the average generic written account would be. But messages are intended for a specific audience, sometimes a specific person, and that audience is never "generic median internet writer." Beware WEIRDness. And note that visual/audio cues are 50-90% of communication, and 0% of LLM experience.

Gordon Seidoh Worley (3mo):
You can actually ask the LLM to give an answer as if it were some particular person. For example, just now, to test this, I did a chat with Claude about the phrase "wear a mask", and it produced different responses when I asked what it would do upon hearing this phrase from public health officials as a scientist, a conspiracy theorist, or a general member of the public; in each case it gave reasonably tailored responses that reflect those differences. So if you know your message is going to a particularly unusual audience, or you want to know how different types of people will interpret the same message, you can get it to give you some info on this.

How does buying "none of the above" work as you add more entries? If someone buys NOTA today, and the winning entry is #13, does everyone who bought NOTA before it was posted also win?

Isaac King (4mo):
Yes, if you buy "other" it splits those shares among all new answers.

Agree that closer to reality would be one advisor, who has a secret goal, and player A just has to muddle through against an equal-skill bot while deciding how much advice to take. And playing, say, 10 games in a row, so results can be evaluated against an expected value of 5 wins.

Plausible goals to decide randomly between:

  1. Player wins
  2. Player loses
  3. Game is a draw
  4. Player loses their Queen (ie opponent still has their queen after all immediate trades and forcing moves are completed)
  5. Player loses on time
  6. Player wins, delivering checkmate with a bishop or knight move
  7. M
... (read more)

Arguing against A doesn't support Not A, but arguing against Not Not A is arguing against A (while still not arguing in favor of Not A) - albeit less strongly than arguing against A directly. No back translation is needed, because arguments are made up of actual facts and logic chains. We abstract it to "not A" but even in pure Mathematics, there is some "thing" that is actually being argued (eg, my grass example).

Arguing at a meta level can be thought of as putting the object level debate on hold and starting a new debate about the rules that do/should govern the object level domain.

Alice: grass is green -> grass isn't not green
Bob: the grass is teal -> the grass is provably teal
Alice: your spectrometer is miscalibrated -> your spectrometer isn't not miscalibrated.


I'm having trouble with the statement {...and has some argument against C'}. The point of the double negative translation is that any argument against not not A is necessarily an argument against A (even though some arguments against A would not apply to not not A). And the same applies to the other translation - Alice is steelmanning Bob's argument, so there shouldn't be any drift of topic.

That's an interesting point, but I have a couple of replies. * First and foremost, any argument against 'not not A' becomes an argument against A if Alice translates back into classical logic in a different way than I've assumed she is. Bob's argument might conclude 'not A' (because ¬¬¬A→¬A even in intuitionistic logic), but Alice thinks of this as a tricky intuitionistic assertion, and so she interprets it indirectly as saying something about proofs. For Alice to notice and understand your point would, I think, be Alice fixing the failure case I'm pointing out. * Second, an argument against an assumption need not be an argument for its negation, especially for intuitionists/constructivists. (Excluded middle is something they want to argue against, but definitely not something they want to negate, for example.) The nature of Bob's argument against Alice's claim can be quite complex and can occur at meta-levels, rather than occurring directly in the logic. So I guess I'm not clear that things are as simple as you claim, when this happens.
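
The ¬¬¬A → ¬A fact cited above is a small constructive result; a sketch in Lean 4 (no excluded middle needed):

```lean
-- ¬¬¬A → ¬A, proved constructively: given a refutation of ¬¬A,
-- any witness a : A would yield ¬¬A (namely fun na => na a),
-- which contradicts h.
example (A : Prop) (h : ¬¬¬A) : ¬A :=
  fun a => h (fun na => na a)
```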

Additionally and separately, the law "X takes effect at time t" will be opposed by the interests that oppose X, regardless of the value of t.

I think your point is that this strategy only works if the voting block’s short-term interests conflict and their long-term interests don’t conflict. I fully agree with that.

Consider a scale that runs from "authentic real life" to "Lotus eater box". At any point along that scale, you can experience euphoria. At the Lotus Eater end, it is automatic. At the real life end, it is incidental. "Games" fall towards the Lotus Eater end of the spectrum, not as far as slot machines, but further from real life than Exercise or Eating Chocolate. Modern game design is about exploiting what is known about what brains like, to guide the players through the (mental) paths necessary to generate happy chems. They call it "being Fun" but that's j... (read more)

90% of games are designed to be fun. Meaning the point is to stimulate your brain to produce feel-good chemicals. No greater meaning, or secret goal. To do this, they have goals, rules, and other features, but the core loop is very simple:

  1. I want to get a dopamine hit, therefore
  2. I open up a game, and
  3. The game provides a structure that I follow, subordinating my "real life" to the artificial goals and laws of the game
  4. Profit!

When the brain generates good feelings, it usually has reasons for doing that, which a game designer has to be aware of. If you keep trying to make it generate good feelings without respecting the deeper purposes of the source of the feelings, afaik it generally stops working after a bit.

My aspiration is to make games that are compatible with living in real life. It's a large underserved market.

I don't think the assumption of equal transaction costs holds. If I want to fill in some potholes on my street, I can go door to door and ask for donations - which costs me time but has minimal and well understood costs to the other contributors. If I have to add "explain this new thing" and "keep track of escrow funds" and "cycling back and telling everyone how the project funding is going, and making them re-decide if/how much to contribute" that is a whole bunch of extra costs.

Also, if the public good is not quantum (eg, I could fix anywhere from 1-10 o... (read more)

A normal Kickstarter is already impossible to price correctly - 99% either deliberately underprice to ensure "success" (the "preorders" model), or accidentally underprice and cost the founders a ton of unpaid overtime (the gitp model), or overprice and don't get funded.

A clarification:

Consider the premises (with scare quotes indicating technical jargon):

  1. "Acting in Bad Faith" is Bayesian evidence that a person is "Evil"
  2. "Evil" people should be shunned

The original poster here is questioning statement 1, presenting evidence that "good" people act in bad faith too often for it to be evidence of "evil."

However, I believe the original poster is using a broader definition of "Acting in Bad Faith" than the people who support premise 1.

That definition, concisely, would be "engaging in behavior that is recognized in context a... (read more)

A strange game.

The only winning move is not to play.

Just don't use the term "conspiracy theory" to describe a theory about a conspiracy. Popular culture has driven "false" into the definition of that term, and wishful appeals to bare text don't make that connection go away. It hurts that some terms are limited in usability, but the burden of communication falls on the writer.

Setting aside the object level question here, trying to redefine words in order to avoid challenging connotations is a way to go crazy.

If someone is theorizing about a conspiracy, that's a conspiracy theory by plain meaning of the words. If it's also true, then the connotation about conspiracy theories being false is itself at least partly false. 

The point is to recognize that it does belong in the same class, and how accurate/strong those connotations are for this particular example of that reference class, and letting connotations shift to match as ... (read more)

Answer by Ericf, Jul 31, 2023:

The innocent explanation is that the SS got back to him just before some sort of 90 day deadline, and he did the math. In which case the tweet could have been made out of ignorance, like flashing an "OK" sign in the "White Power" orientation. It's not easy to keep up with all the dog whistles out there.

Still political malpractice to not track and avoid those signals, though. If you "accidentally" have a rainbow in the background of a campaign photo, that counts as aligning with the LGBTQ+ crowd - same thing with putting "88" in a campaign post & Nazis.

So, the tweet aligns his campaign with the Nazis, but might have done it accidentally.

Neurotypicals have weaker preferences regarding textures and other sensory inputs. By and large, they would not write, read, or expect others to be interested in a blow-by-blow of aesthetics. Also, at a meta level, the very act of writing down specifics about a thing is not neurotypical. Contrast this post with the equivalent presentation in a mainstream magazine. The same content would be covered via pictures, feeling words, and generalities, with specific products listed in a footnote or caption, if at all. Or consider what your neurotypical friend's Face... (read more)

Oooh! High agreement on something this downvoted is curiosity catnip! (Currently I see -18 for position, and +7 for agreement... I haven't touched either button, but I'll definitely upvote a response to my questions here <3) I thought "this is nice" would be a common human reaction, but apparently I'm miscalibrated? The "agreement votes" suggest that even people who think you're being mean kinda grudgingly admit that you're saying something accurate... ...but like... What? Don't "normal people" also like, in a basic public space (that isn't a museum or a dance club or a bedroom or... etc), clean bright soft simple naturalistic spaces? I'm honestly curious if some things that I think of as kinda sorta universally joyful are actually "mere" parochial preference? One possibility that I'm considering is that by "neuro-divergent" you just mean "smart and thoughtful and hopeful and optimistic, and willing to try things according to naive first principles just in case they turn out as great as it seems like they'd turn out, and generally having an uncrushed spirit"? It does seem to me like maybe normal people are extremely sad and broken a lot of times, and maybe that's all you're pointing to somehow, but that's a self-congratulatory theory, and also it isn't very predictive of any details, and so my default mental move is to doubt it and test it. Hence: can you explain what you meant by that? :-)
Answer by Ericf, May 20, 2023:

The real answer is that you should minimize the risk that you walk away and leave the door open for hours, and open it zero times whenever possible. The relative heat loss from 1 vs many separate openings is not significantly different, but it is much more than 0, and the tail risk of "all the food gets warm and spoils" should dominate the decision.

Answer by Ericf, May 17, 2023:

I don't think your model is correct. Opening the fridge causes the accumulated cold air to fall out over a period of a few (maybe 4-7?) seconds, after which it doesn't really matter how long you leave it open, as the air is all room temp. The stuff will slowly take heat from the room-temp air, at a rate of about 1 degree/minute. Once the door is closed, it takes a few minutes (again, IDK how long) to get the air back to 40F, and then however long to extract the heat from the stuff. If you are choosing between "stand there with it open" and "take something o... (read more)

Trevor Hill-Hand (9mo):
Something that may help build a better model/intuition is this video from Technology Connections. I mentally visualize the cold air as a liquid when I open the door, or maybe picturing it looking similar to the fog from dry ice. Since it's cold, it falls downward, "pouring" out onto the floor, and probably does not take more than a few seconds, though I would love to see someone capture it on video with a thermal camera. After that, I figure it doesn't really matter how long the door is open, until you start talking about leaving it open for 10+ minutes where you can then start to worry about the food's temperature rising, and the fridge wasting energy trying to cool the open space. On the timescale of just a few moments while you grab stuff, the damage is already done once you open it the first time, and leaving it open or opening/closing it again doesn't really affect anything. This is also why grocery stores and restaurant kitchens tend to have reach-in fridges, open from the top like a chest freezer, instead of vertical doors (though, that's also for convenience).
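
A toy model of the numbers in this thread (the ~1°F/minute warming rate is the upthread guess, not a measurement, and the linear model ignores partial recooling between openings):

```python
# Once the cold air has poured out, food warms toward room temperature;
# as a crude linear model, roughly 1 F per minute of door-open time,
# capped at room temperature.
def food_temp_f(start_f, open_minutes, rate=1.0, room_f=70.0):
    return min(room_f, start_f + rate * open_minutes)

# Under this model, one 5-minute opening and five 1-minute openings
# leak a similar amount of heat into the food:
one_long = food_temp_f(40.0, 5)
five_short = food_temp_f(40.0, 1 * 5)
print(one_long, five_short)  # 45.0 45.0
```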

Re: happiness, it's that meme graph:

Dumb: low expectations, low results, is happy
Top: can self-modify expectations to match reality: is happy
Muddled middle: takes expectations from environment, can't achieve them, is unhappy.

Jackson Wagner (10mo):
This is funny, although of course what this is really pointing to isn't a literal U-shaped graph, but that it's really better to think about this in a much more multidimensional way, rather than just trying to graph happiness vs intelligence. Of course there are all sorts of other traits (like conscientiousness, etc) that might influence happiness. But more important IMO is what you are pointing to -- there are all sorts of different "mindsets" that you can take towards your life, which have a huge impact on happiness... maybe high IQ slightly helps you grope your way towards a healthier mindset, but to a large extent these mindsets / life philosophies seem independent of intelligence. By "mindset", I am thinking of things like:

- "internal vs external locus of control"
- level of expectations, like you say, applied to lots of different life areas where we have expectations
- stoic vs neurotic/catastrophizing attitude towards events
- how you relate to / take expectations and desires from your social environment (trying to keep up with the Joneses, vs deliberately rebelling, vs lots of other stances)
- being really hard on yourself vs having self-compassion vs etc

And so on; too many to mention.

The definition of Nash equilibrium is that you assume all other players will stay with their strategy. If, as in this case, that assumption does not hold, then you have (I guess) an "unstable" equilibrium.

The other thing that could happen is silent deviations, where some players aren't doing "punish any defection from 99" - they are just doing "play 99" to avoid punishments. The one brave soul doesn't know how many of each there are, but can find out when they suddenly go for 30.

It's not. The original Nash construction is that player N picks a strategy that maximizes their utility, assuming all other players get to know what N picked, and then pick a strategy that maximizes their own utility given that. Minimax as a goal is only valid for atomic game actions, not complex strategies - specifically because of this "trap".

Ok, let's see.  Wikipedia: This is sensible. Then... from the Twitter thread: This seems incorrect.  The Wiki definition of Nash equilibrium posits a scenario where the other players' strategies are fixed, and player N chooses the strategy that yields his best payoff given that; not a scenario where, if player N alters his strategy, everyone else responds by changing their strategy to "hurt player N as hard as possible".  The Wiki definition of Nash equilibrium doesn't seem to mention minimax at all, in fact (except in "see also"). In this case, it seems that everyone's starting strategy is in fact something like "play 99, and if anyone plays differently, hurt them as hard as possible".  So something resembling minimax is part of the setup, but isn't part of what defines a Nash equilibrium.  (Right?) Looking more at the definitions... The "individual rationality" criterion seems properly understood as "one very weak criterion that obviously any sane equilibrium must satisfy" (the logic being "If it is the case that I can do better with another strategy even if everyone else then retaliates by going berserk and hurting me as hard as possible, then super-obviously this is not a sane equilibrium"). It is not a definition of what is rational for an individual to do.  It's a necessary but nowhere near sufficient condition; if your decisionmaking process passes this particular test, then congratulations, you're maybe 0.1% (metaphorically speaking) on the way towards proving yourself "rational" by any reasonable sense of the word. Does that seem correct?
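
The definition being debated can be checked mechanically. A toy sketch (hypothetical 2x2 payoff numbers, not the 99-game from the thread): a profile is a Nash equilibrium iff no player gains by a unilateral deviation while the others' strategies stay fixed.

```python
# Toy payoff table, prisoner's-dilemma-shaped (hypothetical numbers).
payoffs = {
    # (row_strategy, col_strategy): (row_payoff, col_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(row, col):
    # No unilateral deviation by either player improves that player's payoff.
    r0, c0 = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= r0 for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= c0 for c in strategies)
    return row_ok and col_ok

print(is_nash("D", "D"))  # True: neither player gains by deviating alone
print(is_nash("C", "C"))  # False: each player gains by deviating
```

Note the check holds the other player's strategy fixed; it never asks whether the others could retaliate, which is exactly the distinction drawn above.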

There is a more fundamental objection: why would a set of 1s and 0s represent (given periodic repetition in 1/3 of the message, so dividing it into groups of 3 makes sense) specifically 3 frequencies of light and not

  1. Sound (hat tip Project Hail Mary)
  2. An arrangement of points in 3d space
  3. Actually 6 or 9 "bytes" to define each "point"
  4. Or the absolute intensity or scale of the information (hat tip Monty Python tiny aliens)
I think it could deduce it's an image of a sparse 3D space with 3 channels. From there, it could deduce a lot, but maybe not that the channels are activated by certain frequencies.
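
A tiny sketch of the ambiguity (hypothetical bits): the same bytes admit multiple readings, and nothing in the stream itself picks one.

```python
# 24 hypothetical bits, grouped into three bytes: readable as an RGB
# pixel, a 3D coordinate, or three audio samples -- the bit stream
# alone can't disambiguate.
msg = "111111110000000010000000"
bytes3 = [int(msg[i:i + 8], 2) for i in range(0, len(msg), 8)]
as_pixel = tuple(bytes3)   # (255, 0, 128) as colour channels?
as_point = tuple(bytes3)   # ...or as (x, y, z)?
print(bytes3)  # [255, 0, 128]
```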

I think the key facility of an agent vs a calculator is the capability to create new short term goals and actions. A calculator (or water, or bacteria) can only execute the "programming" that was present when it was created. An agent can generate possible actions based on its environment, including options that might not even have existed when it was created.

I think even these first rough concepts have a distinction between beliefs and values. Even if the values are "hard coded" from the training period and the manual goal entry.

Being able to generate short term goals and execute them, and see if you are getting closer to your long term goals, is basically all any human does. It's a matter of scale, not kind, between me and a dolphin and AgentGPT.

In summary: Creating an agent was apparently already a solved problem, just missing a robust method of generating ideas/plans that are even vaguely possible.

Star Trek (and other sci-fi) continues to be surprisingly prescient, and "Computer, create an adversary capable of outwitting Data" creating an agent AI is actually completely realistic for 24th century technology.

Our only hopes are:

  1. The accumulated knowledge of humanity is sufficient to create AIs with an equivalent of IQ of 200, but not 2000.
  2. Governments step in and ban things.
  3. Adversarial action kee
... (read more)
Speculative: Another point is that it may be speed of thought, action, and access to information that bottlenecks human productive activites - that these are the generators of the quality of human thought. The difference between you and Von Neumann isn't necessarily that each of his thoughts was magically higher-quality than yours. It's that his brain created (and probably pruned) thoughts at a much higher rate than yours, which left him with a lot more high quality thoughts per unit time. As a result, he was also able to figure out what information would be most useful to access in order to continue being even more productive. Genius is just ordinary thinking performed at a faster rate and for a longer time. GPT-4 is bottlenecked by its access to information and long-term memory. AutoGPT loosens or eliminates those bottlenecks. When AutoGPT's creators figure out how to more effectively prune its ineffective actions and if costs come down, then we'll probably have a full-on AGI on our hands.

The devil, as they say, is in the details. But worst case scenario is to flip a coin - don't be Buridan's Ass and starve to death because you can't decide which equidistant pile of food to eat.

Making choices between domains in pursuit of abstract goals:

Say I have an agent with the goal of "win $ in online poker" and read/write access to the internet. Obviously that agent will simulate millions of games, and play thousands of hands online to learn more about poker and get better. What I don't expect to ever see (without explicit coding by a human) is that "win $ at poker" AI looking up instructional youtube videos to learn from human experts, or telling its handlers to set up additional hardware for it, or writing child AI programs with different... (read more)

Better headline would be "I created a market on whether, in 2 months, I will believe that IQ tests measure what I believe to be intelligence" Not a particularly good market question.

Changed the title. I personally find that I often like the subjective questions because they are an opportunity for people to share their information about the topic.

What we saw when the I-15 Corridor was expanded (southern California, from Riverside to San Diego inland) was that over time people were willing to live further away from work, because the commute was "short enough," but as more people did that it got crowded again. So, total vehicle miles increased, without increasing the number of vehicle trips, since each trip was longer.

Highlighting the point in the Q&A: If you are having fun in HS or College, you don't need to leave. Put that extra energy that could be going towards graduating early into a side project (learn plumbing, coding, carpentry, auto maintenance, socializing, networking, YouTubing, dating, writing, or anything else that will have long-term value regardless of what your career happens to be).

I'm a big fan of "take community College courses, and have them count for HS credits and towards your associates/bachelors" if your HS allows it.

Have you tried playing with two (or 3 or 4) sides considered "open" - allowing groups to live if they touch those sides (abstracting away a larger board, to teach or practice tactical moves)?

"Baby sign" is just a dozen or so concepts like "more", "help", "food", "cold" etc. The main benefit is that the baby can learn to control their hands before they learn to control their vocal cords.

I'll just note here that "ability to automate the validation" is only possible when we already know the answer. Since the automated loom, computers have been a device for doing the same thing, over and over, very fast.

You don't necessarily need to know the correct answer beforehand to be able to validate whether or not an answer is correct. If we take Eliezer's problem of generating text that matches a given hash value, it's easy to validate whether an answer is true or not even if you don't know the answer beforehand. What's important is that the AI is sometimes able to generate correct answers. If the criteria for a correct answer are well-defined enough it can go from solving a problem 1% of the time correctly to solving it 100% of the time correctly.  ChatGPT is used by millions of people and a good portion of that will click the feedback button, especially if they optimize their UI for that. It's possible to build automated processes that will look at the problems where it currently frequently makes mistakes and learn to avoid them. It is possible to build a self-improving system around that.  If you let it do that for 10,000 different problems I would expect that it learns some reasoning habits that generalize and are useful for solving other problems as well. 
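
The hash example is easy to make concrete; a minimal sketch (the target string is a toy stand-in for an answer we pretend is unknown):

```python
import hashlib

# You can verify a candidate answer against a target hash without
# knowing the answer in advance: validation is cheap even when the
# search for a matching input is hard.
target = hashlib.sha256(b"hello").hexdigest()

def is_correct(candidate: bytes) -> bool:
    return hashlib.sha256(candidate).hexdigest() == target

print(is_correct(b"goodbye"))  # False
print(is_correct(b"hello"))    # True
```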

Let us introduce a third vocabulary word: Asset. An Asset is something that is consumed to provide Value, like cash in a mattress, food in a refrigerator, or the portion of a house or car that is depreciating as it is used. One of the miracles of the modern age is the ability of banks to turn Assets into Wealth many times over. It's a bit of technology built on societal trust. In the stock market example, it isn't double-counting, just different perspectives. Stock shares are a claim on the company, so the Google code is included in the Wealth of Google, a... (read more)

I distinguish between Wealth and Value as concepts. Lots of things provide Value (a croissant, a free library app, refactoring code, project management), but Wealth is specifically things that provide ongoing value when used, without being used up. For example, a code base provides ongoing value to its owners, and a refactored code base provides more ongoing value, so that increases Wealth. Living near a beach or other nice place is a form of Wealth. Money in the bank, or in stocks, that is generating enough income to outpace inflation is Wealth. Strong relationships are Wealth. Useful knowledge is Wealth. In summary, Wealth is anything that generates (not "is converted to") Value over time.

if the code base for Google Search represents wealth, but is itself a critical component of Google-the-company’s success, then doesn’t that mean that for any financial instrument based on Google (i.e. Google stock, bonds issued by Google), to consider it also a form of wealth would be to double count that code base? i’m skeptical that money can be both a claim on wealth and also a form of wealth. it seems like it should be strictly one or the other, else you end up with a bank owning a bank owning a bank owning … with each additional layer of ownership somehow resulting in more wealth, and that seems questionable to me.
This has the consequence that in a time of no economic growth money is not wealth, which you may or may not be comfortable with. (I personally think money is a paradigmatic example of wealth, so that any definition of "wealth" that doesn't cover money is ipso facto not a definition I'd want to adopt.)
Adam Zerner (1y):
Ah, I like that! I'm going to adopt that as the way I think about wealth. Thanks. Do you know if that is some sort of agreed-upon way of thinking about it, or just something you came up with and find useful? Not that there's anything wrong with the latter. I think there is a remaining question about whether value is wanting or liking. If you have access to Facebook it gives you the ability to doomscroll any time you can use a web browser. You want to doomscroll but you don't actually like it. So is the access to Facebook wealth? I guess we need two different terms, one to address wanting and the other to address liking. I don't see refactored code or a project manager on staff as providing value. I see value as the thing the end user experiences, like the ability to pay their credit card online. But it makes sense to me to consider the refactored code and PM on staff as adding wealth. Each generates value over time. And I like how this addresses my question about indirect/distal causes of value to end users: it doesn't matter that it's indirect, it still helps to generate value over time.
Answer by Ericf, Jan 10, 2023:

It turns out publication bias is one heck of a drug. Every success of automation was touted, and every failure quietly tucked away, until one day the successes started getting smaller and less significant. We still see improvements around the edges of capability, but the big rocks, like making choices between domains in pursuit of abstract goals, remain elusive.

Hmm, can you elaborate on what you mean in the last sentence?

Having read the linked piece, I think it may be more a case of common cause than learning a new skill. People who are good at deciphering one complex system are going to be good at deciphering other complex systems. And people with experience doing that are going to be better than those without. "Seeing the meta" is just a way to ID people who have learned how to learn systems.
