All of Darmani's Comments + Replies

Taking the outside view on code quality

It's a small upfront cost for gradual long-term benefit. Nothing in that says one necessarily outweighs the other. I don't think there's anything more to be had from this example beyond "hyperbolic discounting."

Taking the outside view on code quality

I think it's simpler than this: renaming it is a small upfront cost for gradual long-term benefit. Hyperbolic discounting kicks in. Carmack talks about this in his QuakeCon 2013 talk, saying "humans are bad at integrating small costs over time": https://www.youtube.com/watch?v=1PhArSujR_A
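To make the "integrating small costs" point concrete, here is a toy back-of-the-envelope sketch in Python (all numbers made up for illustration):

```python
# A one-time renaming cost vs. a small recurring comprehension cost,
# integrated over the remaining life of the code (made-up numbers).
rename_cost_minutes = 5              # one-time cost of renaming yyyymmdd -> currentDate
confusion_minutes_per_read = 0.5     # extra time each future reader spends decoding the old name
future_reads = 200                   # how many times the code gets read before it dies

integrated_cost = confusion_minutes_per_read * future_reads
print(f"One-time cost of renaming: {rename_cost_minutes} min")
print(f"Integrated cost of keeping the bad name: {integrated_cost:.0f} min")
# The integrated cost dwarfs the upfront cost, but hyperbolic discounting makes
# the 5 minutes now feel heavier than the 100 minutes spread over months.
```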

 

But, bigger picture, code quality is not about things like local variable naming. This is Mistake #4 of the 7 Mistakes that Cause Fragile Code: https://jameskoppelcoaching.com/wp-content/uploads/2018/05/7mistakes-2ndedition.pdf

adamzerner: Yes, but at some point the cost starts to outweigh the benefit. E.g. going from yyyymmdd to currentDate is worthwhile, but going from currentDate to betterName, or from betterName to evenBetterName might not be worthwhile. And so I think you do end up having to ask yourself the question instead of assuming that all code quality improvements are worthwhile. Although I also think there's wisdom in using heuristics rather than evaluating whether each and every case is worthwhile. I agree with the big picture point that things that are sort of siloed off aren't as important for code quality. I chose this example because I thought it would be easiest to discuss. However, although I don't think they are as important, or even frequently important, I do think that stuff like local variable names end up often being important. I'm not sure what the right adjective is here, but I guess I can say I find it to be important enough that it's worth paying attention to.
Lean Startup Reading Comprehension Quiz

I read/listened to  Lean Startup back in 2014. Reading it helped me realize many of the mistakes I had made in my previous startup, mistakes I made even though I thought I understood the "Lean startup" philosophy by osmosis.
 

Indeed, "Lean Startup" is a movement whose terminology has spread much faster than its content, creating a poisoned well that inoculates people against learning 

For example, the term "minimum viable product" has been mutated to have a meaning emphasizing the "minimum" over the "product," making it harder to spread the ac... (read more)

Lean Startup Reading Comprehension Quiz

1. What is the difference between learning and validated learning?

 Validated learning is learning that has been tested empirically against users/the marketplace.

2. True or false: "According to the author, a startup with exponential growth in metrics like revenue and number of customers is doing well." Explain your answer.

 False. This is only true if those metrics imply a path of long-term sustainable profitability. If the startup in question is Github, it probably does. If it's Groupon.....

3. Finish the sentence: "almost every lean startup techni... (read more)

Limiting Causality by Complexity Class

Causal inference has long been about how to take small assumptions about causality and turn them into big inferences about causality. It's very bad at getting causal knowledge from nothing. This has long been known.

For the first: Well, yep, that's why I said I was only 80% satisfied. 

 

For the second: I think you'll need to give a concrete example, with edges, probabilities, and functions. I'm not seeing how to apply thinking about complexity classes to a type-causality setting, where it's assumed you have actual probabilities on co-occurrences.

Limiting Causality by Complexity Class

This post is a mixture of two questions: "interventions" from an agent which is part of the world, and restrictions by complexity class.

The first is actually a problem, and is closely related to the problem of how to extract a single causal model which is executed repeatedly from a universe in which everything only happens once. Pearl's answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment ... (read more)

Bunthut: This is actually a good illustration of what I mean. You can't shield an experiment from outside influence entirely, not even in principle, because it's you doing the shielding, and your activity is caused by the rest of the world. If you decide to only look at a part of the world, one that doesn't contain you, that's not a problem - but that's just assuming that that route of influence doesn't matter. Similarly, "knowledge about repeatability" is causal knowledge. This answer just tells you how to gain causal knowledge of parts of the world, given that you already have some causal knowledge about the whole. So you can't apply it to the entire world. This is why I say it doesn't go well with embedded agency. No? What I'm limiting is what dependencies we're considering. And it seems that what you say after this is about singular causality, and I'm not really concerned with that. Having a causal web is sufficient for decision theory.
Leaky Delegation: You are not a Commodity

Thought I'd share an anecdote that didn't make it into the article: on how doing something yourself can make you a better outsourcer.

About 6 months ago, I went shopping for a logo for one of my projects. It helped greatly that I've spent a lot of time studying visual design myself.

I made a document describing what I wanted, including a mood board of other logos. I showed it to a logo design specialist recommended by a friend. He said "That's the best logo requisition doc I've seen, and I've seen a lot."

I also showed it to the designer I've been working wi... (read more)

Leaky Delegation: You are not a Commodity

Oh, on the contrary: I think this article misses several things that are quite important (or were brushed under a single sentence like "[main principal/agent problems] are communication and risk"). Reason: emphasis on things fewer readers were likely to consider.

So the costs you're describing are indeed real and brushed off to a corner. I think both of these fall under transaction costs, and #2 also under centralization and overhead. For #2, I think you mean something other than what "externality" means to me (a cost specifically borne by a non-party to a transaction) --- maybe second-order cost?


 

Leaky Delegation: You are not a Commodity

Thanks! This is good.

It's not a physical good, but I had also been thinking that most of the price of renting a venue on the open market is trust (that you won't mess up their space; whether they can give you the keys vs. needing someone to let you in), followed by coordination. Hence, having a friend let you use their office's conference room on a weekend to do an event might cost $0, while renting such a space might cost $1000.

Leaky Delegation: You are not a Commodity

To clarify: You're not saying the wedding tax is because of insurance costs, as the article is asking about, right?

Ericf: Wedding tax is, in fact, mostly insurance costs. Not liability insurance, but having to charge more because 1. Weddings get canceled, or have significant scope changes, much more than, say, bar mitzvahs. 2. Equipment failures are way more serious at a wedding than at a birthday party.
Help your rivals when they are numerous

I have a number of issues with this post.

 

First, as others have mentioned, opponents are very much not equal. Further, timing is important: certain trades you should be much more or less likely to take near the end of the game, for example.

Second, I don't think it's valid to look at expected values when all you care about is rank. Expectation is very much a concept for when you care about absolute amounts; see the numeric sketch after this comment.

Third, which perhaps sums everything up: I don't see a valid notion of utility / utility maximization for board games, other than perhaps "pro... (read more)
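A toy numeric sketch of the second point (made-up payoffs, not from the original post): an action with a lower expected score can still maximize the probability of finishing first.

```python
# Suppose your opponent will finish with 20 points, and you choose between:
#   A: a guaranteed 10 points          (expectation 10, but you always lose)
#   B: 50 points with p = 0.1, else 0  (expectation 5, but you win ~10% of the time)
import random

def play(option: str) -> int:
    if option == "A":
        return 10
    return 50 if random.random() < 0.1 else 0

def win_rate(option: str, opponent: int = 20, trials: int = 100_000) -> float:
    return sum(play(option) > opponent for _ in range(trials)) / trials

print("E[A] = 10, win rate:", win_rate("A"))  # ~0.00
print("E[B] =  5, win rate:", win_rate("B"))  # ~0.10
```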

In software engineering, what are the upper limits of Language-Based Security?

Did not know about the answer/comment distinction! Thanks for pointing that out.

Before I dig deeper, I'd like to encourage you to come bring these questions to Formal Methods for the Informal Engineer ( https://fmie2021.github.io/ ), a workshop organized by myself and a few other researchers (including one or two more affiliated with LessWrong!) specifically dedicated to helping build collaborations to increase the adoption of formal methods technologies in the industry. Since it's online this year, it might be a bit harder to have these deep open-ended co... (read more)

MakoYass: What, but I'm just a stray dog who makes video games about -... [remembers that I am making a game that centers around an esolang. Turns and looks at my BSc in formal languages and computability. Remembers all of the times a layperson has asked whether I know how to do Hacking and I answered "I'm not really interested in learning how to break things. I'm more interested in developing paradigms where things cannot be broken"]... oh. uh. maybe.
In software engineering, what are the upper limits of Language-Based Security?

It is already possible to build an embedded language inside of Coq which requires arbitrary security proofs for all executable code. It's theoretically possible to make those proofs guard against all known side channels, including, theoretically, hardware-based side channels such as Row Hammer.

Are you asking about which kinds of attacks can't be stopped by improving software? Or are you asking about the theoretical limits of PL technology? The latter question is not so interesting without  more constraints: as stated above, they're basically unbounded.

MakoYass: (If you think the question is too underspecified to answer, you probably shouldn't try to post an answer in the answers section. There is a comments section.) (I'll try to work this into the question description) That would be an interesting thing to see discussed, sure. No, that might be interesting from the perspective of... what kinds of engineering robustness will exist at the limits of the singularity (this topic is difficult to motivate, but I hope the reader would agree that we should generally try to make forecasts about cosmically large events even when the practical reasons to do it are not obvious. It seems a priori unlikely that questions of, say, what kinds of political arrangements are possible in a post-organic galaxy-sized civilization won't turn out to be important in some way, even if we can't immediately see how.) But I'm mainly wondering from a practical perspective. Programming languages are generally tools of craft; they are for getting things done in reality, and even many of the most theoretical languages aspire to that. I'm asking mainly from a perspective of... Can we get the engineers of this imperiled civilization to take their responsibilities more seriously, generally? When it's helpful to be able to prove that something will work reliably before we put it into production, can we make that easy to do? Can any of the tooling design principles from that be generalized to the AGI alignment problem? With regards to Coq, will those languages actually be used in reality, why or why not, should we promote them, should we fund their development? I'm interested in different timescales:
* the most security-promoting development processes that are currently in wide use.
* the most security-promoting development processes that are possible with recently developed technology.
* processes that could be developed now.
* processes that could come to exist 10 years away; processes that might exist 30-50 years from now.
* perhaps some
Extortion beats brinksmanship, but the audience matters

Okay. I think you're saying this is extortion because Walmart's goal is to build a reputation for only agreeing to deals absurdly favorable to them.


If the focus on building a reputation is the distinguishing factor, then how does that square with the following statement: "it is not useful for me to have a credible reputation for following up on brinksmanship threats?"

Stuart_Armstrong: Because a reputation for following up brinksmanship threats means that people won't enter into deals with you at all; extortion works because, to some extent, people have to "deal" with you even if they don't want to. This is why I saw a Walmart-monopsony (monopolistic buyer) as closer to extortion, since not trading with them is not an option.
Extortion beats brinksmanship, but the audience matters

I see. In that case, I don't think the Walmart scenario is extortion. It is not to the detriment of Walmart to refuse to buy from a supplier who will not meet their demands, so long as they can find an adequate supplier who will.

Stuart_Armstrong: I'm thinking of it this way: investigating a supplier to check they are reasonable costs $1 to Walmart. The minimum price any supplier will offer is $10. After investigating, one supplier offers $10.5. Walmart refuses, knowing the supplier will not go lower, and publicises the exchange. The reason this is extortion, at least in the sense of this post, is that Walmart takes a cost (it will cost them at least $11 to investigate and hire another supplier) in order to build a reputation.
Extortion beats brinksmanship, but the audience matters

I'd really appreciate a more rigorous/formal/specific definition of both. I'm not seeing what puts the Walmart example in the "extortion" category, and, without a clear  distinction, this post dissolves.

Stuart_Armstrong: I think the distinction is, from the point of view of the extortioner, "would it be in my interests to try and extort Bi, even if I know for a fact that Bi cannot be extorted and would force me to act on my threat, to the detriment of myself in that situation?" If the answer is yes, then it's extortion (in the meaning of this post). Trying to extort the un-extortable, then acting on the threat, makes sense as a warning to others.
Extortion beats brinksmanship, but the audience matters

Very interesting post. I was very prepared to praise it with "this draws some useful categories for me," but it began to get less clear as I tried more examples. And I'm still trying to come up with a distinction between brinksmanship and extortion. I've thought about the payoff matrices (they look the same), and whether "unilateral attack vs. not" is a distinguishing factor (I don't think so). I still can't find a clear distinction.

 

Examples:

(1) You say that releasing nude photos is in the blackmail category. But who's the audience?

(2) For n=1, m lar... (read more)

Stuart_Armstrong: The other people of whom you have nude photos, who are now incentivised to pay up rather than kick up a fuss. Interesting example that I hadn't really considered. I'd say it fits more under extortion than brinksmanship, though. A small supplier has to sell, or they won't stay in business. If there's a single buyer, "I won't buy from you" is the same as "I will ruin you". Abstracting away the property rights (Walmart is definitely legally allowed to do this), this seems very much an extortion.
Gems from the Wiki: Paranoid Debating

I tried playing this in 2009 at a math summer program. It scared a lot of people away, but I got a small group to join in. The scoring algorithm was rather questionable, but the game of competitive Fermi estimates was fun.

I can't claim to have improved much at rationality or estimates, but, ever since then, I remember that, to the question "How many liters of water are there in the ocean," the answer "1 mole of liters" is not the mark of a deceiver, but is actually relatively close, being only 1 order of magnitude too low.

If I ever... (read more)

When can Fiction Change the World?

Nice post!


Another example for your list: Altneuland catalyzed the Zionist movement that led to the creation of Israel. The city of Tel Aviv is named after the book. https://en.wikipedia.org/wiki/The_Old_New_Land

Timothy Underwood: Oh that's cool. I had known about Herzl being a central figure in Zionism, but not that he'd written a novel to push it forward.
A Hierarchy of Abstraction

Was just reading through my journal, and found that I had copied this quote. I think you'll find it to be of interest re: teaching recursion.

-------------------------------------

From “Computing Science: Achievements and Challenges” (1999):

“I learned a second lesson in the 60s, when I taught a course on programming to sophomores, and discovered to my surprise that 10% of my audience had the greatest difficulty in coping with the concept of recursive procedures. I was surprised because I knew that the concept of recursion was not ... (read more)

Reveal Culture

I've noticed that the Reveal Culture examples / Tell Culture done right greatly resemble the kinds of communication advocated in the many strands of conflict/communication training I've taken. Connecting your requests to needs, looking for interests instead of positions, seeing the listener's perspective, etc.

For instance, the Tell Culture example "I'm beginning to find this conversation aversive" is quite close to the example from my training "I notice I'm having a reaction," except that it's close... (read more)

ChristianKl: Having skills and using them in a particular context aren't the same thing. When talking with rationalists I'm using language out of Circling a lot. I do have the skills to use them. On the other hand, those language patterns don't come easily when talking with my family. There are different cultural norms for the family conversations that do lead me to interact differently in them.
Towards a Formalisation of Logical Counterfactuals

Hi Bunthut,

First, I'll encourage you to have a look at material on what I thought this post was going to be about from the title: https://en.wikipedia.org/wiki/Counterfactual_conditional . I know about this subject primarily from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.3738&rep=rep1&type=pdf (which is more concrete/mathematical than the Wikipedia article, as it's written by computer scientists rather than philosophers).


Second: If I'm understanding this correctly in my sleep deprived state, you're actually wo... (read more)

Bunthut: Hello Darmani, I'm indeed talking about a kind of counterfactual conditional, one that could apply to logical rather than causal dependencies. Avoiding multiple antecedents isn't just a matter of what my conceptual toolkit can handle; I could well have done two different types of nodes, and that would have represented it just fine. However, restricting inferences this way makes a lot of things easier. For example, it means that all inferences to any point "after" C come only from propositions that have come through C. If an inference could have multiple antecedents, then there would be inferences that combine a premise derived from K with a premise in B, and it's not clear if the separation can be kept here. From the paper you linked... Their first definition of the counterfactual (the one where the consequent can only be a simple formula) describes the causal counterfactual (well, the indeterministic-but-not-probabilistic protocols throw things off a bit, but we can ignore those), and the whole "tremble" analysis closely resembles causal decision theory. Now their concept of knowledge is interesting because it's defined with respect to the ecosystem, and so is implicitly knowledge of the other agents' strategy, but the subsequent definition of belief rather kneecaps this. The problem that belief is supposed to solve is very similar to the rederivation problem I'm trying to get around, but it's formulated in terms of model theory. This seems like a bad way to formulate it, because having a model is a holistic property of the formal system, and our counterfactual surgery is basically trying to break a few things in that system without destroying the whole. And indeed the way belief is defined is basically to always assume that no further deviation from the known strategy will occur, so it's impossible to use the counterfactuals based on it to evaluate different strategies. Or applying it to "strategies" of logical derivation: If you try to evaluate "what of X wasn't derived by
A Hierarchy of Abstraction

I was 12 or so when I first studied pointers. I did not get them at all back then.

A Hierarchy of Abstraction

Thanks for the explanation. I accept your usage of "abstraction" as congruent with the common use among software engineers (although I have other issues with that usage). Confusingly, your hierarchy is a hierarchy in g required, not a hierarchy in the abstractions themselves.

I am well-read in Joel Spolsky, and my personal experience matches the anecdotes you share. On the other hand, I have also tutored some struggling programmers to a high level. I still find the claim of a g-floor incredible. This kind of inference feels like claiming the ins... (read more)

Pattern: It's that the general form is unsolvable, not specific examples, without better tools than the usual ones: +, -, *, /, sqrt, ^, etc. I've heard that with hypergeometric functions it's doable, but the same issue reappears for polynomials of higher degree there as well.
lsusr: I think it is more like the irreversibility of the trapdoor functions we use in cryptography. We are unable to prove mathematically they are secure. But an army of experts failing to break them is Bayesian evidence. Sidenote: Lol.
A Hierarchy of Abstraction

You seem to be making a few claims: (1) that these skills require an increasing amount of 1-dimensional intelligence (2) that one cannot do lower-indexed things without doing higher-indexed ones and (3) that there is something fundamental about this.

You obviously do not mean this literally, for there are plenty of people who understand recursion but not pointers (i.e.: intro Python students), and plenty of programmers who have never touched Python.


First, what is an abstraction?


As someone who recently wrote a paper involving Cousot-style abstract interpret... (read more)

lsusr: I appreciate your genuine attempt to understand this post. Python is indeed a stand-in for any programming language of comparable complexity. Pointers and recursion could indeed be flipped around. I think we are on the same page concerning these details.
1. Yes.
2. Not quite. There can be a little slippage between close-together indices, especially #6 & #7. More importantly, this hierarchy is about talent and potential, not ability. If someone could do #5 with a little bit of study then that counts as doing #5. It is easy to imagine a mathematician who understands recursion but has never written a line of computer code. But I would be surprised if such a person could not quickly learn to write simple Python scripts.
3. This is technically an implication, not a claim. But…yeah.
By "abstraction", I mean a simplification of the universe—a model. By "higher-level abstraction", I mean a model built on top of another model.[1] I am not referring to how these models are built upon one another within mathematics. I am trying to organize them based on how they are constructed inside a typical human brain. For example, a human being learns to write before learning arithmetic and learns arithmetic before learning calculus. It is possible to construct calculus from the Peano axioms while skipping over all of arithmetic. But that is not how a human child learns calculus. To put it another way, a human being learning a complex skill begins by learning to perform the skill consciously. With practice, the skill becomes unconscious. This frees up the conscious mind to think on a higher (what I call more "abstract") level. Rinse and repeat. To use an example from Brandon Sanderson's creative writing lectures, a novice author is worried about how to put words down on a page while a skilled writer is thinking with plot structure. An author cannot afford to think about higher level activities until the lower level activities are
Writing Causal Models Like We Write Programs

John is correct that do() is not imperative assignment. It's a different effect called "lazy dynamic scope."

do() is described fully in our paper on formal semantics for a language with counterfactuals, http://www.jameskoppel.com/files/papers/causal_neurips2019.pdf . The connection with dynamic scope is covered in the appendix, which is not yet online.
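For intuition only (this is not the semantics from the paper, just a toy illustration): do() is best thought of as replacing a variable's mechanism before evaluation, rather than imperatively assigning a value into an already-running computation; downstream variables then see the intervened value. A minimal Python sketch, with made-up mechanisms and eager (not lazy) evaluation:

```python
def make_model(mechanisms):
    """mechanisms: dict of name -> function(values_so_far) -> value, in topological order."""
    def evaluate(do=None):
        do = do or {}
        values = {}
        for name, mech in mechanisms.items():
            # Under do(), the intervened variable ignores its usual mechanism entirely;
            # variables downstream of it still read the intervened value.
            values[name] = do[name] if name in do else mech(values)
        return values
    return evaluate

# Toy graph: rain -> sprinkler -> wet grass
model = make_model({
    "rain":      lambda v: 1,
    "sprinkler": lambda v: 0 if v["rain"] else 1,
    "wet":       lambda v: int(v["rain"] or v["sprinkler"]),
})

print(model())                 # observational: {'rain': 1, 'sprinkler': 0, 'wet': 1}
print(model(do={"rain": 0}))   # do(rain=0): the sprinkler mechanism now fires, grass stays wet
```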

Two Alternatives to Logical Counterfactuals
In this possible world, it is the case that "A" returns Y upon being given those same observations. But, the output of "A" when given those observations is a fixed computation, so you now need to reason about a possible world that is logically incoherent, given your knowledge that "A" in fact returns X. This possible world is, then, a logical counterfactual: a "possible world" that is logically incoherent.

Simpler solution: in that world, your code is instead A', which is exactly like A, except that it returns Y ... (read more)

jessicata: Yes, this is a specific way of doing policy-dependent source code, which minimizes how much the source code has to change to handle the counterfactual. Haven't looked deeply into the paper yet but the basic idea seems sound.
Programming Languages For AI

Hi, I'm a Ph.D. student at MIT in programming languages.

Your choice of names suggests that you're familiar with existing tactics languages. I don't see anything stopping you from implementing this as a library in Ltac, the tactics language associated with Coq.

I'm familiar with a lot of DSLs (here's umpteen of them: https://github.com/jeanqasaur/dsl-syllabus-fall-2016/blob/master/README.md ). I've never heard of one designed before they had an idea what the engine would be.

E.g.: you can write a language for creating variants o... (read more)

Donald Hobson: Lisp used to be a very popular language for AI programming. Not because it had features that were specific to AI, but because it was general. Lisp was based on more abstract abstractions, making it easy to choose whichever special cases were most useful to you. Lisp is also more mathematical than most programming languages. A programming language that lets you define your own functions is more powerful than one that just gives you a fixed list of predefined functions. Imagine a world where no programming language lets you define your own functions, and a special purpose chess language has predefined chess functions. Trying to predefine AI related functions to make an "AI programming language" would be hard because you wouldn't know what to write. Noticing that on many new kinds of software project, being able to define your own functions might be useful -- that I would consider useful. The goal isn't a language specialized to AI, it's one that can easily be specialized in that direction. A language closer to "executable mathematics".
How does OpenAI's language model affect our AI timeline estimates?

Something you learn pretty quickly in academia: don't trust the demos. Systems never work as well when you select the inputs freely (and, if they do, expect thorough proof). So, I wouldn't read too deeply into this yet; we don't know how good it actually is.

wizzwizz4: https://transformer.huggingface.co/ is a nice demonstration of GPT2 that allows you to select the inputs freely.

They claim to have beaten records on a range of standard tests (such as the Winograd schema), which is not something you can cheat by cherry-picking, assuming they are honest about the results.

Vis-a-vis selecting inputs freely: OpenAI also included a large dump of unconditioned text generation in their github repo.

2018 AI Alignment Literature Review and Charity Comparison
The vast majority of discussion in this area seems to consist of people who are annoyed at ML systems are learning based on the data, rather than based on the prejudices/moral views of the writer.

While many writers may take this flawed view, there's also a very serious problem here.

Decision-making question: Let there be two actions A and ~A. Our goal is to obtain outcome G. If P(G | A) > P(G | ~A), should we do A?

The correct answer is “maybe.” All distributions of P(A,G) are consistent with scenarios in which doing A is the right answer, and scen... (read more)
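A minimal numeric sketch of why the answer is "maybe" (kidney-stone-style numbers, made up for illustration): A can look better than ~A in the pooled data while being worse within every subgroup, because a confounder drives both the choice of action and the outcome.

```python
# group: (successes_A, trials_A, successes_notA, trials_notA)
groups = {
    "mild":   (234, 270, 81, 87),
    "severe": (55, 80, 192, 263),
}

def rate(s, n):
    return s / n

for g, (sa, na, sn, nn) in groups.items():
    print(f"{g:>6}: P(G|A) = {rate(sa, na):.2f}   P(G|~A) = {rate(sn, nn):.2f}")

sa = sum(v[0] for v in groups.values()); na = sum(v[1] for v in groups.values())
sn = sum(v[2] for v in groups.values()); nn = sum(v[3] for v in groups.values())
print(f"pooled: P(G|A) = {rate(sa, na):.2f}   P(G|~A) = {rate(sn, nn):.2f}")
# Within each group ~A is the better action, yet pooled P(G|A) > P(G|~A):
# conditioning alone cannot tell you whether to do A.
```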

Overconfident talking down, humble or hostile talking up

And Paul Graham in Beating the Averages: http://www.paulgraham.com/avg.html

Cause Awareness as a Factor against Cause Neutrality

I think you hit the kernel of the argument in the first paragraph: If you have an obscure pet cause, then chances are it's because you do have some special knowledge about the problem. The person visiting a random village might not, but the locals do, and hence this is a reason why local charity can be effective, particularly if you live in a remote area where the problems are not quantified (and are hence probably not reading this).

Learning strategies and the Pokemon league parable

Put that in your post! I got what you're saying way better after reading that.

Learning strategies and the Pokemon league parable

I'm confused about whether you're talking about "learning things specifically to solve a problem" (which I've seen called "pull-based learning"), or "learning things by doing projects" (i.e.: project-based learning). The former differs from the "waterfall method" ("push-based learning") only in the sequence and selection: it's just the difference between doing a Scala tutorial because you want to learn Scala, vs. because you just got put on a project that uses Scala (and hence you can sk... (read more)

Jsevillamol: I actually got directed to your article by another person before this! Congrats on creating something that people actually reference! In hindsight, yeah, project-based learning is neither what I meant nor a good alternative to traditional learning; if you can use cheat codes to speed up your learning using the experience from somebody else you should do so without a doubt. The generator of this post is a combination of the following observations:
1) I see a lot of people who keep waiting for a call to adventure
2) Most knowledge I have acquired through life has turned out to be useless, non-transferable and/or fades out very quickly
3) It makes sense to think that people get a better grasp of what skills they need to solve a problem (such as producing high quality AI Alignment research) after they have grappled with the problem. This feels especially true when you are at the edge of a new field, because there is no one else you can turn to who would be able to compress their experience in a digestible format.
4) People (especially in mathematics) have a tendency to wander around aimlessly picking up topics, and then use very little of what they learn. Here I am standing on not very solid ground, because conventional wisdom is that you need to wander around to "see the connections", but I feel like that might be just confirmation bias creeping in.
Apptimize -- rationalist startup hiring engineers

It's more like the Crockford book -- a set of best practices. We use a fairly functional style without a lot of moving parts that makes Java very pleasant to work with. You will not find a SingletonFactoryObserverBridge at this company.

Apptimize -- rationalist startup hiring engineers

Yep. The first thing we do is have a conversation where we look for the 6 company values. Another of them is "commitment," which includes both ownership and grit.

Post ridiculous munchkin ideas!

You mean, a lot of cool mathematicians are eastern European. But Terry Tao and Shinichi Mochizuki are not.

[This comment is no longer endorsed by its author]

Man, this is that thing I was talking about earlier when someone takes a colloquial phrase that sounds like a universal quantifier and interprets it as literally a universal quantifier.

Weekly LW Meetups: Austin, Berlin, Cambridge UK, Champaign IL, Durham NC (2), Washington DC

Chris Olah and I (Jimmy Koppel) are both Thiel Fellows and avid Less Wrongers. We'd be happy to answer any questions about the program.

Does anyone know any kid geniuses?

I know at least four people who started college by age 15. They're not "kid" geniuses anymore though -- the youngest is 16 and slowly going through college part-time, while the oldest is in his 30s and a full math professor at Arizona.

I don't know about the upbringing of the other three, but one attended a program where taking classes multiple grade-levels ahead is the norm (though no-one else learned calculus in 3rd grade), and attended Canada/USA Mathcamp during the summers of his undergrad.

I second the Olympiads. Terry Tao famously represented Australia at the IMO at age 10, so he's definitely old enough.

Biased Pandemic

With a sufficiently strong player, Arkham Horror is a one player game which seven people play.

Vaniver: This is true -- really it's true of any purely cooperative board game -- and I've played it single player with several characters several times. For some reason the effect seems stronger with Pandemic.
shokwave: I have experienced exactly this.
Diseased disciplines: the strange case of the inverted chart

There is a very healthy (and mathematical) subdiscipline of software engineering, applied programming languages. My favorite software-engineering paper, Type-Based Access Control in Data-Centric Systems, comes with a verified proof that, in the system it presents, data-access violations (i.e.: privacy bugs) are impossible.

This is my own research area ( http://www.cs.cmu.edu/~aldrich/plaid/ ), but my belief that this was a healthy part of a diseased discipline is a large part of the reason I accepted the position.

Completeness, incompleteness, and what it all means: first versus second order logic

Yes, but the point is that we are learning features from empirical observations, not using some magic deduction system that our computers don't have access to. That may only be one bit of information, but it's a very important bit. This skips over the mysterious part in the exact same way that "electrical engineering" doesn't answer "How does a CPU work?" -- it tells you where to look to learn more.

I know far less about empirical mathematics than about logic. The only thing along these lines I'm familiar with is Douglas Lenat's Automat... (read more)

Completeness, incompleteness, and what it all means: first versus second order logic

We form beliefs about mathematics the same way we form beliefs about everything else: heuristic-based learning algorithms. We typically accept things based on intuition and inductive inference until trained to rely on proof instead. There is nothing stopping a computer from forming mathematical beliefs based on statistical inference rather than logical inference.

Have a look at experimental mathematics or probabilistic number theory for some related material.
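As a concrete illustration of the statistical route (my example, not from the thread): one can "test" Goldbach's conjecture on random even numbers and let the absence of counterexamples raise one's confidence, without ever producing a proof.

```python
import random

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n: int) -> bool:
    """Is the even number n a sum of two primes? (early-exits on the first witness)"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

random.seed(0)
samples = [2 * random.randrange(2, 10**5) for _ in range(200)]
failures = [n for n in samples if not goldbach_holds(n)]
print(f"{len(samples)} random even numbers tested, {len(failures)} counterexamples found")
# Zero counterexamples is inductive, statistical evidence -- a belief formed the
# way the comment describes, not a logical derivation.
```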

cousin_it: Saying "heuristic-based learning algorithms" doesn't seem to compress probability mass [http://lesswrong.com/lw/ix/say_not_complexity/] very much. It feels like skipping over the mysterious part. How exactly do we write a program that would arrive at the axioms of PA by using statistics? I think if we did that, then the program probably wouldn't stop at PA and would come up with many other interesting axioms, so it could be a useful breakthrough.
Completeness, incompleteness, and what it all means: first versus second order logic

Computers can prove everything about integers that we can. I don't see a problem here.

cousin_it: If the human and the computer are using the same formal system, then sure. But how does a human arrive at the belief that some formal system, e.g. PA, is "really" talking about the integers, and some other system, e.g. PA+¬Con(PA), isn't? Can we formalize the reasoning that leads us to such conclusions? Pointing out that PA can be embedded in some larger formal system, e.g. ZFC, doesn't seem to be a good answer because that larger formal system will need to be justified in turn, leading to an infinite regress. And anyway humans don't seem to be born with a belief in ZFC, so something else must be going on.
Completeness, incompleteness, and what it all means: first versus second order logic

The argument wasn't specific to ZFC or HOL, it was intended to apply to any system that can have a proof checker.

Pointing out the Gödelian limitations of all systems with recursively enumerable axioms doesn't seem like much of criticism of the system of nth-order logic I mentioned. Now I have less of an idea of what you're trying to say.

By the way, I think he's using "full model" to mean "standard model." Presumably, the standard integers are the standard model that satisfies the Peano axioms, while nonstandard integers are any other... (read more)

cousin_it: Not much, admittedly :-) The interesting question to me is how to make a computer understand "integers" the way we seem to understand them. It must be possible because humans are not magical, but second-order logic doesn't seem to help much.
Completeness, incompleteness, and what it all means: first versus second order logic

If I understand correctly, you are saying that higher order logic cannot prove all theorems about the integers that ZFC can. That's a very uninteresting statement. Since higher order logic was proven consistent* with ZFC, it is strictly weaker. Second-order logic is, of course, strictly weaker than 4th order logic, which is strictly weaker than 6th order logic, and so on, and is thus much weaker than higher-order logic.

I've never heard it claimed that second-order logic has a unique model of the standard integers. Actually, I've never heard of the standar... (read more)

cousin_it: We don't seem to understand each other yet :-( The argument wasn't specific to ZFC or HOL, it was intended to apply to any system that can have a proof checker. Sorry, I screwed up the wording there, meant to say simply "unique model of the integers". People often talk about "standard integers" and "nonstandard integers" within models of a first-order axiom system, like PA or ZFC, hence my screwup. "Standard model" seems to be an unrelated concept. The original post says something like that, search for "we know that we have only one (full) model".
Completeness, incompleteness, and what it all means: first versus second order logic

Proofs in second-order logic work the exact same way as proofs in first-order logic: You prove a sentence by beginning with axioms, and construct the sentence using inference rules. In first-order logic, you have individuals, functions (from individuals to individuals), terms, and predicates (loosely speaking, functions from individuals to truth values), but may only quantify over individuals. In second-order logic, you may also quantify over predicates and functions. In third-order logic, you add functions of functions of individuals, and, in fourth-order... (read more)
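For concreteness (standard textbook formulations, not from the thread): in first-order Peano arithmetic, induction is an axiom schema with one instance per formula, whereas second-order arithmetic states it as a single axiom quantifying over predicates, and the comprehension schema then says which predicates exist.

```latex
% First-order induction schema: one instance for each formula \varphi(x)
\varphi(0) \land \forall n\,\bigl(\varphi(n) \to \varphi(n+1)\bigr) \;\to\; \forall n\,\varphi(n)

% Second-order induction: a single axiom quantifying over predicates
\forall P\,\bigl[\,P(0) \land \forall n\,\bigl(P(n) \to P(n+1)\bigr) \to \forall n\,P(n)\,\bigr]

% Comprehension schema: for each formula \varphi in which P does not occur free
\exists P\,\forall x\,\bigl(P(x) \leftrightarrow \varphi(x)\bigr)
```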

cousin_it: The comprehension axiom schema (or any other construction that can be used by a proof checker algorithm) isn't enough to prove all the statements people consider to be inescapable consequences of second-order logic. You can view the system you described as a many-sorted first-order theory with sets as one of the sorts, and notice that it cannot prove its own consistency (which can be rephrased as a statement about the integers, or about a certain Turing machine not halting) for the usual Goedelian reasons. But we humans can imagine that the integers exist as "hunks of Platoplasm" somewhere within math, so the consistency statement feels obviously true to us. It's hard to say whether our intuitions are justified. But one thing we do know, provably, irrevocably and forever, is that being able to implement a proof checker algorithm precludes you from ever getting the claimed benefits of second-order logic, like having a unique model of the standard integers. Anyone who claims to transcend the reach of first-order logic in a way that's relevant to AI is either deluding themselves, or has made a big breakthrough that I'd love to know about.
[Link] Duolingo

I used Duolingo for a few hours on its first day. (I used to TA for Luis, which helps for getting private invites, at least by knowing to sign up immediately.)

I've basically just gone through passing out of German lessons. This basically consists of taking a 20 question test, in which I translate sentences like "The woman drinks with her cat." and pray I don't make typos on three questions and have to start over. Except that all too often I give correct translations, but their checker isn't attuned to the flexibilities of German word ordering, or... (read more)

Smart and under 20? Peter Thiel wants to pay you to not go to school.

I think a lot of people get put off because Andrew Hsu is first in the alphabet. A lot of them barely come up in Google searches (save for the Fellowship itself).
