Reply to "Extreme Rationality: It's Not That Great", "Extreme Rationality: It Could Be Great", "The Craft and the Community", and "Why Don't Rationalists Win?"
I’m going to say something which might be extremely obvious in hindsight:
If LessWrong had originally been targeted at and introduced to an audience of competent business people and self-improvement health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.
Right now, rationalists aren't winning. Rationality helps us choose which charities to donate to, and, as Scott Alexander pointed out in 2009, it gives clarity-of-mind benefits. However, as he also pointed out in the same article, rationality doesn't seem to be helping us win in individual career or interpersonal/social areas of life.
It’s been nearly ten years since then, and I have yet to see any sign that this fact has changed. I considered the possibility that I just hadn’t heard about other rationalists’ practical success due to having not become a rationalist until around 2015, or simply because no one was talking about their success. Then I realized that was silly. If rationalists had started winning, at least one person would have posted about it here on lesswrong.com. I recently spoke to Scott Alexander, and he said he still agreed with everything he said in his article.
So rationalists aren't winning. Why not? The Bayesian Conspiracy podcast (if I recall correctly) proposed the following explanation in one of their episodes: rationality can only help us improve a limited amount relative to where we started out. They predicted that rationalists who started out at a lower level of life success/cognitive functioning/talent cannot outperform non-rationalists who started out at a sufficiently high level.
This argument is fundamentally a cop-out. When others win in places where we fail, it makes sense to ask, "How? What knowledge, skills, qualities or experience do they have which we don't? And how might we obtain the same knowledge, skills, qualities or experience?" To say that others are simply more innately talented than we are, and leave it at that, doesn't explain the mechanism behind their hypothesized greater rate of improvement after learning rationality. It tells us why but not how. And if there were such a mechanism, could we not replicate it and improve more anyway?
So why aren't we winning? What’s the actual mechanism behind our failure?
It's because we lack some of the skills we need to win - not because we don't want to win, and not because we're lazy.
Rationalists are very good at epistemic rationality. But there's this thing that we've been referring to as "instrumental rationality" which we're not so good at. I wouldn’t say it’s just one thing, though. Instrumental rationality seems like many different arts that we're lumping together.
It's more than that, though. We're not just lumping together many different arts of rationality. As anyone who's read the sequence A Human’s Guide to Words would know, categorization and labeling are not neutral actions for a human. By classifying all rationality as one of two types, epistemic or instrumental, we limit our thinking about rationality. As a result of this classification, we fail to acknowledge the true shape of rationality’s similarity cluster.
The cluster’s true shape is that of instrumental rationality: it is the art of winning, a.k.a. achieving your values. All rationality is instrumental, and epistemic rationality is merely one example of it. The art of epistemic rationality is how you achieve the value of truth. Up until now, "instrumental rationality" has been a catch-all term we've been using for the arts of winning at every other value.
While achieving the value of truth is extremely useful for achieving every other value, truth is still only one value among many. The skills needed to achieve other values are not the same as the skills needed to achieve the value of truth. That is to say, epistemic rationality includes the skill sets that are useful for obtaining truth and “instrumental rationality” includes all other skill sets.
Truth is a precious and valuable thing. It's just not enough by itself to win in other areas of life.
That might seem obvious at face value. However, I'm not sure we understand that on a gut level.
I have the impression that many of us assume that so long as we have enough truth, everything else will simply fall into place - that we’ll do everything else right automatically without needing to really develop or practice any other skills.
Perhaps that would be the case with enough computing power. An artificial superintelligence could perhaps play baseball extremely well with the following method:
1. Use math to calculate where the particles in the bat, the ball, the air, and all the players are moving.
2. Predict which particles have to be moved to and from what positions in order to cause a chain reaction that results in the goal state. In this case, the goal state would be a particle configuration that humans would identify as a won game of baseball.
3. Move the key particles to the key positions. If you fail to reach the goal state, adjust your priors accordingly and repeat the process.
An artificial superintelligence could perhaps navigate relationships, or discover important scientific truths, or really anything else, all by this same method, provided that it had enough time and computing power to do so.
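As a toy sketch of that predict-act-update loop (purely my own illustration, nothing from the original argument - a binary search stands in for genuine Bayesian updating, and an invented "swing timing" parameter stands in for particle positions):

```python
# Hypothetical illustration of the predict-act-update loop described above.
# The agent acts on its current best guess, observes the miss, and narrows
# its beliefs, repeating until it reaches the goal state.

def predict_act_update(true_timing=0.73, tolerance=0.01, max_trials=100):
    low, high = 0.0, 1.0              # prior: the timing lies in [0, 1]
    for _ in range(max_trials):
        guess = (low + high) / 2      # act on the current best prediction
        if abs(guess - true_timing) < tolerance:
            return guess              # goal state reached: a "hit"
        # adjust beliefs based on the observed miss, then repeat
        if guess < true_timing:
            low = guess
        else:
            high = guess
    return None

print(predict_act_update())  # converges on ~0.73 within a handful of trials
```

The point is only the shape of the loop: predict, act, observe, update, repeat.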
But humans are not artificial superintelligences. Our brains compress information into caches for easier storage. We will not succeed at life just by understanding particle physics, no matter how much reductionism we do. As humans, our beliefs are organized into categorical levels. Even if we know that reality itself is really all just one level, our brains don't have the space to contain enough particle-level knowledge to succeed at life (assuming that particles really are the base level, but we’ll leave that aside for now). We need that knowledge compressed into different categorical levels or we can't use it.
This includes procedural knowledge like "which particles need to be moved to and from what positions to win a game of baseball". If our brains were big enough to be capable of knowing that, then all we would need to do to win is obtain that knowledge and then output the correct choice.
For an artificial superintelligence, once it has enough relevant knowledge, it would have all that it needs to make optimal decisions according to its values.
For a human, given the limits of human brains, having enough relevant knowledge isn't the only thing needed to make better decisions. Having more knowledge can be extremely useful for achieving one's other goals besides just knowledge for knowledge’s sake, but only if one has the motivation, skills and experience to leverage that knowledge.
Current rationalists are really good at obtaining knowledge, at least when we manage to apply ourselves. But we're failing to leverage that knowledge. For instance, we ought to be dominating prediction markets and stock markets and producing a disproportionately high number of superforecasters, to the point where other people notice and take an interest in how we managed it.
In fact, betting in prediction markets and stock markets provides an external criterion for measuring epistemic rationality - just as martial arts success can be measured by the external criterion of hitting your opponent.
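To make that criterion concrete, here is a minimal sketch (my own numbers and function name; it assumes the standard contract that pays $1 if the event occurs): a bettor whose honest probability is p expects to gain p - q per contract bought at market price q, so consistently profiting is direct evidence that your probabilities are better calibrated than the market's.

```python
# Hedged illustration: expected profit from buying one prediction-market
# contract that pays $1 if the event occurs, priced at q, when your own
# probability for the event is p. All numbers below are invented.

def expected_profit_per_contract(p, q):
    # win (1 - q) with probability p; lose the q you paid otherwise
    return p * (1 - q) - (1 - p) * q  # simplifies to p - q

print(expected_profit_per_contract(0.70, 0.55))  # ~0.15: worth betting
print(expected_profit_per_contract(0.50, 0.55))  # ~-0.05: stay out
```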
So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?
In my own case, I'm still an undergraduate college student living largely off of my parents' income. I can't afford to bet on things, since I don't have enough money of my own, and my income is highly irregular and hard to predict, which makes budgeting difficult. I would also need to explain the expense to my mother if I started betting. If I did have more money of my own, though, I would definitely be spending some of it on this. Do a lot of other people here have such extenuating circumstances? Somehow that would feel like too much of a coincidence.
It's more likely to be because many of us haven't learned the instrumental skills needed to get ourselves to go out and bet. Such skills might include time management to set aside time to go bet, or interpersonal/communication skills to make sure the terms of the bets are clear and that we're only betting against those who will abide by the terms once they're set.
Prediction markets and stock markets aren't the only opportunity that rationalists are failing to take advantage of. For example, our community almost entirely neglects public relations, despite its potential as a way to significantly increase staff and funds for the causes we care about by raising the sanity waterline. We need better interpersonal/communication skills for interacting with the general public, and we need to learn to be more pragmatic so we will actually be able to get ourselves to do that instead of succumbing to an irrational deep-seated fear of appearing cultish.
Competent business people and self-improvement health buffs do have those skills. We don’t. That’s why we’re not winning.
In short, we need arts of rationality for the pursuit of values beyond mere truth. One of my friends who has read the Sequences has spent years beginning to map out those other arts, and he recently presented his work to me. It's really interesting. I hope you find it useful.
(Note: Said friend will be introducing himself on here and writing a sequence about his work later. When he does I will add the links here.)
Humans who win typically just choose harder goals and don't spend a lot of time patting themselves on the back online. FWIW, superforecasters were disproportionately SSC readers; I interviewed four of them. Also, LW, like most self-help communities, attracts the walking wounded - see the mental health incidence in the SSC survey. Going from well below average on several metrics to slightly above doesn't look impressive from the outside, but is very large from the inside.
I support the opposite perspective - it was wrong to ever focus on individual winning and we should drop the slogan.
"Rationalists should win" was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more "rational".
But this got caught up in excitement around "instrumental rationality" - the idea that the "epistemic rationality" skills of figuring out what was true were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, i.e. became a self-help group.
I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can't deny this makes sense. I can just point out that it doesn't resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss...
There's a lot in the word "woo".
One of my favorite examples is Roy Baumeister's book Willpower, which he published in 2011. He's a professor who, two years later, received the highest award given by the Association for Psychological Science, the William James Fellow Award.
The book builds on a bunch of non-replicable science and goes on to recommend that people eat sugar to improve their willpower, in a way that maps well onto what Feynman describes as Cargo Cult Science. We know the bad effects of sugar on the human body.
Here we have a distinguished psychologist who, in this decade, wrote a book that does the equivalent of recommending bloodletting. That's not a community with high epistemic norms.
You, Scott, recently wrote a post where you were surprised that neuroscience as a field messes up a question such as neurogenesis. Given the track record of the community, that should be no surprise, as they are largely doing the thing Feynman called Cargo Cult Science. They even publish papers that constantly claim they can predict things better than theoretically possible.
Everybody tries to succeed at life. It feels to me like "not do self-help" b...
I have… issues with this comment; it is not without flaws. That said, Scott, I want to focus on a point which you make and with which I don’t so much disagree as think that you don’t take it far enough.
I will be the first to agree that becoming a self-help community was, for Less Wrong, a terrible, terrible mistake. But I would go further.
I would say that becoming a community, at all, was a mistake. There was never a need for it. Despite even Eliezer’s approval and encouragement, it was a bad idea—because it predictably led to all the problems, and the problem-generating dynamics, which we have seen and which we continue to see.
It always was, and remains, a superior strategy, to be a, shall we say, “project group”—a collective of individuals, who do not constitute a community, who do not fulfill each other’s needs for companionship, friendship, etc., who do not provide a “primary social circle” to its members, but who are united by their collective interest in, and commitment to, a certain pursuit. A club, in other words.
In short, “bonding” was at least as bad an idea as I initially suspected. Probably much worse.
Very much agree. Some people want the well-being benefits of belonging to a substitute church, and will get these benefits somewhere anyway, but I think productive projects should avoid that association. (And accept the risk of fizzling out, like IAFF and Arbital did when trying to grow independently from LW.) Here's hoping that Abram, Paul and Rohin with their daily posts can make LW a more project-focused place.
Edit: oops, cousin_it beat me to it.
The "Intelligent Agent Foundations Forum" at https://agentfoundations.org/.
It was a platform MIRI built for discussing their research, that required an invite to post/comment. There's lots of really interesting stuff there - I remember enjoying reading Jessica Taylor's many posts summarising intuitions behind different research agendas.
It was a bit hard and confusing to use, and noticing that we might be able to do better was one of the things that led us to build the AI Alignment Forum.
As the new forum is a space for discussion of all alignment research, and all of the old IAFF stuff is a subset of that, we (with MIRI's blessing) imported all the old content. At some point we'll set all the old links to redirect to the AI Alignment Forum too.
Very much disagree - but this is as someone not in the middle of the Bay Area, where the main part of this is happening. Still, I don't think rationality works without some community.
First, I don't think that the alternative communities that people engage with are epistemically healthy enough to allow people to do what they need to reinforce good norms for themselves.
Second, I don't think that epistemic rationality is something a non-community can do a good job with, because people get far too little personal reinforcement and too few positive vibes to stick with it if everyone is going it alone.
I don't think they get epistemic rationality anywhere near correct either. As a clear and simple example, there are academics currently vigorously defending their right not to pre-register empirical studies.
I mostly agree with this, but want to point at something that your comment didn't really cover, that "whether to go to the homeopath or the doctor" is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you've separated it into "what advice should I follow?" and "what advice is out there?"]
But this requires that the question of how to evaluate strategies be framed more in terms of "I used my judgment to weigh evidence" and less in terms of "I followed the prestige" or "I compared the lengths of their articulated justifications" or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 200...
I think you are not looking in the right places, as the groups of rationalists I know are doing incredibly well for themselves - tenure-track positions at major universities, promotions to senior positions in US government agencies, incredibly well paid jobs doing EA-aligned research in machine learning and AI, huge amounts of money being sent to the rationalist-sphere AI risk research agendas that people were routinely dismissing a few years ago, etc.
To evaluate this more dispassionately, however, I'd suggest looking at the people who posted high-karma posts in 2009, and seeing what those posters are doing now. I'll try that here, but I don't know what some of these people are doing now. They seem to be an overall high-achieving group. (But we don't have a baseline.)
https://www.greaterwrong.com/archive/2009 - Page 1: I'm seeing Eliezer (he seems to have done well), Hal Finney (unfortunately deceased, but had he lived a bit longer he would have been a multi-multi-millionaire as an early bitcoin holder/developer), Scott Alexander (I think his blog is doing well enough), Phil Goetz - ?, Anna Salamon (helping run CFAR), "Liron" - (?, but he...
Jim Babcock is working on the LW team with Oli, Ray and me :-)
My hot take response to OP is that the general question should be how well teams of rationalists are doing at their actual goals, not whether they seem superficially successful on common, easy to measure metrics (i.e. lotsa money and popularity).
To pick one of the top goals, how much better is the world doing on its long-term trajectory due to the work of teams around here? There's many key object-level insights (e.g. logical inductors and other core research), and noticeably more global coordination around superintelligence as x-risk (discussion of Bostrom's book, several full-time and thoughtful funders in the space - OpenPhil, BERI, etc, - highly competent research teams at DeepMind and OpenAI and UC Berkeley, focused tech teams building software for the research community *cough*, a few major conferences, and more). Naturally a bunch of stuff is behind the scenes too.
Perhaps you expected all of this to happen by default, but I've been repeatedly surprised by the magnitude of positive events that have occurred. If I compare this to a few bloggers talking about the problem details just 10 years ago, it see...
Yes, there needs to be some screening other than pedagogy, but money to find the best people can fix lots of problems. And yes, typical teaching at good universities sucks, but that's largely because it optimizes for research. (You'd likely have had better professors as an undergrad if you went to a worse university - or at least that was my experience.)
My thought was that streamlining the existing product and turning it into useable and testably effective modules would be a really huge thing.
If that was the implication, I apologize - I view safe AI as only near-impossible, while making actual humans rational is a problem that is fundamentally impossible. But raising the sanity waterline has some low-hanging fruit - not getting most people to CFAR expert levels, but getting high schools to teach some of the basics in ways that potentially have significant leverage in improving social decision-making in general. (And if the top 1% of people in high schools also take those classes, there might be indirect benefits that increase the number of CFAR-expert-level people in a decade.)
Just to fill in the slot: in 2009 I was living in Moscow and mostly just partying and enjoying life, and in 2018 I'm living in Zurich with my wife and five kids. Was very happy with my life then, and am very happy now. Doing nicely in terms of money, but no big accomplishments if that's what you're asking about. And no, I wouldn't attribute it to LW, just normal life going on.
This sounds like "the best way to make sure your readers are successful is to write for people who are already successful". It makes sense if you want to brag about how successful are your readers. But if your goal is to improve the world, how much change would that bring? There is already a ton of material for business people and self-help fans; what is yet another website for them going to accomplish? If people are already competent self-improving businessmen, what motive would they have to learn about rationality?
The past matters, because changing things takes time. O...
I broadly agree with your main points. However,
I did post about this, and the benefits have continued to accrue. Compared to my past self, I perceive myself to be winning dramatically harder on almost all metrics I care about.
What does winning look like to you? Lots of rationalists have pretty successful careers as programmers, which depending on what they are going for, could be considered winning. Is it that they aren't "winning" by your definition, or theirs?
Can you describe the thing you think rationalists are failing at, tabooing "winning"?
Not the author, but my guess would be this:
On various metrics, there can be differences in quantity, e.g. "a job that pays $10k" vs "a job that pays $20k", and differences in quality, e.g. "a job" vs "early retirement". Merely improving quantity does not make a good story. And perhaps it is foolish, but I imagine "winning" as a qualitative improvement, instead of merely 30% or 300% more of something.
And maybe this is wrong, because a quantitative improvement brings qualitative improvements as a side effect. A change from "$X income" to "$Y income" can also mean a change from "worrying about survival" to "not worrying about survival", a change from "cannot afford X" to "bought the X", or even a change from "the future is dark" to "I am going to retire early in 10 years, but as of today, I am not there yet". Maybe we insufficiently emphasize these qualitative changes, because... uhm, illusion of transparency?
I went from borderline nonfunctional to pretty functional. This is not at all obvious even to those who knew me because I had been masking the growing problems really well using just raw intellectual brute force. More "attracts the walking wounded" anecdote.
Further, I kind of expect that Really Winning in the sense you're talking about is far more likely when (a) you get lucky, and/or (b) you're willing to stomp on other people. The first is not increased and the second is actively decreased by LWing (I think and hope).
Also, we have funded, active research into the not-so-covert true goal of original LW.
Can we do better? Yeah, definitely. Is it really so bleak? I don't think so.
I don't see the 'why aren't you winning?' critique as that powerful, and I'm someone who tends to be critical of rationality writ large.
High-IQ societies and superforecasters select for demonstrable performance at being smart/epistemically rational. Yet on surveying these groups you see things like, "People generally do better than average by commonsense metrics, and some are doing great, but it isn't as if everyone is a millionaire." Given that the barrier to entry to the rationalist community is more "sincere interest" than "top X percentile of the population", it would be remarkable if they exhibited even better outcomes as a cohort.
There's also going to be messy causal-inference worries that cut either way. If there is in some sense 'adverse selection' (perhaps as in IQ societies) for rationalists tending to have less aptitude at social communication, greater prevalence of mental illness (or whatever else), then these people enjoying modest to good success in their lives reflects extremely well on the rationalist community. Contrariwise, there's plausible confounding where smart creative people will naturally gravita...
Within a narrow field, where data is plentiful, learning rationality is much less powerful than learning from piles of data. Imagine three people: A, B and C. A doesn't know any chess or rationality; B has studied game theory, Bayes' theorem, principles of decision theory and all-round rationality. Neither A nor B has played chess before, and they have just been told the rules. C has been playing chess for years.
I would expect C to win easily. It's much easier to learn from experience, and to remember your teachers' experience, than it is to deduce what good chess strategies are from first principles. The only time I would expect B to win is if they were playing Nim, or some other game with a simple winning strategy, and C had an intuition for this strategy but sometimes made mistakes. I would expect B to beat A, however.
Rationality is learning to squeeze every last drop of usefulness out of your data, and doing this is less effective than just grabbing more data when data is plentiful. Financial markets are another plentiful-data domain. Many hedge fundies already know game theory, and they also have a detailed knowledge of financial minutiae. Wannabe rationalists, if you want to be a banker, go a...
My son is winning. Although only 13, he received a 5 (the highest score) on the Calculus BC and Java programming AP exams. He is currently taking a college-level course in programming at Stanford Online High School (Data Structures and Algorithms), and he works with a programming mentor I found through SSC. He reads SSC, and has read much of the Sequences. His life goal is to help program a friendly superintelligence. I've been reading SSC, Overcoming Bias, and LessWrong since the beginning.
Yeah, but which way is the arrow of causality here? Like, was he already a geeky intellectual, and that's why he's both good at calculus/programming and he reads SSC/OB/LW? Or was he "pretty average", started reading SSC/OB/LW, and then that made him become good at calculus/programming?
Yes, genetics + randomness determines most variation in human behavior, but the SSC/LW stuff has helped provide some direction and motivation.
"Rationalists are very good at epistemic rationality."
As people very good at _epistemic_ rationality, I am sure you realize that the relevant comparison is between success after one has been exposed to rationality and hypothetical success had one not been exposed.
The winningest rationalist I know of is Dominic Cummings, who was the lead strategist behind the Brexit pro-leave movement. While the majority of LWers may not agree with his goals, he did seem to be effective, and he frequently makes references to rationalist concepts (including IIRC some references to the work of Eliezer Yudkowsky) on his blog: https://dominiccummings.com/
I followed his work, and I estimate the difference he made to be very high relative to other individuals working on the issue (on either side). According to his own estimation, his contribution consisted of assembling highly competent people and then minimizing interference from incompetent ones.
Some context: he had previously worked on the campaign to reject the Euro, and so had more experience with the question of 'how people in the UK feel about the EU' than most, which is why there was a push to recruit him for a campaign. Their campaign took a series of basic steps, like trying to determine what voters actually thought, which none of the other campaigns did. Then they tested a bunch of different methods of communicating with voters effectively (the other campaigns went with old strategies and did not check whether they worked), and focused on driving voter turnout.
In a nutshell what he did was: determine to solve the actual problem, find other like-minded people, and then set about actually trying to solve the problem using basic tools like measurement and experiment. You can find the list of posts on his blog relevant to the campaign here, but I think the real meat is in #20-22. He does not claim responsibility for success, placing most of the credit with the team and most of the blame with an incompetently run Remain campaign.
Trying to 'dominate' the stock market is a very bad idea, roughly analogous to your AI baseball example. The generally accepted best approach is to passively accumulate index funds, which I imagine is exactly what many people here are already doing. For individuals, winning is mostly about not-losing, which tends to be invisible; if you succeed, nothing happens.
What does winning look like?
I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (PhD Applied Physics from Stanford) and have held a couple of prestigious high-paying jobs (first as a management consultant at BCG, and now an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to keep learning about areas outside my expertise. I floss and exercise and generally am satisfied with my health.
I think I could be considered both a rationalist and a winner.
But I post rarely to LessWrong because my rational perception is that it takes effort but does not provide return. Generally I think my shortcomings are shortcomings of execution rather than irrationality, and those are the areas I aim to improve upon. My arena for self-improvement is my workplace and my life, not a website. As a result, stories like mine might be underrepresented in your sampling.
If rationalists were winning, how would we know? What would winning look like?
The people I know IRL who identify as rationalists are doing great. Not a lot of people bet on prediction markets since the ones that exist are small and hard to use. Not a lot of people bet on stock markets since making money doing so is a boring full-time job.
I presume that the reason people don't post about how they are "winning" is because it's tactless to write a post about how great you are.
If there's a hedge fund out there that leverages superforecasting-style reasoning and makes billions with it, I doubt it would be rational for them to openly speak about their secret sauce. It might also be rational for them to currently reinvest all their money if they are getting a great return on it.
I'm quite late (the post was made 4 years ago), and I'm also new to LessWrong, so it's entirely possible that other, more experienced members, will find flaws in my argument.
That being said, I have a very simple, short and straightforward explanation of why rationalists aren't winning.
Domain-specific knowledge is king.
If you are a programmer and your code keeps throwing errors at you, then no matter how many logical fallacies and cognitive biases you can identify and name, posting your code on Stack Overflow is going to provide orders of magnitude...
Sometimes winning is evidence of non-rationality. For example, if one plays a lottery and wins a million dollars, it was still irrational for them to play, as most lotteries have negative expected utility. The same goes for becoming very rich: most who try, fail.
Imagine the following game: you are put into a bath where you will a) be dissolved in acid with 99 percent probability or b) become a billionaire with 1 percent probability. Would you agree to play?
I would say that playing the game is very irrational, and any winner was likely not able to correctly calculate the odds. So extreme winning is a signal of some form of irrationality.
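To make the arithmetic explicit (with utility numbers invented purely for illustration - the exact figures don't matter, only that death dwarfs the prize):

```python
# Illustrative expected-utility calculation for the acid-bath game.
# The utilities are made-up placeholders; any remotely sane assignment
# for "dissolved in acid" makes the gamble's expected utility negative.

p_win, p_death = 0.01, 0.99
u_billionaire = 1_000       # hypothetical utility of winning a billion
u_death = -1_000_000        # hypothetical utility of dying in acid

expected_utility = p_win * u_billionaire + p_death * u_death
print(expected_utility)     # about -989,990: strongly negative, don't play
```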
It could be that people don't use their rationality skills at their "bottlenecks". You could improve many things but if they aren't your bottlenecks the result would be negligible. I've seen people training to recognize their biases but not using this for strategic planning and just doing "the safe thing" everyone does.
I don't think it's accurate to say that our rationality techniques are only about pursuing truth. It might be true that the sequences are mostly about this but a lot has happened since the sequences were written.
If you look at the recent CFAR handbook there's plenty of techniques that are useful for getting things done.
Humans are only in small part pliable reasoning. Most of what makes us us is genetic, subconscious, and not available to introspection. We have more blind spots than sighted spots, and we actively resist correcting those blind spots. LW-style rationality tends to appeal to people who are, on average, at or below the mean in interpersonal skills, so you start with a huge handicap, and learning about biases and how to deal with them only gives you a marginal advantage over those like you, not a magic bullet to achieve your goals. Speaking of goals, hu...
Did your friend ever finish that sequence? I'd still be quite interested in seeing it. After reading Chinese Businessmen: Superstition Doesn't Count, I've become very interested in becoming more instrumental.
If you want to know more about really winning vs. theoretically winning, you might be interested in what Aristotle taught about baseball: https://sniggle.net/TPL/index5.php?entry=03Feb04