I was inspired by the recent post discussing self-hacking for the purpose of changing a relationship perspective to achieve a goal. Despite feeling inspired, though, I also felt that life hacking was not something I could ever want to do, even if I perceived benefits to doing it. It seems to me that I would have to begin by hacking myself into wanting to be hacked. But then I started contemplating whether that is even a plausible thing to do.

In my own case, I have two concrete examples in mind. I am a graduate student working on applied math and probability theory in the field of machine vision. I was one of those bright-eyed, bushy-tailed dolts as an undergrad who just sort of floated into grad school believing that as long as I worked sufficiently hard, it was a logical conclusion that I would get a tenure-track faculty position at a desirable university. Even though I am a fellowship award winner and I am working with a well-known researcher at an Ivy League school, my experience in grad school (along with some noted articles) has forced me to re-examine a lot of my priorities. Tenure-track positions are just too difficult to achieve, and achieving one depends on networking, politics, and whether the popularity of your research area happens to peak at the same time as your productivity in it.

But the alternatives that I see are: join the consulting/business/startup world, become a programmer/analyst for a large software/IT/computer company, or work for a government research lab. I worked for two years at MIT's Lincoln Laboratory as a radar analyst and signal processing algorithm developer prior to grad school. The main reason I left that job was that I (foolishly) thought graduate school was where someone goes specifically to learn the higher-level knowledge and skills to do theoretical work that transcends the software development / data processing work that is so common. I'm more interested in creating tools that go into the toolbox of an engineer than in actually using those tools to create something that people want to pay for.

I have been thinking deeply about these issues for more than two years now, almost every day. I read everything that I can and I try to be as blunt and to-the-point about it as I can be. Future career prospects seem bleak to me. Everyone is getting crushed by data right now. I was just talking with my adviser recently about how so much of the mathematical framework for studying vision over the last 30 years is being flushed down the tubes because of the massive amount of data processing and large-scale machine learning we can now tractably perform. If you want to build a cup detector, for example, you can do lots of fancy modeling, stochastic texture mapping, active contour models, fancy differential geometry, occlusion modeling, etc. Or... you can just train an SVM on 50,000,000 weakly labeled images of cups you find on the internet. And that SVM will utterly crush the performance of the expert system based on 30 years of research from amazing mathematicians. And this crushing effect only stands to get much, much worse, and at an increasing pace.
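
To make the contrast concrete, here is a toy sketch of the brute-force recipe (my own illustrative gloss, not a real pipeline; the directory names are hypothetical, and it assumes scikit-learn and scikit-image are available):

```python
# Toy "just train an SVM" cup detector. No geometry, no occlusion modeling,
# no texture models -- just generic features and a linear classifier.
import numpy as np
from glob import glob
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.svm import LinearSVC

def featurize(path):
    img = resize(imread(path, as_gray=True), (64, 64))  # normalize size
    return hog(img, pixels_per_cell=(8, 8))             # generic gradient histogram

cup_paths = glob("cups/*.jpg")          # hypothetical weakly labeled web crawl
other_paths = glob("not_cups/*.jpg")
X = np.array([featurize(p) for p in cup_paths + other_paths])
y = np.array([1] * len(cup_paths) + [0] * len(other_paths))

clf = LinearSVC(C=1.0).fit(X, y)        # the entire "cup detector"
```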

In light of this, it seems to me that I should be learning as much as I can about large-scale data processing, GPU computing, advanced parallel architectures, and the gross details of implementing bleeding edge machine learning. But, currently, this is exactly the sort of thing I hate and went to graduate school to avoid. I wanted to study Total Variation minimization, or PDE-driven diffusion models in image processing, etc. And these are things that are completely crushed by large data processing.
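
For contrast, this is the flavor of thing I mean by a PDE-driven method: a toy gradient descent on a smoothed Rudin-Osher-Fatemi total-variation denoising energy (purely an illustrative sketch with made-up parameters, not research code):

```python
# Minimize E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)
# by explicit gradient descent, with periodic boundaries for simplicity.
import numpy as np

def tv_denoise(f, lam=0.1, tau=0.1, eps=1e-3, n_iter=200):
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)       # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * ((u - f) - lam * div)      # descend the energy
    return u

noisy = np.random.rand(64, 64)                   # stand-in for a noisy image
smoothed = tv_denoise(noisy)
```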

So anyway, long story short: suppose that I really like "math theory and teaching at a respected research university," but I see the coming data steamroller and believe that this preference will cause me to feel unhappy in the future, when many other preferences I have (and some I don't yet know about) are affected negatively by pursuit of a phantom tenure-track position. But suppose also that another preference I have is that I really hate "writing computer code to build widgets for customers," which can include large-scale data analyses, and thus I feel an aversion to even trying to *want* to hack myself and orient myself toward a more practical career goal.

How does one hack oneself to change one's preferences when the preference in question is "I don't want to hack myself"?

Lacking motivation to change is often called 'ambivalence' in the scientific self-help literature. One well-studied technique for overcoming ambivalence is called 'motivational interviewing'. I've uploaded a recent review article on the subject for you here. Enjoy.

[-][anonymous]13y110

I'm more interested in creating tools that go into the toolbox of an engineer than in actually using those tools to create something that people want to pay for.

There are people who get paid to write and maintain such tools. (In fact, I am such a person.) For example, Mathematica costs money. That money goes into a giant pile, then some of it is sent to the programmers who work on this imaginary machine that other people use to do stuff.

And that SVM will utterly crush the performance of the expert system based on 30 years of research from amazing mathematicians.

If brute force is more effective... then it's better. There's probably a post in the sequences about this. Do you care about machine vision, or do you care about fancy algorithms? If you care about machine vision, then you should want the most effective approach (the one that is best at "winning"), whatever its nature. There is no such thing as "cheating" in pursuit of such a goal. On the other hand, if you care about fancy algorithms (which is a legitimate thing to care about!), then brute force is by definition uninteresting.

Now, you might argue that brute force is currently better at machine vision than fancy algorithms, but eventually brute force will reach a limit, whereas fancy algorithms could surpass that limit. (This would be a "local maximum" objection.) Maybe that's true! But you don't seem to be saying it.

Also, if brute force and fancy algorithms were equally effective, but fancy algorithms used fewer resources, then they would be superior for that reason. (It is also sometimes reasonable to choose something less effective that uses vastly fewer resources, but that's a quality-expense tradeoff.)

[-][anonymous]13y40

If brute force is more effective... then it's better. There's probably a post in the sequences about this. Do you care about machine vision, or do you care about fancy algorithms? If you care about machine vision, then you should want the most effective approach (the one that is best at "winning"), whatever its nature. There is no such thing as "cheating" in pursuit of such a goal. On the other hand, if you care about fancy algorithms (which is a legitimate thing to care about!), then brute force is by definition uninteresting.

I don't think the problem is quite that easy to frame. In applications where I think the brute-force SVM approach is fundamentally a more correct way to model and attack the problem, I'm all for its use. At the same time, though, I don't care at all about "fancy algorithms." What I think is that the landscape of modern research is far too risk-averse (there is a book coming out soon by Peter Thiel and Garry Kasparov, of all people, on this very topic -- that human ingenuity and insight have actually slowed in the last few decades despite advances in computing technology).

I think that to solve hard machine vision problems, like perception and cognition, you have to depart from the machine learning paradigm. Machine learning might be a low-level tool for dealing with feature extraction and selection, which plays a role, but it is far from effective for higher-end problems. For example, I work on activity and behavior understanding. On any timescale above about 1-2 seconds, humans consistently recognize activities by projecting protagonist-style goal structures onto their observations (i.e., the cue ball is trying to hit the black ball into that pocket... the cue ball becomes an agent with volition instead of merely a pawn in some sort of physics experiment). Currently, machine vision researchers approach the task of activity understanding as they approach everything else: just train a giant SVM (or some other kernel classifier, sparse coding, dictionary methods, etc.). The performance is state of the art for specific applications, but we don't actually know anything about how to perceive activities. It offers no insight whatsoever into the real cognitive structure underlying the task of perception. There are obviously many controversial opinions about a question like this, but my perspective is that probabilistic reasoning and graphical models are a much better way to work on this sort of problem, though even those methods need to be researched and extended to a much more theoretically mature level than where they currently are.
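
To caricature the difference, here is a toy version of the graphical-model style I have in mind: a two-state hidden Markov model whose latent state is the agent's projected goal, with the forward algorithm filtering a posterior over that goal from coarse motion cues (all numbers and labels are invented purely for illustration):

```python
# Latent goal: 0 = "trying to sink the black ball", 1 = "idle".
# Observations: 0 = "cue ball moving toward the pocket", 1 = "other motion".
import numpy as np

pi = np.array([0.5, 0.5])             # prior over goals
A = np.array([[0.9, 0.1],             # goals tend to persist over time
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],             # P(observation | goal)
              [0.3, 0.7]])

def filter_goal(obs):
    """Return P(goal_t | obs_1..t) at each step (forward algorithm, normalized)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    posteriors = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict, then weight by the new cue
        alpha /= alpha.sum()
        posteriors.append(alpha)
    return np.array(posteriors)

print(filter_goal([0, 0, 1, 0, 0]))   # belief about the projected goal over time
```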

But no one (and I mean no one) will pay you money to do that style of research. Unless you produce a widget that performs task X for company Y at state-of-the-art levels and can deliver it very soon, you get no funding and you get shunned by the major computer vision venues (ECCV, ICCV, CVPR, NIPS, SIGGRAPH). To get your work published and to win grants, you have to market it based primarily on the way you can promise to deliver pretty pictures in the short term. Machine learning is much better at this than advanced mathematical abstractions, and so advanced mathematical abstraction approaches to vision (which will benefit us all very much further into the future) are not being funded by risk-averse funding sources.

Nobody is demanding that machine vision must succeed at preposterously difficult tasks anymore. Most consumers of this sort of research are saying that computing power is sufficient now that, as far as commercial applications go, we just need to hammer away on efficiency and performance in very controlled, specific settings. Being a "general vision theorist" has utterly no place anymore, in academia or in corporate research.

This is the root of my specific issue. My goal is to study difficult problems in computer vision that cannot be well-solved within the machine learning paradigm. I believe these need advanced theory to be well-solved, theory which doesn't exist yet. Just as in the A.I. winter, though, someone would have to fund me without knowing whether the end result will benefit them more than their competitors, or whether there will be any commercial benefit at all. My goal is to understand perception better and only start to worry about what that better understanding will do for me later on.

Also, can you be more specific about how this works at Wolfram Research? I frequently attend their career presentations here at my university and then try to talk to the technical representatives afterward and learn about specific opportunities. It doesn't seem to be related to the kind of tool-creating research I'm talking about. In fact, I think that proprietary programming platforms like Matlab, Mathematica, and Maple are a large part of the problem in some respects. These tools teach engineers how to be bad programmers and how to care more about hacks that produce pretty pictures than about really insightful methods that solve a problem in a way that provides more knowledge and more insight than merely pretty pictures.

When I mentioned "creating tools that go into the toolbox of an engineer," what I meant was inventing new techniques in, say, calculus of variations, or new strategies for simulated annealing and Markov random field analysis. I mean theoretical tools that offer fundamentally more insightful ways to do modeling for engineering problems. I did not mean that I want to create software tools that make easier interfaces for doing machine learning, etc.

[-][anonymous]13y10

Machine learning is much better at this than advanced mathematical abstractions, and so advanced mathematical abstraction approaches to vision (which will benefit us all very much further into the future) are not being funded by risk-averse funding sources.

Okay, that's a "local maximum" objection.

Also, can you be more specific about how this works at Wolfram Research?

I don't work there. Mathematica was just the first example that came to my mind of something that might be used in the sciences.

I mean theoretical tools that offer fundamentally more insightful ways to do modeling for engineering problems.

Oh, I was confused by your terminology. When I hear "tools", I think "software".

If you're feeling trapped by disciplinary boundaries, I'd recommend reading this. The gist of it is that going against the grain of your own preferences is sub-optimal with respect to reaching the fabled "mastery" or "10,000 hours" level of expertise in any given field. If you do what you like, what comes easily to you, you'll reach this level much faster. But sometimes your preferences are such that instead of picking a field and sticking to it, you'll just have to cross a bunch of boundaries and basically define a field of your own to reach mastery in.

For instance, what would be some domain, other than vision, where the math that you like might be applicable, but which is immune to the data abundance issue for some foreseeable period?

Surely there are mathematical research questions about how to do large-scale data processing? Research them.

For example, not all large-scale data processing problems parallelize well in the current state-of-the-art. There are problems that would greatly benefit from being able to throw thousands of machines at them, except that attempting to do so hits bottlenecks of an algorithmic nature. Finding better algorithms to get past those bottlenecks would be worth a lot of money to the right people.
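
To give one concrete flavor of such a bottleneck: if even a small fraction of the work is inherently serial, throwing more machines at it stops paying off very quickly. A back-of-the-envelope Amdahl's-law calculation (the 5% serial fraction is arbitrary, just for illustration):

```python
# Ideal speedup from n machines when a fraction s of the work cannot be parallelized.
def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

for n in (10, 100, 1000, 10000):
    print(n, round(amdahl_speedup(0.05, n), 1))
# Even with only 5% serial work, 10,000 machines buy less than a 20x speedup.
```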

[-][anonymous]13y00

I think the paradigm of large scale data processing is itself uninteresting to me. I want to study problems in machine vision and perception that are not well-solved simply by processing large amounts of data ... i.e. problems where if you give me a large amount of data, it is not at all known what I should do with that data to produce a "good" solution to the problem. Once you know what you're supposed to do with the data, I feel like the rest is just engineering, which is uninteresting to me. After 3 years of reading huge chunks of the vision literature, and contemplating this and discussing this with many other faculty and researchers, the consensus seems to be that even if such a problem did exist, no one would be interested in publishing results on it or giving you money to study it. This is why I need to hack myself to cause myself to want to study the "uninteresting" engineering / data processing aspect.

Hacking preferences directly is hard, if you don't have anything that you care more about than your current preference in a particular area. Is there anything that you care about more than your current research path?

Most hacks that I've successfully pulled off involved me wanting to go through the steps necessary to do the hack more than I wanted to not go through the steps. For instance, hacking my diet was fairly straightforward because I cared more about health and energy level stability than I cared about continuing to eat the way I have been eating. Especially after I noticed that I was eating the way I had been for essentially no reason.

[-][anonymous]13y10

Agreed. The difference between reward and pleasure is a salient distinction here -- it's difficult to implement a hack solely for its own sake. My suggestion to the OP would be to find reasons to change their preferences, and start by weighing the following factors:

Do you care about maximizing your future likelihood of a stable lifestyle? How about maximizing your earning potential? How about the degree of enjoyment you obtain from your work?

Look for links between them -- for me, "enjoyment obtained from work" has a very strong weighting effect on "maximizing stability." I'm sufficiently bad at keeping a job that's all stress and no reward that it would be detrimental for me to seek one out, or to optimize myself for such a search (this is not a strength, but it seems to be a limit I've encountered often). If your current satisfaction with the prospect of changing your focus is very low, can you leverage something else you value highly against that?

If not, it seems like you're looking at either creating a new niche for yourself (with the associated risks of that) or trying to change your field altogether. Given how much you'll benefit from your cached skill and expertise gained by pursuing what you enjoy, I'd recommend the former. In your case: current research on machine vision is overwhelmingly focused on methods you dislike. Can you imagine any scenarios where your preferred method might have an edge, or work to supplement the existing method? Can you find a way to persuasively convey them such that you might be able to stake out some territory doing research on it?

If the answer is still no: break your habits, be more empirical. You might need to go exploring and see if you can discover any new interests or talents to which your energies might be efficiently put, which might also entail some degree of "starting over from scratch." This option is not for everyone and I don't recommend it cavalierly, but it's something to think about.

Why do you really like "math theory and teaching at a respected research university"? Is it for the money, the status, contributing to scientific progress in a "meaningful way", or benefiting society (in a utilitarian sense)? Do you intrinsically like doing research or teaching, and if so do you care about what area of math you research or teach? Which of these would you be most willing to give up if you had to?

(One reason I ask these questions is that I'm not sure scientific/technological progress is generally a good thing at this point. The default Singularity scenario is probably a bad one, and most scientific/technological progress just brings the Singularity closer without making a positive scenario more likely. It would be nicer to have a few more decades of pre-Singularity time to solve FAI-related philosophical problems, for example.)

Depending on your answers, there might be something else you should do instead of hacking yourself to like "writing computer code to build widgets for customers". Also, have you seen the previous LW discussions on career choice?

The default Singularity scenario is probably a bad one, and most scientific/technological progress just brings the Singularity closer without making a positive scenario more likely.

How much of modern science brings one closer to a potential intelligence-explosion-type Singularity event? If such an event is likely to occur, it would need not to depend on a lot of different technologies.

So what technologies could actively be a problem?

Well, one obvious one is faster computers. The nightmare scenario is that we find some clever little trick we've been missing to run smart AI, and the first one we turn on thinks hundreds of times faster than us from the start.

The next really bad set of technologies is nanotech. If the AI finds an easy way to get access to highly flexible nanotech based on methods we already have, then we're sort of screwed. (This one seems extremely unlikely to me. The vast majority of people in nanotech keep emphasizing how difficult any sort of constructor bot would be.) The next issue is possible advanced mathematical algorithms. The really bad case here is that an AI gets to look at the arXiv and quickly sees a set of papers which, when put together, give something like a general SAT solver that solves 3-SAT with n clauses in Kn^2 steps for some really small constant K. This is bad.

Similar remarks apply to an AI that finds a really fast quantum algorithm to effectively solve some NP-hard problem. Seriously, one of the worst possible ideas you can have in the world is to run an AI on a functioning quantum computer or give it access to one. Please don't do this. I'm someone who considers fooming AI to be unlikely and who believes that BQP is a proper subset of NP, and this possibility still makes me want to run out and scream at people like Roger Penrose who specifically want to see if we need a quantum computer for intelligence to work. Let's not test this.

But outside these four possibilities, the remaining issues are all more exotic and less likely. For example, I'm not worried that an AI will, right out of the box, figure out a way to make small wormholes and take advantage of closed timelike curves, simply because if it has that sort of tech level then it has already won.

So the vast majority of scientific research seems to do very little for helping an AI go foom.

But it does seem that continued scientific research does help us understand which sorts of AI threats are more likely. For example, if we end up proving some very strong version of P!=NP, then this will make the clever-algorithms attack much less likely. If BQP is strictly smaller than NP in a strong sense, and room-temperature strong nanotech turns out not to be doable, then most of the nasty foom scenarios go away. Similarly, improving computer security directly reduces the chance that an AI will manage to get access to internet things it shouldn't (although again, basic sanity says anything like a strong AI should not be turned on with internet access, so if it gets internet access it has possibly already won. This only reduces the chance of a problem in one specific scenario that isn't terribly likely, but it does reduce it).

Furthermore, scientific progress helps us deal with other existential risks, as well as get more of a handle on which existential risks are a problem. Astronomy, astrophysics, and astrobiology all help us get a better handle on whether the great filter lies behind us or in front of us and what the main causes are. It wouldn't surprise me, for example, if in 30 or 40 years we have good enough telescopes that we can not only see Earth-like planets but also see whether they have had massive nuclear wars (indicating that that might be a possible major filtration event) or whether a planet's surface is somehow covered with something like diamond (indicating that at some point, possibly in the very far past, a serious nanotech disaster occurred). A better space program also helps deal with astronomical existential risks like asteroids.

So, overall it seems that most science is neutral to a Singularity situation. Of the remainder some might increase the chance of a near term Singularity and some might decrease it. A lot of science though helps deal with other existential risks and associated problems.

So the vast majority of scientific research seems to do very little for helping an AI go foom.

I guess it wasn't clear but I also consider a Hansonian/Malthusian upload-driven Singularity to be bad.

So, overall it seems that most science is neutral to a Singularity situation.

The mechanism I had in mind was that most scientific/technological progress (like p4wnc608's field of machine vision for example) has the effect of increasing the demand for computing hardware and growing the overall economy, which allows continued research and investment into more powerful computers, bringing both types of Singularity closer.

[-][anonymous]13y10

I can address the other questions later on, but I am actually interested in looking into complexity limits for FAI problems. My initial reaction to Yudkowsky's post about coherent extrapolated volition was that such a thing is probably not efficiently computable, and even if it is, it is probably not stable (in the control theory sense; i.e., a tiny error in CEV yields a disastrously large error in terms of the eventual outcome). It isn't as if there is just one single time that we have to have a mathematically comprehensible description of volition. As computational resources grow, I imagine the problem of CEV will be faced many times in a row on rapidly larger scales, and I'm interested in knowing how a reasonable CEV computation scales asymptotically in the size of the projected future generation's computing capabilities. Very, very naively, for example, let's say that the number of processors N of some future AI system plays a major role in the mathematical structure of the description of my volition that I need to be prepared to hand to it to convince it to help me along (I know this is a shortsighted way of looking at it, but it illustrates the point). How does the calculation of CEV grow with N? If computing the CEV in a mathematically comprehensible way grows faster than my computing power, then even if I can create the initial CEV, somewhere down the chain I won't be able to. Similarly, if CEV is viewed as a set of control instructions, then above all it has to be stable. If mis-specifying CEV by a tiny percentage yields a dramatically bad outcome, then the whole problem of friendliness may itself be moot. It may be intrinsically unstable.

As far as "math teaching at a respected research university" goes, there are a few reasons. I have a high aesthetic preference for both mathematics and the human light-bulb-going-off effect when students overcome mathematical difficulties, so the job feels very rewarding to me without needing to offer me much in the way of money. I enjoy creating tools that can be used constructively to accomplish things, but I don't enjoy being confined to a desk and needing to focus on a computer screen. The most rewarding experience I have found along these lines is developing novel applied mathematical tools that can then be leveraged by engineers and scientists who have less aversion to code writing. Moreover, I have found that I function much better in environments where there is a vigorous pace to publishing work. At slower places, I tend to chameleonize and become slower myself, but at vibrant, fast-paced places, I seem to function on all cylinders, so to speak. This is why a "respected research university" is much more appealing than a community college or smaller state level college.

I'm very disillusioned with the incentive scheme for academia as a whole. Applied mathematics with an emphasis on theoretical tools is one domain where a lot of the negative aspects have been kept at bay. Unfortunately, it's also a field where statistically it is very hard to get a reasonably stable job. As far as areas of math go, I greatly enjoy theoretical computer science, probability theory, and continuous math that's useful for signal processing (complex analysis, Fourier series, functional analysis, machine learning, etc.)

I had not seen the previous post on career choice and will look into it. But the main reason for this thread was that I think that as far as getting a job and sustaining myself goes, I'm better off trying to hack my preferences and causing myself to actually enjoy computer programming, instead of finding it loathsome as I do now. This is based on a non-trivial amount of interaction with people in the start-up community, in academia, and at government research labs.

In one of the previous discussions, I suggested taking a job as a database/web developer at a university department. I think you don't actually need to hack yourself to enjoy computer programming to do this, because if you're a fast programmer you can finish your assignments in a small fraction of the time that's usually assigned, which leaves you plenty of time to do whatever else you want. So if you just want to get a job and sustain yourself, that seems like something you should consider.

But that advice doesn't take into account your interest in FAI and "I have found that I function much better in environments where there is a vigorous pace to publishing work". If you think you might have the potential to make progress in FAI-related research, you should check out whether that's actually the case, and make further decisions based on that.

[-][anonymous]13y10

For one thing, I am not a very fast programmer. I only know Python, Matlab, and a tiny bit of C/C++. Most of the programming I do is rapid prototyping of scientific algorithms. The reason why I hate that sort of thing is that I feel more like I am just scanning the literature for any way to hack at an engineering solution that solves a problem in a glitzy way in the short term. Professors seem to need to do this because their careers rest on being able to attract attention to their work. Prototyping the state-of-the-art algorithms of your peers is an excellent way to do this since you end up citing a peer's research without needing to develop anything fundamentally new on your own. If you can envision a zany new data set and spend a small amount of money to collect the zany data and have grad students or Mechanical Turkers annotate it for you, then you can be reasonably assured that you can crank out "state of the art" performance on this zany data set just by leveraging any recent advances in machine learning. Add a little twist by throwing in an algorithm from some tangentially related field, and presto, you've got a main event conference presentation that garners lots of media attention.

That cycle depresses me because it does not fundamentally lead to the generation of new knowledge or expertise. Machine learning research is a bit like a Chinese takeout menu. You pick a generic framework, a generic decision function class, some generic bootstrapping / cross-validation scheme, etc., pull a lever, and out pops some new "state of the art" surveillance tool, or face recognition tool, or social network data mining tool. None of this causes us to have more knowledge in a fundamental sense, but it does pander to short-term commercial prospects.

Also, after working as a radar analyst at a government lab for two years, I don't think the suggestion of taking some kind of mindless programming day job just to fund my "research hobby" is actually viable for very many people. When I developed algorithms all day, it sapped my creativity and felt soul-crushingly terrible all day, every day. The literal requirement that I sit in front of a computer and type code just killed all motivation. I was very lucky if I was able just to read interesting books when I went home at night. The idea that I could do my work quickly and eke out little bits of time to "do research" seems pretty naive about the actual task of research. To be effective, you've got to explore, to read, to muck around at a whiteboard for two hours and be ready to pull your hair out over just not quite getting the result you anticipated, etc. I wouldn't want to half-ass my passion and also half-ass my job. That would be the worst of both worlds.

As for FAI research, I feel that the rational thing to do is not to pursue it. Not because I am against it or uninterested, but because it is such a cloistered and closed-off field. As much as FAI researchers want to describe themselves as investing in long-term, high-risk ideas, they won't do that for motivated potential researchers. There's so little money in FAI research that pursuing it would be comparable to taking out a multi-hundred-thousand-dollar loan to self-fund a graduate degree in law from an obscure, rural university. Law degrees do not allow you to feed yourself unless you leave the field of law and work very hard to gain skills in a different area, or you go to the best law schools in the country and ride the prestige, usually still into non-law jobs.

This is why I think the self-hacking is necessary. If I work for a startup company, a research lab, government research, etc., then I am only going to be paid to write computer code. Since tenure track faculty jobs are diminishing so rapidly, even being at a prestigious university does not give you much of a chance to obtain one. If you study science in grad school and you want to earn more than $30,000 per year, your primary job will most likely be writing computer code (or you can leave science entirely and do scummy things like corporate finance or consulting, but my aversion to those is so large that I can effectively ignore them as options).

I did not know that machine learning approaches were that scarily effective...

All I can say (as an undergraduate) is that if I were you, I would keep looking for more options (in addition to the three you listed). There seem to be many successful people who work in something entirely unrelated to what they studied in graduate school. I imagine that the experience you are currently having could be one of the reasons why such people make that switch. Look around; perhaps you can find something else you will like just as much, with better job prospects.

[-][anonymous]13y-10

I am in very close to the same position as you (applied math grad student with almost the same interests) and I am quite sanguine about the future, barring worries about my own risk of failure.

Mainly because I may be less far along in my research career and I don't yet feel precommitted to any research methods that look like they're not working. Also because I have no real aversion to crass commercialism.

Thought 1: as far as I know, they still use a lot of PDEs in computer graphics. Nobody's going to write an SVM that can replace Pixar.

Thought 2: I don't really believe pure dumb ML can solve the serious vision problems in the long run. It just looks like it works for now because you can throw a lot of processing power at a question. But this is not how your brain does it; there's built-in structure and actual geometric information based on the assumption that we live in a physical world where images come from light illuminating objects. I have heard a few professors lament the shortsightedness of so-called machine vision researchers. If you want to do the deep stuff, maybe the best thing to do is work with one of the contrarian professors. That's (approximately) what I'm doing, though I'm not working on vision at the moment. Or, more speculatively: there is a trend for some Silicon Valley types to invest in long-term basic research that universities don't support. Maybe you could see if something like that could work for you.

Thought 3: if you're interested in hacking yourself to be okay with not working in academia, consider that it's more altruistic. A professor benefits from taxpayer dollars and the security of tenure (which protects him from competition by newcomers). A developer in the private sector produces value for the rest of society, without accepting any non-free-market perks.

[This comment is no longer endorsed by its author]
[-][anonymous]13y30

there is a trend for some Silicon Valley types to invest in long-term basic research that universities don't support.

Can you point me to any specific examples of this? I have a grad student colleague here who is very involved with face detection and tracking and his work has essentially blown the state-of-the-art performance out of the water. Because of this, he's heavily involved with various startups and web businesses looking to use his better face detection methods. When I queried him for advice, he basically said that not only is long-term, basic research very risky (especially if the researcher has a tendency to look for elegant mathematical solutions), but literally no one will pay you for it. He insisted that you won't find any companies doing long term basic research because it won't benefit them more than competitors in the long run.

One counterexample to this might be Willow Garage. However, I think they still are not doing the very basic theoretical math research that they will wish they had done once a personal robotics industry does start booming. I've really racked my brain trying to come up with places that actually pursue the theory because of the long-term practical benefits.

Moreover, I am very discouraged about the state of academic publishing right now. The main reason I want to hack myself is to change my preferences about being a university researcher. Currently, I see only two alternatives: university researcher or corporate/industrial/government researcher. I had always thought that in the former, people paid you grant money because of your ingenuity and foresight and the whole point was to use tax money to allow researchers to conduct high-risk research that commercial entities could not afford to risk their money on. As it turns out though, both of these options require you to pander to whatever the popular commercial interests of your day happen to be, even if you think that the popular commercial interests have got it all wrong and are going down the wrong track.

It makes me feel that I need to hack myself to want to want to just be a programmer for some company somewhere. Make enough money to do cryonics and have an internet connection and just float along. I feel very discouraged to really try anything else. And since my current preferences hold that computer programming for the sake of widget building is soul-crushingly terrible, I feel like it's a big Catch-22 putting me in an ambivalent stalemate.

Lastly, the altruism argument you mention doesn't appeal to me. I think society should have a tenured class of professors able (and required) to do riskier / theoretical research. The amount of work it takes to get to that position in life ought to outweigh whatever solely-free-market altruism a corporate scientist might be prideful of. But the reality is that by the time I am in a position to seriously apply for a tenure-track job (2 more years of school, 2 years of postdoc, 5 years as an assistant professor, so roughly 9 years from now), tenured positions will simply not exist. It's already a dying business model, which makes me feel like all the time I've already spent training non-practical skills into myself was a massive unforeseen waste.

[-][anonymous]13y00

It wasn't that I know of an existing organization that does what you want. It's more that there exist things out there (like the SENS foundation, or Halcyon Labs, or SIAI itself) designed to do science in other fields. I agree that it would be hard to move into the "computer vision start-up" space with a more long-term focus, at least these days.