All of nhamann's Comments + Replies

Your account of "proof" is not actually an alternative to the "proofs are social constructs" description, since these are addressing two different aspects of proof. You have focused on the standard mathematical model of proofs, but there is a separate sociological account of how professional mathematicians prove things.

Here is an example of the latter from Thurston's "On Proof and Progress in Mathematics."

When I started as a graduate student at Berkeley, I had trouble imagining how I could “prove” a new and interesting mathe

... (read more)

I agree on most of this, but would you mind explaining why you think neuroscience is "mostly useless?" My intuition is the opposite. Also agreed that pure mathematics seems useful.

5Vladimir_Nesov12y
Even if we knew everything about brains, right now we lack conceptual/philosophical insight to turn that data into something useful. In turn, neuroscience is not even primarily concerned with getting such data, it develops its own generalizations that paint a picture of roughly how brains work, but this picture probably won't be detailed enough to capture the complexity of human (extrapolated) value, even if we knew how to interpret it, which we don't.
1vallinder12y
I was also wondering about neuroscience. If we take a CEV approach, wouldn't neuroscience be useful for actually determining the volitions to be extrapolated?

Would you mind tabooing the word "preference" and re-writing this post? It's not clear to me that the research cited in your "crash course" post actually supports what you seem to be claiming here.

If you can come up with better images to represent Friendly AI, please let me know!

How about an image of a paper clip?

3JoshuaZ12y
That seems to be too much like an image that will only appeal to people who are already familiar not only with the ideas of FAI but with specific variants that are discussed only on LW. For most people that will probably be confusing.

Apologies for the pedantry that follows.

Today, we know how Hebb's mechanism works at the molecular level.

This quote gives the impression that there is a unitary learning mechanism at work in the brain called "Hebbian learning," and that how it works is well understood. It is my understanding that this is not accurate.

For example, spike-timing-dependent plasticity is a Hebbian learning rule which has been postulated to underlie at least some forms of long-term potentiation and long-term depression. However, there is ongoing debate as to how ac... (read more)
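As an aside for readers unfamiliar with STDP: one common pair-based formalization of the rule (a standard textbook model, offered here only as an illustration, not anything the comment above commits to) makes the synaptic weight change depend on the relative timing of pre- and postsynaptic spikes:

    % Pair-based STDP window (illustrative; A_+, A_-, tau_+, tau_- are constants fit to data)
    \Delta w =
      \begin{cases}
        A_{+}\, e^{-\Delta t / \tau_{+}}  & \Delta t > 0 \ \text{(pre fires before post: potentiation)} \\
        -A_{-}\, e^{\Delta t / \tau_{-}}  & \Delta t < 0 \ \text{(post fires before pre: depression)}
      \end{cases}
    \quad \text{where } \Delta t = t_{\text{post}} - t_{\text{pre}}.

Whether this tidy exponential window actually describes what synapses do in vivo is part of the ongoing debate mentioned above.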

3lukeprog13y
Agreed. Fixed. Thanks.

That thread is way too long, so I'm not going to read it, but I did a quick search and didn't see any discussion of what I consider the dealbreaker when considering the evidence for or against most religions (but especially any flavor of Christianity), which is the existence of "souls." Simply put, the "soul" hypothesis doesn't jibe with current evidence from physics, and it doesn't pay rent with regard to observations from neuroscience (or any kind of observations, for that matter). I strongly suspect that the Book of Mormon doesn'... (read more)

Isn't 12.0 something like quadruple-beta of the "Stable" version of Chrome?

I'm not entirely sure what you mean here. It's the current stable release.

OP: For the record, I'm on Chrome 13 and I haven't noticed anything like you mentioned here. The graphical glitches make me think something is up with your video card or the drivers for it, but if it's only happening for LW...I'm not sure what to tell you.

2RobertLumley13y
Ahh, I guess it is. It's been a long time since I used Chrome, I thought they were around 8.0 or 9.0.

In the past year I've been involved in two major projects at SIAI. Steve Rayhawk and I were asked to review existing AGI literature and produce estimates of development timelines for AGI.

You seem to suggest that this work is incomplete, but I'm curious: is this available anywhere or is it still a work in progress? I would be very interested in reading this, even if it's incomplete. I would even be interested in just seeing a bibliography.

5Peter_de_Blanc13y
It is not available. The thinking on this matter was that sharing a bibliography of (what we considered) AGI publications relevant to the question of AGI timelines could direct researcher attention towards areas more likely to result in AGI soon, which would be bad.
0timtyler13y
Seen Ben and Seth's paper on this topic? It has a bibliography.
5Douglas_Knight13y
The first two are not Raw_Power's usage. Definition #3 of the third link is. plonk

I'm interested in ... winning arguments ...

Ack, that won't do. It is generally detrimental to be overly concerned with winning arguments. Aside from that, though, welcome to LW!

2khafra13y
But winning arguments is what reason is for! edit: I don't think I've ever gotten 4 replies to a comment, let alone 4 replies at once to a six-month-old comment. But since it got so much attention, I should clarify that I intentionally conflated different meanings of purposefulness for dramatic effect.

What. That quote seems to be directly at odds with the entire idea of "Friendly AI". And of course it is, as a later version of Eliezer refuted it:

(In April 2001, Eliezer said that these comments no longer describe his opinions, found at "Friendly AI".)

I'm also not sure it makes sense to call SIAI a "closed-source" machine intelligence outfit, given that I'm pretty sure there's no code yet.

They appear to be aiming for whole brain emulation, trying to scale up previous efforts that simulated a rat neocortical column.

Here's another interim report on the longitudinal effects of CR on rhesus monkeys, this one a bit more recent (2009) than the one linked in the OP. From the abstract:

We report findings of a 20-year longitudinal adult-onset CR study in rhesus monkeys aimed at filling this critical gap in aging research. In a population of rhesus macaques maintained at the Wisconsin National Primate Research Center, moderate CR lowered the incidence of aging-related deaths. At the time point reported 50% of control fed animals survived compared with 80% survival of CR anim

... (read more)

Have you read A Human's Guide to Words? You seem to be confused about how words work.

0CharlesR13y
I haven't read the entire sequence but have studied some of the entries. I've had this question--is it right to call it a confusion?--ever since I read Taboo Your Words but didn't ask about it until now.

Looking back at your posts in this sequence so far, it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions." I guess they've been well-sourced, which is worth something. But it seems like we're still waiting on substantial new insights about metaethics, sadly.

8lukeprog13y
Seeing as lots of people seemed to benefit even from the 'What is Metaethics' post, I'm not too worried that LW regulars won't learn much from a few of the posts in this series. If you already grok 'Austere Metaethics', then you'll have to wait a few posts for things to get interesting. :)

I admit it's not very fun for LW regulars, but a few relatively short and simple posts is probably the bare minimum you can get away with while still potentially appealing to bright philosopher or academic types, who will be way more hesitant than your typical contrarian to dismiss an entire field of philosophy as not even wrong. I think Luke's doing a decent job of making his posts just barely accessible/interesting to a very wide audience.

[anonymous]13y15

it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions."

No, he said quite a lot more. E.g. why philosophers do that, why it is a bad thing, and what to do about it if we don't want to fall into the same trap. This is all necessary groundwork for his final argument.

If the state of metaethics were such that most people would already agree on these fundamentals then you would have a point, but lukeprog's premise is that it's not.

"Save the world" has icky connotations for me. I also suspect that it's too vague for there to be much benefit to people announcing that they would like to do so. Better to discuss concrete problems, and then ask who is interested/concerned with those problems and who would like to try to work on them.

1Giles13y
I hate to say it, but the icky connotations are sort of the point. I'm interested in people who want to save the world enough to overcome the icky factor. I realise that "Lonely Dissent" is essentially a troll's manifesto, and I apologise. But I'm publicly committing to stop writing trollish LW posts.

Good reminder that reversed stupidity is not intelligence.

Adding to the list: Hans Berger invented the EEG while trying to investigate telepathy, which he was convinced was real. Even fools can make important discoveries.

4Clippy13y
But increasing one's foolishness does not increase the expected rate of discovery.

Won't music-theoretic analysis be basically irrelevant to a description of why some people enjoy, for instance, Merzbow?

One thing I didn't see you mention is neuroscience. My understanding is that some AGI researchers are currently taking this route; e.g. Shane Legg, mentioned in another comment, is an AGI researcher who is currently studying theoretical neuroscience with Peter Dayan. Demis Hassabis is another person interested in AGI who's taking the neuroscience route (see his talk on this subject from the most recent Singularity Summit). I'm personally interested in FAI, and I suspect that we need to study the brain to understand in more detail the nature of human prefe... (read more)

4Zetetic13y
As far as neuroscience goes, yes, I have strongly considered it. I think that I would like to do a program in computational neuroscience. The joint program at U Pitt and Carnegie Mellon looks interesting for this sort of thing; of course, MIT and Caltech both have solid programs, but I am not confident that my record is strong enough to get into either of those schools. The day job route makes me somewhat nervous because: (a) I'm not sure how difficult it is to get published without the right background/support; (b) I'm worried that I'll be isolated from other researchers who might have insight I could benefit from.

What about in the case where the first punch constitutes total devastation, and there is no last punch? I.e. the creation of unfriendly AI. It would seem preferable to initiate aggression instead of adhering to "you should never throw the first punch" and subsequently dying/losing the future.

Edit: In concert with this comment here, I should make it clear that this comment is purely concerned with a hypothetical situation, and that I definitely do not advocate killing any AGI researchers.

5fubarobfusco13y
Sure, but under what conditions can a human being reliably know that? You're running on corrupted hardware, just as I am. Into the lives of countless humans before you has come the thought, "I must kill this nonviolent person in order to save the world." We have no evidence that those thoughts have ever been correct; and plenty of evidence that they have been incorrect.

Ahh, good point. My comment is somewhat irrelevant then with regards to this, as it seems that what you're interested in is beyond the scope of science at present.

My gold standard for understanding reality is science, i.e., the process of collecting data, building models, making predictions, and testing those predictions again and again and again. In the spirit of "making beliefs pay rent," if Buddhist meditation leads to less distorted views of reality then I would expect that "enlightened" Buddhists would make especially successful scientists. As a religious group the Jews have been far more productive than the Buddhists. Apparently Buddhist physicists have no special advantage at building models th... (read more)

0DavidM13y
Unfortunately, as far as I know, it's an issue that hasn't been studied...but because of the detailed knowledge that has come out of communities interested in enlightenment, I see no principled reason why it couldn't be studied. Actually, I think it's low-hanging fruit.

A brief poke around in Google Scholar produced these papers, which look useful:

Alterations in Brain and Immune Function Produced by Mindfulness Meditation. Psychosomatic Medicine 65: 564–570 (2003)

Mindfulness training modifies subsystems of attention. Cognitive, Affective, & Behavioral Neuroscience 7(2): 109–119

Long-term meditation is associated with increased gray matter density in the brain stem. NeuroReport 20: 170–17 (2009)

Attention regulation and monitoring in meditation. Trends in Cognitive Sciences 12(4): 163–169 (2008)

3Zetetic13y
Much appreciated! I was hoping that I might be able to get some meta-analysis out of one of the meditation advocates, but unfortunately it has not been offered up. I do not even know what enlightenment is (or if it is even an actual phenomenon, beyond placebo) in terms of physiology/brain chemistry. It sounds like a threshold dose of LSD, judging by the subjective definitions. Because of this, I am not interested in enlightenment, but I am interested in any known enhancing effects of meditation techniques.
2Jonathan_Graehl13y
Also, meditation reduces pain sensitivity, even for future pain.
3DavidM13y
Thanks for the references. I should have made clear that I meant, not that there are no peer-reviewed studies about meditation, but there are none that I know of that concern enlightenment, the typical stages of meditative experience leading up to it, cognitive / neurophysiological sequelae, etc. (which are what I would find interesting in this context). If you know otherwise, I'd love to hear about it.

You think that claiming to have no understanding at all of ordinary words is getting at reality?

It's almost never sufficient, but it is often necessary to discard wrong words.

-2Peterdjones13y
...and it's necessary to have a reasoned motivation for that. If you could really disprove things just by unmotivated refusal to use language, you could disprove everything. Meta-principle: treat one-size-fits-all arguments with suspicion.

It was interesting to see the really negative comment from (presumably the real) Greg Egan:

The Yudkowsky/Bostrom strategy is to contrive probabilities for immensely unlikely scenarios, and adjust the figures until the expectation value for the benefits of working on — or donating to — their particular pet projects exceed the benefits of doing anything else. Combined with the appeal to vanity of “saving the universe”, some people apparently find this irresistible, but frankly, their attempt to prescribe what rational altruists should be doing with their t

... (read more)
6Steve_Rayhawk13y
Previous arguments by Egan: http://metamagician3000.blogspot.com/2009/09/interview-with-greg-egan.html Sept. 2009, from an interview in Aurealis. http://metamagician3000.blogspot.com/2008/04/transhumanism-still-at-crossroads.html From April 2008. Only in the last few comments does Egan actually express an argument for the key intuition that has been driving the entire rest of his reasoning. (To my eyes, this intuition of Egan's refers to a completely irrelevant hypothetical, in which humans somehow magically and reliably are always able to acquire possession of and make appropriate use of any insentient software tools that will be required, at any given moment, in order for humans to maintain hypothetical strategic parity with any contemporary AIs.)
2XiXiDu13y
I think Greg Egan makes an important point there that I have mentioned before and John Baez seems to agree: Actually this was what I had in mind when I voiced my first attempt at criticizing the whole endeavour of friendly AI, I just didn't know what exactly was causing my uneasiness. I am still confused about it but think that it isn't much of a problem as long as friendly AI research is not being funded at the cost of other risks that are more thoroughly based on empirical evidence rather than the observation of logically valid arguments. To be clear, as I wrote in the post above, I think that there are very strong arguments in support of friendly AI research. I believe that it is currently the most important cause one could support, but I also think that there is a limit to what one should do in the name of mere logical implications. Therefore I partly agree with Greg Egan. ETA There's now another comment by Greg Egan:
3[anonymous]13y
Greg Egan's view was discussed here a few months ago.
0shokwave13y
Surely not ... Does Greg Egan understand how "a small chance every year" can build into "almost certain by this date"? Because that was convincing for me: I can easily see humans building work-arounds or stop-gaps for most major problems, and continuing business mostly as usual. We run out of fossil fuels, so we get over our distrust of nuclear energy because it's the only way. We don't slow environmental damage enough, so agriculture suffers, so we get over our distrust of genetically modified plants because it's the only way. And so on. Then some article somewhere reminded me that business as usual includes repeated attempts at artificial intelligence. And runaway AI is not something we can build a work-around for; given a long enough timespan and faith in human ingenuity, we'll push through all the other non-instant-game-over events until we finally succeed at making the game end instantly.

Speaking as someone whose introduction to transhumanist ideas was the mind-altering idea shotgun titled Permutation City, I've been pretty disappointed with his take on AI and the existential risks crowd.

A recurring theme in Egan's fiction is that "all minds face the same fundamental computing bottlenecks", serving to establish the non-existence of large-scale intrinsic cognitive disparities. I always figured this was the sort of assumption that was introduced for the sake of telling a certain class of story - the kind that need only be plausib... (read more)

0[anonymous]13y
Yep, Egan created stand-ins for Yudkowsky and Overcoming Bias/LessWrong to mock in his most recent novel, Zendegi. There was a Less Wrong discussion at the time.

Suggestion: when you read a piece of nonfiction, have a goal in mind

Agreed. See also: Chase your reading

Hmm, but it does seem like trauma triggers and the psychic-distress-via-salmon work via the same mechanism. So probably the key here is to distinguish between actual psychic stress and feigned stress used for status maneuvers. It is not, however, clear to me how to do that in general.

No, the key here is to distinguish between actual psychic stress not used for status maneuvers and actual psychic stress used for status maneuvers. Which is of course even harder.

Another case that's interesting to consider is the Penny Arcade dickwolves controversy. The PA fellows made a comic which mentioned the word "rape", some readers got offended, and the PA guys, being thick-skinned individuals, dismissed and mocked their claims of being offended by making "dickwolves" T-shirts. Hubbub ensues.

What's most interesting about this case is that, apart from perhaps some bloggers, many of the people taking offense appear to be rape survivors for whom reading the word "rape" is traumatic (I guess? This i... (read more)

2HughRistik13y
I think the point is that people shouldn't use it as a joke so much.

I think that the mechanism for rape trauma triggers is different from the mechanism for Muhammed representation offense taking, and so the two should probably be treated differently. The trouble with the Dickwolves controversy is that you wound up with offense-takers and trauma-havers on the same side, in the same camp, so they got conflated.

I agree, yours is a more reasonable interpretation. I think I was interpreting "winds" as referring to "the winds of evidence," which is not reasonable in this context.

I do think your accusing me of "tribal affiliation signaling" was unnecessary and uncharitable: I don't consider Bush to have been a significantly worse president than any other recent president. I just happened to have run into the quote a while back, and in my misinterpretation thought it was a good anti-rationality quote.

Edit: I did some thinking to try to ... (read more)

4[anonymous]13y
I swear that it was not my intent to make any statement about your motivation, and I have evidence of my intent. In another comment in this discussion I wrote: Notice that I wrote "down the years" and "has been". I put in those words intentionally, to direct attention toward the repetition of the quote down the years and not toward the occurrence here in this forum. Imagine if I had instead written: I might have written that because that still allows the intended historical interpretation, but it is more ambiguous because it also allows an interpretation that attacks you for posting the quote here now. I took pains to add words to avoid that interpretation. Admittedly, I was not as careful over in this part of the discussion. The memes you carry are not all your fault. I know that.

Here's the expanded quote:

Is it hard to make decisions as President? Not really. If you know what you believe, decisions come pretty easy. If you're one of these types of people that are always trying to figure out which way the wind is blowing, decision making can be difficult. But I find that -- I know who I am. I know what I believe in, and I know where I want to lead the country. And most of the decisions come pretty easily for me, to be frank with you.

When we take into account further context that this was spoken to children in elementary sc... (read more)

[anonymous]13y21

You are reading Bush as saying he won't update his priors on the evidence. But to me it is obvious that Bush is saying exactly what everybody says about themselves and about the people they support, which is that they won't shift with the political winds.

Here's an example of a person who follows Bush's advice. He is an atheist and a Darwinist. He enters a Christian Creationist community. Around him everyone is a Christian and a Creationist. They make fun of him for being a Darwinist. He has two options:

A) He can make life easy for himself by seeing which w... (read more)

3RHollerith13y
Name an effective U.S. President who did not have great confidence in his ability to make decisions, though. Or name one who doubted his own goodness or questioned his basic beliefs or basic goals.
[anonymous]13y16

you need to take into account background knowledge about George W. Bush (such as that he is a person who believes that God talks to him.)

Oh good lord, this whole topic so far is two quotes from Republican Presidents, and the supposed irrationality of the quotes seems to be nothing more than strained readings of what they meant. Can people come up with any examples of irrational/arational quotes that aren't just a labored attempt to ridicule the chieftain of the enemy tribe as a form of tribal affiliation signaling?

Is it hard to make decisions as president? Not really. If you know what you believe, decisions come pretty easy. If you’re one of these types of people that are always trying to figure out which way the wind is blowing, decision making can be difficult.

-- George W. Bush

8Vladimir_M13y
This quote sounds somewhat trite, but its message is straightforward, clear, and coherent, and while one may disagree with the opinion it expresses, it is at the very least plausible prima facie. As with the Reagan quote in the earlier comment, I am baffled as to what elements of irrationality you (and those who upvoted the comment) find in it, let alone what makes it so remarkable that it's worth quoting years after it was said.

It's not clear to me what the disagreement is here. Which heuristic are you defending again?

If it's not published, it's not science

Response: Can we skip the pointless categorizations and evaluate whether material is valid or useful on a case by case basis? Clearly there is some material that has not been published that is useful (see: This website).

If it's not published in a peer-reviewed journal, there's no reason to treat it any differently than the ramblings of the Time Cube guy.

Response: Ahh yes, anything not peer-reviewed clearly contains Time... (read more)

1[anonymous]13y
The problem of publication bias is another reason to be wary of the publication heuristic recommended a few comments above. If you follow that heuristic rigorously, you will necessarily expose yourself to the systematic distortions arising from publication bias. This is not to say that you should therefore believe the first unpublished paper you come across. It's only to point out that the publication heuristic has certain problems, and while it should not be ignored, it should be supplemented. You ignore unpublished research at your peril. In an ideal world, peer review filters the good from the bad and nothing else. We do not live in an ideal world, so caveat lector. The process of journal publication is also extremely slow, so that refusal to read unpublished research threatens to retard your progress. This link gives time to publication for several journals - the average appears to be well over a year and approaching two years. What's two years in Internet Time? Pretty long.

Yeah, I've tried org-mode, but the problem isn't that it's Emacs-based (I use Emacs to write code); it's that it isn't web-based. I wanted my notes to be accessible not only from both OSes I dual boot, but from pretty much any computer I might ever be at. I could make the file accessible I guess by putting it in a Dropbox public folder, but then there's still the issue of "what if the computer I'm on doesn't have Emacs?"

Also, the time intensiveness of rolling my own code isn't a major drawback, as I'm trying to find a programming job at the moment and needed something to add to my portfolio. :D

0jwhendy13y
Good points, especially if you're trying to get into programming anyway :) Out of curiosity, could I ask how often you're at a computer that doesn't run Emacs and where you need the functionality of org-mode? I can't really think of an occasion when I'd need the functionality that wouldn't be on my own computer. I've also run Emacs successfully on Linux, Windows, and OS X. I keep my personal org-mode file on my OS X partition and edit it both from Linux and OS X (I keep it on OS X because Linux can read non-journaled HFS+, but OS X doesn't read EXT4 and is touchy with EXT2/3). Lastly, I'll actually often use git between work and home: I pull from either when I start up, edit my stuff, and then commit and push when I'm done. I ask about the functionality because you can open the file with any text editor on any computer if you just need to get into your data here and there. You could also add headlines manually pretty easily. Again, you might have a far different use case than I do. I just can't think of needing to access my org-mode file frequently from, say, a public library or a friend's computer. Good luck on your quest for the perfect PIM :)

I'm not really familiar with the topic matter here, but I want to note that Michael Nielsen contradicts what you said (though Nielsen isn't exactly an unbiased source here as an Open Science advocate):

Perelman's breakthrough solving the Poincare conjecture ONLY appeared at the arXiv

The important point is that it doesn't appear that Perelman produced the paper for publication in a journal; he wrote it and left it on the arXiv, and it was only later (you claim) published in journals. That's quite a different view from "if it's not published, it's not science."

6David_Gerard13y
Indeed. However, you've raised a single remarkable exception to a general heuristic as if a single example is all that is needed to thoroughly refute a general heuristic, and of course that's not the case. The overwhelming majority of papers put on arXiv and nowhere else are:
* [ ] comparable to Perelman's proof of the Poincare conjecture
* [ ] not comparable to Perelman's proof of the Poincare conjecture?

This post that you have excreted has essentially zero content. You restate the core idea behind the representativeness heuristic repeatedly, and baldly assert that there are good reasons for people having the intuitions that they do, that people are "using valuable real life skills" when they give incorrect answers to questions. No one's arguing that it hasn't been an evolutionarily useful heuristic, just that it happens to be incorrect from time to time. I cannot figure out where in your post you actually made an argument that the conjunction fa... (read more)

If you want to play that game, then it's not clear to me that the SIAI is doing "science" either, given that the focus is on existential risk due to AI (more like "philosophy" than "science") and formal friendliness (math).

I think a better interpretation of your quote is to replace the word "science" with "disseminated scholarly communication."

5CronoDAS13y
Good point.

Perelman's proof of the Poincare conjecture was never published in an academic journal, but was merely posted on arXiv. If that's not science, then being correct is more important than being "scientific".

[anonymous]13y11

Perelman's proof has been published, e.g. this by the AMS, which has a rigorous refereeing process for books, and this in Asian Journal of Math with a more controversial refereeing process.

Though Perelman's preprints appeared in 2002 and 2003, the Clay prize (which Perelman turned down) was not offered to him until last year, because the rules stipulate that the solutions to the prize problems have to stand unchallenged in published, peer-reviewed form for a certain number of years.

0CronoDAS13y
Indeed, math isn't science. ;) I wonder - if Perelman was just "some guy" with no reputation as a mathematician, would anyone have noticed when he uploaded his proof?
4David_Gerard13y
Brilliant, yes. So what would be oxygen?

If you want to carry a brimming cup of coffee without spilling it, you may want to "change" your goal to instead primarily concentrate on humming.

I keep reading this over and over, trying to figure out what it means. What does humming have to do with not spilling a cup of coffee?

6jimrandomh13y
One way people spill drinks is by overcorrecting for waves. That only happens if you're looking at the drink and trying not to spill it, so focusing on something else avoids that failure mode.

Pungent is a web-based note-taking app that I'm working on. I made this because I had a need for something to organize personal notes, but nothing I found was satisfactory. Right now it's essentially a less-featured clone of Workflowy, but I plan to develop it further once I figure out what direction to go in. Development is on hold for the moment while I spend some time using it and figuring out what I want it to do.

I'm also working on a research project to try to understand how human cognition works. I think FAI is really interesting + important, but I'm... (read more)

0jwhendy13y
orgmode does this insanely well and looks like what workflowy does but less flashy and not web-based. You can narrow to a subtree (see C-x n s) and then un-narrow (see C-x n w). In addition, you can track todos, record data in tables, export to html, PDF, or even a Beamer presentation. Anyway, it's pretty darn amazing. I've hunted around a lot for various notes/todos solutions, probably like yourself -- OneNote, EverNote, Google Notebook, TiddlyWiki, Monkey-Pirate-GTD-TiddlyWiki, TaskPaper (also pretty much what Workflowy looks like), Task Coach, iGTD... Nothing has touched orgmode :) I wrote a little bit about it on my blog HERE. Emacs has a steep learning curve, but it can't be any more time intensive than rolling your own code!
2jsalvatier13y
A free wordpress blog would probably work well as a research journal. They're really easy and look nice.

True heroism is minutes, hours, weeks, year upon year of the quiet, precise, judicious exercise of probity and care—with no one there to see or cheer.

— David Foster Wallace, The Pale King

Yeah, more people donated to an animal shelter than to an organization working on existential risk. Makes me feel all warm and fuzzy inside. No, wait, the opposite of that.

Sorry, I could not make sense of any of this. Especially the symbolic part, but also the conversation part. And all the other parts too.

0Seremonia13y
Sorry, perhaps you can try understanding the thought experiment section. Anyway, if it does not help, sorry; even when you feel my arguments are not much understood, my argument cannot be deleted, so in the end I tried to respond as simply as possible. Once again, sorry for this.

Yes, especially all of it.

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen's Doing Science Online? It's written specifically about science, but I see no reason why it couldn't be applied to any kind of scholarly communication. He makes a good argument for including blog posts in scientific communication, which, at present, doesn't seem to be amenable to writing ... (read more)

6alfredmacdonald11y
YeahOKButStill has an interesting take on the interaction between philosophy done in blogs and philosophy done in journals:

No, I agree that much science and philosophy can be done in blogs and so on. Usually, it's going to be helpful to do some back-and-forth in the blogosphere before you're ready to publish a final 'article.' But the well-honed article is still very valuable. It is much easier for people to read, it cites the relevant literature, and so on.

Articles could be, basically, very well-honed and referenced short summaries of positions and arguments that have developed over dozens of conversations and blog posts and mailing list discussions and so on.

Can't imagine the other commenters learned programming by jumping into Scheme or Haskell, or reading SICP, or whatever it is they're recommending :-)

Agreeing with this. I love CS theory, and I love SICP, but I learned to program by basically ignoring all that and hacking together stuff that I wanted to make. If you want to learn to program, you should probably make things first.

I remember reading the argument in one of the sequence articles, but I'm not sure which one. The essential idea is that any such rules just become a problem to solve for the AI, so relying on a superintelligent, recursively self-improving machine to be unable to solve a problem is not a very good idea (unless the failsafe mechanism was provably impossible to solve reliably, I suppose. But here we're pitting human intelligence against superintelligence, and I, for one, wouldn't bet on the humans). The more robust approach seems to be to make the AI motivated to not want to do whatever the failsafe was designed to prevent it from doing in the first place, i.e. Friendliness.

Here are the reasons to be skeptical that I picked up from that blog post:

  • The website of the Journal of Cosmology is ugly
  • The figures in the paper are "annoying"
  • Perhaps the claimed bacteria aren't bacteria at all, but just squiggles.
  • The photos of the found bacteria aren't at the same magnification as photos of real bacteria
  • It seems like the bacteria are too well-preserved for having traveled the solar system for such a long time.
  • Haha, maybe next they'll find bigfoot footprints on a meteor.
7virtualAdept13y
PZ's his own special brand of abrasive and dismissive, but I went and read most of the paper, and while he's not exactly rigorous with explaining his criticisms, I think they're based in good ones. While the design of the JoC website shouldn't affect assessment of the article, the fact that a paper on such a potentially high-impact subject isn't in a mainstream journal at all does and should send up some red flags that there might be issues with the paper that would keep it from getting past peer review. My biggest issue with the paper is that the study isn't controlled. They took appropriate steps to prevent contamination of their samples, but they don't have any reasonable negative control set up that would give them some perspective on their comparison to the living bacteria. Their "conclusions" are suggestive, rather than conclusive - it all depends on holding up these meteorites to pictures of actual bacteria and saying "Look! They look alike! And there's some enriched carbon and stuff in these fossils!" Which could certainly be interesting, but for the paper to pass muster with mainstream science, they would need to offer a convincing test that would disprove their hypothesis were it to come out a certain way. (Hey, that sounds familiar!) As it stands, they can't. They can only say that their observations look interesting. Given this, the whole thing reads like they went out looking for whatever evidence they could fit to their prior hope of finding extraterrestrial life, which doesn't immediately disprove their findings, but it certainly holds them back from credibility.
-2XiXiDu13y
Typical PZ Myers.

I think the point is that if you're trying to convince someone to pay you to write code for them and you have no prior experience with professional programming, a solid way to convince them that you're hireable is contributing significant amounts of code to an open source project. This demonstrates that 1) you know how to write code, 2) that you can work with others and 3) that you're comfortable working with a complicated codebase (depending on the project).

I'm not certain that it's the most effective way to achieve this objective, but I can't think of a better alternative. Suggestions are welcome.

0[anonymous]13y
In my case, I found a local startup that employed students to test their code (we'd get a new build every couple of days and run it through a set of tests) on a part-time temp basis, paid by the hour. As the only non-student doing it, I worked more-than-full-time hours for a few months, and got noticed for having a work ethic.

Math is not necessary for many kinds of programming. Yeah, some algorithms make occasional use of graph theory, and there certainly are areas of programming that are math-heavy (3d graphics, perhaps? Also, stuff like Google's PageRank algorithm uses linear algebra), but there are huge swaths of software development for which no (or little) math is needed. In fact, just to hammer on this point, I distinctly remember sitting in a senior-level math course and overhearing some math majors discuss how they once took an introductory programming course and found ... (read more)
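
To make the linear-algebra point concrete, here is a minimal, hypothetical sketch of PageRank as power iteration in Python. This is not Google's actual implementation, just an illustration of how a "math-heavy" piece of programming can reduce to a few matrix operations; the damping factor and the toy link graph are made up for the example:

    # Illustrative PageRank via power iteration (toy sketch, not production code).
    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
        """adj[i][j] = 1 if page i links to page j."""
        adj = np.asarray(adj, dtype=float)
        n = adj.shape[0]
        out_degree = adj.sum(axis=1)
        out_degree[out_degree == 0] = 1.0  # avoid divide-by-zero; dangling pages simply leak rank in this sketch
        # Column-stochastic link matrix: entry (j, i) is the probability of following a link from page i to page j.
        transition = (adj / out_degree[:, None]).T
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            new_rank = damping * (transition @ rank) + (1 - damping) / n
            if np.abs(new_rank - rank).sum() < tol:
                break
            rank = new_rank
        return rank

    # Tiny three-page "web": 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
    links = [[0, 1, 1],
             [0, 0, 1],
             [1, 0, 0]]
    print(pagerank(links))  # page 2 ends up with the highest rank

The entire "algorithm" is one matrix-vector multiply repeated until the rank vector stops changing, which is exactly the kind of linear algebra mentioned above, and also exactly the kind of thing most day-to-day software development never touches.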
