All of fowlertm's Comments + Replies

Cognitive scientist Joel Chan on metascience, scaling and automating innovation, collective intelligence, and tools for thought.

My admin pointed out the RSS feed (which I assume is what you found) and he's going to see if there's a way to make subscribing easier. 

Thanks for bringing this to my attention!

fowlertm's Shortform

I'm looking for a really short introduction to light therapy and a rig I can put in my basement-office. Over the years I've noticed my productivity just falls off a goddamn cliff after sundown during the winter months, and I'd like to try to do something about it. 

After the requisite searching I see a dozen or so references across LessWrong, and was wondering if someone could just tell me how the story ends and where I can shop for bulbs.

For the most part I was thinking about just making things brighter, but I'm open to trying red-light therapy too if people have had success with that.  

[+3] Matt Goldenberg (1y): I like Ben Kuhn's solution in this comment: https://www.benkuhn.net/lux/#comment-1595033477 A few 7-way splitters and a whole lot of 100 watt equivalent LEDs.
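(For anyone sizing a similar rig, here's a rough back-of-the-envelope sketch. The target lux, desk area, lumens per bulb, and loss factor are all assumptions chosen for illustration, not figures from Ben Kuhn's post.)

```python
import math

# Rough estimate of how many LED bulbs a DIY bright-light rig needs.
# Every number here is an assumption, chosen only to illustrate the arithmetic.
TARGET_LUX = 10_000       # brightness often cited for light therapy (lux = lumens / m^2)
DESK_AREA_M2 = 2.0        # assumed area you want lit to that level
LUMENS_PER_BULB = 1_600   # typical output of a 100 W-equivalent LED bulb
EFFICIENCY = 0.5          # assumed fraction of emitted light that actually reaches the desk

def bulbs_needed(target_lux=TARGET_LUX, area_m2=DESK_AREA_M2,
                 lumens_per_bulb=LUMENS_PER_BULB, efficiency=EFFICIENCY):
    """Number of bulbs needed to hit target_lux over area_m2, given losses."""
    required_lumens = target_lux * area_m2 / efficiency
    return math.ceil(required_lumens / lumens_per_bulb)

print(bulbs_needed())  # roughly 25 bulbs under these assumptions
```

Which is roughly why setups like the one linked above end up with splitters and a couple dozen bulbs rather than one or two bright ones.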
I'm interested in a sub-field of AI but don't know what to call it.

Thanks for the recommendations. One thing that would help is just knowing what this is called. Do your books give it a name?

I'm interested in a sub-field of AI but don't know what to call it.

Not yet. That's part of what we're hoping to learn about here.

Running a Futurist Institute.

I like that idea too. How hard is it to publish in academic journals? I don't have more than a BS, but I have done original research and I can write in an academic style.

[+0] IlyaShpitser (4y): Pretty hard, I suppose. It's weird, though: if you are asking these types of questions, why are you trying to run an institute? Typically very senior academics do that. (I am not singling you out either; I have the same question for folks running MIRI.)
Running a Futurist Institute.

A post-mortem isn't quite the same thing. Mine has a much more granular focus on the actual cognitive errors occurring, with neat little names for each of them, and has the additional step of repeatedly visualizing yourself making the correct move.

https://rulerstothesky.com/2016/03/17/the-stempunk-project-performing-a-failure-autopsy/

This is a rough idea of what I did; the more awesome version with graphs will require an email address to which I can send a .jpg.

[+0] Lumifer (4y): Neat little names, I see. Thank you, I'll pass on the jpg awesomeness.
Running a Futurist Institute.

Different reasons, none of them nefarious or sinister.

I emailed Julia Galef a technique I call 'the failure autopsy', which as far as I know is completely unique to me. She gave me a cheerful 'I'll read this when I get a chance' and never got back to me.

I'm not sure why I was turned down for a MIRIx workshop; I'm sure I could've managed to get some friends together to read papers and write ideas on a whiteboard.

I've written a few essays for LW, the reception of which was lukewarm. Don't know if I'm just bad at picking topics of interest or if it... (read more)

[+0] ChristianKl (4y): From the outside view, a person who has no luck building contacts with existing institutions is unlikely to be a good person to start a new institute. Of course, getting someone like Eric S. Raymond to be open to writing a book with you is a good sign.
[+1] IlyaShpitser (4y): Try publishing in mainstream AI venues? (AAAI has some sort of safety workshop this year.) I am assuming if you want to start an institute you have publishable stuff you want to say.
[+0] Lumifer (4y): Ahem. The rest of the world calls it a post-mortem. See e.g. this [https://en.wikipedia.org/wiki/Postmortem_documentation]. So you do not know why. Did you try to figure it out? Do a post-mortem, maybe?
Running a Futurist Institute.

I hadn't known about that, but I came to the same conclusion!

Running a Futurist Institute.

I gave that some thought! LW seems much less active than it once was, though, so that strategy isn't as appealing. I've also written a little for this site and the reception has been lukewarm, so I figured a book would be best.

[+2] lifelonglearner (4y): We're now a lot more active at LW2.0 [https://www.lesserwrong.com/]! Some of my stuff which wasn't that popular here is getting more attention there. Maybe you could try it too?
Running a Futurist Institute.

That's not a bad idea. As it stands I'm pursuing the goal of building a dedicated group of people around these ideas, which is proving difficult enough as it is. Eventually I'll want to move forward with the institute, though, and it seems wise to begin thinking about that now.

Running a Futurist Institute.

I have done that, on a number of different occasions. I have also tried for literally years to contribute to futurism in other ways; I attempted to organize a MIRIx workshop and was told no because I wasn't rigorous enough or something, despite the fact that on the MIRIx webpage it says:

"A MIRIx workshop can be as simple as gathering some of your friends to read MIRI papers together, talk about them, eat some snacks, scribble some ideas on whiteboards, and go out to dinner together."

Which is exactly what I was proposing.

I have tried for years to... (read more)

[+1] Lumifer (4y): Do you know why?
[+2] John_Maxwell (4y): Maybe your mistake was to write a book about your experience of self-study instead of making a series of LW posts. Nate Soares [http://lesswrong.com/user/So8res/] took this approach and he is now the executive director [https://intelligence.org/team/] of MIRI :P
Running a Futurist Institute.

You're right. Here is a reply I left on a Reddit thread answering this question:

This institution will essentially be a formalization and scaling-up of a small group of futurists that already meet to discuss emerging technologies and similar subjects. Despite the fact that they've been doing this for years, attendance is almost never more than ten people (25 attendees would be fucking Woodstock).

I think the best way to begin would be to try and use this seed to create a TED-style hub of recurring discussions on exactly these topics. There's a lot of low-hang... (read more)

Running a Futurist Institute.

(1) The world does not have a surfeit of intelligent technical folks thinking about how to make the future a better place. Even if I founded a futurist institute in the exact same building as MIRI/CFAR, I don't think it'd be overkill.

(2) There is a profound degree of technical talent here in central Colorado which doesn't currently have a nexus around which to have these kinds of discussions about handling emerging technologies responsibly. There is a real gap here that I intend to fill.

[+2] turchin (4y): You could start a local chapter of the Transhumanist Party, or of anything you want, and just hold gatherings of people to discuss any futuristic topics: life extension, AI safety, whatever. Official registration of such activity is probably a waste of time and money, unless you know what you are going to do with it, like collecting donations or renting an office. There is no need to start an institute if you don't have a dedicated group of people around. An institute consisting of one person is something strange.
[+5] gwern (4y): You know, you could do that. By giving them the money.
Come check out the Boulder Future Salon this Saturday!

That hadn't even occurred to me, thank you! Do you think it'd be inappropriate? This isn't a LW specific meetup, just a bunch of tech nerds getting together to discuss this huge tech project I just finished.

Anyone else reading "Artificial Intelligence: A Modern Approach"?

Thanks! I suppose I wasn't as clear as I could have been: I was actually wondering if there are any people who are reading it currently, who might be grappling with the same issues as me and/or might be willing to split responsibility for creating Anki cards. This textbook is outstanding, and I think there would be significant value in anki-izing as much of it as possible.

LINK: Performing a Failure Autopsy

Because I missed numerous implications, needlessly increased causal opacity, and failed to establish a baseline before I started fiddling with variables. Those are poor troubleshooting practices.

Linguistic mechanisms for less wrong cognition

So a semi-related thing I've been casually thinking about recently is how to develop what basically amounts to a hand-written programming language.

Like a lot of other people I make to-do lists and take detailed notes, and I'd like to develop a written notation that not only captures basic tasks, but maybe also simple representations of the knowledge/emotional states of other people (e.g. employees).

More advanced than that, I've also been trying to think of ways I can take notes in a physical book that will allow a third party to make Anki flashcards or ev... (read more)
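(Purely to illustrate the kind of notation I have in mind, here's a hypothetical one-line format and a small parser for it. The symbols and fields are invented for this sketch, not an existing system.)

```python
import re

# Hypothetical note format, one record per line:
#   @person !mood ?belief-about-the-situation #task
LINE = re.compile(r"@(?P<person>\w+)\s+!(?P<mood>\w+)\s+\?(?P<belief>[^#]+)#(?P<task>.+)")

def parse(note: str) -> dict:
    """Parse one hand-written note line into a structured record."""
    m = LINE.match(note.strip())
    if not m:
        return {"raw": note}  # fall back to keeping the raw text
    return {k: v.strip() for k, v in m.groupdict().items()}

print(parse("@alice !frustrated ?thinks the deadline slipped #follow up Monday"))
# {'person': 'alice', 'mood': 'frustrated', 'belief': 'thinks the deadline slipped', 'task': 'follow up Monday'}
```

The point of a fixed grammar like this is that a third party (or a script) can turn the notes into flashcards or reports without having to interpret free-form prose.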

Deliberate Grad School

I mentioned CMU for the reasons you've stated and because Lukeprog endorsed their program once (no idea what evidence he had that I don't).

I have also spoken to Katja Grace about it, and there is evidently a bit of interest in LW themes among the students there.

I'm unaware of other programs of a similar caliber, though there are bound to be some. If anyone knows of any, by all means list them, that was the point of my original comment.

Deliberate Grad School

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

[+4] Vika (6y): I think it depends more on specific advisors than on the university. If you're interested in doing AI safety research in grad school, getting in touch with professors who got FLI grants [http://futureoflife.org/AI/2015awardees] might be a good idea.
[+3] iarwain1 (6y): Why do you say Carnegie Mellon? I'm assuming it's because they have the Center for Formal Epistemology [http://www.hss.cmu.edu/philosophy/cfe.php] and a very nice-looking degree program in Logic, Computation and Methodology [http://www.hss.cmu.edu/philosophy/graduate-phd.php]. But don't some other universities have comparable programs? Do you have direct experience with the Carnegie Mellon program? At one point I was seriously considering going there because of the logic & computation degree, and I might still consider it at some point in the future.
Learning takes a long time

Data point/encouragement: I'm getting a lot out of these, and I hope you keep writing them.

I'm one of those could-have-beens who dropped mathematics early on despite a strong interest and spent the next decade thinking he sucked at math, before rediscovering his numerical proclivities in his early 20s because FAI theory caused him to peek at Discrete Mathematics.

[+0] JonahS (7y): Thanks :-).
FOOM Articles

Both unknown to me, thanks :)

The outline of Maletopia

Why? What's wrong with wanting to be masculine?

[+0] PhilGoetz (7y): If it were wrong, it would be a problem, not problematic. That defies the dictionary definition, but "problem" can mean something with a simple solution that hasn't yet been implemented, while "problematic" connotes a persistent problem with no easy solution. The difficulties with it are already listed in the post, as they're the motivation for the post. Though it might be more fair to say gender is problematic.
Intrapersonal comparisons: you might be doing it wrong.

Interesting tie-in, thanks.

Incidentally, how cool would it be to be able to say "my epistemology is the most advanced"? If nothing else it'd probably be a great pickup line at LW meetups.

Is there a rationalist skill tree yet?

Agreed. I think in light of the fact that a lot of this stuff is learned iteratively, you'd want to unpack 'basic mathematics'. I'm not sure of the best way to graphically represent iterative learning, but maybe you could have arrows going back to certain subjects, or you could have 'statistics round II' as one of the nodes in the network.

It seems like insights are what you're really aiming at, so maybe instead of 'probability theory' you have nodes for 'distributions' and 'variance' at some early point in the tree, and later you have 'Bayesian v. Frequentist reasoning'.

This would also help you unpack basic mathematics, though I don't know much about the dependencies either. I hope to, soon :)
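(To make the graph idea concrete, here's a minimal sketch. The subjects and prerequisite edges are invented for illustration, not a real curriculum; 'iterative' subjects become round-I/round-II nodes instead of back-arrows, and a topological sort spits out one valid study order.)

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical skill tree: each node maps to its prerequisites.
# Subjects learned iteratively appear twice (round I / round II) rather than via back-arrows.
skill_tree = {
    "basic algebra": set(),
    "distributions": {"basic algebra"},
    "variance": {"distributions"},
    "statistics (round I)": {"distributions", "variance"},
    "Bayesian vs. frequentist reasoning": {"statistics (round I)"},
    "statistics (round II)": {"statistics (round I)", "Bayesian vs. frequentist reasoning"},
}

# One valid order in which to study the subjects, respecting every prerequisite arrow.
print(list(TopologicalSorter(skill_tree).static_order()))
```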

Is there a rationalist skill tree yet?

I thought of that as well, it does need some work done in terms of presentation. It'd be a good place to start, yes.

Programming-like activities?

My two cents: I studied math pretty intensively on my own and later started programming. To my pleasant surprise, the thinking style involved in math transferred almost directly over into programming. I'd imagine that the inverse is also true.

[+3] [anonymous] (7y): Indeed, many people cross forward and backward between the two.
Meetup : Denver Area Meetup 2

I'm sorry I missed this and hope it went well. Work has been chaotic lately, but I absolutely support a LW presence in Denver. I've tried once before to get a similar group off the ground, and would be happy to help this one along with presentations, planning, rationalist game nights, whatever.

[+0] TheStevenator (7y): I'd love it if you could attend. The time is flexible if your schedule needs a little wiggle room.
LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group?

Actually, I folded it into another group called the Boulder Future Salon, which doesn't deal exclusively with x-risk but which has other advantages going for it, like a pre-existing membership.

Steelmanning MIRI critics

How would you recommend responding?

[+2] chaosmage (7y): I think I'd point out that he's a fairly public person, which both should increase trust and gives more material for ad hominem attacks. And once someone else has dragged the discussion down to a personal level, you might as well throw in appeals to authority with Elon Musk on AI risk, i.e. change the subject.
Steelmanning MIRI critics

I think I'm basically prepared for that line of attack. MIRI is not a cult, period. When you want to run a successful cult you do it Jim-Jones-style, carting everyone to a secret compound and carefully filtering the information that makes it in or out. You don't work as hard as you can to publish your ideas in a format where they can be read by anyone, you don't offer to publicly debate William Lane Craig, and you don't seek out the strongest versions of criticisms of your position (i.e. those coming from Robin Hanson).

Eliezer hasn't made it any easier on ... (read more)

[+1] chaosmage (7y): Sure, MIRI isn't a cult, but I didn't say it was. I pointed out that Eliezer does play a huge role in it and he's unusually vulnerable to ad hominem attack. If anyone does that, your going with "whatever his flaws" isn't going to sound great to your audience.
The Octopus, the Dolphin and Us: a Great Filter tale

"Note that AI is certainly not a great filter: an AI would likely expand through the universe itself"

I was confused by this; what is it supposed to mean? Off the top of my head it certainly seems like there is sufficient space between 'make an AI that causes the extinction of the human race or otherwise makes expanding into space difficult' and 'make an AI that causes the extinction of the human race but which goes on to colonize the universe' for AI to be a great filter.

[+3] [anonymous] (7y): It rests on the hypothesis that the AI is not only dangerously intelligent but able to self-improve to levels where it can more-or-less direct an entire civilization's worth of material infrastructure towards its own goals. At that point, it would have an easy time getting a space program going, mining resources from the rest of its solar system, and eventually achieving interstellar existence (via the sheer patience to cross interstellar distances at sublight speeds).
[+9] James_Miller (7y): The universe has a limited amount of free energy. For almost any goal or utility function that an AI had, it would do better the more free energy it had. Hence, almost every type of hyper-intelligent AI that could build self-replicating nanobots would quickly capture as much free energy as it could, meaning it would likely expand outwards at near the speed of light. At the very least, you would expect a hyper-intelligent AI to "turn off stars" or capture their free energy to prevent such astronomical waste of finite resources.
Steelmanning MIRI critics

This comment is a poorly-organized brain dump which serves as a convenient gathering place for what I've learned after several days of arguing with every MIRI critic I could find. It will probably get its own expanded post in the future, and if I have the time I may try to build a near-comprehensive list.

I've come to understand that criticisms of MIRI's version of the intelligence explosion hypothesis and the penumbra of ideas around it fall into two permeable categories:

Those that criticize MIRI as an organization or the whole FAI enterprise (people mak... (read more)

Steelmanning MIRI critics

A good point, I must spend some time looking into the FOOM debate.

Steelmanning MIRI critics

I've heard the singularity-pattern-matches-religious-tropes argument before and hadn't given it much thought, but I find your analysis that the argument is wrong to be convincing, at least for the futurism I'm acquainted with. I'm less sure that it's true of Kurzweil's brand of futurism.

Steelmanning MIRI critics

Correct, I've been pursuing that as well.

Steelmanning MIRI critics

Only the IE as defended by MIRI; it'd be a much longer talk if I wanted to defend everything they've put forward!

[+2] [anonymous] (7y): Short-duration hard takeoff, à la That Alien Message? That's one of the hardest claims for MIRI to justify.
[+6] [anonymous] (7y): I used ClickCharts [http://www.nchsoftware.com/chart/] to make the diagrams.
Recommendations for donating to an anti-death cause

For those interested, I ended up donating to the Brain Preservation Foundation, MIRI, SENS, and the Alzheimer's Disease Research Fund.

More detail here:

http://rulerstothesky.wordpress.com/2014/04/25/in-memorium/

Truth: It's Not That Great

Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths continue existing.

Example: your boss is good at doing something but his theoretical explanation for why it works is nonsense. Most of the time questioning the theory is only likely to piss him off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.

Relevant post:

http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html

[+7] Viliam_Bur (8y): What happens when you try to replicate what your boss is doing? For example, when you decide to start your own competing company. Then I suspect it would be useful to know truths like "my boss always says X, but really does Y when this situation happens," so that when the situation happens, you remember to do Y instead of X. Even though, for an employee, saying "you always say X, but you actually do Y" to your boss would be dangerous. So, some truths may be good to know, while dangerous to talk about in front of people who have a negative reaction to hearing them. You may remember that "X" is the proper thing to say to your boss, and silently remember that "Y" is the thing that probably contributes to the success in the position of your boss. Replacing your boss is not the only situation where knowing the true boss-algorithm is useful. For example, knowing the true mechanism by which your boss decides who will get a bonus and who will get fired.
[+7] CronoDAS (8y): So saving people 30 and younger so they can die at 80 instead isn't good enough...
Recommendations for donating to an anti-death cause

The Brain Preservation Foundation was one of the first charities I thought of, I'll definitely be considering them.

LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group?

Head over to meetup.com and search for AI and Existential Risk, then join the group. We just had our inaugural meeting.

[+0] KFinn (7y): I couldn't find this meetup group. Does it still exist?
Futurism's Track Record

I too think it would be economics, though probably of a more philosophical type, like what they do at the London School of Economics.

And yes, I'd be very interested in doing something like that :)

Dark Arts of Rationality

I propose that we reappropriate the white/black/grey hat terminology from the Linux community, and refer to black/white/grey cloak rationality. Someday perhaps we'll have red cloak rationalists.

Dark Arts of Rationality

Another nail hit squarely on the head. Your concept of a strange playing field has helped crystallize an insight I've been grappling with for a while -- a strategy can be locally rational even if it is in some important sense globally irrational. I've had several other insights which are specific instances of this and which I only just realized are part of a more general phenomenon. I believe it can be rational to temporarily suspend judgement in the pursuit of certain kinds of mystical experiences (and have done this with some small success), and I believ... (read more)

[+0] jazmt (8y): It seems impossible to choose whether to think of ourselves as having free will, unless we have already implicitly assumed that we have free will. More generally, the entire pursuit of acting more rationally is built on the implicit premise that we have the ability to choose how to act and what to believe.