All of kgalias's Comments + Replies

When was the last data migration from LW 1.0? I'm getting an "Invalid email" message, even though I have a linked email here.

3Vaniver6y
Late May, if I recall correctly. We'll be able to merge accounts if you made it more recently or there was some trouble with the import.

For me it just returns "invalid email", though I can see my email in http://lesswrong.com/prefs/update/.

Regarding your last point: is a hellish world preferable to an empty one?

0chaosmage7y
Yes, because it has more potential for improvement. The Earth of a million years ago, where every single animal was fighting for its life in an existence of pain and hunger, was more hellish than the present one, where at least a percent or so are comparatively secure. So that's an existence proof of hellishness going away. Emptiness doesn't go away. Empty worlds evidently tend to stay empty. We now see enough of them well enough to know that.

Does anyone know if and where I can find "IB Mathematics Standard Level Course Book: Oxford IB Diploma Programme" (I need this one specifically)?

https://global.oup.com/education/product/9780198390114/?region=uk

Thanks! This will be helpful.

I don't have time to evaluate which view is less wrong.

Still, I was somewhat surprised when I saw your first comment.

1skeptical_lurker9y
Upvoted for not wasting time!

Is this what you have in mind?

Sugar does not cause hyperactivity in children.[230][231] Double-blind trials have shown no difference in behavior between children given sugar-full or sugar-free diets, even in studies specifically looking at children with attention-deficit/hyperactivity disorder or those considered sensitive to sugar.[232]

Wikipedia

5skeptical_lurker9y
No, I have this in mind: http://www.ncbi.nlm.nih.gov/pubmed/17224202

Sugar alone makes it more difficult to concentrate for many people, as well as having many other deleterious effects.

What do you mean?

1skeptical_lurker9y
I mean, if you are oscillating between sugar highs and crashes, it is difficult to concentrate; plus it causes diabetes, etc.

Sorry for the pause, internet problems at my place.

Anyways, it seems you're right. Technically, it might be more plausible for AI to be coded faster (higher variance), even though I think it'll take more time than emulation (on average).

I agree.

Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?

1[anonymous]9y
We have the technical means to produce brain emulations. It requires just very straightforward advances in imaging and larger supercomputers. There are various smaller-scale brain emulation projects that have already proved the concept. It's just that doing that at a larger scale and finer resolution requires a lot of person-years just to get it done. EDIT: In Rumsfeld speak, whole-brain emulation is a series of known-knowns: lots of work that we know needs to be done, and someone just has to do it. Whereas AGI involves known-unknowns: we don't know precisely what has to be done, so we can't quantify exactly how long it will take. We could guess, but it remains possible that clever insight might find a better, faster, cheaper path.

Why do we assume that all that is needed for AI is a clever insight, not the insight-equivalent of a long engineering time and commitment of resources?

1[anonymous]9y
Because the scope of the problems involved, e.g. searchspace over programs, can be calculated and compared with other similarly structured but solved problems (e.g. narrow AI). And in a very abstract theoretical sense today's desktop computers are probably sufficient for running a fully optimized human-level AGI. And this is a sensible and consistent result -- it should not be surprising that it takes many orders of magnitude more computational power to emulate a computing substrate running a general intelligence (the brain simulated by a supercomputer) than to run a natively coded AGI. Designing the program which implements the native, non-emulative AGI is basically a "clever insight" problem, or perhaps more accurately a large series of clever insights.

How is theoretical progress different from engineering progress?

Is the following an example of valid inference?

We haven't solved many related (and seemingly easier) (sub)problems, so the Riemann Hypothesis is unlikely to be proven in the next couple of years.

In principle, it is also conceivable (but not probable) that someone will sit down and make a brain emulation machine.

1[anonymous]9y
Making a brain emulation machine requires (1) the ability to image a brain at sufficient resolution, and (2) computing power in excess of the largest supercomputers available today. Both of these tasks are things which require a long engineering lead time and commitment of resources, and are not things which we expect to be solved by some clever insight. Clever insight alone won't ever enable you to construct record-setting supercomputers out of leftover hobbyist computer parts, toothpicks, and superglue.

Hello! My name is Christopher Galias and I'm currently studying mathematics in Warsaw.

I figured that using a reading group would be helpful in combating procrastination. Thank you for doing this.

This is the part of this section I find least convincing.

2[anonymous]9y
Can you elaborate?

To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life?

Yes, that was my (tentative) claim.

We would need to know whether the examples were seen as frivolous after they came into being, but before the technology started being used.

Can't we use a hierarchy of ordinal numbers and a different ordinal sum (e.g. maybe something of Conway's) in our utility calculations?

That is, lying would be infinitely bad, but lying ten times would be infinitely worse.
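A minimal sketch of the idea, assuming only two tiers instead of a full ordinal hierarchy (the function name and tier layout are my own illustration, not an implementation of Conway's sums): Python's tuple comparison is already lexicographic, so representing a utility as (-lies, mundane utility) makes any change in the lies tier dominate any change in the mundane tier.

```python
# A minimal sketch of lexicographic ("ordinal-tiered") utilities, assuming two
# tiers suffice: a dominant "lies" tier and an ordinary tier. Tuple comparison
# in Python is lexicographic, which gives the ordering for free.

def utility(lies: int, mundane: float) -> tuple:
    """Lexicographic utility: the lies tier always dominates the mundane tier."""
    return (-lies, mundane)

# One lie is worse than none, no matter how large the mundane payoff...
assert utility(1, 1_000_000.0) < utility(0, 0.0)
# ...and ten lies are strictly worse than one lie ("infinitely worse").
assert utility(10, 0.0) < utility(1, 0.0)
```

With more tiers (longer tuples) one gets a hierarchy where each level is infinitely weightier than the next; genuine transfinite ordinal arithmetic would need more machinery than this sketch provides.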

OK, but war happens in real life. For most people, the only time they hear of AI is in Terminator-like movies.

I'd rather compare it to some other technological topic, but which doesn't have a relevant franchise in popular culture.

4KatjaGrace9y
To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life? Some other technological topics that hadn't happened in real life when people became concerned about them:

* Nuclear weapons had The World Set Free, though I'm not sure how well known it was (may have been seen as frivolous by most at first - I'm not sure, but by the time there were serious projects to build them I think not)
* Extreme effects from climate change, e.g. massive sea level rise, freezing of Northern Europe: no particular popular culture franchise (not very frivolous)
* Recombinant DNA technology: the public's concern was somewhat motivated by The Andromeda Strain (not frivolous I think)

Evidence seems mixed.

As a possible failure of rationality (curiosity?) on my part, this week's topic doesn't really seem that interesting.

What topic are you comparing it with?

When you specify that, I think the relevant question is: does the topic have an equivalent of a Terminator franchise?

1KatjaGrace9y
War is taken fairly seriously in reporting, though there are a wide variety of war-related movies in different styles.

No need to apologize - thank you for your summary and questions.

Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.

No disagreement here.

I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.

How does the Flynn effect affect our belief in the hypothesis of accumulation?

3gallabytes9y
It just means that the intelligence gap was smaller, potentially much, much smaller, when humans first started developing a serious edge relative to apes. It's not evidence for accumulation per se, but it's evidence against us just being so much smarter from the get-go, and renormalizing makes it function very much like evidence for accumulation.

It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.

Do you think this is a sensible view?

1gallabytes9y
Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Wechsler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence. That said, I'm very interested in any data that points to other causes behind the Flynn Effect, so if you have any to post don't hesitate.

The terms that I singled out while reading were: Backpropagation, Bayesian network, Maximum likelihood, Reinforcement learning.

1Paul Crowley9y
That's a tricky problem! If we assume people are doing this in their spare time, then a weekend is the best time to do it: say noon Pacific time, which is 9pm Berlin time. But people might want to be doing something else with their Saturdays or Sundays. If they're doing it with their weekday evenings, then they just don't overlap; the best you can probably do is post at 10am Pacific time on (say) a Monday, and let Europe and UK comment first, then the East Coast, and finally the West Coast. Obviously there will be participants in other timezones, but those four will probably cover most participants.
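The timezone arithmetic above (noon Pacific corresponding to 9pm in Berlin, during winter time) can be checked with a short sketch using Python's standard zoneinfo module; the particular Saturday chosen is an arbitrary illustration:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Noon Pacific on an (arbitrary) winter Saturday, converted to the other
# timezones mentioned: Berlin, London, and the US East Coast.
t = datetime(2015, 1, 10, 12, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
for tz in ["Europe/Berlin", "Europe/London", "America/New_York"]:
    local = t.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%H:%M"))  # Berlin 21:00, London 20:00, NY 15:00
```

Note that the offsets shift by an hour for a few weeks each year, since the US and Europe change to and from daylight saving time on different dates.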

I was under the impression (after reading the sections) that the argument hinges a lot less on (economic) growth than what might be gleaned from the summary here.

1KatjaGrace9y
Apologies; I didn't mean to imply that the economics related arguments here were central to Bostrom's larger argument (he explicitly says they are not) - merely to lay them out, for what they are worth. Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.
3NxGenSentience9y
It may have been a judgement call by the writer (Bostrom) and editor: He is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people's (policymakers, decision makers, basically "the Suits" who run the world) attention? Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.) Try to motivate people about global warming? ("...um....but, but.... well, it might cost JOBS next month, if we try to save all future high-level earthly life from extinction... nope, the price [lost jobs] of saving the planet is obviously too high...") Want to get non-thinkers to even pick up the book and read the first chapter or two... talk about money. If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.

----------------------------------------

Of course, strictly speaking, what I just said was tangent to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered. But my point about PR considerations was worth making, and also, Katja or someone did, I think, mention maybe formulating a reading guide for Bostrom's book, in which case any such author of a reading guide might be thinking already about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.
2lukeprog9y
Agree.

There's a small chance I might be there - if not, see you next time!

I would be interested, but I'd prefer the day before or so.

Somewhat relevant: http://golem.ph.utexas.edu/category/2007/05/linear_algebra_done_right.html

I've also seen this book described as "one of those texts that feels like a piece of category theory even though it’s not actually about categories", which is high praise.

The cost here might be someone implementing a technical solution.

Are minor nuisances never worth solving?

0Gunnar_Zarncke10y
Not if the cost exceeds the benefits.

I understand. Nevertheless, discussion so far hasn't gotten anywhere. Perhaps downvoting meetup threads would put some pressure on people involved in meetups to resolve the matter.

As of now, I haven't downvoted any meetup-related thread.

I'm the guy who posts the DC meetups. While I'm sympathetic to the problem, I'm not sure what I can do to help, aside from not posting meetups at all (not really an option). Pressuring me won't help you if I can't do anything.

Is it OK for me to downvote meetup threads if I don't want to see them?

0Gunnar_Zarncke10y
I understand that dissatisfaction with some minor nuisance (and the meetup notices are a minor nuisance, given that you can scroll them away with the flick of a finger) can cause your brain to get into a negative feedback loop where the dissatisfaction gets moved around and increased as long as it is not solved (see also http://lesswrong.com/lw/21b/ugh_fields/). But see through this. It is a minor nuisance. You are above this. Don't let your dissatisfaction fool you. Jedi mind trick: There is no problem with meetups. Scroll on.

I don't know how other meetups go, but my local meetup is based on the fact that members of the group volunteer to lead the meetup on a week-by-week basis. The person who volunteers puts in some extra amount of their time to ensure that there is a good topic. These people keep the meetups going, and are doing a service for the rationality community.

These people should not be punished with negative karma. If anything, we should be awarding karma for those people who make meetup posts.

Your complaint is really that there is no separation between meetup and non-meetup posts, and by downvoting meetup posts, you are punishing innocent volunteers.

3James_Miller10y
A core long-term goal of LessWrong is to build a rationalist community, so a necessary condition for a downvote should be that a post doesn't advance this goal.

I think not, unless there are only very specific meetup threads that you don't want to see. E.g. ones with no location in the title.

Any individual meetup thread is very valuable for a small number of people, and indifferent-to-mildly-costly to a large number of people. Votes allow you to express a preference direction but not magnitude, which doesn't actually capture preferences in this case.

3[anonymous]10y
Downvoting, by itself, isn't going to stop anyone from posting meetup threads. That said, there has been discussion/complaints about meetup spam before, so you're not alone. edit: clarify wording

Thanks for the piece of counter-data!

I might look into the book, but the naming convention is a big turnoff.

I already mentioned what Halmos' stance was. What I'm more interested in is how it is possible to work without examples.

1Stabilizer10y
The point I was trying to make is that it may not be necessary to have "a large stack of examples". It might instead be much more useful to have a couple of "prototypal concrete examples...a root example". Kontsevich seems to have similar thought patterns.

That seems somewhat surprising coming from Gowers.

[This comment is no longer endorsed by its author]

No, of course not, but it still might make sense to wonder why it's so.

2ESRogs10y
Yeah, fair point.

Whereas I can (somewhat) make sense of thinking with examples, it seems hard to describe what exactly it means to think with general abstract concepts.

Can you provide some more background? What is a morphism of computations?

0badtheatre10y
Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to "pointwise causal isomorphism": We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, so that if computation A is emulating a brain and we all agree that it contains a consciousness, we can be sure that B does as well.

On the other hand, allowing any invertible function to be a morphism doesn't seem strict enough. For one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers.

I don't understand why this is a counterexample.

0badtheatre10y
Neither do I, but my intuition suggests that a static copy of a brain/the software necessary to emulate it plus a counter wouldn't cause that brain to experience consciousness, whereas actually running the simulation as a reversible computation would...
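The construction from the earlier comment (putting a reversible computation in 1-1 correspondence with a program that just stores the initial state and ticks a counter) can be sketched as follows; the toy permutation and all names here are my own illustration, not a formal definition of computation morphisms:

```python
# Sketch: why "any invertible correspondence" seems too weak a notion of one
# computation simulating another. Computation B stores a frozen copy of the
# initial state and a tick counter, yet its trajectory maps invertibly onto
# the trajectory of a genuine reversible computation A.

def step_a(x: int) -> int:
    """A toy reversible computation: an affine permutation of Z_97."""
    return (x * 3 + 5) % 97  # invertible, since gcd(3, 97) == 1

def trajectory_a(x0: int, n: int) -> list:
    """The states computation A passes through in n steps."""
    states, x = [x0], x0
    for _ in range(n):
        x = step_a(x)
        states.append(x)
    return states

def trajectory_b(x0: int, n: int) -> list:
    """Computation B: keeps a frozen copy of x0 and ticks off the naturals."""
    return [(x0, t) for t in range(n + 1)]

def correspondence(state_b: tuple) -> int:
    """Map from B-states to A-states: replay A for t steps from the copy."""
    x0, t = state_b
    return trajectory_a(x0, t)[-1]

a, b = trajectory_a(7, 20), trajectory_b(7, 20)
# A's states are all distinct on this run, so the correspondence restricts to
# a bijection between the two trajectories, state by state...
assert len(set(a)) == len(a)
assert [correspondence(s) for s in b] == a
# ...even though B never "runs" A at all, which is the intuitive worry above.
```

The correspondence is invertible on these trajectories, yet intuitively B is just a static copy plus a counter, which is exactly why invertibility alone seems like the wrong criterion for consciousness-preserving simulation.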

What fanfics should I read (perhaps as a HPMOR substitute)?

8MathiasZaman10y
There's a new subreddit dedicated to rationalist fiction. You can check out stories linked there. I'm currently reading Rationalising Death, a Death Note fanfic, and it's pretty good even though I haven't seen the anime on which it's based. I'm also one-third into Amends, or Truth and Reconciliation, which is a decent look at how Harry Potter characters would logically react to the end of the Second Wizarding War. So far no idiot balls and pretty good characterization.
9tgb10y
If you haven't yet taken EY's suggestion in the author's notes to read Worm, do so. It's original fiction, but you probably don't mind. Edit: also, this might belong in the media thread?

Harry Potter and the Natural 20.

Object-level response: To the Stars. Meta-level: check the monthly media thread archives and/or HPMOR's author notes. They have lots of good suggestions and in-depth reviews.

4Alsadius10y
I quite enjoyed https://www.fanfiction.net/s/2857962/1/Browncoat-Green-Eyes (Yes, it's a Harry Potter/Firefly crossover. It's much, much better than the premise makes it sound)

Reading Model Theory was the first time in my life where I read a chapter of a textbook and it made absolutely no sense. In fact, it took about three passes per chapter before they made sense.

I find this experience common, and I'm sure most working mathematicians (as opposed to mere students) would confirm it. One of the most important things is not getting discouraged in the face of total incomprehensibility.

That doesn't seem to be relevant, as Krav Maga teaches exactly such things as targeting the throat (or groin).
