Thomas Kwa

Student at Caltech.

Thomas Kwa's Comments

How to actually switch to an artificial body – Gradual remapping

Minor nitpick, but in section IV I think it's unlikely that hemispherectomy and brain shrinkage can stack linearly while preserving a human-like consciousness. Successful hemispherectomy requires the preserved half to rewire some structures to take over the functions normally carried out by the removed half (source), whereas halving the amount of tissue greatly reduces the resources available for that rewiring. The situation reminds me of neural net compression: we can prune or quantize, compressing the net by some factor, but the techniques don't stack perfectly because they eliminate some of the same redundancies.
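To make the analogy concrete, here is a minimal sketch of pruning and quantization applied to one random weight matrix. All numbers (90% pruning, int8 quantization, uint16 indices) are illustrative choices of mine, and this only shows the storage arithmetic; the accuracy interaction the analogy points at would require a trained model to observe.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

# Magnitude pruning: zero out the 90% of weights smallest in absolute value.
threshold = np.quantile(np.abs(w), 0.9)
pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# 8-bit quantization of the surviving weights (symmetric, per-tensor scale).
scale = np.abs(pruned).max() / 127
quantized = np.round(pruned / scale).astype(np.int8)

# Storage: sparse int8 values + uint16 indices, vs. the dense float32 matrix.
nonzero = int((quantized != 0).sum())
dense_bytes = w.size * 4
sparse_bytes = nonzero * (1 + 2)   # int8 value + uint16 index per entry
print(dense_bytes / sparse_bytes)  # combined ratio, well short of 10x * 4x
```

Even on pure storage the factors fail to multiply, because the sparse representation must pay for indices; the accuracy cost of combining the two is a separate, harder-to-stack budget.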

Slightly more relevant is the evolutionary argument that any easy change to the brain that decreases its power consumption or volume must give up something very evolutionarily valuable, since brains use a huge amount of energy and increase deaths from childbirth. That is, the architecture of meat brains isn't too inefficient. While this doesn't refute the idea of transferring consciousness gradually, it makes me skeptical that we can do so with general-purpose hardware economically.

ChristianKl's Shortform

This one?

It was a shock on arriving at the New York Times in 2004, as the paper’s movie editor, to realize that its editorial dynamic was essentially the reverse. By and large, talented reporters scrambled to match stories with what internally was often called “the narrative.” We were occasionally asked to map a narrative for our various beats a year in advance, square the plan with editors, then generate stories that fit the pre-designated line.

For me, an article linking to this one was the fifth Google result for "new york times narrative driven".

Thomas Kwa's Shortform

The most efficient form of practice is generally to address one's weaknesses. Why, then, don't chess/Go players train by playing against engines optimized for this? I can imagine three types of engines:

  1. Trained to play more human-like sound moves (soundness as measured by stronger engines like Stockfish, AlphaZero).
  2. Trained to play less human-like sound moves.
  3. Trained to win against (real or simulated) humans while making unsound moves.

The first tool would simply be an opponent when humans are inconvenient or not available. The second and third tools would highlight weaknesses in one's game more efficiently than playing against humans or computers. I'm confused about why I can't find any attempts at engines of type 1 that apply modern deep learning techniques, or any attempts whatsoever at engines of type 2 or 3.

What will the economic effects of COVID-19 be?

Assumption: we shouldn't expect to be able to make strong quantitative predictions unless we also expect to be able to get rich playing the markets.

I'm confused about your distinction between quantitative and qualitative. The way I understand "quantitative", there isn't profit to be made from every such prediction. For example, if copper alloys become widely used in hospitals and consumer products for their antimicrobial properties, the impact on copper prices would be tiny, and the companies making these products are privately held.

Coronavirus: Justified Practical Advice Thread

That study is on inanimate surfaces, and benzalkonium chloride is the main active ingredient in Clorox, Lysol, and other disinfectant wipes. So it might be a good idea to switch to isopropanol wipes for surfaces too.

When to Donate Masks?

Can't we tell when the marginal utility of a mask at a certain hospital is high (e.g. by observing that they are totally out of masks and plan to reuse any donated ones) and donate at that point?

[Team Update] Why we spent Q3 optimizing for karma

Clause #3 was chosen to heighten the reward/punishment for especially good or especially bad content. We’re inclined to think that a single 100-karma post is worth more than four 25-karma posts, and the exponentiation reflects this. (For comparison: 25^1.2 is 47.6, 100^1.2 is 251.2. So in our metric, one 100-karma post was worth about 30% more than four 25-karma posts.)

Is the idea behind this that a high-quality post can provide more than a single strong-upvote of value per person, and that total karma is a proxy for this excess value?
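The quoted arithmetic is easy to check numerically; the function name `post_value` below is mine, not the team's:

```python
def post_value(karma: float, exponent: float = 1.2) -> float:
    """Value of a post under the superlinear karma metric karma**1.2."""
    return karma ** exponent

one_big = post_value(100)        # ~251.2
four_small = 4 * post_value(25)  # ~190.4
print(one_big / four_small)      # ~1.32, i.e. roughly 30% more
```

The exponent being only slightly above 1 means the superlinearity bites mostly at large karma gaps; two 50-karma posts and one 100-karma post come out much closer together.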

Thomas Kwa's Shortform

Eliezer Yudkowsky wrote in 2016:

At an early singularity summit, Jürgen Schmidhuber, who did some of the pioneering work on self-modifying agents that preserve their own utility functions with his Gödel machine, also solved the friendly AI problem. Yes, he came up with the one true utility function that is all you need to program into AGIs!

(For God’s sake, don’t try doing this yourselves. Everyone does it. They all come up with different utility functions. It’s always horrible.)

His one true utility function was “increasing the compression of environmental data.” Because science increases the compression of environmental data: if you understand science better, you can better compress what you see in the environment. Art, according to him, also involves compressing the environment better. I went up in Q&A and said, “Yes, science does let you compress the environment better, but you know what really maxes out your utility function? Building something that encrypts streams of 1s and 0s using a cryptographic key, and then reveals the cryptographic key to you.”

At first it seemed to me that EY was refuting the entire idea that "increasing the compression of environmental data" is intrinsically valuable. This surprised me because my intuition says it is intrinsically valuable, though less so than other things I value.

But EY's larger point was just that it's highly nontrivial for people to imagine the global maximum of a function. In this specific case, building a machine that encrypts random data seems like a failure of embedded agency rather than a flaw in the idea behind the utility function. What's going on here?
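The encryption counterexample can be made concrete with a small sketch. The chained-hash keystream below is my own stand-in for "encrypted 1s and 0s": without the key the stream is incompressible, but learning the 16-byte key compresses 100 KB of observations down to almost nothing, a huge one-step gain in "compression of environmental data".

```python
import os
import zlib
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Deterministic pseudorandom stream: repeated SHA-256 hashing of the key."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

key = os.urandom(16)
data = keystream(key, 100_000)

# Without the key, the stream looks random and zlib gains essentially nothing.
assert len(zlib.compress(data)) > 0.99 * len(data)

# With the key, the entire stream is described by 16 bytes plus the generator.
assert keystream(key, 100_000) == data
```

The agent maximizing compression gain is rewarded for manufacturing exactly this situation, which is the failure mode EY's Q&A comment points at.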

Does the 14-month vaccine safety test make sense for COVID-19?

I think our main confusion is whether Phase 1 trials have to be complete before Phase 2-3 trials start. Surely if Phase 1 trials took 14 months and Phase 2 and 3 trials take additional serial time, there's no way to get the vaccine into mass production within 12-18 months? I'm not 100% sure of this, though.

Does the 14-month vaccine safety test make sense for COVID-19?

I don't think the timeline for Phase 1 trials looks anything like a 14 month delay before Phase 2 trials start.

  • Adele Lopez (https://www.lesswrong.com/users/adele-lopez-1) already mentioned the live vs inactivated vaccine distinction.

  • Metaculus (admittedly not the best source of predictions) gives 45% that a vaccine is distributed starting in 2020. One user gives only 70%, taking into account the high urgency and high risk-tolerance of countries like China.

  • The American NIH says "If the clinical trial enrolls participants as planned, researchers hope to have initial data from the clinical trial within three months." This means either (a) that they're being slightly misleading, or (b) that further trials will start immediately after that point.
