Morpheus

Comments

How do you write original rationalist essays?

Interesting! I am not quite sure what exactly you want to point towards. For example, I've been very impressed when people like Eliezer or Scott came up with concept handles for things like Generalization From Fictional Evidence. But I am not sure this is the kind of "original thought" you mean.

Can you give one example of a train of thought in a post that impressed you in a way that made you feel you couldn't produce something qualitatively similar? Or do you feel like this would be hard to do because the kind of "originality" you're talking about is more expressed in how it fits into the overarching worldview of a person? Or something else?

Yudkowsky and Christiano discuss "Takeoff Speeds"

[Yudkowsky][23:25]

there's a lot of noise in a 2-stock prediction.

[Christiano][23:25]

I mean, it's a 1-stock prediction about nvidia

I didn't get that part and thought others might not have either. At first I thought "2-stock"/"1-stock" was some jargon I didn't know related to shorting stocks. But as far as I can tell, this simply means that Yudkowsky expected that Christiano had invested in both Nvidia and, more heavily, TSMC, but Christiano had just invested in TSMC.

Ngo and Yudkowsky on AI capability gains

Maybe one set of homework exercises like that would be showing you an agent, including a human, making some set of choices that allegedly couldn't obey expected utility, and having you figure out how to pump money from that agent (or present it with money that it would pass up).

Or one could just watch this minutephysics video?
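To make the homework exercise concrete, here is a minimal sketch (in Python, with made-up items and a made-up fee, none of which are from the quoted passage) of the classic money pump against an agent with cyclic preferences: the agent pays a small fee for every "upgrade", and trading around the cycle returns it to its starting item with less money each time.

```python
# Toy money pump against an agent with cyclic preferences (illustrative
# items and fee, not from the original discussion).

# The agent strictly prefers A over B, B over C, and C over A (a cycle),
# and will pay a small fee to trade its current item for one it prefers.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred)

def accepts_trade(current, offered):
    """The agent accepts any trade to an item it strictly prefers."""
    return (offered, current) in prefers

fee = 1          # price the agent pays for each "upgrade"
holding = "C"    # item the agent starts with
extracted = 0

# Offer upgrades around the cycle: after three trades the agent holds C
# again, but has paid the fee three times.
for offered in ["B", "A", "C"]:
    if accepts_trade(holding, offered):
        holding = offered
        extracted += fee

print(f"Agent ends up holding {holding} again and has paid {extracted} in fees.")
```

Repeating the loop extracts money indefinitely, which is the sense in which cyclic preferences are exploitable.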

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Thanks, your numbered list was very helpful in encouraging me to go through the claims. Just two things that stood out to me:

39 Nothing we can do with a safe-by-default AI like GPT-3 would be powerful enough to save the world (to ‘commit a pivotal act’), although it might be fun. In order to use an AI to save the world it needs to be powerful enough that you need to trust its alignment, which doesn’t solve your problem.

  • What exactly makes people sure that something like GPT would be safe/unsafe?
  • If what is needed is some form of insight/breakthrough: some smarter version of GPT-3 seems really useful? The idea that GPT-3 produces better poetry than me while GPT-5 could help come up with better alignment ideas doesn't strongly conflict with my current view of the world?

Worth noting that the more precise version of #12 is substantially more optimistic than #12 as stated explicitly here.

#12:

“An aligned advanced AI created by a responsible project that is hurrying where it can, but still being careful enough to maintain a success probability greater than 25%, will take the lesser of (50% longer, 2 years longer) than would an unaligned unlimited superintelligence produced by cutting all possible corners.”

This might come across as optimistic if this were your median alignment difficulty estimate, but instead Eliezer is putting 95% on this, which on the flip side suggests a 5% chance that things turn out to be easier. This seems rather in line with "Carefully aligning an AGI would at best be slow and difficult, requiring years of work, even if we did know how."

Morpheus's Shortform

I mean, it definitely means that you are good... but is it 1:100 good or 1:1000000 good? In high school both are impressive, but later in life you are going to compete against people who often were also the best in their classes.

Update after a year: I am currently studying CS, and I feel like I got kind of spoiled by reading "How to be a Straight-A Student", which was mostly aimed at US college students; it was kind of hard to sort out which advice would apply in Germany, and the book made the whole thing seem easier than it actually is. I am doing OK, but my grades aren't great (my best guess is that in pure grit + IQ I'm somewhere in the upper 40%). In the end, I decided that the value of this information wasn't so great after all, and now I am focusing more on how to actually gain career capital and on getting better at prioritizing on a day-to-day basis.

Resurrecting all humans ever lived as a technical problem

I don't see a real difference between your Method 2 and Method 3, excluding time travel? Are you just trying to emphasize that there might be unknown unknowns, or do you mean something different?

What is the evidence on the Church-Turing Thesis?

I agree with most of this except for:

There are shadows of a possibility that there might be interesting things that go a little beyond computation or efficient operation. To put it provocatively: if you have advice that has an "asspull" in it, then that is not a valid algorithm. One example could be "1. Try a thing. 2. If it fails, try another thing". One can turn this into a good algorithm with the flavor of "1. Enumerate all the possible answers. 2. Check each". For some mathematical tasks it might be that you just do something and something ends up working; there might not be a method to come up with mathematical discoveries.

I am not sure what you mean by that. Are you actually suggesting that brains sometimes might do things that could not be done by any Turing machine? (Which I don't find very plausible, though on reflection, if there is something in the universe that we don't understand yet, it's probably brains, so if we were searching for something that couldn't be modeled by a Turing machine, that would be the right place to look?)

Or that there's no algorithm that can discover all of "math"?

In that case I'd like to know what you mean by that, and whether you can give a specific example of a theorem for which a proof exists (in some system? Provability was the subject that got skipped in our class due to a shortened corona semester, and I regret not knowing enough about this now) but could not be found by a Turing machine, or maybe point me to the right place/resource to learn more about this.
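For what it's worth, my possibly naive picture is that for any formal system whose proofs are finite strings an algorithm can check, brute-force enumeration will eventually find any proof that exists. A toy sketch of that enumerate-and-check shape (the "proof checker", alphabet, and target here are trivial stand-ins, not a real proof system):

```python
# Toy sketch of search by enumeration: if a valid "proof" exists as some
# finite string and validity is mechanically checkable, enumerating all
# strings in order of length will eventually reach it.

from itertools import count, product

ALPHABET = "01"  # stand-in alphabet for illustration

def is_valid_proof(candidate, target):
    """Stand-in checker: accepts exactly when the candidate equals the target."""
    return candidate == target

def search(target):
    """Enumerate strings by increasing length until the checker accepts one."""
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate, target):
                return candidate  # terminates whenever a valid string exists

print(search("0110"))  # finds "0110" after checking all shorter strings
```

So my current guess is that "a proof exists but no Turing machine can find it" would require something stranger than ordinary proof search, which is part of why I'd like a concrete example.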

What is the evidence on the Church-Turing Thesis?

Our brains do not only sense and interact with our environment, they also sense and control our own bodies. And our bodies, at numerous levels, down to our individual cells, sense and control their own status.

I find this a bit confusing. I don't consciously control most of what's going on in my body: I have no sense of the status of my mitochondria, or of any individual cells that aren't specifically developed for sensing. So how is this related to consciousness?

It also convinces me that consciousness is not programmable. It must always self-develop in, not just a brain, but a body that it can control, and use to affect the world it lives in.

I don't see why these criteria would need to be tied to each other. A self-driving car is programmable and has a body it controls and affects the world with, so it kind of does not fit into that picture.

Just phrasing these questions convinces me that the Turing machine model of consciousness fails, that consciousness is not an algorithm, and is not remotely computable.

I don't find just phrasing these questions very convincing.

Could all this sensing, all these preferences, and all the control mechanisms, operate off that one tape, threading back and forth through the reader?

I mean, yes, in principle? Turing machines are allowed unlimited memory and unlimited time. With those assumptions, operating from a single tape is not very limiting (this xkcd comic illustrates this well). This is not to rule out that there might be other reasons why processes in the world (or conscious experience, for that matter) are not computable.
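To illustrate what "operating off that one tape" amounts to, here is a minimal sketch of a single-tape Turing machine simulator in Python; the states, symbols, and the little bit-flipping rule table are made up for the example:

```python
# Minimal single-tape Turing machine simulator (illustrative, not from the post).
# The example states, symbols, and rule table are assumptions for this sketch.

from collections import defaultdict

def run_tm(transitions, tape, start_state, accept_states, max_steps=10_000):
    """Run a machine given as {(state, symbol): (new_state, write, move)}.
    move is -1 (left) or +1 (right). Returns (final_state, tape_cells)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    state, head, steps = start_state, 0, 0
    while state not in accept_states and steps < max_steps:
        symbol = cells[head]
        if (state, symbol) not in transitions:
            break  # halt: no applicable rule
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
        steps += 1
    return state, cells

# Example machine: walk right, flipping 0s and 1s, and accept at the first blank.
flip = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
state, cells = run_tm(flip, "0110", "scan", {"done"})
print(state, "".join(cells[i] for i in range(4)))  # prints: done 1001
```

Everything the machine "knows" lives in the finite rule table plus the tape contents, which is the sense in which one tape threading back and forth is enough in principle.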

What is the evidence on the Church-Turing Thesis?

I fleshed out a bit more what I was imagining:

Take a particular Turing machine (for example, one that recognizes the language ). As long as you then limit the number of transitions/time and the length of the string, you can construct a finite state machine that recognizes the same language (for example ).

Naively, I'd imagine that for any particular Turing machine it should be possible to give an inductive rule for constructing the (n+1)-transition and (k+1)-memory finite state machine from the n-time and k-memory one? In that case, I'd imagine specifying a kind of "infinite state machine" by some (1-memory, 1-time) state machine plus two induction rules for extending the state machine as long as no terminating state is reached.
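A rough sketch of the bounded case I have in mind: once you fix a maximum tape length, the configurations (state, head position, tape contents) form a finite set, so the one-step transition function can be tabulated like a finite state machine. The toy machine and the bounds below are just illustrative assumptions, not anything from the original question:

```python
# Rough sketch: with a bound on tape length, every configuration of a Turing
# machine (state, head position, tape contents) lives in a finite set, so the
# step function can be written down as a finite transition table.

from itertools import product

def bounded_configurations(states, symbols, tape_len):
    """All configurations of a machine restricted to a tape of fixed length."""
    for state in states:
        for head in range(tape_len):
            for tape in product(symbols, repeat=tape_len):
                yield (state, head, tape)

def step(transitions, config):
    """One transition of the bounded machine; None if it halts or leaves the tape."""
    state, head, tape = config
    rule = transitions.get((state, tape[head]))
    if rule is None:
        return None
    new_state, write, move = rule
    new_tape = tape[:head] + (write,) + tape[head + 1:]
    new_head = head + move
    if not 0 <= new_head < len(tape):
        return None
    return (new_state, new_head, new_tape)

# Toy machine: flip bits while moving right (made up for the example).
flip = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
}
configs = list(bounded_configurations({"scan", "done"}, "01_", tape_len=3))
table = {c: step(flip, c) for c in configs}
print(len(configs), "configurations in the finite transition table")
```

Growing the bound from k to k+1 just enlarges this finite table, which is the kind of induction rule I was gesturing at.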

What is the evidence on the Church-Turing Thesis?

Thanks for the answer!

Human brains are finite state machines. A Turing machine has unlimited memory and time.

Oops! You're right, and it's something that I used to know. So IIRC, as long as your tape (and your time) is not infinite, you still have a finite state machine, so Turing machines are kind of finite state machines taken to the limit of infinite tape and time, is that right?
