All of Lucie Philippon's Comments + Replies

I was allergic to dust mites when I was a child, which caused a severe asthma attack when I was around 10. I live in France, and the first allergy specialist my mother found prescribed me SLIT, so I guess it's quite a common treatment there. I took it for more than 5 years, and now, 8 years later, I don't have any allergy symptoms.

I filled in the survey! It was a fun way to relax this morning.

Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was looking specifically for personal stories rather than guides on how other people can get into the field.

I was trying to perform intuition flooding: by reading lots of accounts, I hoped to build intuitions about which techniques work for entering the field.

I only managed to find three that somewhat fit what I was looking for:

... (read more)
2 · Jack O'Brien · 8mo
I think this is a good thing to do! I recommend looking up things like "reflections on my LTFF upskilling grant" for similar pieces from lesser-known researchers / aspiring researchers.

blog.jaibot.com does not seem to exist anymore.

I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?

If reactions were tied to posting a comment, such that reactions could not decrease the number of comments, would this make you more likely to support this feature?

Incidentally, thinking about which reaction to put on this comment instead of just upvoting or downvoting made me realize I did not completely understand what you meant, and motivated me to write a comment instead.

I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example, buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.

We're in agreement. I'm not sure what my expectation is for the length of this phase or the size of the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time when productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.

The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time with the tools, but whether there are resources that would be necessary to use those tools, yet difficult to acquire in a short time once the tools are released.

The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good quality posts. Thank you for the feedback!

At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.

Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect we'll be leaving the "world as normal but faster" phase where tools are useful, and then what happens next depends on our alignment plan, I guess.

2 · Daniel Kokotajlo · 10mo
OK, I think we are in agreement then. I think we'll be leaving the "world as normal but faster" phase sooner than you might expect: for example, by the time my own productivity even gets a 3x boost.

I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start building their own dataset at that point.

Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research," according to their website: https://www.conjecture.dev/

They publish regularly on the Alignment Forum and LessWrong: https://www.lesswrong.com/tag/conjecture-org

I also searched their website, and it does not look like Bonsai is publicly accessible. It must be some internal tool they developed.

This post points at an interesting fact: some people, communities, or organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than might first appear from reading the LessWrong/Overcoming Bias/Eliezer history.

However, this post reads more like a Wikipedia article, or a historical overview. It does not read like it has a goal. Is this post making some sort of argument that the current rationalist community is descended from those... (read more)

TIL that the path a new LW user is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated as a personal blog, Medium style?

As I'm typing this, I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (read more)

I vehemently disagree here, based on my personal history, generalizable or not. I will illustrate with the three turning points of my recent life.

First step: I stumbled upon HPMOR, and Eliezer's way of looking straight into the irrationality of all our common ways of interacting and thinking was deeply shocking. It made me feel like he was, in a sense, angrily pointing at me, who worked more like one of the NPCs than like Harry. I heard him telling me: you're dumb, and all your ideals of making intelligent decisions, being the gifted kid and being smarter th... (read more)

Regarding the schedule, when does the event start on Friday and end on Monday? I would like to book my trip now to take advantage of low prices.

1 · UnplannedCauliflower · 2y
On Friday we begin with an optional welcome lunch at noon. About 3-4pm we give out keys to the rooms, and the official beginning is usually about 5pm. Any lectures/workshops/etc. begin in earnest only after dinner on Friday, so it's fairly easy to join any time during the day. The official wrap-up happens on Sunday afternoon, but events happen until Sunday night. On Monday there might be a morning run or workout scheduled, but no more plans at the hostel. Usually on Monday afternoon, evening, and even on Tuesday there are some afterparty-type events locally, e.g. rock climbing, or meeting in a park with drinks and such.

I would love to go, and was pondering quite hard whether to try to find other people interested in this endeavour in Berlin. Sadly, I am not available this weekend. Can I join on Saturday the 30th without going to the first one?

Thank you for the reply. I know that worry is unnecessary; I was rather asking what you would do, if you didn't know for a fact that it was indeed based on GPT-3 or that humans were effectively overseeing it, to determine whether it is an unsafe AGI trying to manipulate the humans using it.

I know that no one could detect a superintelligent AGI trying to manipulate them, but I think it can be non-obvious that a sub-human AGI is trying to manipulate you if you don't look for it.

Primarily, I think that currently no one uses AI systems with the expectation that they could try to deceive them, so they don't apply the basic level of doubt you apply to every human whose intentions you don't know.

1 · the gears to ascension · 2y
content note: I have a habit of writing English in a stream-of-consciousness way that sounds more authoritative than it should, and I don't care to try to remove that right now; please interpret this as me thinking out loud.

I think it's instructive to compare it to the YouTube recommender, which is trying to manipulate you, and whose algorithm is publicly unknown (but must be similar in some important ways to what it was a few years ago when they published a paper about it, for response-latency reasons). In general, an intelligent agent even well above your capability level is not guaranteed to be successful at manipulating you, and I don't see reason to believe that the available paths for manipulating someone will be significantly different for an AI than for a human, unless the AI is many orders of magnitude smarter than the human (which is not looking likely to be a thing that happens soon).

Could Elicit manipulate you? Yeah, for sure it could. Should you trust it not to? Nope. But because its output is grounded in concrete papers and discussion, its willingness to manipulate isn't the only factor. Detecting bad behavior would still be difficult, but the usual process of mentally modeling what incentives might lead an actor to become a bad actor doesn't seem at all hopeless against powerful AIs to me. The techniques used by human abusers are in the training data, would activate first if it were trying to manipulate, and the process of recognizing them is known.

Ultimately, to reliably detect manipulation you need a model of the territory similarly good to the one you're querying. That's not always available, and overpowered search like that used in AlphaZero and successors is likely to break it, but right now most AI deployments do not use that level of search, likely because capabilities practitioners know well that reward model hacking is likely if EfficientZero is aimed at real life. Maybe this thinking-out-loud is useless. Not sure. My mental sampling temperature

Thank you for the heads-up! I joined the meetup group and I am looking forward to new events :)

Hello everyone! My name is Lucie, and I am studying computer science. I'm fascinated by this website and started binge-reading the sequences after finishing HPMOR. With all the information I was exposed to on this website during the last week, I am hyped and thinking frantically about how all of this can change my life goals.

However, I know that for me, only reading more and more posts and getting more and more information will only sustain me for a while. When my hype dies down, I think I will not be as motivated as I am right now to read posts if ... (read more)

4 · ChristianKl · 2y
COVID-19 makes things harder. Once spring arrives and the temperatures outside are warm enough to meet outside again, I will again run open LessWrong meetups in Berlin. Signing up at https://www.meetup.com/LessWrong-Rationality-WaitButWhy-SlateStarCodex-Berlin/ is the most straightforward way to get the information once a new meetup is announced.