From Newcastle, Australia to Berkeley, California. I arrived yesterday for LessOnline. I've had a bit of culture shock, a big helping of being increasingly scared, and quite a few questions. I'll start with those. Feel free to skip them.
These questions are based on warnings I’ve gotten from local non-rationalists. Idk if they’re scared because of the media they consume or because of actual stats. I’m asking these because they feel untrue.
Culture shocks for your enjoyment:
I would bet hard cash that "I’m sure I’ll be at least in the bottom 5% for intelligence at Less Online. I won’t be surprised or hurt if I’ve got the least Gs of people there" is wrong.
But okay, let's just go with your fears and assume for the sake of argument that you're right... Last year, Aella ran a live polling event at Manifest where she asked us, for any given quantity N, to first predict how we compared to the rest of the crowd, then define actual buckets of values and arrange ourselves by those buckets.
(So, e.g., "predict how recently have you exercised compared to others; the most recent, all the way on the left, the least recent, all the way on the right" and then "okay, let's define from that fence to that chair as 'in the last hour', the chair to the steps as 'in the last day', etc." and have folks line up twice based on first their relative guess, then their actual value.)
I was deeply surprised when she had folks line up under that system by SAT and IQ scores. Both were, in general, extremely poorly correlated with how awesome it was to get to talk with a given person.
This post already expresses a great deal of the vibes that make me think it will be awesome to meet you, and that you get a lot of the vibes folks are aiming for, as far as I can tell. Looking forward to meeting ya!
Re safety, I don't know about Oakland, but some parts of SF are genuinely the most dangerous-feeling places I've ever been after dark (normally I just wouldn't go to a place like that, but SF feels very fine until it isn't). If I'm travelling somewhere in SF after dark, I'll check first how dodgy the street entrances are.
Have fun! I won't be going. Some random notes:
I'm skipping the explanation of why dating apps are bad for their users.
To fix these problems, why wouldn't we make a not-for-profit dating app?
Questions
"I'm skipping the explanation of why dating apps are bad for their users."
I think you have failed to understand why dating apps are bad for their users.
On my current models, the core problem isn't the incentives, it's the medium. A very large chunk of attraction for most women is about social behaviors, which just don't come across in a few pictures and a text blurb; they mostly come across in conversations and other social settings. The net result is that women mostly don't find the guys on the apps attractive, even when they'd find many of those same guys attractive if they socialized with them a bit. That, in turn, results in the notorious statistical problems of the apps, and the frustrating experience of most users.
I know lots of people mourn the loss of 2014-era OkCupid, but I would guess the median experience on OkCupid was qualitatively similar to today's apps. I recall the OkCupid data blog itself noting that pictures were mostly the only thing with significant predictive power for people matching up.
What am I missing? This seems like a big ol' hunk of power, social capital, and utility, just sitting on the ground at Central Station!
This sounds like one of the riskiest and most thankless kinds of (legal...?) websites it is possible to run, in exchange for not even a will-o'-the-wisp of profit or payday.
After reading about Craigslist and OnlyFans and Facebook and Instagram and OKCupid and Backpage and Kuro5hin and MeFi and Tea and Mastodon-everything etc, I think I would rather slam a door on my private parts than sign up to try to create or moderate such a site for sub-minimum wage (at a total opportunity cost of likely millions of dollars, exclusive of psychic damage), and have to deal with people like, well, the author of your following comment, and whatever regrettable decisions they will inevitably make.
Note to self: write a post about the novel akrasia solutions I thought up before becoming a rationalist.
I was recently eavesdropping on a loud phone conversation. A man, maybe late 40s, had been cheated on. He was angry. He was arguing with his girlfriend.
I noticed that he argued the way people argue in movies. "Put yourself in my shoes", "No, you listen to me!", "I thought you would be the one!". It hurt to listen to someone in pain.
A few weeks later, I was in a heated discussion with some non-rationalist friends. Long story short, I failed to output the correct emotional signals, which was making them upset. And I noticed the same thing in them: they were copying the way people argue on social media. It felt 'Therapy Culture', for lack of a better descriptor.
But then I noticed it in myself. Do you remember when Sam Altman went on his world tour, doing tens of interviews? I watched every one of them, and that is now how I speak in arguments. I realize this sounds like a bad thing because of what we now know about Sam, but I personally trusted him when he spoke back then.
The moral is, you are what you consume. Be careful.
Thought: confidently saying "(X) has no Manhattan Project" is forgetting how secret the Manhattan Project was.
Which was not terribly secret. The details of the Project were indeed super-secret, to the point where most politicians hadn't known anything, but despite the massive global-scale censorship & secrecy, many people had observed the signs of a major project of some sort, and some got as far as guessing a nuclear bomb specifically. Also, commercial satellites with meter resolution, which could have revealed major facilities or new cities like Los Alamos or Hanford, did not exist then (but overflights, and then satellites, now exist and have helped reveal later top-secret nuclear bomb programs). An AI Manhattan Project, as we currently think of it, would be amusingly similar in footprint (e.g. energy consumption) to the original and often observable from space: all those gigawatts have to go somewhere, after all.* I'm sure you can find plenty more about secrecy breaches in Rhodes.
This was not necessarily all that useful in the context of WWII; of course America had some big secret projects going, everyone did. It was a total world war, and everyone was aware there was a war on. The devil was in the details of what the program was: a failure like the V-2s, or a success like the Enigma decrypts and Manhattan? B...
A potentially good way to prevent low-level criminals from scamming your family and friends with a clone of your voice is to agree on a password that you each must exchange.
An extra layer of security might be to make the password offensive, an infohazard, or politically sensitive. That way, criminals with little technical expertise will have a harder time getting a voice model past corporate language filters.
Good luck getting the voice model to parrot a basic meth recipe!
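In protocol terms this is just a pre-shared secret plus a verbal challenge-response. Here's a minimal sketch of the logic in Python, with a placeholder secret and made-up function name (purely illustrative; the real "implementation" is a human conversation, not code):

```python
# Hypothetical sketch of the family "anti-voice-clone" check.
# The secret must be agreed in person, never over a channel an
# attacker could have recorded.
import hmac

FAMILY_SECRET = "placeholder-phrase-agreed-in-person"  # illustrative only

def caller_is_verified(spoken_password: str) -> bool:
    """Return True only if the caller produced the agreed password.

    hmac.compare_digest is a constant-time comparison; overkill for a
    phone call, but a good habit whenever code checks a secret.
    """
    return hmac.compare_digest(spoken_password.strip().lower(), FAMILY_SECRET)

# Usage: anyone asking for money or sensitive info must pass the check
# before you act on the request.
if not caller_is_verified(input("What's the password? ")):
    print("Hang up and call back on a known number.")
```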
A LessWrong LLM tool I would like: something that can answer "What post was the one where Eliezer talks about logic, or maybe algebra or something? I think it might have been in Planecrash, but idk?"
Sometimes I hit a roadblock in my thinking because I'm trying to remember a specific LW post that covers a skill I'm currently trying to practice. It's quite flow-breaking to try to search the normal way, and current LLMs aren't up to the task: they hallucinate LW posts every time I try.
I'm not entirely sure why, but I find it trivial to get GPT-4o to output harmful content in Advanced Voice Mode (AVM), given that it doesn't have a direct content filter on it (e.g. for NSFW content).
In the span of 30 minutes it gave me 1) instructions for making a pipe bomb to attach to a car, and 2) instructions for releasing a virus into the public without detection (I'll avoid saying more on this).
I have a theory as to why it might be easy for me specifically, but I would like to know: is this the experience that others have with AVM?
From the window of our room, Isabella and I could see the Lightcone staff office. While I won't name anyone directly, it became a joke between Isabella and me that no matter the time of day or night, if we looked over to those windows, there would always be a certain member of the Lightcone staff sitting, working at that computer. Clearly visible, like a busy fish in a lit tank. Outside of that tank, a half-party, half-conference chugged on.
Today is Isabella's and my last day at Lighthaven. I'd like to pay ...
My timelines have now updated to something closer to fast takeoff. In a world like this, how valuable is educating the general public? Claude claims scientists started worrying about the climate in the '50s and '60s, yet it wasn't until the 2010s that we saw meaningful action beginning to take place. Do we have the time to educate?
To be clear, this is more of a question than an opinion that I hold. I am working to form an opinion.
If I had sufficient funds, I would consider whether it would be beneficial to invade a few subreddits and offer $1000 to whoever makes the most viral meme that subtly teaches basic concepts of AI Doom.
This thought stems from a comment on “The Best Lay Argument is not a Simple English Yud Essay”. I have more thoughts, but not much time. If my reasoning is unclear I apologise.
Idk if people around the world have those cardboard "Recycle your phone" boxes that take phones and recycle their components, but I've rummaged through a certain one ~5 different times and found 2 completely functional, factory-reset iPhones.
Surely there’s something better we can do with these than disassembling them?
I'm currently completing psychological studies for credit in my university psych course. The entire time, all I can think is "I wonder if that detail is the one they're using to trick me?"
I wonder how this impacts results. I can't imagine that being in a heightened state of vigilance for deception has no impact.
I have been working on a post in which I propose that the window for establishing healthy media-consumption habits may be closing as content-generation technologies advance, and I suggest that you GTFO before it's too late.
I'm looking for pushback on this thesis statement, to try and strengthen my argument. All comments welcome.
In my Obsidian.md vault I use note aliases to point out to myself when I've said something that indicates that I Am Confused. I've progressively added to and pruned this list over a year. Here are the "Confusion Phrases" I currently use:
aliases: I am confused, I don't understand, This is confusing, This is very confusing, I am quite confused, Wait what, I don't know where I fucked up, I Feel Confused, I notice I am Confused, I'm confused, I don't get it, I just don't get it, How am I meant to, How do I understand this, That's surprising, I'm not sure I un...
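For anyone unfamiliar with the mechanism: in Obsidian, aliases live in a note's YAML frontmatter, so typing or linking any of these phrases resolves to the same note. A minimal sketch of what that frontmatter looks like (the note name is my guess and the alias list is abbreviated from the full list above; the exact layout of the real file may differ):

```yaml
---
# Frontmatter of the "I Am Confused" note: linking any alias
# below resolves to this note.
aliases:
  - I am confused
  - I don't understand
  - This is confusing
  - Wait what
  - I notice I am Confused
  - That's surprising
---
```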
I recently saw a tweet that asked GPT what micro-skills a person should master. I asked Claude a similar question, though I preloaded the conversation with "things a modern-day Aristotle would teach a 24-year-old student".
I think this is probably a valuable exercise to undertake yourself. I'm providing my prompt and Claude's response as an 'appetiser' to get you interested in the task.
—
...Prompt: Wonderful. Now let's think of skills less utilized by society. What are some more niche micro-skills that you would want a person you are tutoring to have? And this is give...
If you'd like an explanation from Claude that starts quite basic and builds up, I have had great success with the following phrase as a "Style" in the app, no changes necessary.
"Wait, start from the baseline and work your way up to the explanation"
Perhaps a silly question, but does the recent "Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs" paper imply that people calling a model good or bad online results in a self-fulfilling prophecy?
e.g.
Edit: Oops, I didn't realize Alice.a...
I've seen a lot about GPT-4o being kinda bad, and I've experienced that myself. This surprises me.
Now I will say something that feels like a silly idea. Is it possible that having the audio/visual part of the network cut off results in 4o's poor reasoning? As in, the whole model does some sort of audio/visual reasoning, but we don't have access to the whole model, so it can't reason the way it was trained to.
If that is the case, I'd expect scores on benchmarks to shoot up when those parts are publicly released?
Do people smarter and more informed than me have predictions about this?