simeon_c
Idea: Daniel Kokotajlo probably lost quite a bit of money by not signing an OpenAI NDA before leaving, which I consider a public service at this point. Could some of the funders in the AI safety landscape give him some money or social reward for this? I guess reimbursing everything Daniel lost might be a bit too much for funders, but providing some money, both to reward the act and to incentivize future safety people not to sign NDAs, would have very high value.
A list of some contrarian takes I have:

* People are currently predictably too worried about misuse risks.
* What people really mean by "open source" vs "closed source" labs is actually "responsible" vs "irresponsible" labs, which is not affected by regulations targeting open source model deployment.
* Neuroscience as an outer alignment[1] strategy is embarrassingly underrated.
* Better information security at labs is not clearly a good thing, and if we're worried about great power conflict, probably a bad thing.
* Much research on deception (Anthropic's recent work, trojans, jailbreaks, etc.) is not targeting "real" instrumentally convergent deception reasoning, but learned heuristics. Not bad in itself, but IMO this places heavy asterisks on the results they can get.
* ML robustness research (like FAR Labs' Go stuff) does not help with alignment, and helps moderately with capabilities.
* The field of ML is a bad field to take epistemic lessons from. Note I don't talk about the results from ML.
* ARC's MAD seems doomed to fail.
* People in alignment put too much faith in the general factor g. It exists, and is powerful, but is not all-consuming or all-predicting. People are often very smart but lack social skills, or agency, or strategic awareness, etc., and vice versa. They can also be very smart in a particular area but dumb in other areas. This is relevant for hiring & deference, but less for object-level alignment.
* People are too swayed by rhetoric in general, and alignment, rationality, & EA are too, but in different ways, and admittedly to a lesser extent than the general population. People should fight against this more than they seem to (which is not really at all, except for the most overt of cases). For example, I see nobody saying they don't change their minds on account of Scott Alexander because he's too powerful a rhetorician. Ditto for Eliezer, since he is also a great rhetorician. In contrast, Robin Hanson is a famously terrible rhetorician, so people should listen to him more.
* There is a technocratic tendency in strategic thinking around alignment (I think partially inherited from OpenPhil, but also smart people are probably just more likely to think this way) which biases people towards simpler & more brittle top-down models without recognizing how brittle those models are.

----------------------------------------

1. A non-exact term ↩︎
RobertM
EDIT: I believe I've found the "plan" that Politico (and other news sources) managed to fail to link to, maybe because it doesn't seem to contain any affirmative commitments by the named companies to submit future models to pre-deployment testing by UK AISI.

I've seen a lot of takes (on Twitter) recently suggesting that OpenAI and Anthropic (and maybe some other companies) violated commitments they made to the UK's AISI about granting them access for e.g. pre-deployment testing of frontier models.  Is there any concrete evidence about what commitment was made, if any?  The only thing I've seen so far is a pretty ambiguous statement by Rishi Sunak, who might have had some incentive to claim more success than was warranted at the time.  If people are going to breathe down the necks of AGI labs about keeping to their commitments, they should be careful to only do it for commitments they've actually made, lest they weaken the relevant incentives.  (This is not meant to endorse AGI labs behaving in ways which cause strategic ambiguity about what commitments they've made; that is also bad.)
Anybody know how Fathom Radiant (https://fathomradiant.co/) is doing? They’ve been working on photonics compute for a long time, so I’m curious whether people have any sense of the timelines on which they expect it to have practical effects on compute.

Also, Sam Altman and Scott Gray at OpenAI are both investors in Fathom. Not sure when they invested. I’m guessing it’s still a long-term bet at this point.

OpenAI also recently hired someone who worked at PsiQuantum. My guess is that they are hedging their bets on the compute end and generally looking for opportunities on that side of things. Here’s his bio:

Ben Bartlett: "I'm currently a quantum computer architect at PsiQuantum working to design a scalable and fault-tolerant photonic quantum computer. I have a PhD in applied physics from Stanford University, where I worked on programmable photonics for quantum information processing and ultra high-speed machine learning. Most of my research sits at the intersection of nanophotonics, quantum physics, and machine learning, and basically consists of me designing little race tracks for photons that trick them into doing useful computations."
Quote from Cal Newport's Slow Productivity book: "Progress in theoretical computer science research is often a game of mental chicken, where the person who is able to hold out longer through the mental discomfort of working through a proof element in their mind will end up with the sharper result."


Recent Discussion

Suppose Alice and Bob are two Bayesian agents in the same environment. They both basically understand how their environment works, so they generally agree on predictions about any specific directly-observable thing in the world - e.g. whenever they try to operationalize a bet, they find that their odds are roughly the same. However, their two world models might have totally different internal structure, different “latent” structures which Alice and Bob model as generating the observable world around them. As a simple toy example: maybe Alice models a bunch of numbers as having been generated by independent rolls of the same biased die, and Bob models the same numbers using some big complicated neural net. 

Now suppose Alice goes poking around inside of her world model, and somewhere in there...
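
As a concrete illustration of the die example above, here is a minimal sketch of my own (not from the post): Alice's model has an explicit latent variable (the die's face probabilities), while Bob's is an opaque parametric model fit by gradient ascent, standing in for the "big complicated neural net". Both are fit to the same observed rolls, and despite completely different internal structure they end up quoting essentially the same odds on the next roll. The variable names and the fitting procedure are my own choices for the sketch.

```python
# A minimal sketch of the toy example above (not from the post): Alice's
# model has an explicit latent (the die's face probabilities), while Bob's
# is an opaque parametric model fit by gradient ascent, standing in for a
# "big complicated neural net". Both see the same rolls and end up making
# roughly the same predictions about the next roll.
import numpy as np

rng = np.random.default_rng(0)
true_bias = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])  # a biased six-sided die
rolls = rng.choice(6, size=10_000, p=true_bias)        # the shared observations

# Alice: explicit latent variable = per-face probabilities, estimated by
# counting (posterior mean under a uniform Dirichlet prior).
alice_probs = (np.bincount(rolls, minlength=6) + 1) / (len(rolls) + 6)

# Bob: unnormalized scores ("logits") fit by gradient ascent on the average
# log-likelihood -- no explicit "die bias" latent anywhere in his model.
logits = np.zeros(6)
freq = np.bincount(rolls, minlength=6) / len(rolls)
for _ in range(500):
    bob_probs = np.exp(logits) / np.exp(logits).sum()
    logits += 0.5 * (freq - bob_probs)  # gradient of the mean log-likelihood

bob_probs = np.exp(logits) / np.exp(logits).sum()

# Despite totally different internal structure, they quote (roughly) the
# same odds on any bet about the next roll.
print("Alice:", np.round(alice_probs, 3))
print("Bob:  ", np.round(bob_probs, 3))
assert np.allclose(alice_probs, bob_probs, atol=0.01)
```

The gradient-ascent-on-logits step is just a stand-in for "some big complicated model"; any fitting procedure that ends up matching the empirical frequencies would give the same predictive agreement while sharing none of Alice's internal latent structure.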

tailcalled
One thing I'd note is that AIs can learn from variables that humans can't learn much from, so I think part of what will make this useful for alignment per se is a model of what happens if one mind has learned from a superset of the variables that another mind has learned from.

This model does allow for that. :) We can use this model whenever our two agents agree predictively about some parts of the world X; it's totally fine if our two agents learned their models from different sources and/or make different predictions about other parts of the world.

I am trying to gather a list of answers/quotes from public figures to the following questions:

  • What are the chances that AI will cause human extinction?
  • Will AI automate most human labour?
  • Should advanced AI models be open source?
  • Do humans have a moral duty to build artificial superintelligence?
  • Should there be international regulation of advanced AI?
  • Will AI be used to make weapons of mass destruction (WMDs)?

I am writing them down here if you want to look/help: https://docs.google.com/spreadsheets/d/1HH1cpD48BqNUA1TYB2KYamJwxluwiAEG24wGM2yoLJw/edit?usp=sharing 

Answer by teradimich

I have already tried to put together the most complete collection of quotes here, but it is now very outdated.

Historically, produce shopping mostly happened in open-air markets, but in the US produce is now typically sold in buildings. Most open-air produce sales are probably at farmers markets, but these focus on the high end. I like that Boston's Haymarket is more similar to the historical model: competing vendors selling conventional produce relatively cheaply.

It closes for the weekend at 7pm on Saturdays, and since food they don't sell by the end of the market is mostly going to waste, they start discounting a lot. You can get very good deals, though you need to be cautious: what's left at the end is often past the end of its human-edible life.

Today Lily was off on a scouting trip, and I asked Anna what she wanted to do. She remembered that a previous time Lily was...

I went a few times but eventually got grossed out by all the mold. (At least they don't sell live pangolins there.)

Marthinwurer
This is a fun slice of life. I'm glad y'all had a good time!
Dagon
So, https://en.wikipedia.org/wiki/PageRank ?

Oh right, lol, good point.


The curious tale of how I mistook my dyslexia for stupidity - and talked, sang, and drew my way out of it. 

Sometimes I tell people I’m dyslexic and they don’t believe me. I love to read, I can mostly write without error, and I’m fluent in more than one language.

Also, I don’t actually technically know if I’m dyslexic cause I was never diagnosed. Instead I thought I was pretty dumb, but that if I worked really hard no one would notice. Later I felt inordinately angry about why anyone could possibly care about the exact order of letters when the gist is perfectly clear even if if if I right liike tis.

I mean, clear to me anyway.

I was 25 before it dawned on me that all the tricks...

Lorxus
Maybe I'm just weird, but I totally do sometimes subvocalize, but incredibly quickly. Almost clipped or overlapping to an extent, in a way that can only really work inside your head? And that way it can go faster than you can physically speak. Why should your mental voice be limited by the limits of physical lips, tongue, and glottis, anyway?

Oh interesting! Maybe I'm wrong. I'm more curious about something like a survey on the topic now.

keltan
I think it would be correct to say that therapy was effective for my reading. By the end of primary school I could read at a normal level. However, my reading out loud ability seems not to have improved too much since then. I hadn’t realised until just now. But I still have to memorise how to say new words. I can, with a small effort, look at a simple word I have never encountered and pronounce it. Though, the word has to be quite simple. I host trivia as a side gig, and any question with a name that isn’t spelled traditionally trips me up badly. It can be pretty embarrassing trying to say “Sarrah” and not realising it’s just pronounced “Sarah”. That’s the thing that leads me to think, at least with reading out loud, I have to explicitly memorise a word’s pronunciation before I can say it, instead of what I assume others can do, which is just look at a word and know how to say it.

In writing, it was necessity and cultural pressure. By the time I was reading out loud alright I was still writing like “i fond how to Mack a YouTube account” “ken i”. That’s a real quote my mother sent me a few weeks ago. When I realised I wasn’t getting what I wanted (winning MC battles, Reddit upvotes, winning Facebook wars; girls would comment on my spelling and I didn’t want them to), I would look around at the way others were writing things and cargo-cult copy whatever they were doing. Actually, that’s still what I do.

I don’t think it was high intelligence that caused me to notice these fixes. It took far too long to be intelligence. Instead, I think I’m really competitive and like showing off. Eventually I found methods that got the results I was going for. I also watched a lot of JacksFilms YGS https://youtu.be/NARxgXEdlzs?si=1rGyQMAnMxQo0x-2
Shoshannah Tekofsky
This is really good! Thank you for sharing! Competition drive and wanting to achieve certain things are great motivations, and I think in any learning process the motivation one can tap into is at least as important as the actual learning technique. I'm glad you had access to that.

I tend to feel a little confused about the concept of "intelligence", as I guess my post already illustrated, haha. I think the word as we use it is very imprecise for cases like this. I'd roughly expect people with higher general intelligence to be much faster and more successful at finding workarounds for their language processing issues, but I'd also expect the variance in this to be so high as to make plotting your general intelligence against "how quickly did you tame your dyslexia" not make much sense.

Then again, I do agree with a comment somewhere else here that Typical Minding is a thing, and my intuitions here may be wrong because I'm failing to understand what it's like for other minds, and I might have overcorrected due to 25 years of incorrectly concluding I was kind of dumb. Lol.
