People sometimes ask me if I think quantum computing will impact AGI development, and my usual answer has been that it likely won't play much of a role. However, photonics likely will.
Photonics was one of the subfields I studied and worked in while I was in academia doing physics.
In the context of deep learning, photonics means using light (photons) for efficient computation in neural networks, while quantum computing exploits quantum-mechanical properties for computation.
There is some overlap between quantum computing and photonics, which can sometimes be confusing. There's even a subfield called Quantum Photonics, which merges the two. However, they are two distinct approaches to computing.
I'll go into more detail later, but OpenAI recently hired someone who, at PsiQuantum, worked on "designing a...
Photonic devices can perform certain operations, such as matrix multiplications (obviously important for deep learning), more efficiently than electronic processors.
In practice, no, they can't. Optical transistors are less efficient, and analog matrix multiplies using light are less efficient. There's no recent lab-scale approach that beats semiconductor transistors either.
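For concreteness, the operation in dispute is just a dense matrix multiply, the workhorse of a fully connected neural-network layer. A minimal sketch (the array shapes here are illustrative, not tied to any particular photonic or electronic hardware):

```python
import numpy as np

# The core operation photonic accelerators target: a dense matrix multiply,
# as performed by a single fully connected neural-network layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # layer weights: 4 outputs, 3 inputs
x = rng.standard_normal(3)        # input activations

y = W @ x                         # one multiply-accumulate per weight

# An analog optical implementation would compute the same product, so its
# correctness can be checked against the electronic result element-wise:
manual = np.array([sum(W[i, j] * x[j] for j in range(3)) for i in range(4)])
assert np.allclose(y, manual)
```

Whether light or silicon computes it, the result is identical; the debate above is purely about the energy and efficiency of performing these multiply-accumulates.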
It might be good on the current margin to have a norm of publicly listing any non-disclosure agreements you have signed (e.g. on one's LW profile), along with their rough scope, so that other people can model what information you're committed to not sharing, and can flag if it covers anything beyond the details of technical research (e.g. social relationships, conflicts, or criticism).
I have added the one NDA that I have signed to my profile.
The curious tale of how I mistook my dyslexia for stupidity - and talked, sang, and drew my way out of it.
Sometimes I tell people I’m dyslexic and they don’t believe me. I love to read, I can mostly write without error, and I’m fluent in more than one language.
Also, I don’t actually technically know if I’m dyslexic, because I was never diagnosed. Instead I thought I was pretty dumb, but that if I worked really hard no one would notice. Later I felt inordinately angry that anyone could possibly care about the exact order of letters when the gist is perfectly clear even if if if I right liike tis.
I mean, clear to me anyway.
I was 25 before it dawned on me that all the tricks...
I’ve got a few questions.
Sorry that’s a lot of questions. I’ve bee...
I hate the idea of deciding that something on my to-do list isn’t that important, and then deleting it off my to-do list without actually doing it. Because once it’s off my to-do list, then quite possibly I’ll never think about it again. And what if it’s actually worth doing? Or what if my priorities will change such that it will be worth doing at some point in the future? Gahh!
On the other hand, if I never delete anything off my to-do list, it will grow to infinity.
The solution I’ve settled on is a priority-categorized to-do list, using a kanban-style online tool (e.g. Trello). The left couple columns (“lists”) are very active—i.e., to-do list...
Yeah, most of the time I’ll open my to-do list and just look at one of the couple leftmost columns, and the column has maybe 3 items; then I’ll pick one and do it (or pick a few and schedule them for that same day).
Occasionally I’ll look at a column farther to the right, and see if any items ought to be moved left or right. The further right the column, the less often I’m checking it.
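The column scheme above can be sketched as a tiny data structure. The column names, items, and review notes here are illustrative assumptions, not anything Trello-specific:

```python
from collections import OrderedDict

# Priority-categorized to-do list: leftmost columns are most active and
# checked most often; rightmost columns are checked rarely.
todo = OrderedDict([
    ("today",     ["reply to email"]),    # checked daily
    ("this week", ["draft report"]),      # checked every few days
    ("someday",   ["learn to juggle"]),   # checked rarely
])

def move(item, from_col, to_col):
    """Move an item between columns, e.g. leftward when its priority rises."""
    todo[from_col].remove(item)
    todo[to_col].append(item)

# During an occasional review, promote an item that has become urgent:
move("draft report", "this week", "today")
```

Nothing is ever deleted outright; low-priority items just drift rightward into columns that are reviewed less and less often.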
Most people avoid saying literally false things, especially ones that could be audited, like made-up facts or credentials. The reasons for this are both moral and pragmatic — being caught out looks really bad, and sustaining lies is quite hard, especially over time. Let’s call the habit of not saying things you know to be false ‘shallow honesty’[1].
Often when people are shallowly honest, they still choose what true things they say in a kind of locally act-consequentialist way, to try to bring about some outcome. Maybe something they want for themselves (e.g. convincing their friends to see a particular movie), or something they truly believe is good (e.g. causing their friend to vote for the candidate they think will be better for the country).
Either way, if you...
Might be an uncharitable read of what's being recommended here. In particular, it might be worth revisiting the section that details what Deep Honesty is not. There's a large contingent of folks online who self-describe as 'borderline autistic', and one of their hallmark characteristics is blunt honesty, specifically the sort that's associated with an inability to pick up on ordinary social cues. My friend group is disproportionately comprised of this sort of person. So I've had a lot of opportunity to observe a few things about how honesty works.
Speaking ...
One of the primary concerns when attempting to control an AI of human-or-greater capabilities is that it might be deceitful. It is, after all, fairly difficult for an AI to succeed in a coup against humanity if the humans can simply regularly ask it "Are you plotting a coup? If so, how can we stop it?" and be confident that it will give them non-deceitful answers!
TL;DR: LLMs demonstrably learn deceit from humans. Deceit is a fairly complex behavior, especially over an extended period: you need to reliably come up with plausible lies, which preferably involves modeling the thought processes of those you wish to deceive, while keeping the lies internally consistent as a counterfactual, yet separate from your real beliefs. As the quotation goes, "Oh what a...
I also wonder how much interpretability LM agents might help here, e.g. by making it much cheaper to scale the 'search' to many different kinds of undesirable behavior.
The following is the first in a six-part series about humanity's own alignment problem: one we need to solve first.
When I began exploring non-zero-sum games, I soon discovered that achieving win-win scenarios in the real world is essentially about one thing - the alignment of interests.
If you and I both want the same result, we can work together to achieve that goal more efficiently, and create something that is greater than the sum of its parts. However, if we have different interests, or if we are both competing for the same finite resource, then we are misaligned, and this can lead to zero-sum outcomes.
You may have heard the term "alignment" used in the current discourse around existential risk regarding...
Hi Seth,
Thanks for your kind words. It's funny, I think I naturally write in a more LW style, but have actually worked hard to make my writing accessible and short, so I cut down on a lot of the wordy detail and disclaimers that I generally begin with—nice to know the effort pays off.
The cartoons are drawn with an Apple Pencil on an iPad Pro using Procreate (the studio pen is great for cartooning if you're really interested). I set up a big canvas 1000px wide and about 5000px high, then go about drawing all of them top to bottom. Then I export to photoshop...
The history of science has tons of examples of the same thing being discovered multiple times independently; Wikipedia has a whole list of examples here. If your goal in studying the history of science is to extract the predictable/overdetermined component of humanity's trajectory, then it makes sense to focus on such examples.
But if your goal is to achieve high counterfactual impact in your own research, then you should probably draw inspiration from the opposite: "singular" discoveries, i.e. discoveries which nobody else was anywhere close to figuring out. After all, if someone else would have figured it out shortly after anyways, then the discovery probably wasn't very counterfactually impactful.
Alas, nobody seems to have made a list of highly counterfactual scientific discoveries to complement Wikipedia's list of multiple discoveries.
To...
Observation of the cosmic microwave background was a simultaneous discovery, according to James Peebles' Nobel lecture. If I'm understanding this right, Bob Dicke's group at Princeton was already looking for the CMB based on a theoretical prediction of it, and were doing experiments to detect it, with relatively primitive equipment, when the Bell Labs publication came out.