LESSWRONG

David James

My top interest is AI safety, followed by reinforcement learning. My professional background is in software engineering, computer science, and machine learning. I have degrees in electrical engineering, liberal arts, and public policy. I currently live in the Washington, DC metro area; before that, I lived in Berkeley for about five years.

Comments

An epistemic advantage of working as a moderate
David James · 5d · 10

Consider the following numbered points:

  1. In an important sense, other people (and culture) characterize me as perhaps moderate (or something else). I could be right, wrong, anything in between, or not even wrong. I get labeled largely based on what others think and say of me.

  2. How do I decide on my policy positions? One could make a pretty compelling argument (from rationality, broadly speaking) that my best assessments of the world should determine my policy positions.

  3. Therefore, to the extent I do a good job of #2, I should end up recommending policies that I think will accomplish my desired goals even when accounting for how I will be perceived (#1).

This (obvious?) framework, executed well, might subsume various common (even clichéd) advice that gets thrown around:

  • Be yourself and do what needs to be done, then let the cards fall as they may.

  • No one will take your advice if you are perceived as crazy.

  • Many movements are born by passionate people perceived as “extreme” because important issues are often polarizing.

  • It can be difficult to rally people around a position that feels watered down.

  • Pick something doable and execute well to build momentum for the next harder thing.

  • Writing legislation can be an awful slog. Whipping votes requires a lot of negotiation, some unsavory. But all this depends on years of intellectual and cultural groundwork that softened the ground for the key ideas.

P.S. When I first came here to write this comment, I had only a rough feeling along the lines of “shouldn’t I choose my policy positions based on what I think will actually work, not on how I’m perceived?” But I chewed on it for a while. I hope the result is a better contribution to the discussion, because this is quite a messy space to figure out.

Vitalik's Response to AI 2027
David James · 2mo* · 80

Daniel notes: This is a linkpost for Vitalik's post. I've copied the text below so that I can mark it up with comments.

I’m posting this comment in the spirit of reducing confusion, even if only for one other reader.

Daniel’s comments are at the bottom of the post. When I read “mark it up with comments” that suggested to me that a reader can find the comments inline with the text (which isn’t the case here). In other words, I was expecting to see an alternation between blockquotes of Vitalik’s text followed by Daniel’s comments.

Either way works, but with the current style I suggest adding a note clarifying that Daniel’s comments are below the post.

Update Saturday 9 PM ET: I see now that LessWrong’s right margin shows small icons indicating places where the main text has associated comments. I had never noticed this before. Given the intention of this post, these tiny UI elements seem rather too subtle IMO.

The best simple argument for Pausing AI?
David James · 2mo · 10

LLMs can’t reliably follow rules

I suggest rewriting this as "Present LLMs can’t reliably follow rules". Doing so is clearer and reduces the chance of misreading: on its own, "LLM" is ambiguous; it can refer to the current state of the art, or sometimes to the entire class of models.

A stronger claim, such as "Vanilla LLMs (without tooling) cannot and will not be able to reliably follow rule sets as complicated as chess, even with larger context windows, better training, etc. ... and here is why," would be very interesting if there is evidence and reasoning behind it.

The Best Reference Works for Every Subject
David James · 3mo · 11

A Schelling point is something people can pick without coordination, often because it feels natural or obvious.

Book review: Everything Is Predictable
David James · 4mo · 10

While he didn't achieve the level of eloquence needed to significantly increase the adoption of the Bayesian worldview

It seems that a lot more than eloquence or even persuasion will be required.

That said, what are some areas where Chivers could do better? How could he reach more readers?

Five Hinge‑Questions That Decide Whether AGI Is Five Years Away or Twenty
David James · 4mo · 10

Agreement in the forecasting/timelines community ends at the tempo question.

What is the "tempo question"? I don't see the word tempo anywhere else in the article.

The Great Data Integration Schlep
David James · 4mo · 10

Some organizations (e.g. financially regulated ones such as banks) are careful to grant access on a per-project basis. Part of this involves keeping a chain of sign-offs to ensure someone can be held accountable (in theory). This probably means someone would have to be comfortable signing off for an AI agent before giving it permission. For better or worse, companies have notions of the damage that one person can do, but they would be wise to think differently about automated intelligent systems.

The Great Data Integration Schlep
David James · 4mo · 10

those running trials are usually quite ignorant of what the process of data cleaning and analysis looks like and they have never been recipients of their own data.

Some organizations have rotation programs; these could be expanded to give people a fuller view of the data lifecycle. Perhaps use pairing or shadowing with experts in each part of the process. (I’m not personally familiar with the medical field, however.)

Litany Of Occam
David James · 5mo · 30

It is more probable that A, than that A and B.

I can see the appeal here -- litanies tend to have a particular style after all -- but I wonder if we can improve it.

I see two problems:

  1. This doesn't convey that Occam's razor is about explanations of observations.
  2. In general, one explanation is not a logical "subset" of the other, so the comparison is not between A and (A and B); it is between A and B.

Perhaps one way forward would involve a mention of (or reference to) Minimum Description Length (MDL) or Kolmogorov complexity.
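For concreteness, here is a sketch (not part of the original comment) of the two formal facts at play: the conjunction rule that the litany gestures at, and the MDL criterion as one formalization of "prefer the simpler explanation." The symbols H, D, and L are the usual MDL notation, not taken from the post.

```latex
% Conjunction rule: a conjunction is never more probable than either conjunct.
P(A \wedge B) = P(A)\,P(B \mid A) \le P(A)

% MDL: among hypotheses H explaining observed data D, prefer the one that
% minimizes total description length (hypothesis plus data given hypothesis).
H^{*} = \operatorname*{arg\,min}_{H} \big[ L(H) + L(D \mid H) \big]
```

The first line is what "A is more probable than A and B" actually asserts; the second compares rival explanations A versus B directly, which is the comparison Occam's razor is usually about.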

LessWrong's (first) album: I Have Been A Good Bing
David James · 5mo · 10

I'm putting many of these in a playlist along with The Geeks Were Right by The Faint: https://www.youtube.com/watch?v=TF297rN_8OY

When I saw the future - the geeks were right

Egghead boys with thin white legs
They've got modified features and software brains
But that's what the girls like - the geeks were right

Predator skills, chemical wars, plastic islands at sea
Watch what the humans ruin with machines

Posts
  • Tools for decision-support, deliberation, sense-making, reasoning (3 points, 5mo, 0 comments)
  • [Question] Inviting discussion of "Beat AI: A contest using philosophical concepts" (2 points, 1y, 1 comment)