You can definitely meet your own district's staff locally (e.g., if you're in Berkeley, Congresswoman Simon has an office in Oakland, Senator Padilla has an office in SF, and Senator Schiff's offices appear not to be finalized yet but will undoubtedly include a Bay Area office).
You can also meet most Congressional offices' staff via Zoom or phone (though some offices strongly prefer in-person meetings).
There is also indeed a meaningful rationalist presence in DC, though opinions vary as to whether the enclave is in Adams Morgan-Columbia Heights, Northern Virginia, or Silver Spring.*
*This trichotomy is funny, but hard to culturally translate unless you want a 15,000 word thesis on DC-area housing and federal office building policy since 1945 and its related cultural signifiers. Just...just trust me on this.
I think that, on net, there are relatively few risks in getting governments more AGI-pilled versus letting them continue on their current course; governments are broadly AI-pilled even if not AGI/ASI-pilled, and they are already doing most of the accelerating actions an AGI-accelerator would want.
The Trump administration (or, more specifically, the White House Office of Science and Technology Policy, which seems to be in the lead on most AI policy) is asking for comment on what its AI Action Plan should include. Literally anyone can comment on it. You should consider doing so; comments are due Saturday at 8:59pm PT/11:59pm ET via an email address. These comments will actually be read, and a large number of comments on an issue usually does influence any White House's policy. I encourage you to submit comments!
regulations.gov/document/NSF_FRDOC_0001-3479… (Note that all submissions are public and will be published)
(Disclosure: I am working on a submission to this for my day job, but this particular post is in my personal capacity.)
(Edit note: I originally said this was due Friday; I cannot read a calendar, and it is in fact due 24 hours later. Consider this a refund we have all received for being so good at remembering the planning fallacy all these years.)
I think there's at least one missing one, "You wake up one morning and find out that a private equity firm has bought up a company everyone knows the name of, fired 90% of the workers, and says they can replace them with AI."
This essay earns a read for the line, "It would be difficult to find a policymaker in DC who isn’t happy to share a heresy or two with you, a person they’ve just met" alone.
I would amplify this to suggest that while many things are outside the Overton Window, policymakers are also aware of the concept of slowly moving the Overton Window, and if you explicitly admit that's what you're doing, they're usually on board (see, e.g., the conservative legal movement, the renewable energy movement, etc.). It's mostly only when you don't realize that's what you're proposing that you trigger a dismissive response.
Ok, so it seems clear that we are, for better or worse, likely going to try to get AGI to do our alignment homework.
Who has thought through all the other homework we might give AGI that is as good an idea, assuming a model that isn't an instant game-over for us? E.g., I remember @Buck rattling off a list of other ideas he had in his The Curve talk, but I feel like I haven't seen the list of, e.g., "here are all the ways I would like to run an automated counterintelligence sweep of my organization" ideas.
(Yes, obviously, if the AI is sneakily misaligned, you're just dead because it will trick you into firing all your researchers, etc.; this is written in a "playing to your outs" mentality, not an "I endorse this as a good plan" mentality.)
Huh? "fighting election misinformation" is not a sentence on this page as far as I can tell. And if you click through to the election page, you will see that the elections content is them praising a bipartisan bill backed by some of the biggest pro-Trump senators.
Without commenting on any strategic astronomy and neurology, it is worth noting that "bias", at least, is a major concern of the new administration (e.g., the Republican chair of the House Financial Services Committee is actually extremely worried about algorithmic bias being used for housing and financial discrimination and has given speeches about this).
I think a lot of this boils down to the fact that Sam Vimes is a copper, and sees poverty lead to precarity, and precarity lead to Bad Things Happening In Bad Neighborhoods. The most salient fact about Lady Sybil is that she never has to worry, is never on the rattling edge; she's always got more stuff, new stuff, old stuff, good stuff. Vimes (at that point in the Discworld series) isn't especially financially sophisticated, so he narrows it down to the piece he understands best, and builds a theory off of that.