Indeed, many are fearful of themselves and would rather sit pretty while the effigy of their public representation burns. The courageous approach is to expose oneself without obfuscation. Speaking with vibes and vagueness is another way people deploy their effigies: the uncertainty lets them easily escape criticism.
Much of the time, the case for 'vibiness' in social settings is to maintain social standing, or, in cruder terms, to LARP. It allows people to disguise their lack of knowledge while still being involved in the conversation, but how meaningful a conversation is when you are unable to meaningfully contribute to it is questionable.
And regardless, putting yourself out there, staying true to what you say, and accepting mistakes as they come is non-trivial, at least for me. It would be great if people such as myself could learn to do this!
I am often frustrated by those who promote vibes and deliver aimless soliloquies. We would often be better served by speaking more specifically, concisely, and boldly. From the average meeting room to the American political landscape, we are harming ourselves by speaking vaguely, and current roadblocks in policymaking across many facets of society are exacerbated by unspecific and unserious discourse. Speaking specifically and intently is not just a political and social imperative; it is instrumentally useful.
Spend more time to speak less
Any student learns that their opening paragraphs are the most important for introducing their argument and intent. Writing this way serves two critical functions: it frames the rest of the paper for the reader, and it frames it for the author. A common adage is that concise writing is the product of thorough revision. A good revision process forces you to reevaluate the intent of each sentence, which reveals redundant, awkward, or dangling ideas. A good introduction and thesis force you to recursively reevaluate every idea, argument, and paragraph. By stating your intentions, you tell yourself what's important and what can be omitted.
Speaking is a very similar process. I've had the privilege to deliver many presentations to peers or managers throughout high school, university classes, and internships. I competed in Lincoln-Douglas NSDA Debate for three years, led my Boy Scout troop for a short stint, and have presented technical projects, at separate times, to my school administration, internship managers, and corporate leadership. I am also a very nervous speaker, and despise most forms of public speaking. I still often shake when I speak, but equipping myself with speaking intuition has given me enough confidence to survive. The most important guideline for speaking is to speak with intent and announce your intent. Part of this is, like a good comedian, knowing your audience. Separating what your audience needs to know from what they don't is a vital skill.
What this looks like in practice is that when you give a presentation, announce clearly, even if it appears awkward at first, what you want the audience to take away from your presentation. This need not be the first sentence in your presentation; like in writing, you should soften this with a clean introduction. In many semi-professional or professional settings where your presentation is a part of a meeting, this should include an evaluation of what input others need to provide. Understanding why you're having a meeting instead of sending someone a message or asking someone 1-on-1 forces you to ask tough questions about the intent behind the meeting. If you know what actionable decision or question you need answered in a meeting, then you know what to ask, which provides hints at what you need to present as context. Doing this helps avoid the complaint that "This meeting could have been a[n] {email|slack message|text|teams message}."
If a meeting diverges from a topic where attendees can make actionable decisions, someone should steer it back on track. Actionable decisions are the key here; vague goals like "getting on the same page," or anything involving the word "vibes," do not qualify. Employees cannot write design documents or reports, run experiments, or engineer products based on vibes or vague agreements. In a universe where time is finite and a world where time is money, speaking with intent is imperative for the health of an organization. A single employee who holds entire teams hostage for a sizable amount of time can cost a business thousands, if not millions, of dollars, depending on the meeting and which levels of leadership are involved.
Long, purposeless meetings are thus not the grand design of a malevolent, capitalist force wanting to waste the precious time of workers, but the result of poor intentionality and planning. The good news is that this empowers anyone to right this wrong without an omnipotent force driving this corrosion.
See also Patrick Winston's lecture on How To Speak. The CIA Manual for Sabotage also reads, appropriately, like the exact opposite of the advice I've just given.
The CIA Manual for Sabotage
The CIA declassified a World War II manual on sabotaging organizations. I've copied and reformatted a few sections I think are relevant.
General Interference with Organizations and Production
Organizations and Conferences
Managers and Supervisors
...
Employees
...
Specific Policy is Necessary
I was privileged enough to attend EA: Global in New York City in October of last year. Between meeting AI Safety researchers and policymakers and trying an assortment of vegan meals (and Soylent), I sat in a sterile meeting room in the basement of the Sheraton in Times Square and listened to a longtime staffer at the Department of War (formerly the Department of Defense). Speaking to those interested in AI Safety policymaking, he gave a short lecture on the theory of change, and it was the most interesting speech I heard all weekend. In between in-jokes about shrimp welfare, he criticized the Rationalist/EA community for its failure to promote policy, a criticism that, I believe, extends to most, if not all, centrist, center-left, and progressive political groups. To him, the Rationalist and EA communities are full of idealists and scientists, but policymaking is neither ideal nor a science; it's an art, or if you like, engineering. Policies are inherently imperfect because they operate in a fundamentally imperfect world. They compromise, they bend, and, sometimes, they break.
In communities where members tiptoe gingerly around sensitive subjects and strive for idealistic purity, attaching yourself to bold policy makes you vulnerable to criticism. This often leads would-be promoters to shirk the responsibility altogether, or to stack on enormous qualifiers that render their advocacy meaningless. It is a natural, if self-defeating, instinct, a tragedy of the commons: by not attaching yourself to imperfect, unpopular policies, you avoid the ideological litmus tests and criticism others will almost certainly throw at you. The side effect is a culture that chills the promotion of any specific, actionable policy, turning the entire movement into a giant meeting about ideas and "getting on the same page." He asked the audience, trusting it was full of well-meaning, intelligent people, to be more courageous with their advocacy, and we must take his advice to heart. AI safety, climate change, and global health require specific and actionable policy, not ideas, not buzzwords, and certainly not vibes.
While the Rationalist/EA/AI Safety communities have dedicated years to preparing the world for transformative AI, we do not have definite, specific policy proposals floating around that policymakers can pick up to advance AI Safety and societal well-being. And I have a strong suspicion that we will need specific, actionable policy that materially affects many people very soon. Based on the increasing backlash towards AI in popular culture, driven by rising utility costs, rising consumer hardware prices, environmental concerns, and intellectual property concerns, I expect a major political realignment, at least within the United States, sometime soon (O(~1.5 years)) that might primarily revolve around these issues. The timeline until the general public cares about these issues may be shorter than the timeline until transformative AI exists, even if it matters less. Without clear policies that balance these pressures against AI safety concerns, we could see populist rhetoric prevent the important work that needs to be done.
I'd be hypocritical not to take a stand for a major policy, though I'll qualify that I only know the American political landscape well. I'm a big believer in eliminating bureaucratic inefficiencies and expanding infrastructure. A version of the Green New Deal that expands electrical infrastructure, in conjunction with data center build-outs, would reduce the cost of electricity so that new data centers don't materially harm the average citizen through higher prices. Better electric public infrastructure would also reduce daily transportation costs, and an upgraded electrical grid offers the opportunity to harden it for national security purposes and provide resilience. Is the GND a perfect policy? No. More recent versions have been vague House resolutions, not actual bills. But it's a large-scale policy that materially affects people's lives and might solve many of the issues we face.
The Case for Courage
The penultimate note from the policymaking talk at EA: Global was a quote from economist Milton Friedman:
"Only a crisis, actual or perceived, produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable."
We are in crisis. Our economy is not growing, our democratic republic is weakening, and we are on the precipice of drastic technological change. Our policymakers are scared of policy. Like deer in headlights, they are petrified of the very people who chose them. The solution is clear: say what you mean. Policymaking is iterative, so let the first iterations be wrong and unpopular. Refine it, change it, and keep the idea alive. Without discourse on real, specific policy, we may find ourselves ideating about a perfect world while the opportunity to create a better one slips away.
For Democrats, AI Safety hawks, progressives, and anyone I know who is sane, rational, and well-meaning: courage is required. Write and promote specific, actionable policy; be wrong; be bold; be courageous. Talking about vibes makes for good TV, but only policy makes leaders fit to lead.
See also: https://www.ettingermentum.news/p/against-vibes
See also: https://www.lesswrong.com/posts/CYTwRZtrhHuYf7QYu/a-case-for-courage-when-speaking-of-ai-danger