Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained and that you only understand if you experience them, Kaj set out to write his own detailed gears-level, non-mysterious, non-"woo" explanation of how meditation, etc., work in the same way you might explain the operation of an internal combustion engine.
Elon Musk's Hyperloop proposal attracted substantial public interest. With various initial Hyperloop projects now having failed, I thought some people might be interested in a high-speed transportation system that's...perhaps not "practical" per se, but at least more practical than the Hyperloop approach.
Hydrogen has a lower molecular mass than air, so it has a higher speed of sound and lower density. The higher speed of sound means a vehicle in hydrogen can travel at 2300 mph while remaining subsonic, and the lower density reduces drag. This paper evaluated the concept and concluded that:
> the vehicle can cruise at Mach 2.8 while consuming less than half the energy per passenger of a Boeing 747 at a cruise speed of Mach 0.81
In a tube, at subsonic speeds, the gas...
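The speed-of-sound claim above is easy to sanity-check with the ideal-gas formula c = sqrt(γRT/M). The sketch below is my own illustration, not from the paper; the temperature and molar masses are standard textbook values:

```python
from math import sqrt

R, T, gamma = 8.314, 293.0, 1.4      # gas constant, ~20 °C, diatomic gases
M_air, M_h2 = 0.02897, 0.002016      # molar masses in kg/mol

def speed_of_sound(molar_mass):
    # ideal-gas speed of sound: c = sqrt(gamma * R * T / M)
    return sqrt(gamma * R * T / molar_mass)

c_air = speed_of_sound(M_air)        # ~343 m/s
c_h2 = speed_of_sound(M_h2)          # ~1300 m/s, roughly 3.8x higher

v = 2300 * 0.44704                   # 2300 mph converted to m/s
print(f"Mach number in air: {v / c_air:.2f}")  # supersonic in air
print(f"Mach number in H2:  {v / c_h2:.2f}")   # still subsonic in hydrogen
```

The same molar-mass ratio also drives the density (and hence drag) reduction, since ideal-gas density at fixed pressure and temperature scales linearly with M.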
Possibly of interest: the fastest rocket sled track uses a similar idea; they put a helium-filled tube over the final section of the track:
> ...Just as meteors are burned up by friction in the upper atmosphere, air friction can cause a high-speed sled to burn up, even if made of the toughest steel alloys. An engineering sleight-of-hand is used to increase those "burn-up" limits by reducing the density of the atmosphere around the track. To do this, one needs a safe, non-toxic, low-density gas such as helium. Helium is only one seventh the density of air, signific
The history of science has tons of examples of the same thing being discovered multiple times independently; Wikipedia has a whole list of examples here. If your goal in studying the history of science is to extract the predictable/overdetermined component of humanity's trajectory, then it makes sense to focus on such examples.
But if your goal is to achieve high counterfactual impact in your own research, then you should probably draw inspiration from the opposite: "singular" discoveries, i.e. discoveries which nobody else was anywhere close to figuring out. After all, if someone else would have figured it out shortly afterward anyway, then the discovery probably wasn't very counterfactually impactful.
Alas, nobody seems to have made a list of highly counterfactual scientific discoveries, to complement Wikipedia's list of multiple discoveries.
To...
It's not inconceivable, I would even say plausible, that the impact of surreal numbers and combinatorial game theory is still in the future.
Concerns over AI safety and calls for government control over the technology are highly correlated, but they should not be.
There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests.
Governments are poor stewards for both types of risk. Misuse regulation is like the regulation of any other technology. There are reasonable rules that the government might set, but omission bias and incentives to protect small but well organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives...
There is a belief among some people that our current tech level will lead to totalitarianism by default. The argument is that the Soviet Union collapsed with 1970s tech, but with 2020 computer tech (not even needing GenAI) it would not have. If a democracy goes bad, unlike before, there is no coming back. For example, Xinjiang: Stalin would have liked to do something like that but couldn't. When you add LLM AI on everyone's phone plus video/speech recognition, organized protest becomes impossible.
I'm not sure if Rudi C is making this exact argument. Anyway, if we get mass ce...
Wow, it's worse than I thought. Maybe the housing problem is "government-complete" and resists all lower level attempts to solve it.
A friend has spent the last three years hounding me about seed oils. Every time I thought I was safe, he’d wait a couple months and renew his attack:
“When are you going to write about seed oils?”
“Did you know that seed oils are why there’s so much {obesity, heart disease, diabetes, inflammation, cancer, dementia}?”
“Why did you write about {meth, the death penalty, consciousness, nukes, ethylene, abortion, AI, aliens, colonoscopies, Tunnel Man, Bourdieu, Assange} when you could have written about seed oils?”
“Isn’t it time to quit your silly navel-gazing and use your weird obsessive personality to make a dent in the world—by writing about seed oils?”
He’d often send screenshots of people reminding each other that Corn Oil is Murder and that it’s critical that we overturn our lives...
To me "generally avoid processed foods" would be kinda like saying "generally avoid breathing in gasses/particulates that are different from typical earth atmosphere near sea level".
People have been breathing a lot of smoke over the last million years or so, so one might think we would have evolved to tolerate it, but it's still really bad for us. Though there are certainly lots of ways to go wrong by deviating from what we are adapted to, our current unnatural environment is far better for our life expectancy than the natural one. As pointed out in other comments, some food processing can be better for us.
The following is an example of how, if one assumes that an AI (in this case, an autoregressive LLM) has "feelings", "qualia", "emotions", whatever, it can be unclear whether it is experiencing something more like pain or something more like pleasure in some settings, even quite simple settings which already happen a lot with existing LLMs. This dilemma is part of the reason why I think the philosophy of AI suffering/happiness is very hard and we most probably won't be able to solve it.
Consider the two following scenarios:
Scenario A: An LLM is asked a complicated question and answers it eagerly.
Scenario B: A user insults an LLM and it responds.
For the sake of simplicity, let's say that the LLM is an autoregressive transformer with no RLHF (I personally think that the...
The American school system, grades K-12, leaves much to be desired.
While its flaws are legion, this post isn’t about that. It’s easy to complain.
This post is about how we could do better.
To be clear, I’m talking about redesigning public education, so “just use the X model” where X is “charter” or “Montessori” or “home school” or “private school” isn’t sufficient. This merits actual thought and discussion.
One of the biggest problems facing public schools is that they’re asked to do several very different kinds of tasks.
On the one hand, the primary purpose of school is to educate children.
On whatever hand happens to be the case in real life, school is often more a source of social services for children and parents alike, providing food and safety...
What if you build your school-as-social-service, and then one day find that the kids are selling drugs to each other inside the school?
Or simply that the kids are constantly interfering with each other so much that the minority who want to follow their interests can't?
I think any theory of school that doesn't mention discipline is a theory of dry water. What powers and duties would the 1-supervisor-per-12-kids have? Can they remove disruptive kids from rooms? From the building entirely? Give detentions?
This is part 7 of 30 in the Hammertime Sequence. Click here for the intro.
As we move into the introspective segment of Hammertime, I want to frame our approach around the set of (unoriginal) ideas I laid out in The Solitaire Principle. The main idea was that a human being is best thought of as a medley of loosely-related, semi-independent agents across time, and also as governed by a panel of relatively antagonistic sub-personalities à la Inside Out.
An enormous amount of progress can therefore be made simply by articulating the viewpoints of one’s sub-personalities so as to build empathy and trust between them. This is the aim of the remainder of the first cycle.
Goal factoring is a CFAR technique with a lot of parts. The most...
I really can't get the point of "3. Solve or Reduce Aversions", specifically:
> Meanwhile, un-endorsed aversions should be targeted with exposure therapy or CoZE.
As I understand it, here we should get rid of un-endorsed aversions. But the rest of the text sounds like we should... reinforce them?
> To apply exposure therapy, build a path of incremental steps towards the aversion
This is a write-up of Neel's and my experience and opinions on best practices for doing activation patching. An arXiv PDF version of this post is available here (easier to cite). A previous version was shared with MATS Program scholars in July 2023 under the title "Everything Activation Patching".
Pre-requisites: This post is mainly aimed at people who are familiar with the basic ideas behind activation patching. For background see this ARENA tutorial or this post by Neel.
TL;DR:
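For readers who want the core idea of activation patching in miniature before reading the best-practices discussion: you cache an internal activation from a "clean" run and substitute it into a "corrupted" run, then measure how much of the clean behavior is restored. The toy "model" below is entirely hypothetical (real work would hook a transformer's activations, e.g. with TransformerLens), but the clean/corrupted/patched structure is the same:

```python
# Minimal sketch of activation patching on a hypothetical toy "model".
# Real activation patching hooks into transformer internals; this only
# illustrates the cache-then-substitute pattern.

def toy_model(x, patched_hidden=None):
    """Two-'layer' toy model; optionally override the hidden activation."""
    hidden = 2 * x + 1                # "layer 1" activation
    if patched_hidden is not None:
        hidden = patched_hidden       # the patch: swap in a cached activation
    return 3 * hidden                 # "layer 2" / output

clean_x, corrupted_x = 4, 0

# 1. Clean run: cache the intermediate activation of interest.
clean_hidden = 2 * clean_x + 1

# 2. Corrupted run with the clean activation patched in.
patched_out = toy_model(corrupted_x, patched_hidden=clean_hidden)

# 3. Compare against the unpatched runs: how much clean behavior returns?
clean_out = toy_model(clean_x)
corrupted_out = toy_model(corrupted_x)
print(patched_out == clean_out)  # here the patch fully restores clean output
```

In a real model the comparison in step 3 is usually a continuous metric (e.g. logit difference) rather than exact equality, which is one of the choices the post discusses.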