Samuel Shadrach's Shortform


Any web dev here wanna host a tool that lets you export your account data from this site? I've mostly figured it out, I'm just being lazy. Need to run GraphQL queries, write the results to files, then I guess upload the files to a db, zip them, and let the user download the zip.

(GraphQL tutorial: https://www.lesswrong.com/posts/LJiGhpq8w4Badr5KJ/graphql-tutorial-for-lesswrong-and-effective-altruism-forum)

First, a GraphQL query to get the user's _id from their slug:

    {
      user(input: {selector: {slug: "eliezer_yudkowsky"}}) {
        result {
          _id
          slug
        }
      }
    }
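
A minimal Python sketch of running this (untested; I'm assuming the endpoint is https://www.lesswrong.com/graphql as per the tutorial linked above, and that public data doesn't need auth):

    import requests  # third-party: pip install requests

    GRAPHQL_URL = "https://www.lesswrong.com/graphql"  # assumed endpoint, see tutorial above

    def run_query(query: str) -> dict:
        """POST a GraphQL query string and return the parsed 'data' field."""
        resp = requests.post(GRAPHQL_URL, json={"query": query})
        resp.raise_for_status()
        payload = resp.json()
        if "errors" in payload:
            raise RuntimeError(payload["errors"])
        return payload["data"]

    user_query = """
    {
      user(input: {selector: {slug: "eliezer_yudkowsky"}}) {
        result {
          _id
          slug
        }
      }
    }
    """
    user_id = run_query(user_query)["user"]["result"]["_id"]
    print(user_id)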

For posts, need a GraphQL query, then dump each htmlBody into a separate file. No parsing required, I hope.
   {
     posts(input: {
       terms: {
         view: "userPosts"
         userId: "nmk3nLpQE89dMRzzN"
         limit: 50
         meta: null  # this seems to get both meta and non-meta posts
       }
     }) {
       results {
         _id
         title
         pageUrl
         postedAt
         htmlBody
         voteCount
         baseScore
         slug
       }
     }
   }
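
Continuing the sketch, dumping each post's htmlBody to its own file might look like this (reusing the run_query helper from the earlier snippet; the output directory and filenames are just illustrative choices, and I've only requested the fields needed for the dump). The same pattern should work for the comments query below:

    import os

    # Reuses run_query() from the first snippet.
    posts_query = """
    {
      posts(input: {
        terms: {
          view: "userPosts"
          userId: "nmk3nLpQE89dMRzzN"
          limit: 50
          meta: null
        }
      }) {
        results {
          _id
          title
          slug
          htmlBody
        }
      }
    }
    """

    os.makedirs("export/posts", exist_ok=True)
    for post in run_query(posts_query)["posts"]["results"]:
        # One HTML file per post, named by slug (falling back to _id).
        filename = f"export/posts/{post.get('slug') or post['_id']}.html"
        with open(filename, "w", encoding="utf-8") as f:
            f.write(post.get("htmlBody") or "")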
   

For comments, need a GraphQL query, then dump each htmlBody into individual files. (Although I'm not entirely sure what one will do with thousands of comment files.)


   {
     comments(input: {
       terms: {
         view: "userComments",
         userId: "KPEajTss7fsccBEgJ",
         limit: 500,
       }
     }) {
       results {
         _id
         post {
           title
           slug
         }
         user {
           username
           slug
           displayName
         }
         userId
         postId
         postedAt
         pageUrl
         htmlBody
         baseScore
         voteCount
       }
     }
   }
   

Plus some loops to iterate if the result count exceeds the limit, and some error handling. Plus some way to share the credentials securely - or make it into a browser plugin.
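
Rough sketch of the pagination + retry loop (big assumption: I haven't verified that these views accept an offset field in terms - if they don't, paging by postedAt before/after dates would be the fallback):

    import time

    def fetch_all(field: str, view: str, user_id: str, batch: int = 50):
        """Yield every result for a user, one batch at a time.
        ASSUMPTION: the terms object accepts an `offset` field for paging;
        if it doesn't, paging by postedAt (before/after) is the fallback.
        Reuses run_query() from the first snippet."""
        offset = 0
        while True:
            query = """
            {
              %s(input: {terms: {view: "%s", userId: "%s", limit: %d, offset: %d}}) {
                results { _id postedAt htmlBody }
              }
            }
            """ % (field, view, user_id, batch, offset)
            for attempt in range(3):
                try:
                    results = run_query(query)[field]["results"]
                    break
                except Exception:
                    time.sleep(5)  # crude backoff on transient errors
            else:
                raise RuntimeError("query kept failing at offset %d" % offset)
            if not results:
                return
            yield from results
            offset += batch

    # e.g.
    # for post in fetch_all("posts", "userPosts", user_id): ...
    # for comment in fetch_all("comments", "userComments", user_id): ...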

HYPOTHETICAL (possibly relevant to AI safety)

Assume you could use magic that could progressively increase the intelligence of exactly one human in a "natural way"*. You need to pick one person (not yourself) whom you trust a lot and give them some amount of intelligence. Your hope is that they use this capability to solve useful problems that will increase human wellbeing broadly.

What is the maximum amount of intelligence you'd trust them with?

*When I say "natural way" I mean their neural circuits grow, are maintained, and operate in a manner similar to how they already naturally work for humans, biologically. Maybe the person just has more neurons that make them more intelligent, rather than a fundamentally different structure.


I guess the two key considerations are:

  1. whether there exists any natural form of cognitive enhancement that doesn't also cause significant value drift
  2. whether you'd trust a superpowerful human even if their values seem mostly good and they don't drift

On innovations too big for capitalism

 

Consider any innovation so world-changing that governments are not willing to let the creator have complete control over how it is used. For instance, a new super-cheap energy source such as fusion reactors.

Maximising profit from fusion reactors, for instance, could mean selling electricity at a price slightly lower than the current market price, waiting to monopolise the global power grid, waiting for all other power companies to shut down, and then raising prices again, thereby not letting anyone actually reap the benefits of super-cheap electricity. It is unlikely, however, that governments will let the creator do this.

As someone funding early-stage fusion research, would you have to account for this in your investment thesis - that there is some upper limit on how large a company can legally grow? So far it seems the cap is at least higher than $2.5 trillion, looking at Apple's market cap. Although it is possible that even a company smaller than $2.5 trillion monopolises a sector and is then prevented from price-fixing by the government.

I don't think, at this scale, that "the government" is a useful model.  There are MANY governments, and many non-government coalitions that will impact any large-scale system.  The trick is delivering incremental value at each stage of your path, to enough of the selectorate of each group who can destroy you.

Thanks, this makes sense. 

Am completely in love with Yudkowsky's posts in the Complexity of Value sequence.

https://www.lesswrong.com/tag/complexity-of-value

Would recommend the four major posts to everyone.

Are there people who have read these four posts and still self-identify as either consequentialists or utilitarians? If yes, why so?

Because the impression I got from these posts (which also matches my independent explorations) is that humans have deontological rules wired into them due to evolution - this is just observable fact. And that you don't really get to change that even if you want to.

Consider the trivial example - would you kill one person to save two? No one gets to know your decision, so there are no future implications for anyone else, no precedent being set. Even on this site there is a ton of deflection from honestly answering the question as is: either "Yes, I will murder one person in cold blood" or "No, I will let the two people die". Assume the LCPW (Least Convenient Possible World).

I believe that in the LCPW it would be the right decision to kill one person to save two, and I also predict that I wouldn't do it anyway, mainly because I couldn't bring myself to do it.

In general, I understood the Complexity of Value sequence to be saying "The right way to look at ethics is consequentialism, but utilitarianism specifically is too narrow, and we want to find a more complex utility function that matches our values better."

Thanks for replying.

Why do you feel it would be the right decision to kill one? Who defines "right"?

I personally understood it differently. Thou Art Godshatter says (to me) that evolution depends on consequences, but evolving consequentialism into a brain is hard, and therefore human desires are not wired consequentially. Also that evolution only cares about consequences that actually happen, not the ones it predicts will happen - because it cannot predict.

Why do you feel it would be the right decision to kill one? Who defines "right"?

I define "right" to be what I want, or, more exactly, what I would want if I knew more, thought faster and was more the person I wish I could be. This is of course mediated by considerations on ethical injunctions, when I know that the computations my brain carries out are not the ones I would consciously endorse, and refrain from acting since I'm running on corrupted hardware. (You asked about the LCPW, so I didn't take these into account and assumed that I could know that I was being rational enough).

It's been a while since I read Thou Art Godshatter and the related posts, so maybe I'm conflating the message in there with things I took from other LW sources.

I read some of the articles. Happy to get on a voice call if you prefer. My thoughts so far boil down to:

 - Corrupted hardware seems to imply a clear distinction between goals (/ends/terminal goals) and actions towards goals (/means/instrumental goals), and that only actions are computed imperfectly. I say firstly we don't have as sharp a distinction between the two in our brain's wiring. (Instrumental goals often become terminal if you focus on them hard enough.) Secondly that it's not actions but terminal goals themselves that are in conflict.

 - We have multiple conflicting values. There's no "rational" way to always decide what trumps what - sometimes it's just two sections of the brain firing neurochemicals and one side winning, that's it. System-2 is somewhat rational, System-1 not so much, and System-1 has more powerful rewards and penalties. System-1 preferences admit circular preferences, and there's nothing you can do about it.

 - "what I would want if I knew more, thought faster etc" doesn't necessarily lead to one coherent place. You have multiple conflicting values; which of those you end up deleting if you had the brain of a supercomputer could be arbitrary. You could become Murder Gandhi or some extreme happiness utilitarian, and I don't see either of these as necessarily desirable places to be relative to my current state. Basically I want to run on corrupted hardware. I don't want my irrational System-1 module deleted.

Sorry if I'm going off-topic but yeah.

The sequence on ethical injunctions looks cool. I'll read it first before properly replying.

Just FYI, I've become convinced that most online communication through comments with a lot of context is much better settled through conversations, so if you want, we could also talk about this over an audio call.

Thanks, I will let you know!

Does anyone have any good resources on linguistic analysis used to doxx people online?

Both automated and manual, although I'm more keen on learning about automated.

What is the state-of-the-art in such capabilities today? Are there forecasts on future capabilities?

Trying to figure out whether defending online anonymity is worth doing or a lost cause.

MESSY INTUITIONS ABOUT AGI, MIGHT TYPE THEM OUT PROPERLY LATER

OR NOT

I'm sure we're a finite number of voluntary neurosurgeries away from worshipping paperclip maximisers. I tend to feel we're a hodge-podge of quick heuristic modules and deep strategic modules, and until you delete the heuristic modules via neurosurgery our notion of alignment will always be confused. Our notion of superintelligence / super-rationality is an agent that doesn't use the bad heuristics we do, people have even tried formalising this with Solomonoff / Turing machines / AIXI. But when actually coming face to face with one:

 - Either we are informed of the consequences of the agent's thinking and we dislike those, because those don't match our heuristics

 - Or the AGI can convince us to become more like them to the point we can actually agree with their values. The fastest way to get there is neurosurgery but if we initially feel neurosurgery is too invasive I'm sure there exists another much more subtle path that the AGI can take. Namely one where we want our values to be influenced in ways, that eventually end up with us getting closer to the neurosurgery table.

 - Or of course the AGI doesn't bother to even get our approval (the default case), but I'm ignoring that and considering far more favourable situations.


We don't actually have "values" in an absolute sense, we have behaviours. Plenty of Turing machines have no notion of "values", they just have behaviour given a certain input. "Values" are this fake variable we create when trying to model ourselves and each other. In other words, the Turing machine has a model of itself inside itself - that's how we think about ourselves (metacognition). So a mini-Turing machine inside a Turing machine. Of course the mini-machine has some portions deleted; it is a model. First of all this is physically necessitated. But more importantly, you need a simple model to do high-level reasoning on it in short amounts of time. So we create this singular variable called "values" to point to what is essentially a cluster in thingspace. Let's say the Turing machine tends to increment its 58th register on only 0.1% of all possible 24-bit string inputs, and tends to decrement it a lot more otherwise. The mini-Turing machine inside the machine, modelling itself, will just have some equivalent of the 58th register never incrementing at all and only decrementing instead. So now the Turing machine incorrectly thinks its 58th register never increments. So it thinks that decrementing the 58th register is a "value" of the machine.
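
To make the 58th-register story concrete, here's a toy simulation (purely illustrative, obviously not anything a real Turing machine or brain does):

    import random

    def actual_machine(bitstring: int) -> int:
        """The 'real' behaviour: increments register 58 on roughly 0.1% of
        24-bit inputs (here: whenever the low 10 bits are all zero, ~1/1024),
        decrements otherwise."""
        return +1 if (bitstring & 0x3FF) == 0 else -1

    def self_model(bitstring: int) -> int:
        """The machine's simplified model of itself: the rare increment
        branch has been rounded away entirely."""
        return -1

    # Compare actual behaviour to the self-model over many random 24-bit inputs.
    inputs = [random.getrandbits(24) for _ in range(100_000)]
    actual_increments = sum(1 for x in inputs if actual_machine(x) == +1)
    modelled_increments = sum(1 for x in inputs if self_model(x) == +1)
    print(actual_increments)    # small but nonzero
    print(modelled_increments)  # exactly zero: "decrementing is my value"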


[Meta note: When I say "value" here, I'd like to still stick to a viewpoint where concepts like "free will", "choice", "desire" and "consciousness" are taboo. Basically I have put on my reductionist hat. If you believe free will and determinism are compatible, you should be okay with this - I'm just consciously restricting the number of tools/concepts/intuitions I wish to use for this particular discussion, not adopting any incorrect ones. You can certainly reintroduce your intuitions in a different discussion, but in a compatibilist world, both our discussions should generally lead to true statements.

Hence in this case, when the machine thinks of "decrementing its 58th register" as its own value, I'm not referring to concepts like "I am driven to decrement my 58th register" or "I desire to decrement my 58th register" but rather "decrementing the 58th register is something I do a lot" - and since "value" is a fake variable that the Turing machine has full liberty to define, it says that "value" is defined by the things it tends to do. When I say "fake" I mean it exists in the Turing machine's model of itself, the mini-machine.

"Why do I do the things I do?" or "Can I choose what I actually do?" are not questions I'm considering, and for now let's assume the machine doesn't bother itself with such questions (although in practice it certainly may end up asking itself such terribly confused questions, if it is anything like human beings - this doesn't really matter right now).

End note]


I'm gonna assume a single scale called "intelligence" along which all Turing machines can be graded. I'm not sure this scale actually even exists, but I'm gonna assume it anyway. On this scale:

Humans <<< AGI <<< AIXI-like ideal

<<< means "much less intelligent than" or "much further away from reasoning like the AIXI-like ideal", these two are the same thing for now, by definition.


An AGI trying to model human beings won't use such a simple fake variable called "values", it'll be able to build a far richer model of human behaviour. It'll know all about the bad human heuristics that prevent humans from becoming like an AGI or AIXI-like.

Even if the AGI wants us to be aligned, it's just going to do the stuff in the first para. There are different notions of aligned:

Notion 1: "I look at another agent superficially and feel we want the same things." In other words, my fake variable called "my values" is sufficiently close to my fake variable called "this agent's values". I will necessarily be creating such simple fake variables if I'm stupid, i.e. I'm human, cause all humans are stupid relative to the AIXI-like ideal.

An AGI that optimises to satisfy notion 1 can hide what it plans to do to humans and maintain a good appearance until it kills us without telling us.

Notion 2: "I get a full picture of what the agent intends to do and want the same things"

This is very hard because my heuristics tell me all the consequences the AGI plans to bring about are bad. The problem is my heuristics. If I didn't have those heuristics, if I was closer to the AIXI-like ideal, I wouldn't mind. Again, "I wouldn't mind" is from the perspective of machines, not consciousness or desires, so translate it to "the interaction of my outputs will be favourable towards the AGI's outputs in the real world".

So the AGI will find ways to convince us to want our values neurosurgically altered. Eventually we will both be clones and hence perfectly aligned.


Now let's bring stuff like "consciousness" and "desires" and "free will" back into the picture. All this stuff interacts very strongly with the heuristics, exactly the things that make us further away from the AIXI-like ideal.

Simply stated, we don't naturally want to be ideal rational agents. We can't even want to, in the sense that we can't get ourselves to truly and consistently want to be ideal rational agents by sheer willpower; we need physical intervention like neurosurgery. Even if free will exists and is a useful concept, it has finite power. I can't delete sections of my brain using free will alone.

So now if intelligence is defined as closer to AIXI-like ideal in internal structure, then intelligence by definition leads to misalignment.


P.S. I should probably also throw some colour on what kinds of "bad heuristics" I am referring to here. In simplest terms: "low Kolmogorov complexity behaviours that are very not AIXI-like".

1/ For starters, the entirety of System 1 and sensory processing (see Daniel Kahneman, Thinking, Fast and Slow). We aren't designed to maximise our intelligence, we just happen to have an intelligence module (aka somewhat AIXI-like). Things we care about sufficiently strongly are designed to override System 2, which is more AIXI-like, insofar as evolution has any design for us. So maybe it's not even "bad heuristics" here, it's entire modules in our brain that are not meant for thinking in the first place. It's just neurochemicals firing and one side winning and the other side losing; this system looks nothing like AIXI. And it's how we deal with most life-and-death situations.

This stuff is beyond the reach of free will. I can't stop reacting to snakes out of sheer will, or hate chocolate, or love to eat shit. Maybe I can train myself on snakes, but I can't train myself to love to eat shit. The closer you are to the sensory apparatus and the further away from the brain, the less the system looks like AIXI. And simultaneously the less free will you seem to have.

(P.S. Is that a coincidence or is free will / consciousness really an emergent property of being AIXI-like? I have no clue, it's again the messy free will debates that might get nowhere)

2/ Then of course at the place where System 1 and 2 interact you can actually observe behaviour that makes us further away from AIXI-like. Things like why we find it difficult to have independent thoughts that go against the crowd. Even if we do have independent thoughts, we need to spend a lot more energy actually developing them further (versus thinking of hypothetical arguments to defend those ideas in society).

This stuff is barely within the reach of our "free will". Large portions of LessWrong attempt to train us to be more AIXI-like, by reducing so-called cognitive biases and socially-induced biases.

3/ Then we have proper deep thinking (System 2 and such-like) which seems a lot closer to AIXI. This is where we move beyond "bad heuristics" aka "heuristics that AIXI-like agents won't use". But maybe an AGI will find these modules of ours horribly flawed too, who knows.

Anyone proposing that building AGI should be banned by governments?

 

Cause it seems like even if AGI alignment is possible (I'm skeptical), there's no guarantee the person who happens to create AGI also happens to want to follow this perfect solution. Or, even if they want to, that they should have a moral right to decide on behalf of their country or humanity as a whole. Nation states pre-committing to "building AGI is evil" seems a better solution. It might also slow down the rate of progress in AI capabilities, which I'm guessing is also desirable to some alignment theorists.

I haven't seen any government, let alone the set of governments, demonstrate any capability of commitment on this kind of topic.  States (especially semi-representative ones like modern democracies) just don't operate with a model that makes this effective.

I also wonder what you feel about the ban of human cloning. Is it effectively implemented?

I don't know if it is or not.  Human cloning seems both less useful and less harmful (just less impactful overall), so simultaneously easier to implement and not a good comparison to AGI.

I see cloning-based research as very impactful, it's also a route to getting more intelligent beings to exist. I'd be hard-pressed to find something as impactful as AGI though.

Also I'm not sure about "less useful". Given a world where AI researchers know that alignment is hard or impossible, they might see human cloning as more useful than AGI. Unless you mean AGI's perceived usefulness is higher, which may be true today but maybe not in the future.

I'm not following the connection between human cloning and AGI.  Are you talking about something different from https://en.wikipedia.org/wiki/Human_cloning , where a baby is created with only one parent's genetic material?

To me, human cloning is just an expensive way to make normal babies.  

Yep referring to that only. You can keep cloning the most intelligent people. At enough scale you'll be increasing the collective intelligence of mankind, and scientific output. Since these clones will hopefully retain basic human values, you now have more intelligence with alignment.

You can keep cloning the most intelligent people.

Do you have any reason to believe that this is happening AT ALL?  I'd think the selection of who gets cloned (especially when it's illicit, but probably even if it were common) would follow wealth more than intelligence.

Selective embryo implantation based on genetic examination of two-parent IVF would seem more effective, and even that's not likely to do much unless it becomes a whole lot more common, and if intelligence were valued more highly in the general population.

Since these clones will hopefully retain basic human values

Huh?  Why these any more than the general population?  The range of values and behaviors found in humans is very wide, and "basic human values" is a pretty thin set.

Most importantly, a 25-year improvement cycle, with a mandatory 15-20 year socialization among many many humans of each new instance is just not as scary as an AGI with an improvement cycle under a year (perhaps much much faster), and with direct transmission of models/beliefs from previous generations.  Just not comparable.

Do you have any reason to believe that this is happening AT ALL?

Wasn't talking about today, just an arbitrary point in the future.

I'd think the selection of who gets cloned (especially when it's illicit, but probably even if it were common) would follow wealth more than intelligence.

I was commenting that it has a lot of power and potential benefits if groups of people wield it; whether they actually do is a different question.

On the latter question, you're right of course - different groups of people will select for different traits. I would assume there will exist at least some groups of intelligent people who will want to select for intelligence further. There is a competitive advantage for nations that legalise this.

re: the last 2 paras, I'm not sure we understood each other. I'll try again. Intelligence is valuable for building stable paths to survival, happiness, and prosperity. AGI will be much more intelligent than selected humans. However, AGI will almost certainly kill us because of lack of alignment. (Assume a world where AGI researchers have accepted this as fact.) This makes AGI not very useful, on balance.

Humans selected for intelligence are also valuable. They will be a lot less intelligent than AGI, of course. But they will (hopefully) be aligned enough with the rest of humanity to work for its welfare and prosperity. This makes selected humans very useful.

Hence selected humans could be more useful than AGI.

That's a valid intuition - I'd be happy to learn why you feel that if you have time (no worries if not).

Would a non-democratic state like China or Russia fare better in this regard then? If one of them takes the issue seriously enough they could force other states to also take it seriously via carrot and stick.

I still don't get stuff like TDT, EDT, FDT, LDT

Article on LDT: https://arbital.com/p/logical_dt/?l=58f

I get the article on How an Algorithm Feels from the Inside - if you assume a deterministic universe and consciousness as something that emerges out of it but has no causal implications for the universe.

Now if I try drawing the causal arrows

Outside view:

Brain <- -> Human body <- -> Environment

In the outside view, "you" don't exist as a coherent singular force of causation. Questions about free will and choice cease to exist.

Inside view (CDT):

Coherent singular "soul" (using CDT) <- -> Brain <- ->  Human body <- -> Environment

Notably, Yudkowsky would call this singular soul something that only exists in the inside view itself, and not the outside view.

Now we replace this with ...

Inside view (LDT):

Coherent singular "soul" (using CDT) <- -> All instantiations of this cognitive algorithm across space and time <- -> Brain(s) <- ->  Human body(s) <- -> Environment

The new decision theories don't seem to eliminate the illusion of the soul - they just now assert that the soul not only interacts with this particular instantiation of the algorithm the brain is running, but with all instantiations of this algorithm across space (and time?). Why is this more reasonable than assuming the soul only interacts with this particular instantiation? Note that the soul is a fake property here, one that only exists in the inside view. And, to our best understanding of physical laws, the universe is made of atoms* in its causal chain, not algorithms. Two soulless algorithms don't causally interact by virtue of the fact that they're similar algorithms; they interact by virtue of the atoms they causally impact, and the causal impacts of those atoms on each other. Why do algorithms with the fake property of a soul suddenly get causally bound through space and time?

*well, technically it's QM waves or strings or whatever, but that doesn't matter to this discussion

atoms they causally impact

This doesn't help. In a counterfactual, atoms are not where they are in actuality. Worse, they are not even where the physical laws say they must be in the counterfactual, the intervention makes the future contradict the past before the intervention.

Do I assume "counterfactual" is just the English word as used here?

If so, it should only exist in the inside view, right? (If I understand you.)

The sentence I wrote on soulless algorithms is about the outside view. Say two robots are playing football. The outside view is: one kicks the football, the other sees the football (light emitted by the football), then kicks it. So the only causal interaction between the two robots is via atoms. This is independent of what decision theory either robot is using (if any), and it is independent of whether the robots are capable of creating an internal mental model of themselves or the other robot. So it applies both to robots with dumb microcontrollers like those in a refrigerator and to smart robots that could even be AGI or have some ideal decision theory. At least assuming the universe follows the deterministic physical laws we know about.


The point is that the weirdness with counterfactuals breaking physical laws is the same for controlling the world through one agent (as in orthodox CDT) and for doing the same through multiple copies of an agent in concert (as in FDT). Similarly, in actuality neither one-agent intervention nor coordinated many-agent intervention breaks physical laws. So this doesn't seem relevant for comparing the two, that's what I meant by "doesn't help".

By "outside view" you seem to be referring to actuality. I don't know what you mean by "inside view". Counterfactuals are not actuality as normally presented, though to the extent they can be constructed out of data that also defines actuality, they can aspire to be found in some nonstandard semantics of actuality.

counterfactuals breaking physical laws

Do you mean the counterfactual may require more time to compute than the situation playing out in real time? If so, yep, makes a ton of sense - they should probably focus on algorithms or decision theories that can (at least in theory) be implemented in real life on physical hardware. But please confirm.

Could you please define "actuality" just so I know we're on the same page? I'm happy to read any material if it'll help.

Inside view and outside view I'm just borrowing from Yudkowsky's How an Algorithm Feels from the Inside. It basically assumes a deterministic universe following elegant physical laws, and tries to dissolve questions of free will / choice / consciousness. So the outside view is just a state of the universe or a state of the Turing machine. This object doesn't get to "choose" what computation it is going to do or what decision theory it is going to execute; that is already determined by its current state. So the future states of the object are calculable*.

*by an oracle that can observe the universe without interacting, with sufficient but finite time.

Only in the inside view does a question like "Which decision theory should I pick?" even make sense. In the inside view, free will and choice are difficult to reason about (as humans have observed over centuries) - if you really wanna reason about those you can go to the outside view where they cease to exist.

Open-ended versus close-ended questions

I've found people generally find it harder to answer open-ended questions. Not just in terms of giving a good answer but giving any answer at all. It's almost as if they lack the cognitive module needed for such search.

Has anyone else noticed this? Is there any research on it? Any post on LessWrong or elsewhere?

Post-note: Now that I've finished writing, a lot of this post feels kinda "stupid" - or more accurately, not written using a reasoning process I personally find appealing. Nevertheless I'm going to post it just in case someone finds it valuable.

-----
 

I don't see a lot of shortform posts here so I'm unsure of the format. But I'm in general thinking a lot about how you cannot entirely use reasoning to reason about the usefulness of various reasoning processes relative to each other. In other words, rationality is not closed.

 

Consider a theory in first-order logic. This has a specific set of axioms and a set of deductive rules. Consider a first-order theory with the axioms "Socrates is mortal" and "Socrates is immortal" - this theory is inconsistent, and a first-order theory which is inconsistent is obviously a bad reasoning process. A first-order theory which is consistent but whose axioms don't map to real-world assumptions is also a bad reasoning process, for different reasons. Lastly, someone can argue that first-order logic is a bad reasoning process no matter what axioms it is instantiated with, because axioms + rigid conclusions is a bad way of reasoning about the world, and that humans are not wired to do pure FOL and are instead capable of reaching meaningful conclusions without resorting to FOL.

 

All these three ways of calling a particular FOL theory bad are different, but none of them are expressible in FOL itself. You can't prove why inconsistent FOL theories are "bad" inside of the very same FOL theory (although of course you can prove it inside of a different theory, perhaps one that has "inconsistent FOL theories are bad" as an axiom). You can't prove that the axioms don't map to real-world conditions (let alone prove why axioms not mapping to real-world conditions makes the axioms "bad"). You can't prove that the deductive rules don't map to the real-world reasoning capacities of the human mind. If an agent was rigidly coded with this FOL theory, you'd never get anywhere with them on these topics; there'd be a communication failure between the two agents - you and them.

 

All these three arguments, however, can be framed as appeals to reality and what is observable. A statement and its negation being simultaneously provable is bad because such phenomena are not typically observed in practice, and because no statements provable from such a system are useful for achieving objectives in the real world. Axioms not mapping to the physical world is obviously an appeal to the observable. FOL being a bad framework for human reasoning is also an appeal to observation - in this case an observation you've made after observing yourself, and one you're hoping the other person has also made.

 

It seems intuitive that someone who uses "what maps to the observable is correct" will not admit any other axioms, if they wish to be consistent - because most such axioms will conflict with this appeal to the observable. But in a world with multiple agents, we can't take this as an axiom, lest we get stuck in our own communication bubble. We need to be able to reason about the superiority of "what maps to the observable is correct" as a reasoning process, using some other reasoning process. And in fact, I have seemingly been using this post so far to do exactly that - use some reasoning process to argue in favour of "what maps to the observable" reasoning processes over FOL theories instantiated with simple axioms such as "Socrates is mortal".

 

And if you notice further, my argument for why “what maps to observable is good” in this post doesn’t seem very logical. I still seemingly am appealing to “what maps to observable is good” in order to prove “what maps to observable is good” – which is obviously a no-go when using FOL. But to your human mind, the first half of this post still sounded like it was saying something useful, despite not being written in FOL, nor having a clear separation between axioms and deduced statements. You could at this point appeal to Wittgensteinian “word clouds” or “language games” and say that some sequences of words referring to each other are perceived to be more meaningful than other sequences of words, and that I have hit upon one of the more meaningful sequences of words.

 

But how will you justify Wittgensteinian discourse as a meta-reasoning process to understand reasoning processes? More specifically, what reasoning process is Wittgensteinian discourse using to prove that "Wittgensteinian reasoning processes are good"? I could at this point self-close it and say that Wittgensteinian reasoning processes are being used to reason and reach the conclusion that Wittgensteinian reasoning processes are good. But do you see the problem here?

Firstly, this kind of self-closure can be easily done by most systems. An FOL theory can assert that axiom A is being used to prove axiom A, because A=A. A system based on "what is observable is correct" can appeal to observation to argue the superiority of "what is observable is correct".

 

And secondly, this self-closure only tends to look meaningful inside of the system itself. An FOL prover will say that the empiricist and the Wittgensteinian have not done anything meaningful when trying to analyse themselves (the empiricist and the Wittgensteinian respectively); they have just applied A=A. The empiricist will say that the FOL prover and the Wittgensteinian have not done anything meaningful to analyse themselves (the FOL prover and the Wittgensteinian); they have just observed their own thoughts and realised what they think is true. And similarly the Wittgensteinian will assert that everyone else is using Wittgensteinian reasoning to (wrongly) argue the superiority of their non-Wittgensteinian process.

 

So if someone else uses their reasoning process to prove their own reasoning process as superior, you’ll not only easily disagree with the conclusion – you might also disagree about whether they actually even used their own reasoning process to do it.

You can't prove why inconsistent FOL theories are "bad" inside of the very same FOL theory

If the theory is inconsistent, you can prove anything in it, can't you? So you should also be able to prove that inconsistent theories are "bad".
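
(For reference, the standard explosion derivation - from P and ¬P you can derive any Q:)

    % Principle of explosion: from $P$ and $\neg P$, any $Q$ follows.
    \begin{align*}
    1.\;& P        && \text{(axiom, e.g. ``Socrates is mortal'')}\\
    2.\;& P \lor Q && \text{(disjunction introduction, from 1)}\\
    3.\;& \neg P   && \text{(axiom, e.g. ``Socrates is immortal'')}\\
    4.\;& Q        && \text{(disjunctive syllogism, from 2 and 3)}
    \end{align*}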

If you define bad = inconsistent as an axiom, then yes, trivial proof. If you don't define bad, you can't prove anything about it. You can't capture the intuitive notion of bad using FOL.
