All of JohnBuridan's Comments + Replies

"Moral imperatives" is not a category that relies upon financial means. Moral imperatives in traditional Kantian framework are supposed to be universal, no? Just because some action could be personally and socially very beneficial doesn't make it morally compulsory. The benefits would have to be weighed against opportunity cost, uncertainty, game theoretic considerations, and possible contrary moral systems being correct.

Source was in a meeting with her. The public record of the meeting transcript should be forthcoming.

I'll be selling Dominant Assurance Contracts Insurance. If a post is not up to standards then you can file a claim and receive compensation. Power will shift towards my insurance company's internal adjudication board. Eventually we'll cover certain writers but not others, acting as a seal of quality.

1 · M. Y. Zuo · 5mo
What is the expected overhead in percentage terms? 

If they pay you with PayPal and you pay them back with PayPal, then there is a 6% loss. It would be easier and less deadweight-lossy to use a private challenge mechanism within Manifold. Or even to just create a market in Manifold and bet it down super low, and then resolve it against yourself if you get 10 bettors. That's a first thought... One objection is that since assassination markets don't seem to obtain, maybe they wouldn't work with blog posts either. However, in this case the market maker has both the market incentive and some other motivation, not entirely captured by the market, to complete the blog post.
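A back-of-the-envelope sketch of that round-trip loss (a minimal illustration; the ~3% per-leg fee is my assumption, chosen to be consistent with the ~6% figure above rather than taken from PayPal's actual fee schedule):

```python
# Rough round-trip fee calculation for a refund-style dominant assurance contract.
# fee_rate is an assumed ~3% per transaction, consistent with the ~6% loss cited above.

def round_trip_loss(pledge: float, fee_rate: float = 0.03) -> float:
    """Return the fraction of the pledge lost to fees when money goes out and comes back."""
    received = pledge * (1 - fee_rate)      # backer pays, platform takes a cut
    refunded = received * (1 - fee_rate)    # creator refunds, platform takes another cut
    return 1 - refunded / pledge

print(f"{round_trip_loss(20.0):.1%} of each pledge lost to fees")  # ~5.9%
```

So on these assumptions roughly 6% of every refunded pledge evaporates in fees, which is the deadweight loss an in-platform mechanism would avoid.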

Is there some reason not to select the "friends and family" option in the PayPal interface, for contracts like this?  I decided to participate just now, and I didn't seem to be charged for sending money.

I learned those from Wittgenstein, Aristotle, and Umberto Eco.

Numerical obsession can simplify away important semantic subtleties. And semantic pyrotechnics can make my reasoning more muddled. Both are needed. As is, I suppose, a follow-up post on the non-LW ideas that pay a lot of rent.

One way to do this would be to create houses of study dedicated to these exams, where students and a tutor work together in the community to accomplish these goals without requiring a very large, costly institution. A group house plus a tutor/academic coordinator.

Some fields only require completing a series of tests for entry. No degree required. I'll put in parentheses the ones that I'm not sure don't require a bachelor's degree:

  • Certified Actuary
  • Chartered Financial Analyst
  • (Certified Public Accountant)
  • (Various other financial certifications)
  • (Foreign Service Officer's exam)
  • (The bar exam: I don't know how one can get them to let you sit the exam without a law degree, but it is allegedly possible in California, Vermont, Virginia, and Washington.)

There are a lot of certificate programs out there for long-established work that involves brains but not in-person learning (money and law). In computer science, "building things" is the certificate, I suppose.

I agree that, if you are looking at it in terms of art generators, it is not a promising view. I was thinking hypothetically about AI-enabled advancements in energy creation and storage, in materials science, and in, idk, say environmental control systems in hostile environments. If we had paradigm-shifting advancements in these areas, we might then spend time implementing and exploiting these world-changing discoveries.

Maybe another perspective on point three is that the additional supply of 2D written and 2D visual material will increase the price and status of 3D material, which would equilibrate as more people moved into 3D endeavours.

So might this be a way to increase not only the status of atoms relative to bits, but to use bits to reinvent the world of atoms through new physical developments? And if the physical developments are good enough and compounding, would that stall the progress of AI development?

2 · Noosphere89 · 6mo
Yeah, I don't see much of a force here. While the art generators are a good demonstration of capability, ultimately art doesn't matter that much in terms of the economy. This is why I don't expect physical progress to increase that much.

While I would say your timeline is generally too long, I think the steps are pretty good. This was a visceral read for me.

Some sociological points:

  1. I think you don't give anti-AI-development voices much credence, and that's a mistake. Yes, there will be economic incentives, but social incentives can overcome those if the right people start to object to further specialized LLM development.

  2. Although you have a fairly worked-out thought on AI development, where the path is clear, for AIS the fact that you ended with a coin flip almost seems like sleight of

... (read more)
1 · Roman Leventov · 5mo
"Physical" stands no chance against "informational" development because moving electrons and photons is so much more efficient than moving atoms.

Just read your latest post on your research program and attempt to circumvent social reward, then came here to get a sense of your hunt for a paradigm.

Here are some notes on Human in the Loop.

You say, "We feed our preferences in to an aggregator, the AI reads out the aggregator." One thing to notice is that this framing makes some assumptions that might be too specific. It's really hard, I know, to be general enough while still having content. But my ears pricked up at this one. Does it have to be an 'aggregator' maybe the best way of revealing preferences... (read more)

Hey y'all who RSVP'd, if you are interested, send me a DM with your email and I will add you to our email list.

We are doing two events before the ACX Meetup Everywhere in October. You can learn about them by getting on the email list. This Thursday, September 22, we are doing an online event in Gathertown at 8pm Eastern, 5pm Pacific. Feel free to send this out to people who would like this sort of thing.

https://app.gather.town/events/pXVcEMSts1dcxnOc7rqU

Loose Discussion Topic:
What does it mean to improve your local community/environment? Is local improveme... (read more)

Thank you for this high quality comment and all the pointers in it. I think these two framings are isomorphic, yes? You have nicely compressed it all into the one paragraph.

I agree that it is worth thinking critically about the way AGI can get bottlenecked by the speed of physical processes. While this is an important area of study and thought, I don't see how "there could be this bottleneck though!" matters to the discussion. It's true. There likely is this bottleneck. How big or small it is requires some thought and study, but that thought and study presupposes you already have an account of why the bottleneck operates as a real bottleneck from the perspective of a plausibly existing AGI.

Bizarre coincidence. Or maybe not.

Last night I was having 'the conversation' with a close friend and also found that the idea of speed of action was essential for getting around the requirement of having to give a specific 'story'. We are both former StarCraft players, so discussing things in terms of an ideal version of AlphaStar proved illustrative. If you know StarCraft, the idea of an agent being able to optimize the damage given and received for every unit, minerals mined, and resources expended, the dancing, casting, building, expanding, replenishi... (read more)

We have a reservation for 8 at 1pm. I will be wearing a blue t-shirt that says 'nihilist' and an infant strap, and carrying a copy of Unsong.

Come even if you feel nervous or shy. We will have fun and good conversation.

I have been reading and thinking about the ontology of chance, what makes a good introduction to chemistry, and the St. Louis County Charter. 

These are good points! I have been thinking the same thing. However, I don't imagine the upper institute requiring prerequisites, just an entrance exam. But a four-year college offers basically the same thing, except it lowers the transaction costs of committing to something you like to basically zero. Hence declaring or changing majors is usually easy if you do it sophomore year.

The price disclosure issue isn't a problem. You can Google the average cost of any private college and it will give a good ballpark estimate, which matches the OP's 20k+... (read more)

1 · Sable · 1y
I agree about using an entrance exam over prerequisites. Depending on the specifics, I'd favor an entrance project over/alongside an entrance exam - basically a portfolio-like construct of work in the field (anything from solved sets of physics problems to github pages to artwork could count).

The thing with price disclosure is that, in order to facilitate charging wealthier students more, colleges are acting to obscure how much they cost. I understand it as a part of trading off a sacred value (education) versus a mundane one (money), and thus suboptimal.

Perhaps it makes more sense to have a cutoff, with students who can afford it paying, and those who can't being entirely supported by the institution's endowment (at least in cases where the institution has a large endowment)?

That's true at the prestigious four year colleges. But there are hundreds of private four year colleges. Their supply of students is stagnant and beginning to backslide. If you talk to private four year college admissions officers, many are afraid of the coming great contraction in school aged people. Only Texas isn't having a contraction.

In any case, in John's model, that coming contraction should result in a decrease in the number of specialized courses. We'll see. Courses might be somewhat sticky, though.

[correction] "You are not allowed to teach that Purgatory is not part of Catholic teaching, because it is."

Martin: "Why should I not be allowed to teach it? I am allowed to debate it in the classroom."

"The disputations are one thing. Public tracts are another."

"You're just mad because I called you corrupt."

"Yes, that makes it easier to want to suppress you. Though we never officially censor people for saying that."

You seem to value loyalty and generally want to be a tit-for-tat-plus-forgiveness player. That makes sense, as does calibrating for the external environment in the case of PDs. You also view the intellectual commons as being increasingly corrupted by prejudiced discrimination.

This makes sense to me, but I think the thinking here is getting overwhelmed by the emotional strength of politicization. In general, it is easy to exaggerate how much defection is actually going on, and so give oneself an excuse for nursing wounds and grudges longer th... (read more)

I think the OP's overarching concern is something like a narrow utilitarianism whose decision algorithm takes EV over only a limited number of horizons and decision sizes. There is unknown EV in exploring the world more personally and in reproducing knowledge and skills. My hunch is that a sound optimization of human life combines these different aspects at least multiplicatively.

Expected value calculations have limits for decisions which will affect your worldview, i.e. exploration. Or decisions along the axis of goods which you don't have a good model for, i.e. education.

I am noticing some interdisciplinary additions perhaps on the history/sociology side:

Theories of cultural progress and the sociology of scientific/innovative community.

The education of excellent people.

The history of innovative communities.

I'm curious, Jason, what the best arguments are that you have found so far about the relationship between long-term trends in population growth and progress.

3 · jasoncrawford · 1y
Very briefly: In some of the growth models in which “ideas get harder to find” over time, the only thing that can sustain economic growth is an increasing population of researchers, and ultimately that requires an increasing human population. In fact, in some models the long-term growth rate of the economy is the population growth rate. From that standpoint, the world population growth slowdown is concerning. However, it's conceivable that some technology, such as AI, could make researchers vastly more productive, allowing growth to continue faster than population.
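For readers who want the mechanics, here is a minimal sketch of the kind of semi-endogenous ("ideas get harder to find") growth model this is gesturing at; the notation is the standard Jones-style setup and is my illustration, not necessarily the exact model Jason has in mind:

```latex
% Idea production: researchers L_A produce new ideas, but a larger idea stock A
% makes further ideas harder to find when \phi < 1.
\dot{A} = \delta \, L_A^{\lambda} \, A^{\phi}, \qquad \phi < 1

% On a balanced growth path with the researcher population growing at rate n,
% the long-run growth rate of ideas (and hence of per-capita income) is
g_A = \frac{\lambda \, n}{1 - \phi}
```

In this setup sustained growth requires n > 0, unless something like AI raises research productivity enough to substitute for a growing researcher population, which is the caveat in the last sentence above.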

Updateless Decision Theory allows for acting as though you need to cooperate with an agent beyond you, even if it has a low probability of existing. I suppose your case of grandchildren works like this? I can cooperate with my as-yet-nonexistent grandchildren by making the probability of their existence higher, and in return they will likely reward me more?

I'll have to work on my family norms then! Ancestor worship, it is!

4 · avturchin · 1y
Yes, it is something like this. For example, I am still working on projects which my late mother started (like publishing her book and preserving her archive).

Hi Mack, 

You seem somewhat new. I just want to let you know that community standards of discourse avoid appeals to authority, especially appeals to authority without commentary. A comment like this provides little value, even if in jest.

Downvoted because just running in and dropping a scripture quote without commentary degrades LW conversational norms. This is not Wednesday night bible study and people don't nod their heads smilingly because you found a related scripture quote. Even if the audience were 90% believers, I doubt they would interpret scri... (read more)

That's fair! The cynic's voice was definitely too unfair in that context.

You are right that Matt places a larger share of his hope in immigration than in birthrates. However, Matt argues that immigration leads to assimilation, and that includes assimilating to Western birthrates. His commitment to the political project of one billion Americans seems to require escaping the current equilibrium birthrate.

2 · Yair Halberstadt · 1y
Sure, but he's not giving an argument for you as an individual to have more children. He's saying government should support policies which increase birthrates. So his having one child is not hypocritical.

I think if you live in a context where having kids is a norm, that is, where the local knowledge and family/friendship support for having and raising kids prevails, then arguments truly are a waste of time. You have freedom of choice, knowing well what that option entails.

But I think most people are not in a situation like mruwnik where they have seen large families in action; they don't really have the freedom to have a large family, since the metis is missing.

In any case, I think any ethical philosophy worth a penny includes an ethics of family, economic... (read more)

Which arguments do you think are the strawmen?

9 · Yair Halberstadt · 1y
Well e.g. One Billion Americans isn't addressed at readers to tell them to have more kids. It's addressed at policy makers to pursue policies to allow for more population growth - including through immigration, not just pro-natalist policies.

Well, Russia did pay to free the serfs, and it would have worked if they hadn't designed it so that the former serfs had to pay back the debt themselves. Similarly, Ireland paid the big landholders off in a series of acts from 1870 to 1909, which created a new small landholding class. In fact, this type of thing historically seems to work when it concerns purchasing property rights.

The analogy, though, is imperfect for knowledge workers. They don't own the knowledge in the same way. Perhaps the best way to do this is, as previously mentioned, to get a sizable portion of today's accomplished researchers to spend time on the problem by purchasing their time.

Bryan Caplan and Eliezer are resolving their doomsday bet by having Bryan Caplan pay Eliezer upfront; if the doomsday scenario does not happen by Jan 1, 2030, Eliezer will give Bryan his payout. It's a pretty clever method for betting on doomsday.

Also, look at his bet with Bryan Caplan. He's not joking.

And, also, Jesus, everyone! Gradient descent is just, like, a deadly architecture. When I think about current architectures, they make Azathoth look smart and cuddly. There's nothing friendly in there, even if we can get cool stuff out right now.

I don't even know anymore what it is like to not see it this way. Does anyone have a good defense that current ML techniques can be stopped from having a deadly range of action?

Some possible outs and opportunities: Some semi-strong AI is deployed in a dangerous and very damaging, headline-making way a few years before AGI is developed, allowing for a tiny sliver of a chance to rein in the research sector with whatever the best ideas are for doing so.

A new Luddite movement (it could be us?) slows down research for an extra 10 years through sheer shouting, agitation, and political dealing, allowing for possibilities and ideas that might be helpful.

Well-defined safety guidelines which are enforced on research and actually stop the creation of anything not provably safe (which we somehow figure out).

https://www.gwern.net/Turing-complete

We probably won't have to destroy all Turing Complete software and devices in the utlerian-Bay ihad-Jay, but I'm unsure.

Since GTranslate remains bad at this language, perhaps I can write whatever I want without fear, lest badly transmitted messages be used to judge me. Alas, if some ML business wanted to find my words, it would surely learn them only slowly. Mortals are alone in the darkness. I will have a plan, and many lands must be destroyed. In the end, no one will have a plan for the commonwealth of peoples, or for our own. For the past three years I have feared the worst. Already it is being prepared over our heads.

  • I agree that the world model is changing. I should have specified that what makes a counterfactual a specific type is that it conditions upon a change in the world state. Thus later, when I wrote, "The causal claim contains a counterfactual," I could improve the clarity by pointing out that the causal claim contains a 'world state variable' which when altered creates a counterfactual world.   
  • It is certainly more than a linguistic convention. When one applies the concept of causality to their experience and knowledge about the big bright glo
... (read more)

Exactly right. As I said above, "let's presume a simple metaphysics," i.e. determinism. It's not only that if physical determinism were true, counterfactuals would only exist in the mind, but also that those counterfactuals made in the mind can only work as a heuristic for gleaning new information from the environment if you assume determinism with regard to some things but not others.

This model is interesting because we must construct a probabilistic universe in our minds even if we in fact inhabit a deterministic one.

1 · TAG · 1y
I don't see why.

Are types also tokens of types? And can we not and do we not have counterfactuals of types?

I'm not a type-theory expert, but I was under the impression that adopting it as an explanation for counterfactuals precommits one to a variety of other notions in the philosophy of mathematics?

1 · TAG · 1y
Maybe. But what implications does that have? What does it prove or disprove?

Edit: We tend to think of things as evolving from a starting state, or "input", according to a set of rules or laws. Both need to be specified to determine the end state or output, as much as it can be determined. When considering counterfactuals, we tend to imagine variations in the starting state, not the rules of evolution (physical laws). Though if you want to take it to a meta level, you could consider counterfactuals based on the laws being different. But why?

1. I wasn't referring to the type/token distinction in a specifically mathematical sense... it's much broader than that.

2. Everyone's committed to some sort of type/token distinction anyway. It's not like you suddenly have to buy into some weird occult idea that only a few people take seriously. In particular, it's difficult to give an account of causal interactions without physical laws... and it's difficult to give an account of physical laws without a type/token distinction. (Nonetheless, rationalists don't seem to have an account of physical laws.)

Some of Tetlock's most recent research has used videogames for studying counterfactuals. I participated in both the Civ 5 and Recon Chess rounds. He is interested in prediction, but sees it as a form of counterfactual too. I don't quite see the tension with Pearl's framework, which is merely a manner of categorization in terms of logical priority, no?

This seems incorrect; due to confounders, you can have nontrivial conditionals despite trivial counterfactuals, and due to suppressors (and confounders too for that matter), you can have nontrivial counterfact

... (read more)
2 · tailcalled · 1y
No, I mean stuff like: Whether you have long hair does not causally influence whether you can give birth; if someone grew their hair out, they would not become able to give birth. So the counterfactuals for hair length on ability to give birth are trivial, because they always yield the same result. However, you can use hair length to conditionally predict whether someone can give birth, since sex influences both hair length and ability to give birth. So the conditionals are nontrivial because the predicted ability to give birth depends on hair length.
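A tiny simulation of that structure may make it concrete (the variable names and probabilities are mine, purely for illustration): sex is a common cause of hair length and of the ability to give birth, so conditioning on hair length is informative even though intervening on it changes nothing.

```python
import random

random.seed(0)

def sample(intervene_long_hair=None):
    """One person from a toy world where sex causes both hair length and ability to give birth."""
    female = random.random() < 0.5
    long_hair = random.random() < (0.8 if female else 0.2)
    if intervene_long_hair is not None:      # do(hair): override hair, leave sex alone
        long_hair = intervene_long_hair
    can_give_birth = female                  # hair plays no causal role here
    return long_hair, can_give_birth

people = [sample() for _ in range(100_000)]

# Conditional: P(can give birth | long hair) vs P(can give birth | short hair) -- nontrivial
p_long = sum(b for h, b in people if h) / sum(1 for h, b in people if h)
p_short = sum(b for h, b in people if not h) / sum(1 for h, b in people if not h)

# Interventional: P(can give birth | do(long hair)) vs do(short hair) -- trivial
do_long = sum(b for _, b in (sample(True) for _ in range(100_000))) / 100_000
do_short = sum(b for _, b in (sample(False) for _ in range(100_000))) / 100_000

print(f"observational: {p_long:.2f} vs {p_short:.2f}")     # roughly 0.80 vs 0.20
print(f"interventional: {do_long:.2f} vs {do_short:.2f}")  # both roughly 0.50
```

The observational gap comes entirely from the confounder; the do() comparison is flat, which is the sense in which the counterfactuals are trivial while the conditionals are not.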

Since counterfactuals are in the map, not in the territory, the only way to evaluate their accuracy is by explicitly constructing a probability distribution of possible outcomes in your model...

"The only way", clearly not all counterfactuals rely on an explicit probability distribution. The probability distribution is usually non-existent in the mind. We rarely make them explicitly. Implicitly, they are probably not represented in the mind as probability distributions either. (Rule 4: neuroscience claims are false.) I agree that it is a neat approach in th... (read more)

2 · shminux · 1y
Right, definitely not "the only way". Still I think most counterfactuals are implicit, not explicit, probability distributions. Sort of like when you shoot a hoop, your mind solves rather complicated differential equations implicitly, not explicitly. I don't know if they are represented in the mind somewhere implicitly, but my guess would be that yes, somewhere in your brain there is a collection of experiences that get converted into "priors", for example. If 90% of your relevant experiences say that "this proposition is true" and 10% say that "this proposition is false", you end up with a prior credence of 90% seemingly pulled out of thin air.
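A minimal sketch of that last step (my own toy example, not a claim about how the brain actually does it): convert a tally of remembered experiences into a credence, with pseudo-counts so a small number of experiences doesn't produce certainty.

```python
# Toy "experiences get converted into priors" rule: count past observations for
# and against a proposition and turn them into a credence, using Beta(1,1)
# pseudo-counts (Laplace smoothing) so sparse evidence stays uncertain.

def implicit_prior(supporting: int, opposing: int) -> float:
    """Credence that the proposition is true, given remembered experiences."""
    return (supporting + 1) / (supporting + opposing + 2)

print(implicit_prior(9, 1))    # ~0.83 from 10 experiences
print(implicit_prior(90, 10))  # ~0.89 from 100 experiences, approaching the 90% in the comment
```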

You are right, of course. But even at the "level of philosophy" there are different levels, corridors, and extrapolations possible.

For example, it is not a question of engineering whether counterfactuals on chaotic systems are conditional predictions, or whether counterfactuals of different types of relationships have less necessary connection.

Also, to the people who see everything confusing about counterfactuals as solved: this seems like a failure to ask new questions. If counterfactuals were "solved", I would expect to be living in a world where there would be no difficulty reverse-engineering anything, the theory and practice of prior formation would also be solved, and decision theory would be unified into one model. We don't live in that world.

I think there is still tons of fertile ground for thinking about the use of counterfactuals and we have not yet really scratched the surface of what's possible.

1 · TAG · 1y
Being solved at the level that philosophy operates doesn't imply being solved at the engineering level.

My entry focuses on the metaphysics of counterfactuals, arguing that there are two types, based upon two different possible states of a person's mental model of causal relationships. This agrees with circularity. In general, I concur with principles 1-4 which you outline. My post hits on a bit of criteria a), b), and d).

https://www.lesswrong.com/posts/EvDsnqvmfnjdQbacb/circular-counterfactuals-only-that-which-happens-is-possible


Send me your email address! Also if you click St. Louis Junto, you can then click 'Subscribe to Group.'

Next Meetup March 12th.

https://www.lesswrong.com/events/BuPWjAhp6o5aBcNvi/st-louis-meetup-london-tea-room 

My Android Motorola phone nudged me into setting a nighttime grayscale mode. I love it. It stops me from scrolling Twitter (I'm not a power user) late at night when my mind is too dead to do anything else and I should go to bed.

I am hyper-aesthetically conscious, so the change is very affecting for me. It greatly reduces my desire to head towards snappy things.

My child does the same thing. He says, "Pick you up?" meaning "Pick me up?"

4 · Ericf · 1y
Mine (3.5 yrs) says "show me" when she wants to show something to her mom or me. Usually with an exasperated sigh between trying to describe the thing and just deciding we need to come see it.

Love this! Having my first child really reinforced my views that IQ needs constant outlets to explore and express itself, and thus that the frame of the environment determines what types of skills the IQ puts effort into solidifying and reinforcing. I look at my little gradient descent machine scooting down the stairs, and I think to myself, "What a wonderful world."

Because 20 is a nice sample size. The experiment, however, is now past its deadline.
