Comments

Terms I don't know: inferential gaps, general intelligence factor g, object-level thing, opinion-structure. There are other terms I can figure out, but I have to stop for a moment: medical-grade mental differences, baseline assumptions. I think that's most of it.

At the risk of going too far, I'll paraphrase one section in the hope that it'll say the same thing more accessibly. (Since my day job is teaching college freshmen, I think about clarity a lot!)

--

"Can't I just assume my interlocutor is intelligent?"

No.

People have different basic assumptions. People have different intuitions that generated those assumptions. This community in particular attracts people with very unbalanced skills (great at reasoning, not always great at communicating). Some have autism, or ADHD, or OCD, or depression, or chronic fatigue, or ASPD, or low working memory, or emotional reactions to thinking about certain things, or multiple issues at once. 

Everyone's read different things in the past, and interpreted them in different ways. Good luck finding 2 people who have the same opinion regarding what they've read.

Doesn't this advice contradict the above point to "read charitably," to try to assume the writer means well? No. Explain things like you would to a child: assume they're not trying to hurt you, but don't assume they know what you're talking about.

In a field as new as this, in which nobody really gets it yet, we're like a group of elite, hypercompetent, clever, deranged... children. You are not an adult talking to adults, you are a child who needs to write very clearly to talk to other children. That's what "pre-paradigmatic" really means.

-- 

What I tried to do here was to replace words and phrases that required more thought ("writings" -> "what they've read") and to explain those that took a little thought ("read charitably"). IDK if others would consider this clearer, but at least that's the direction I hope to go in. Apologies if I took this too far.

Astynax · 1y

Actually, I wonder if we might try something more formal. How about a norm that if a poster sees a comment asking "what's that term?", the poster edits the post to define the term where it's used?

Apparently clarity is hard: although I agree that it's essential to communicate clearly, it took me real effort to digest this post and identify its thrust. I thought I had it eventually, but looking at the comments, it seems I wasn't the only one who wasn't sure.

I am not saying this to be snarky. I find this to be one of the clearer posts on LessWrong; I am usually lost in jargon I don't know. (Inferential gaps? General intelligence factor g?) But despite its relative clarity, it's still a slog.

I still admire the effort, and hope everyone will listen.

Astynax · 1y

Conservatives are already suspicious of AI, based on ChatGPT's political bias. AI skeptics should target the left (which has less political reason to be suspicious) and not the right (because if they succeed with the right, the left will reject AI skepticism as a right-wing delusion).

(IDK what most people think about just about anything, so I'll content myself with claims that many aren't ready to accept.)

Secularism is unstable: partly because it takes its values from the religion it abandoned, so those values no longer have a foundation, but also, empirically, because it stops people from reproducing at the replacement rate.

Overpopulation is at worst a temporary problem now; the tide has turned.

Identifying someone with lots of letters after his name and accepting his opinions is not following the science; it's the opposite. Science takes no one's word; it uses data.

If A says B thinks something, and B says, "No, I think that's crazy," then B is right. That is, mind reading isn't a thing.

What matters about the 2020 US election isn't Trump. It's whether people now know how to get away with fraud in future elections, and whether we've taken steps to prevent it. Uh-oh.

Rage at people on the other team who want to join yours is a baaaad idea.

Answer by Astynax · Jan 17, 2023

I speak as someone who teaches college freshmen.

On the one hand, I see AI writers as a disaster for classes involving writing. I tried ChatGPT last night and gave it an assignment like one I might assign in a general studies class; it involved two dead philosophers. I would definitely have given the paper an A. It was partly wrong, but the writing was perfect, and the conclusion was correct and well argued.

This isn't like Grammarly, where you write and the computer suggests ways to write better. I didn't write my paper; I wrote a query. Learning to craft the query took me almost no time, and here it is: cut-and-paste the assignment into the prompt, and add the phrase "with relevant citations, and a reference section using MLA format." OK, now you can do it too!
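
For the curious, here is roughly what that recipe looks like if you call the model through code instead of the chat window. This is a minimal sketch assuming OpenAI's Python client; the model name and the assignment text are placeholders, not what I actually used.

```python
# Minimal sketch, assuming OpenAI's Python client (openai >= 1.0).
# The assignment text and model name below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assignment = "Compare two dead philosophers' views on free will."  # placeholder

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model would do
    messages=[{
        "role": "user",
        "content": assignment
        + " With relevant citations, and a reference section using MLA format.",
    }],
)
print(response.choices[0].message.content)
```

The whole "program" is the query itself; everything else is plumbing.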

The reason I think this matters is that both effective communication and rational thinking in the relevant field -- the things you practice if you write a paper for a class -- are things we want the next generation to know. 

On the other hand, a legal ban feels preposterous. (I know, I'm on LW; I'm trying to rein in my passion here.) A site that generates fake papers can exist anywhere on the globe, outside EU or US law. Its owners can reasonably argue that its real purpose isn't to write your paper for you but to support research (like ChatGPT!), generate content for professional websites (I've seen this advertised on Facebook), help write speeches, or serve as a souped-up web search, which is essentially what these models are anyway.

What has worked so far, as technology has changed, is finding technical solutions to technical problems. For a long time, colleagues have used TurnItIn.com to make sure students weren't buying or sharing papers. Now they can use tools that try to detect whether a paper was written by ChatGPT. I don't know whether large language models (LLMs) will eventually outpace the detectors, but that's where things stand today.

I'd guess it's an overabundance of working-class workers relative to the need. But recently I've been seeing claims that the elite are overabundant too: for example, there aren't enough elite slots for the next generation, so Harvard's acceptance rate has fallen from 30-odd percent to the low single digits; and would-be middle-class young people are having to stay with mom and dad to save on rent while working long hours. How can there be an oversupply of every class of worker at once? If automation makes us all far more efficient, shouldn't that make us rich and leisured rather than overworked and desperate?

Astynax · 1y

There is also a tremendous amount of make-work. 

Since 1998, my uni has added 2 new layers of management between professor and president (it was 2, now it's 4). Recently we noticed a scary budget shortfall. They decided to reorganize. After the reorganization... they kept those extra 2 layers.

My doc's office joined a big corporation. The reason was the ACA (Obamacare): staying independent would have meant hiring another clerical worker just to handle the extra paperwork.

This blog post is about something else, but buried in it are the clergy numbers for various US denominations. Whether a denomination is growing or shrinking, the number of administrators explodes. https://www.wmbriggs.com/post/5910/

Many of us have produced dissertations or technical papers that will likely never be used for anything. And there are way more people whose job it is to produce science. Yet scientific innovation continues to decline (https://www.nature.com/articles/d41586-022-04577-5). I think it's easy to miss how shocking this is. We have way more people working with way better equipment, with previously unavailable computer support, not just in the West but all over the world. We should have invented flying cars, the Terminator, and the flux capacitor by now. How much of a researcher's time is spent on innovation, and how much on grants, paperwork, and publicizing?

I don't know why we have so much work that didn't need to be done before. My guess is there was always pressure in this direction, but now we're rich enough that we can afford to pour money into things that produce no benefit. But it's just a guess.

Astynax · 1y

IDK where else to say this, so I'll say it here. I find many LW articles hard to follow because they use terms I don't know. I assume everyone else knows them, but I'm a newbie. Ergo I request a kindness: if your article uses a term that isn't in common English use (GPT-3, alignment, etc.), define it the first time you use it.

I miss the old forums. (LW is on the way there, but the format is a bit more social-media.) When I moved from reading novels and discussing things in threads to reading social media posts, my concentration was shot. Maybe it's coincidence, but when I dumped FB (I never did Twitter), my concentration improved slightly, as I recall. The point is that reading longer things seems to help me concentrate longer, and reading 5-second things does the opposite. FWIW.
