ashen · 1y · 20

As I understand it, there is a psychological argument (Mahowald et al.) and a philosophical argument (Shanahan) that machines can't "think" (and do related stuff).

I don't find Mahowald et al. always convincing because it suffers from strawmanning LLMs - many of the claims about the limitations of LLMs are based on old work that predates GPT-3.5/ChatGPT. Clearly the bulk of the paper was written before ChatGPT launched, and I suspect they didn't want to make substantial changes to it because that would undermine their arguments. And I find the OP good at taking down the range of arguments that they provide.

I find the Shanahan argument stronger, or at least my take on the philosophical argument. It goes something like this: we use words like "think" as part of folk psychological theories, which are subject to philosophical analysis and reflection, and on these analysed definitions of thinking it is a category error to describe these machines as thinking (or to apply other folk psychological constructs to them, such as belief, desire, etc.).

This seems correct to me, but like much of philosophy it comes down to how you define words. As Bill says in a comment here: "the concepts and terms we use for talking about human behavior are the best we have at the moment. I think we need new terms and concepts." From the philosophical point of view, a lot depends on how you view the link between folk psychological terms and the underlying substrate, and philosophers have mapped out the terrain of possible views fairly fully, e.g. eliminativist (which might result in new terms), reductionist, and non-reductionist. How you view the relationship between folk psychology and the brain will influence how you view folk psychological terms and the AI substrates of intelligent behaviour.

ashen · 1y · 32

One consequence of all this is a hit to consensus reality.

As you say, an author can modify a text to communicate based on a particular value function (i.e. "a customized message that would be most effective".)

But the recipient of a message can also modify that message according to their own (or rather their personalised LLM's) value function.

Generative content and generative interpretation.

Eventually this won't be just text but essentially all media - particularly through the use of AR goggles.

Interesting times?!

ashen · 1y · 10

Also of interest: "Creating a large language model of a philosopher"

http://faculty.ucr.edu/~eschwitz/SchwitzAbs/GPT3Dennett.htm

One interesting quote: "Therefore, we conclude that GPT-3 is not simply “plagiarizing” Dennett, and rather is generating conceptually novel (even if stylistically similar) content."

Answer by ashen · Jan 02, 2023 · 20

The first question is the hardest to answer because there are a lot of different ways that an LLM can help in writing a paper. Yes, there will be some people who don't use one, but over time they will become a minority.

The other questions are easier.

The straightforward answer is that right now, OpenAI has said that you should acknowledge its use in publications. If you acknowledge a source, then it is not plagiarism. A current practice at some journals is to include an author contribution list, where you list the different parts of an article and which author contributed to them, e.g. AB contributed to the design and writing, GM contributed to the writing and analysis, etc. One can imagine adding the LLM (and its version, etc.) to the contribution list to make its involvement clear. If this became common practice, then it would be seen as unethical not to state its involvement.

ashen · 1y · 10

One recent advance in science writing (stemming from psychology but now spreading) has been pre-registration and the registered report format.

Pre-registration is often done via a form - which is effectively a dialogue - where you have to answer a set of questions about your design. This forces a kind of thinking that otherwise might not happen before you run a study, which improves the clarity and openness of the thought processes that go into designing it.

One consequence is that it can highlight how often we are very unclear about how we might actually, properly test a theory. In the standard paper format one can get away with this more - for example through HARKing, or a review process where this is not found out.

This is relevant to philosophy, but in psychology/science the format for running and reporting an experiment is very standardised.

I was thinking of a test of a good methods and results section: it would be of sufficient clarity and detail that an LLM could take your data and your description and run your analysis. Of course, one should provide the analysis code anyway, but it is a good test even so.
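As a rough sketch of what that test could look like in practice (assuming the OpenAI Python client; the model name, file names and prompt wording are illustrative placeholders, not a recommendation):

```python
# Minimal sketch: ask an LLM to reproduce an analysis from a methods/results
# description plus the raw data. Assumes the OpenAI Python client is installed
# and OPENAI_API_KEY is set; the model name and file names are placeholders.
from openai import OpenAI

client = OpenAI()

methods_text = open("methods_and_results.txt").read()   # the paper's own description
data_preview = open("data.csv").read()[:4000]            # a small excerpt of the data

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a statistician. Write runnable Python (pandas/scipy) "
                    "that reproduces the analysis described, using only the "
                    "information given. List any ambiguities you had to resolve."},
        {"role": "user",
         "content": f"Methods and results:\n{methods_text}\n\nData excerpt:\n{data_preview}"},
    ],
)

print(response.choices[0].message.content)  # proposed analysis script, to inspect and run
```

Arguably the most useful output is the list of ambiguities the model reports back: each one marks a place where the written methods section underdetermines the analysis.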

So for the methods and results, an avatar does not seem particularly helpful, unless it is effectively a more advanced version of a form.

For the introduction and discussion, a different type of thinking occurs. The trend over time has been for shorter introduction and discussion sections, even though page limits have ceased to be a limiting factor. There are a few reasons for this. But I don't see this trend getting reversed.

Now, it's interesting that you say you can use an avatar to get feedback on your work and so on. You don't explicitly raise the fact that scientists are already using LLMs to help them write papers. So instead of framing it as an avatar helping clarify the author's thinking, what will inevitably happen in many cases is that LLMs will fill in thinking, and create novel thinking - in other words, a paper will have an LLM as a co-author. In terms of your argument, then, I think one could create a custom LLM with an avatar interface designed to help authors write papers - one which does the things you suggest: gives feedback, suggests ideas, and fixes problems. And the best avatar interfaces will be personalised to the author, e.g. discipline-specific and with some knowledge of the author (such as being trained on all their past text to predict them better).
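For what it's worth, here is a minimal sketch of that personalisation idea, again assuming the OpenAI Python client; putting past abstracts into the system prompt is just the crudest way to do it (fine-tuning or retrieval would be alternatives), and the discipline, model name and file names are placeholders:

```python
# Minimal sketch of a discipline- and author-personalised writing assistant.
# Personalisation here is done crudely by including the author's past abstracts
# in the system prompt; fine-tuning or retrieval would be alternatives.
# Assumes the OpenAI Python client; model, files and discipline are illustrative.
from openai import OpenAI
from pathlib import Path

client = OpenAI()

past_abstracts = "\n\n".join(
    p.read_text() for p in Path("my_abstracts").glob("*.txt")
)

def review_draft(draft: str) -> str:
    """Ask for feedback on a draft section in the author's usual style and field."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system",
             "content": "You are a writing assistant for a psychologist. Match their "
                        "usual style and terminology, as shown in these past "
                        f"abstracts:\n{past_abstracts}"},
            {"role": "user",
             "content": f"Give feedback, suggest ideas, and flag problems in this draft:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

print(review_draft(open("introduction_draft.txt").read()))
```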

And so yes, I think you are right that authors will use avatars to help write text, in a way similar to what you suggest, and that readers will then use avatars to help them read text. I suppose in the medium term I still see the journal article as a publication format that is going to be resistant to change, but LLMs/avatars will become the interfaces for producing and consuming it.

ashen · 1y · 20

Some interesting ideas - a few comments:

My sense is that you are writing this as someone without a lot of experience in writing and publishing scientific articles (correct me if I am wrong).

A question for me is whether you can predict what someone would say on a topic instead of having them write about it. I would argue that the act of linearly presenting ideas on paper - "writing" - is a form of extended creative cognition that is difficult to replicate. It wouldn't be replicated by an avatar just talking to a human to understand their views. People don't write simply to communicate what is already in their heads - writing creates the thinking.

My other comment is that most of the advantages can be gained by AI interpretation and re-imagining of a text, e.g. you can ask ChatGPT to take a paper and explain it in more detail by expanding points, or to make it simpler. So points 2 and 3 of your advantages can be achieved today, post-writing.
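For example, the reader-side prompts might be as simple as something like the following (my own illustrative wording, pasted into ChatGPT - or sent via an API - together with the paper's text):

```python
# Illustrative reader-side prompt templates (my wording, not from the OP);
# each is prepended to the paper's text before sending it to the model.
READER_PROMPTS = {
    "expand":    "Explain this paper in more detail, expanding each point and "
                 "spelling out the implicit reasoning:",
    "simplify":  "Rewrite this paper for a reader outside the field, keeping "
                 "the claims intact:",
    "summarise": "Summarise this paper in a few paragraphs, stating its main "
                 "claims and the evidence for them:",
}
```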

Point 4 of the advantages, "positive spin", is an incentive issue, so it is not really about effective communication.

Point 1 could also be achieved by the AI reading a text. Of course, the AI can only offer interpretations - which would be true with or without an AI interrogating an author (e.g. an AI could read all of that author's works to get a better sense of what they might say).

So in sum, I can see avatars/agents as a means of assisting humans in reading texts, and this is already possible in principle today. For example, I am already asking ChatGPT to explain parts of texts to me and to summarise papers - it will just get better. But in the near term I don't see the avatar being a publication format - rather, an interface to publications.

The interesting question for me, though, is what the optimal publication format might be for allowing LLMs to progress science - where LLMs are able to write papers themselves, e.g. review articles. Would it be much different from what we already have? Would we need to provide results in a more machine-readable way? (Probably.)

ashen · 3y · 20

Rao highlights Ryan's journey as the prototypical arc from loser/sociopath-in-waiting to sociopath ascendancy. In the academic world, neither Ryan-as-loser nor Ryan-as-sociopath exists. This is one of many ways the corporate America > academia mapping doesn't fit.

Partly this is because academic signals are hard for pure posers or pure sociopaths to fake.

Though, going with your flow, I think the analysis is right in that academics are essentially clueless. But within academics you can have the subdivisions clueless-loser, clueless-clueless, and clueless-sociopath.

I disagree on sociopath faculty - my experience is that senior academics are much more likely to be sociopaths than non-senior academics, because they have figured out the rules and manipulate them and break them to their advantage. And so they are more likely to have dark-triad personality traits.

The way I see it in academia, the clueless play the game (and play by the rules) because they enjoy it, sociopaths play the game in order to win (by any means necessary), and losers have given up on the game - and often drop out of academia altogether when it gets really bad.

The clueless "game" in academia is one of traditional academic values - advancing knowledge for humankind. And all academics to become academics in the first place must have bought into that game to a fair degree (as they start out clueless). But then the trajectories for some can diverge in more of the directions of loserdom and sociopathy depending on career trajectory, environment and pre-dispositions.

ashen · 3y · 50

In the American system it's hard to get tenure as a loser, so it selects against losers.

But once tenured, you can easily turn into a loser.

A lot depends on the institution. At high-prestige institutions it's hard to get by as a loser, so they select for more sociopaths and clueless. Top-ranking institutions are going to have more sociopaths.

But at low-ranking institutions you are going to find a different distribution - relatively more losers than clueless.

ashen · 4y · 10

Also saw this on Hacker News today: https://news.ycombinator.com/item?id=21660718

One comment: "Lighting is a really hard business (especially residential)".

ashen · 4y · 50

In terms of a consumer product, I think something like this might be ideal:

https://www.amazon.co.uk/Ceiling-Dimming-Bathroom-Corridor-6M5252TYQ/dp/B07MCTVH5V/ref=sr_1_fkmr0_2?keywords=lED+ceiling+lamp+office+lamp+with+remote+control%2C+28W+2800lm+3000k-6000K+dimmable+LED+circuit+board+ceiling+light%2C+splashproof%2C+round+bedroom+light%2C+for&qid=1574930392&sr=8-2-fkmr0

But more like 300W instead of 28W - so taking the enclosure and remote control of the bathroom-type light, with the LED setup of a floodlight. As for CRI, I guess it would be bad. How important is CRI, though? Does it relate to the subjective sense of "harshness"?
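As a rough back-of-the-envelope check (only the 28W / 2800lm figures come from the product title; the efficacy scaling and room size are my illustrative assumptions):

```python
# Back-of-the-envelope scaling, assuming a 300W panel with roughly the same
# luminous efficacy as the linked 28W / 2800lm fixture (~100 lm/W).
ref_watts, ref_lumens = 28, 2800
efficacy = ref_lumens / ref_watts          # ~100 lm/W

target_watts = 300
target_lumens = efficacy * target_watts    # ~30,000 lm

room_area_m2 = 15                          # illustrative room size
avg_lux = target_lumens / room_area_m2     # ~2,000 lux, ignoring fixture/wall losses

print(f"{target_lumens:.0f} lm, ~{avg_lux:.0f} lux average over {room_area_m2} m²")
```

That would be several times typical indoor lighting levels, before any losses from diffusion.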

I also found this: https://nofilmschool.com/diy-light-panel-busted-lcd-tv

It stresses the importance of size, diffusion pads and a Fresnel lens for creating a soft, diffuse light.

I was thinking a good DIY project would be to take an LED floodlight and wire it in behind a busted 50" display as a side panel (an artificial window).

So the ideal consumer product might need to have a pretty wide surface area and a Fresnel lens, which would drive up costs.
