ashen

One consequence of all this is a hit to consensus reality.
As you say, an author can modify a text to communicate based on a particular value function (i.e. "a customized message that would be most effective").
But the recipient of a message can also modify that message according to their own (or rather their personalised LLM's) value function.
Generative content and generative interpretation.
Eventually this won't be just text but essentially all media - particularly through use of AR goggles.
Interesting times?!
Also of interest: "Creating a large language model of a philosopher"
http://faculty.ucr.edu/~eschwitz/SchwitzAbs/GPT3Dennett.htm
One interesting quote: "Therefore, we conclude that GPT-3 is not simply “plagiarizing” Dennett, and rather is generating conceptually novel (even if stylistically similar) content."
The first question is hardest to answer because there are a lot of different ways that an LLM can help in writing a paper. Yes, there will be some people who don't use one, but over time they will become a minority.
The other questions are easier.
The straightforward answer is that right now, OpenAI has said that you should acknowledge its use in publication. If you acknowledge a source, then it is not plagiarism. A current practice for some journals is to have an author contribution list, where you list the different parts of an article and which author contributed to each, e.g. AB contributed to the design and writing, GM contributed to the writing and analysis, etc. One can imagine adding an LLM (and its version, etc.) to the contribution list to make its involvement clear. If this became common practice, then it would be seen as unethical not to state its involvement.
One recent advance in science writing (originating in psychology and spreading from there) has been pre-registration and the pre-registered report format.
Pre-registration usually takes the form of a structured form - effectively a dialogue - in which you have to answer a set of questions about your design. This forces a kind of thinking that might otherwise not happen before you run a study, which improves the clarity and openness of the thought processes that go into designing it.
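For concreteness, here is a minimal sketch of the kinds of questions such a form asks - the field names below are illustrative, loosely in the spirit of OSF-style templates rather than any specific journal's form:

```python
# Illustrative only: the kinds of questions a pre-registration form forces you
# to answer before data collection. Field names are hypothetical, not a real template.
preregistration_form = {
    "hypotheses": "Directional predictions, stated before any data are collected",
    "design": "Conditions, manipulations, and what exactly is randomised",
    "sample": "Target N, stopping rule, and how it was determined (e.g. power analysis)",
    "exclusions": "Rules for dropping participants or trials, fixed in advance",
    "analysis_plan": "The exact statistical tests that will count as confirmation",
    "interpretation": "What you will conclude if the predicted effect does not appear",
}

for question, prompt in preregistration_form.items():
    print(f"{question}: {prompt}")
```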
One consequence is that it can highlight how often we are very unclear about how we might actually properly test a theory. In the standard paper format one can get away with this more - such as...
Some interesting ideas - a few comments:
My sense is that you are writing this as someone without a lot of experience in writing and publishing scientific articles (correct me if I am wrong).
A question for me is whether you can predict what someone would say on a topic instead of having them write about it. I would argue that the act of linearly presenting ideas on paper - "writing" - is a form of extended creative cognition that is difficult to replicate. It wouldn't be replicated by an avatar just talking to a human to understand their views. People don't write merely to convert what is in their heads into something communicable - instead, writing creates thinking.
My other...
Rao highlights Ryan's journey as the prototypical loser/sociopath-in-waiting ascending to sociopathy. In the academic world, neither Ryan-as-loser nor Ryan-as-sociopath exists. So this is one of many ways the corporate-America-to-academia mapping doesn't fit.
Partly because academic signals are hard to fake by pure posers or pure sociopaths.
Though going with your framing, I think the analysis is right in that academics are essentially clueless. But within academics you can have the subdivisions clueless-loser, clueless-clueless, and clueless-sociopath.
I disagree on sociopath faculty - my experience is that senior academics are much more likely to be sociopaths than non-senior academics, because they have figured out the rules and how to manipulate and break them...
In the American system it's hard to get tenure as a loser, so it selects against losers.
But once tenured, you can easily turn into a loser.
A lot depends on the institution. At high-prestige institutions it's hard to get by as a loser, so you are going to select for more sociopaths and clueless. Top-ranking institutions are going to have more sociopaths.
But at low-ranking institutions you are going to find a different distribution - relatively more losers than clueless.
Also saw this on Hacker News today: https://news.ycombinator.com/item?id=21660718
One comment: "Lighting is a really hard business (especially residential)."
In terms of a consumer product, I think something like this might be ideal:
But more like 300W instead of 28W - so taking the enclosure and remote control of the bathroom-type light, with the LED setup of a floodlight. As for CRI, I guess it would be bad. How important is CRI though? Does this relate to the subjective sense of "harshness"?
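As a rough sanity check on why 300W rather than 28W, here is a back-of-envelope sketch - the efficacy figure (~100 lm/W) and the room area are assumptions I'm plugging in for illustration, not measured values:

```python
# Back-of-envelope: rough average illuminance from an LED fixture, assuming
# ~100 lm/W efficacy and the light spread evenly over the floor area.
# Both numbers are assumptions for illustration, not measurements.
def approx_lux(watts, efficacy_lm_per_w=100, room_area_m2=15):
    lumens = watts * efficacy_lm_per_w
    return lumens / room_area_m2  # lux = lumens per square metre

print(approx_lux(28))   # ~190 lux: typical dim indoor lighting
print(approx_lux(300))  # ~2000 lux: getting towards overcast-daylight levels
```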
I also found this: https://nofilmschool.com/diy-light-panel-busted-lcd-tv
It stresses the importance of size, diffusion pads, and a fresnel lens for creating soft, diffuse light.
I was thinking a good DIY project would be to take an LED floodlight and wire it in behind a busted 50" display as a side panel (artificial window).
So the ideal consumer product might need a pretty wide surface area and a fresnel lens, which would drive up costs.
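A quick way to see why size matters for softness: shadow edges blur roughly in proportion to the source's apparent (angular) size from where you sit. A small sketch, with the source widths and viewing distance below chosen purely as illustrative assumptions:

```python
import math

# Apparent angular size of a light source at a given viewing distance (degrees).
# A larger apparent size means softer shadows and a less "harsh" look.
def angular_size_deg(source_width_m, distance_m):
    return math.degrees(2 * math.atan(source_width_m / (2 * distance_m)))

print(angular_size_deg(0.15, 2.0))  # small bare floodlight at 2 m: ~4 degrees
print(angular_size_deg(1.10, 2.0))  # ~50" panel (about 1.1 m wide) at 2 m: ~31 degrees
```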
As I understand it, there is a psychological argument (Mahowald et al.) and a philosophical argument (Shanahan) that machines can't "think" (and do related things).
I don't find Mahowald et al. entirely convincing because it suffers from strawmanning LLMs - many of the claims about the limitations of LLMs are based on older work that predates GPT-3.5/ChatGPT. Clearly the bulk of the paper was written before ChatGPT launched, and I suspect they didn't want to make substantial changes to it, because that would undermine their arguments. And I find the OP good at taking down a range of the arguments they provide.
I find the Shanahan argument stronger, or at least my take on the philosophical argument...