Kaustubh Kislay

Posts

Kaustubh Kislay's Shortform · 3mo · 2 karma · 1 comment
On AI Detectors Regarding College Applications · 10mo · 4 karma · 2 comments

Comments
Kaustubh Kislay's Shortform
Kaustubh Kislay · 3mo

Listening to many techbros, I often hear the notion that AI, more specifically LLMs like ChatGPT or Claude, are "just tools to be used". This notion is usually followed by some pro-AI or AI-accelerationist argument, but that's beside the point I want to make.

I feel as though AI has exceeded the capabilities of a "tool", and it's probably harmful to call it one. Take, for example, a hammer, or even a gun: both are reasonably classified as tools because, by definition, they are used to complete a task, whether that is hammering a nail or subduing a criminal. An LLM falls under this definition as well, but I've noticed the parallel stops when it comes to distributing responsibility. When tasked with hammering a nail, responsibility for the completed task is attributed wholly to the person hammering rather than to the hammer itself. In the same way, when someone is shot, responsibility falls on the shooter rather than the gun.

But this changes when it comes to LLMs. For example, if someone vibecodes an app (uses AI to program it), the distribution of responsibility becomes much murkier. The user prompts the LLM, while the LLM does the "heavy lifting" of actually writing the code, so we are more likely to distribute responsibility, or credit for the task's completion, less one-sidedly when dealing with LLMs.

In this scenario the LLM clearly exceeds the classification of "tool", assuming firearms and hammers are also considered tools, and the two should not be treated as equivalent. The problem with this false equivalence is that the LLM is underestimated, dropped to the level of a hammer, which spreads further anti-AI-safety sentiment at the consumer level. To solve this, either AI should be decoupled from the label "tool" entirely, or the definition of "tool" must evolve as well.

The Case Against AI Control Research
Kaustubh Kislay · 8mo

I feel as though the case for control of the early transformative AGI you mention still goes through via a different line of reasoning. As you noted, there is an issue where, should labs solve the ASI alignment barriers using early transformative AGI, the solution is likely to appear to work on the surface while having flaws we may not be able to detect. Applying alignment to the early transformative AGI itself, specifically to safeguard against solutions that would leave humanity vulnerable, may be a route worth pursuing that still follows control principles. Obviously your point about focusing on actually solving ASI alignment rather than on control still stands, but the line of thought I mentioned may allow both ideas to work in tandem. I am not deeply knowledgeable, so please correct me if I'm misunderstanding anything.

On AI Detectors Regarding College Applications
Kaustubh Kislay · 10mo

Introducing some variance into text aligns with exactly what makes human text read as human. LLMs do have some safeguards around tokenization to handle typos in prompts, though, so there will likewise be some ways to handle typos in actual writing. Still, introducing uncommon variance into text to make it more "human" would be the best way to avoid AI detectors.

Why Large Bureaucratic Organizations?
Kaustubh Kislay · 1y

This makes a lot of sense. I suspect the most successful bureaucratic companies have optimized for a balance of dominance/status and profit margins. For example, a bureaucratic company, although not mainly motivated by profit, must still increase it in order to hire more people; but people cost money, so to maximize the dominance hierarchy it must minimize cost per employee while maximizing profit.

Would catching your AIs trying to escape convince AI developers to slow down or undeploy?
Kaustubh Kislay · 1y

It seems to me that the accelerationist argument relies mainly on the existence of international competition, especially with China, which the post presents as the main "antagonist".

I would like to mention that China is not purely accelerationist: it has a significant decelerationist faction making its voice heard at the highest levels of the Chinese government. So if the US ends up slowing down, it is not necessarily true that China will keep pushing ahead and overtake us.

We are all human at the end of the day, so automatically assuming China is willing to incur unforgivable damage to the world is unfair, to say the least.

Perplexity wins my AI race
Kaustubh Kislay · 1y

Perplexity seems significantly more effective than competing models at acting as a research device/answer engine. This is mainly because that is its primary use case, whereas models such as Claude by itself and ChatGPT excel in other areas. I do believe Perplexity's citation techniques could be some of the first baby steps toward far- (possibly near-) future automated AI research.

Liability regimes for AI
Kaustubh Kislay · 1y

This may be a reason why Meta/Llama is making its models open source. In a future where Coasean bargaining comes into play for larger companies, which it most likely will, Meta may have a cop-out by having made its models open source. Obviously, as you said, there will then have to be some restrictions on open-source AI models, but "open-source-esque" models may be the solution for companies such as OpenAI and Anthropic to avoid liability in the future.
