
Vale

Front-end developer, designer, writer, and avid user of the superpowered information superhighway.

✧ https://vale.rocks

Posts

Sorted by New

3 · OpenAI's Jig May Be Up · 4mo · 2
2 · Vale's Shortform · 5mo · 20
19 · AI Model History is Being Lost · 6mo · 1
9 · My Experience With A Magnet Implant · 8mo · 2

Wikitag Contributions

No wikitag contributions to display.

Comments

Sorted by Newest
Vale's Shortform
Vale · 10d · 10

I just saw the term 'Synthetic Intelligence' put forward, which I quite like.

https://front-end.social/@heydon/115071424831331716

Vale's Shortform
Vale · 1mo · 30

Many people agree that 'artificial intelligence' is a poor term that is vague and has existing connotations. People use it to refer to a whole range of different technologies.

However, I struggle to come up with any better terminology. If not 'artificial intelligence', what term would be ideal for describing the capabilities of multi-modal tools like Claude, Gemini, and ChatGPT?

Vale's Shortform
Vale · 1mo* · 10

We talk and think a lot about echo chambers with social media. People view content they're already aligned with, which snowballs as algorithms feed them more of the same, pushing their views further to the extreme.

I wonder how tailor-made AI-generated content will feed into that. My worry is that AI systems can produce content perfectly aligned with a user in all ways, creating a flawless self-feeding ideological silo.

Vale's Shortform
Vale · 2mo · 10

I was thinking a little bit about the bystander effect in the context of AI safety, alignment, and regulation.

With many independent actors working on and around AI – each operating with safety intentions regarding their own project – is there worrying potential for a collective bystander effect to emerge? Each regulatory body might assume that AI companies, or other regulatory bodies, or the wider AI safety community are sufficiently addressing the overall problems and ensuring collective safety.

This could lead to a situation where no single entity feels the full weight of responsibility for the holistic safety of the global AI ecosystem, resulting in an overall landscape that is flawed, unsafe, and/or dangerous.

Vale's Shortform
Vale · 2mo · 120

Taking time away from something and then returning to it later often reveals flaws otherwise unseen. I've been thinking about how to gain the same benefit without needing to take time away.

Changing perspective is the obvious approach.

In art and design, flipping a canvas often forces a reevaluation and reveals much that the eye has grown blind to. Inverting colours, switching to greyscale, obscuring, etc., can have a similar effect.

When writing, speaking written words aloud often helps in identifying flaws.

Similarly, explaining why you've done something – à la rubber duck debugging – can weed out things that don't make sense.

Vale's Shortform
Vale · 3mo · 10

I don't necessarily believe or disbelieve in the final 1% taking the longest in this case – there are too many variables to make a confident prediction. However, it does tend to be a common occurrence.

It could very well be that the 1% before the final 1% takes the longest. Progress in the AI space has been fairly steady over the past few years, so it could also be that it continues at this pace until that last 1% is hit and exponential takeoff then occurs.

You could also have a takeoff event that carries progress from now to 99%, followed by a final 1% that takes a long period.

A typical exponential takeoff is, of course, very possible as well.

Matthew Khoriaty's Shortform
Vale · 3mo · 220

Extremely quickly thrown together concept.

[Image: If-Anyone.png]

Vale's Shortform
Vale · 3mo · 90

There is a tendency for the last 1% to take the longest time.

I wonder if that long last 1% will be before AGI, or ASI, or both.

How people use LLMs
Vale · 4mo · 10

A great collection of posts there. Plenty of useful stuff.

This prompted me to write down and keep track of my own usage:
https://vale.rocks/posts/ai-usage

Vale's Shortform
Vale · 4mo · 10

Predicting AGI/ASI timelines is highly speculative and unviable. Ultimately, there are too many unknowns and complex variables at play. Any timeline must deal with systems and consequences multiple steps out, where tiny initial errors compound dramatically. A range can be somewhat reasonable, a more specific figure less so, and accurately predicting the consequences of the final event when it comes to pass is less plausible still. It is simply impractical to come up with an accurate timeline with the knowledge we currently have.

Despite this, timelines are popular – both with the general AI hype crowd and those more informed. People don't seem to penalise incorrect timelines – as evidenced by the many predicted dates we've seen pass without event. Thus, there's little downside to proposing a timeline, even an outrageous one. If it's wrong, it's largely forgotten. If it's right, you're lauded as a prophet. The nebulous definitions of "AGI" and "ASI" also offer an out. One can always argue the achieved system doesn't meet their specific definition or point to the AI Effect.

I suppose @gwern's fantastic work on The Scaling Hypothesis is evidence of how an accurate prediction can significantly boost credibility and personal notoriety. Proposing timelines gets attention. Anyone noteworthy with a timeline becomes the centre of discussion, especially if their proposal is on the extremes of the spectrum.

The incentives for making timeline predictions seem heavily weighted towards upside, regardless of the actual predictive power or accuracy. Plenty to gain; not much to lose.
