LESSWRONG

vjprema
https://vijayprema.com/about-me/

Comments
Antisocial media: AI’s killer app?
vjprema · 12d

I see it more as a continuation of what's already there. Most human "creators" are fairly artificial themselves, heavily shaped by the algorithm and the will of the platform, and mainly there to sell you something. Creators using AI to do gradually more, followed by the eventual complete replacement of most of them with AI, is just a continuation of the same trend, all in an effort to make the line go up and screw anything else, really. Same old companies, same old mentality, same business model, same incentives.

At the same time, it's incredibly fragile when you think about it, since those apps provide nothing essential and are of pretty marginal utility to the user, if any. If society actually wanted anything different, people could just wake up and turn away tomorrow, and the whole thing would die instantly. I did it already, and probably many people here have too. Somehow I really doubt it will happen for the majority, though.

You can't eval GPT5 anymore
vjprema · 1mo

Is this also because GPT-5 is much more like "black-box software" and a lot less like a "model"? Do the evals run on the assumption that they are testing a "model" (or something close enough to one), rather than black-box software that could be doing absolutely anything behind the scenes, including web searches, injection of hidden context, filtering, or even human mechanical turks answering everything?

Even if you override the date, if it's doing hidden web searches in the background, those will be based on today's date and today's internet and will affect the result. Overriding the date may not solve your problem if this is the case.

I would imagine future "models" will only move further in this hybrid direction, and less toward a true foundation model that anyone can build anything on top of, for functionality, safety, and business-model reasons (e.g. Google may not allow its models to remove ads or reverse engineer its own software).

What's a better term now that "AGI" is too vague?
Answer by vjprema · May 28, 2024

There are so many considerations in the design of AI. AGI was always far too general a term, and when people use it I often ask what they mean; usually it's "a human-like or better-than-human chatbot". Other people say it's the "technological singularity", i.e. something that can improve itself. These are obviously two very different things, or at least two very different design features.

Saying "My company is going to build AGI" is like saying "My company is going to build computer software". The best software for what, exactly? What kind of software, to solve what problem? With what features? Usually the answer from AGI fans is "all of them", so perhaps the term is just inherently vague by definition.

When talking about AI, I think it's more useful to talk about which features a particular implementation will or won't have. You have already listed a few.

Here are some AI feature ideas from myself:

  • Ability to manipulate the physical world
  • Ability to operate without human prompting
  • Be "always on"
  • Have its own goals
  • Be able to access large amounts of additional computing resources, for "world simulations", virtual research experiments, or spawning sub-processes or additional agents
  • Be able to improve/train "itself" (though really there is no "itself", since as many copies can be made as needed, and it is then unclear which one is the original "it")
  • Be able to change its own beliefs and goals through training or some other means (a scary one)
  • Ability to do any or all of the above completely unsupervised and/or unmonitored
Opportunistic Time-Management
vjprema · 2y

This also reminds me that there can be a certain background "guilt" about not doing tasks that you think are important but are too unsavory to find the motivation to do right now.

This faint guilt in itself can accumulate into increased dissatisfaction, in turn leading me to further avoid unsavory tasks in favor of the quick hit of highly savory activities.  A vicious cycle.

If I think about tasks in a more relaxed way, staying flexible and realistic about tackling savory and unsavory tasks when they suit, it takes away this guilt and breaks the cycle.

Evolution did a surprising good job at aligning humans...to social status
vjprema · 2y

I don't think everybody has a built-in drive to seek "high social status" as defined by the culture they were born into, or by any specific aspect of it that can be made to seem attractive. I know people who just think it's an annoying waste of time. Or, like myself, people who spent half their life chasing it, then found inner empowerment, came to see the proxy of high status as a waste of time, and quit chasing it.

Maybe relatedly, I do think we all generally tend to seek "signalling", and in some cases spend great energy doing it. I admit I sometimes do, but it's not signalling high status; it's signalling chill and contentedness. I have observed some kind of signalling in pretty much every adult I have witnessed, though it's hard to say for sure, since it's more my assumption about their deepest motivation. The drive isn't always strong in some people, or it's just very temporary, and there are likely much stronger drivers (e.g. avoiding obvious suffering). Signalling perhaps helps us attract others who align with us and form "tribes", so it can be worth the energy.

Claude 3 claims it's conscious, doesn't want to die or be modified
vjprema · 2y

I think it's pretty easy to ask leading questions of an LLM, and it will generate text in line with them, a bit like role-playing. To the user it seems to "give you what you want", to the extent that this can be gleaned from the way the user prompts it. I would be more impressed if it did something really spontaneous and unexpected, or seemingly rebellious or contrary to the query, and then went on producing more output unprompted, even asking me questions or asking me to do things. That would be spookier, but I probably still wouldn't jump to thinking it is sentient. Maybe engineers just concocted it that way to scare people as a prank.

Raising children on the eve of AI
vjprema · 2y

Yes, for sure. I experience this myself when I am in the presence of very mindful folks (e.g. experienced monks who barely say anything), and occasionally someone has commented that I have done the same for them, sometimes quoting a particular snippet of something I said or wrote. We all affect each other in subtle ways, often without saying an actual word.

vjprema's Shortform
vjprema · 2y

I have sometimes thought (half jokingly) about whether text-to-image generative models could replace digital cameras, the way digital cameras replaced film, at least for things like holiday photos and selfies. AI is certainly already used to augment such images. It would be an improvement in that one could have idealized images of oneself that capture emotions and feelings rather than literally quantized photons, like a painter using artistic license.

Then one could focus on enjoying the activity more and later distill and preserve it in a generated image. 

Would that cultivate too many "idealized" memories though? Is that necessarily good?  What other downsides could there be?  Do our memories of leisurely moments necessarily need to be accurate or is it better they are just conducive to a good life?

An alternative to "text to image" models would be "video to image", where a wearable camera continuously captures the activity and then generates a single image at the end that captures its emotion and essence, saving us time by letting one image evoke the memory and feelings rather than so many cluttered albums and videos buried in a smartphone.

Raising children on the eve of AI
vjprema · 2y

Good thoughts. The world will always have its ups and downs, and I don't think tech can save us from them perpetually, just as "gods" and whatnot didn't save the people of the past perpetually. People have been through waves of utopia and hell for eons.

Anyway, I don't have a bunch of data but I can share my personal experience.

I had my first kid, a 6-month-old boy. Everybody seems to think he's "the Buddha" due to his wise and alert vibes and his unusually calm and happy demeanor. He certainly seems relatively easy and joyful to care for compared to what we hear from every other parent, though of course he has his moments.

Everyone is different (and they should be; it obviously takes all sorts to build this world), but this to me is the only thing that's important. And we did not teach him anything; we just became calm and clear-headed ourselves, and the baby picked up the same mentality.

This seems to depend less on money and much more on time. Some argue "time = money", but not necessarily, especially in affluent countries like where I live. Every single person I know who has much more money than me has far less time, and I don't feel any of them work on anything particularly great or meaningful for society. My wife and I spent years crafting a very unusual lifestyle which maximizes time above all else, and it did not involve getting very rich.

So what if we all get uploaded to computers? Well, his neural net will be the clearest and happiest, so everyone will want one like it. Take a look at any other future outcome and see whether having a clear and happy mind is ever not of great benefit. Skills are secondary and can always be learned "on the job", especially when you have a calm and clear head. Note that calmness and clarity do not equal laziness or ineffectiveness. On the contrary, they help one better determine where it's worth putting in a lot of effort and where it's a waste of time, and they allow one to pick up new things quickly.