LESSWRONG

Nicolas Lupinski

Posts

1 · Nicolas Lupinski's Shortform · 3mo · 11

Comments
Nicolas Lupinski's Shortform
Nicolas Lupinski · 1mo

A sufficiently smart human can find a way to unplug an AI faster than it can rebuild itself... An AI cannot cut off the oxygen supply on Earth.

Nicolas Lupinski's Shortform
Nicolas Lupinski · 1mo

We aren't necessarily all alive anymore!

We weren't necessarily all human (me and my dog), but now "we" can include machines.

That's obvious, I know. We have changed dramatically in just a few years.

I've just asked various AIs "Who are we?". Older models got it wrong (they automatically assumed "we" means humanity...). Recent models get it. I wonder whether that's due to the training data now including chats between AIs and humans, or whether they figured it out logically.
 

Von Neumann's Fallacy and You
Nicolas Lupinski · 2mo

The 1+1=2 joke will live forever as a meme.

The only thing coming close is the 15 = 3 × 5 quantum computing paper.

Launching new AIXI research community website + reading group(s)
Nicolas Lupinski · 2mo

Hello,

Why a new blog? Why not just use LessWrong (or some other existing platform)?

[This comment is no longer endorsed by its author]
Nicolas Lupinski's Shortform
Nicolas Lupinski · 3mo

OK, so an approximate sorting algorithm in O(n) would do the trick.

The problem then boils down to whether computing the cost of (computing the expected cost) is worth the expected gain.

Which goes back to my initial question: is there a rationality paradox? Maybe it is simply that 1) computing the cost might boil down to the halting problem, and 2) the cost of the cost of the cost... is possibly infinite?

Nicolas Lupinski's Shortform
Nicolas Lupinski · 3mo

So P_i/C_i is in [0,1], the precision is unbounded, but for some reason a radix sort can do the job in linear time?

There could be pathological cases where all the P_i/C_i are the same up to epsilon.

I guess I'm searching for situations where doing something costs c, computing c costs c', etc... Branch prediction comes to mind.
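
Below is a minimal Python sketch of the linear-time idea for keys in [0,1] (a bucket sort rather than a digit-by-digit radix sort; names and bucket count are illustrative, not from the original comment). It runs in expected O(n) when the ratios are spread out, but the pathological case where all P_i/C_i agree up to epsilon piles everything into one bucket and falls back to comparison sorting.

```python
import random

def bucket_sort_unit_interval(values):
    """Sort floats in [0, 1] in expected O(n) time for roughly uniform keys:
    scatter into n buckets, then sort each (tiny) bucket individually."""
    n = len(values)
    if n == 0:
        return []
    buckets = [[] for _ in range(n)]
    for v in values:
        buckets[min(int(v * n), n - 1)].append(v)  # min() guards the edge case v == 1.0
    out = []
    for b in buckets:
        out.extend(sorted(b))  # near-constant work per bucket on average
    return out

ratios = [random.random() for _ in range(10_000)]
assert bucket_sort_unit_interval(ratios) == sorted(ratios)
```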

 

Nicolas Lupinski's Shortform
Nicolas Lupinski · 3mo

What do you mean, I don't "need" O(n log n) sorting?

It's just the asymptotic cost of sorting by comparison...

I'll have a look into bounded rationality. I was missing the keyword.

EDIT: had a look; the concept is too imprecise to have clear-cut paradoxes.

 

Nicolas Lupinski's Shortform
Nicolas Lupinski · 3mo

Are there known "rationality paradoxes", akin to logical paradoxes? A basic example is the following:

In the optimal search problem, the cost of searching at position i is C_i, and the a priori probability of finding the target at i is P_i.

Optimality requires sorting the search locations by non-increasing P_i/C_i: search first where the likelihood of finding the target divided by the cost of searching is highest.

But since sorting costs O(n log n), C_i must grow faster than O(log i), otherwise the sorting itself is asymptotically wasteful.

Do you know any others?
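
As a quick illustration of the ordering rule, here is a sketch with made-up P and C, assuming the simple search-once, perfect-detection model (where sorting by decreasing P_i/C_i is optimal by an exchange argument):

```python
import random

def expected_search_cost(order, P, C):
    """Expected total cost when locations are searched in the given order;
    the target sits in exactly one location, with probabilities P."""
    total, prob_still_searching = 0.0, 1.0
    for i in order:
        total += prob_still_searching * C[i]  # C[i] is paid only if we reach step i
        prob_still_searching -= P[i]
    return total

n = 8
raw = [random.random() for _ in range(n)]
P = [x / sum(raw) for x in raw]                    # a priori probabilities, normalized
C = [random.uniform(0.5, 2.0) for _ in range(n)]   # per-location search costs

greedy = sorted(range(n), key=lambda i: P[i] / C[i], reverse=True)
baseline = list(range(n))
assert expected_search_cost(greedy, P, C) <= expected_search_cost(baseline, P, C) + 1e-9
```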

"It's a 10% chance which I did 10 times, so it should be 100%"
Nicolas Lupinski · 8mo

In collectible games, after 1/p trials there is about a 1 − 1/e ≈ 63% chance of getting a specific desired item.

But a full collection is only completed after roughly −ln(p)/p trials on average (the leading term of the coupon-collector mean). For p = 1/n, that is n·ln(n).

For instance, collecting every outcome when each has 10% frequency takes on average about 10·ln(10) ≈ 23 trials.
At 1% it is about 460 trials.
At 1/1000 it is about 6900 trials, etc...

In gacha games, developers can guarantee you a 10% event within every 10 trials. The market caters to irrationality, for a price.
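
A quick numeric check of those figures, under the usual coupon-collector assumptions (independent, uniform draws over n equally likely items); note that n·ln(n) is only the leading-order estimate, the exact mean is n·H_n:

```python
import math
import random

def trials_to_complete(n):
    """Simulate drawing uniformly among n item types until every type has been seen."""
    seen, trials = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        trials += 1
    return trials

n, p = 10, 0.1
print(1 - (1 - p) ** round(1 / p))               # ~0.65: chance of one given item within 1/p trials
print(-math.log(p) / p)                          # ~23.0: the n*ln(n) leading-order estimate
print(n * sum(1 / k for k in range(1, n + 1)))   # ~29.3: exact coupon-collector mean n*H_n
print(sum(trials_to_complete(n) for _ in range(20_000)) / 20_000)  # empirical mean, near 29.3
```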
 

Wikitag Contributions

P vs NP · 2 months ago · (+418)