Writing tests, QA, and observability will probably stick around for a while and work hand in hand with AI programming as other forms of programming start to disappear, at least until AI programming becomes very reliable.
This should allow working code to be produced far faster, likely yielding more high-quality 'synthetic' data and, more importantly, massively changing the economics of knowledge work.
Is there a reason that random synthetic cells will not be mirror cells?
Here we have some scientists making cells. That looks like a dangerous direction.
Humans seem far more energy- and resource-efficient in general. Paying for top talent is the exception, not the rule; usually it's not worth it.
We're likely to see many areas where it's economically better to save on compute/energy by having a human do some of the work.
It's also worth splitting information workers from physical workers; I expect the two to have very different distributions of what the most useful configuration is.
This post also ignores likely scientific advances in bioengineering and cyborg surgery; I expect humans to be far more efficient for plenty of jobs once the standard is a 180 IQ with a massive working memory.
I do things like this at times with my teams.
Important things:
Don't think you need to solve the actual problem for them
Do solve 'friction' for them as much as possible
Do feel free to look up other sources, both to offer more perspective and to take the load of finding relevant info off them
Bring positive energy, be attentive, etc.
If they're functioning well, just watch and listen while being interested and unobtrusive; offer at most very minor input when you're pretty sure it'll be helpful
If they're stuck at a crossroads, ask them how long they think each path will take and how hard it'll be, and give them feedback if you think they're wrong. Help them start working on one; people can stay stuck longer than it would take to actually do one of the options.
A message from Claude:
'''This has been a fascinating and clarifying discussion. A few key insights I'll take away:
The distinction between bounded and unbounded optimization is more fundamental than specific value differences between AIs. The real existential threat comes from unbounded optimizers. The immune system/cancer metaphor provides a useful framework - it's about maintaining a stable system that can identify and prevent destructive unbounded growth, not about enforcing a single value set. The timing challenge is critical but more specific than I initially thought - we don't necessarily need the "first" AGI to be perfect, but we need bounded optimizers to establish themselves before any unbounded ones emerge.
Some questions this raises for further exploration:
What makes a Schelling fence truly stable under recursive self-improvement? Could bounded optimizers coordinate even with different base values, united by shared meta-level constraints? Are there ways to detect early if an AI system will maintain bounds during capability gain?
The framing of "cancer prevention" versus "value enforcement" feels like an important shift in how we think about AI governance and safety. Instead of trying to perfectly specify values, perhaps we should focus more on creating robust self-limiting mechanisms that can persist through capability gains.'''
A few thoughts.
Have you checked what happens when you throw physics postdocs at the core issues - do they actually get traction, or just stare at the sheer cliff for longer while thinking? Did anything come out of the Illiad meeting half a year later? Is there a reason that more standard STEM people aren't given an intro to some of the routes currently thought possibly workable, so they can feel some traction? I think either could be true - that intelligence and skills aren't actually useful right now because the problem isn't tractable, or that better onboarding could let the current talent pool get traction - and either way it might not be very cost-effective to get physics postdocs involved.
Humans are generally better at doing things when they have more tools available. While the 'hard bits' might be intractable now, they could well be easier to deal with in a few years, after other technical and conceptual advances in AI and even in other fields. (Something something about prompt engineering and Anthropic's mechanistic interpretability from inside the field, and practical quantum computing from outside it.)
This would mean squeezing every drop of usefulness out of AI at each level of capability, both to improve general understanding and to leverage it into breakthroughs in other fields before capabilities increase further. In fact, it might be best to sabotage semiconductor/chip production once models are one generation away from superintelligence/extinction/whatever, giving maximum time to leverage maximum capabilities and tackle alignment before the AIs get too smart.
The point was more that creating your own data is easy: just generate code, then check it by running it. Save this code and later use it for training.
If we wanted to go the AlphaZero route, it doesn't seem crazy.
De-enforce commands, functions, and programs which output errors, for a start.
I didn't think of the PM as being trained by these games; that's interesting. Maybe have two instances competing to get closer on some test cases the PM can prepare to go with the task, and have them compete on time, compute, memory, and accuracy. You can de-enforce the less accurate one, and if both are fully accurate they can compete on time, memory, and CPU.
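As a rough illustration of that loop, here is a minimal sketch in Python (the `generate_candidates` step, the test input, and the expected output are hypothetical placeholders for whatever the model and the PM instance would actually produce): run each candidate in a subprocess, give zero reward to anything that errors or times out, score the rest on accuracy first with speed as a tie-breaker, and keep both successes and failures for later training.

```python
import subprocess
import tempfile
import time

def run_candidate(code: str, test_input: str, timeout: float = 5.0) -> dict:
    """Write a candidate program to a temp file, run it, and capture the outcome.

    Note: this assumes `python` is on PATH and is not a real sandbox; temp files
    are not cleaned up in this sketch.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    start = time.monotonic()
    try:
        proc = subprocess.run(
            ["python", path],
            input=test_input,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return {
            "ok": proc.returncode == 0,
            "output": proc.stdout,
            "error": proc.stderr,
            "seconds": time.monotonic() - start,
        }
    except subprocess.TimeoutExpired:
        return {"ok": False, "output": "", "error": "timeout", "seconds": timeout}

def score(result: dict, expected_output: str) -> float:
    """Errors/timeouts and wrong answers get zero; correct output gets 1.0 plus a small speed bonus."""
    if not result["ok"]:
        return 0.0
    if result["output"].strip() != expected_output.strip():
        return 0.0
    return 1.0 + 0.1 / (1.0 + result["seconds"])

def label_candidates(candidates: list[str], test_input: str, expected_output: str) -> list[dict]:
    """Run every candidate and attach a reward; keep failures as negative examples too."""
    return [
        {"code": code, "reward": score(run_candidate(code, test_input), expected_output)}
        for code in candidates
    ]
```

Two competing instances would then just be two entries in `candidates`: if only one is accurate it wins outright, if both are, the speed bonus breaks the tie, and everything, winners and losers, gets saved as training data.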
I'm not sure "hard but possible" is the bar - you want lots of examples of what doesn't work along with what does, and you want it for easy problems and hard ones so the model learns everything
Product manager: the non-technical counterpart to a team lead on a development team
Reading novels featuring ancient, powerful beings is probably the best direction you have for imagining what status games among creatures that are only loosely human look like.
Resources being bounded, there will always tend to be larger numbers of smaller objects (given that those objects are stable).
There will be tiers of creatures (in a society where this is all relevant).
While a romantic relationship skipping multiple tiers wouldn't make sense, skipping a single tier might.
The rest of this is my imagination :)
Base humans will be F tier, the lowest category that is still fully sentient. (I suppose dolphins and similar would get a special G tier.)
Basic AGIs (capable of everything a standard human is, plus all the spiky capabilities) and enhanced humans will be E tier.
Most creatures will be here.
D tier:
Basic ASIs and super-enhanced humans (gene modding for 180+ IQ plus SOTA cyborg implants) will be the next tier; there will be a fair number of these in absolute terms, but they'll be rarer relative to the earlier tier.
C tier:
Then come Alien Intelligences: massive compute resources supporting ASIs trained on immense amounts of ground-reality data, and biological creatures that have been fundamentally redesigned to function at higher levels and optimally synergize through neural connections (whether with other carbon-based or silicon-based lifeforms).
B tier:
Planet-sized clusters running ASIs will be a higher tier.
A, S tiers:
Then you might get entire stars, then galaxies.
There will be far fewer at each level.
Most tiers will have a -, neutral, or + variant.
-: a prototype, first, or early version. Qualitatively smarter than the tier below, but with non-optimized use of resources; often not the largest gap from the + of the earlier tier.
Neutral: most low-hanging optimizations and improvements, and some harder ones, are implemented at this tier.
+: highly optimized by iteratively improved intelligences or groups of intelligences at this level, perhaps even by a tier above.