Yep, these conclusions intersect with the predictions I made for myself at the end of 2024:
- There will be no technological singularity.
- Neural networks won't change conceptually over the next 10 years.
- We won't build strong AI based on just one or a few neural networks.
- Neural networks on their own won’t take jobs away.
- Robots will not rapidly and massively replace manual labor.
- The jobs of proactive professionals are safe for the next 10 years.
My predictions are based on:
- Viewing current neural networks as just one more building block among many, essentially a probabilistic database: not more, but not less.
- The assumption that businesses/governments/money will focus on optimizing the current achievements, rather than on continuing risky experiments.
A much longer post with explanations is on my blog: https://tiendil.org/en/posts/ai-notes-2024-prognosis
>I'll have an example that I personally experienced recently, omitting some details related to my PII.
Based on this example, could you compare your expected outcome with the real outcome? In numbers. For example, "I expected to finish this refactoring in two weeks, but it took two months" or "I expected to finish this refactoring in two weeks, but my colleague completed the same task in two days." Something like that.
I have more than 15 years of experience in software development, not exactly in data processing but in gamedev and backend work (and sometimes in data processing), which is pretty close.
In my experience, most people struggle with the task you described, even senior developers with 5-10 years of experience. I've seen only a few people who could do such a refactoring naturally and on the first try.
I may suggest a few things:
- Try to reevaluate your expectations. Maybe you are too hard on yourself. You may be overestimating the required result and trying to do more than you actually need.
- Research a few development methodologies. I can suggest Test-Driven Development and Pair Programming. The first may help you keep changes under control more easily, and the second may help you learn how other people solve similar tasks.
- Pay more attention to code decomposition and structuring while you work on the "pipeline prototype". I.e., add some "fake" functions/objects that will be required later but, for now, do nothing or contain trivial logic. Sometimes it is easier to see what you need to do when you are fully in the task's context than when you are refactoring your code afterward.
- Do not try to implement all features at the same time. Choose one, implement it, cover it with some tests or types or both, and then move on to the next. There is a small sketch of the last two points right after this list.
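To make the last two points more concrete, here is a minimal Python sketch of what a "pipeline prototype" with stubs and one tested feature could look like. All the names (Record, load_records, validate, etc.) and the pipeline itself are hypothetical; this is just an illustration of the approach, not the original poster's code.

```python
# A hypothetical pipeline prototype: the overall structure is laid out first,
# with "fake" stubs for future steps; only one feature is implemented and tested.

from dataclasses import dataclass


@dataclass
class Record:
    name: str
    age: int


def load_records(path: str) -> list[Record]:
    # Stub: will read from a real file later; for now returns hardcoded data.
    return [Record("alice", 31), Record("bob", -5)]


def validate(records: list[Record]) -> list[Record]:
    # The one feature implemented right now: drop records with impossible ages.
    return [r for r in records if 0 <= r.age <= 150]


def enrich(records: list[Record]) -> list[Record]:
    # Stub: trivial pass-through, to be filled in later.
    return records


def save_records(records: list[Record], path: str) -> None:
    # Stub: does nothing yet, but keeps the pipeline shape visible.
    pass


def run_pipeline(src: str, dst: str) -> None:
    save_records(enrich(validate(load_records(src))), dst)


def test_validate_drops_impossible_ages() -> None:
    records = [Record("ok", 42), Record("bad", -1)]
    assert validate(records) == [Record("ok", 42)]


if __name__ == "__main__":
    test_validate_drops_impossible_ages()
    run_pipeline("input.csv", "output.csv")
```

The point is that the overall shape of the pipeline is visible from day one while most steps stay trivial until you actually need them; each new feature then replaces one stub and gets its own small test before you move on.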
Hi!
Why do you think the problem you are talking about is specific to you and not common to everyone?
Without more details, the problem you describe looks like one everyone has: most people struggle to formalize their thoughts (and programming is formalization). No one can translate their knowledge into code without mistakes. People make mistakes; it is okay.
Are you comparing yourself to someone else? Or to some "ideal" image of "how you should work"?
Maybe you could provide more concrete examples of your problem, like "I need to do X, but I struggle here and here and make mistakes here and here."
My statement about the lack of huge investments in risky experiments may indeed be too strong. In the end, we are speaking about people, and they are always unpredictable. Partly, I formulated it that way to get a strong validation point for several of my personal models of how the world and society work. However, I still believe it is more probable than the opposite statement.
Speaking about strong AI.
The analogy with child-parent relations is the simplest one I found for the post. The history of humanity has a long record of contacts between societies at different levels of maturity and with different cultures, of course. Those contacts didn't always go well, but in most cases they didn't lead to the full extinction of one of the parties either.
Since a superintelligence will most likely be built on the information humanity has produced (we don't have any other source of data), I believe it will operate in a similar way => a struggle is possible, maybe even a serious one, but in the end we will adapt to each other.
However, this logic is relevant to situations where a strong AI appears instantly.
I don't expect that, given that there are no historical precedents of something complex appearing instantly, without a long process of increasing complexity.
What I expect, assuming strong AI appears at all, is a long process of evolving AI tools of increasing complexity, to each of which humanity will adapt, like it has already adapted to LLMs. At some point, those tools will begin to unite into something like smaller AIs; we'll adapt to them, and they will adapt to us. And so on, until we reach a point where the complexity of those tools is incomparably higher than the complexity of humanity. But by that time, we will have already adapted to them.
I.e., if such a thing ever happens, it will be a long co-evolution rather than an instant rise of a superintelligent being and the obliteration of humanity.