Aliaksei Yaletski (Tiendil)
Aliaksei Yaletski (Tiendil) has not written any posts yet.

Yep, these conclusions intersect with the prognosis I made for myself at the end of 2024:
- There will be no technological singularity.
- Neural networks won't change conceptually over the next 10 years.
- We won't build strong AI based on just one or a few neural networks.
- Neural networks on their own won’t take jobs away.
- Robots will not rapidly and massively replace manual labor.
- The jobs of proactive professionals are safe for the next 10 years.
My predictions are based on:
- The view that current neural networks are one more building block among many: essentially a probabilistic database, no more and no less.
- The assumption that businesses/governments/money will focus on optimizing the current achievements, rather than on continuing risky experiments.
A much longer post with the full explanation is on my blog: https://tiendil.org/en/posts/ai-notes-2024-prognosis
>I'll have an example that I personally experienced recently, omitting some details related to my PII.
Based on this example, could you compare your expected outcome with the real outcome? In numbers. For example, "I expected to finish this refactoring in two weeks, but it took two months" or "I expected to finish this refactoring in two weeks, but my colleague completed the same task in two days." Something like that.
I have more than 15 years of experience in software development, not precisely in data processing but in gamedev and backend (and sometimes in data processing), which is pretty close.
From my experience, most people struggle with the task you described. Even senior...
Hi!
Why do you think the problem you are talking about is specific to you rather than common to everyone?
Without more details, the problem you describe looks like one everyone shares: most people struggle to formalize their thoughts (and programming is a formalization). No one can translate their knowledge into code without mistakes. People make mistakes; that is okay.
Are you comparing yourself to someone else? Or to some "ideal" image of "how you should work"?
Maybe you could provide more concrete examples of your problem, like "I need to do X, but I struggle here and here and make mistakes here and here."
My statement about the lack of huge investments in risky experiments may indeed be too strong. In the end, we are talking about people, and they are always unpredictable. Partly, I formulated it that way to get a strong validation point for several of my personal models of how the world and society work. However, I still believe it is more probable than the opposite statement.
Speaking about strong AI.
The child-parent analogy is the simplest one I found for the post. The history of humanity has a long record of communication between societies at different levels of maturity and with different cultures, of course. Those contacts didn't always go well, but they...