
I used to be worried, but not so much anymore.

It used to be the case that the labs were prioritising replacing human creativity, e.g. generative art, video, writing. However, they are coming to realise that just isn't a profitable approach. The most profitable goal is actually the most human-oriented one: the AI becomes an extraordinarily powerful tool that may be able to one-shot particular tasks. But the design of the task itself is still very human, and there is no incentive to replace that part. Researchers talk a bit less about AGI now because it's a pointless goal. Alignment is more lucrative.

Basically, executives want to replace workers, not themselves.



On the contrary, the depth and breadth of software work we can handle agentically is growing very rapidly, to the point where in the last 3 months the industry has undergone a big transformation and our job functions are fundamentally starting to change. As a software engineer, I increasingly feel that AGI will be a real thing within the next few years, and it's going to affect everyone.


I don’t write code anymore. I don’t use IDEs anymore. The agent writes the code. My job now is to manage AI.

The paradigm shift has already happened to me and there will be more shifts to come.


"to the point where in the last 3 months the industry has undergone a big transformation"

Oh... this again.


If you look at those operating at the bleeding edge, it doesn't look anything like yesteryear. It's a real step change. Fully autonomous agentic software engineering is becoming a reality. While still in its infancy, some results are starting to be made public, and they're mind-boggling. My team at work is transitioning to a fully agent-only workflow. The engineering task has shifted from writing code to harness engineering: essentially building a system that can safely build itself to a high quality, given business requirements.
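To make "harness engineering" concrete, here's a minimal sketch of the generate-verify-retry loop such a harness is built around. All names here (run_agent_step, run_checks, harness) are hypothetical, and the agent call is stubbed with a canned answer so the sketch is self-contained; a real harness would call an LLM API there and run a much richer battery of checks.

```python
import os
import subprocess
import sys
import tempfile

def run_agent_step(prompt):
    # Hypothetical agent call: a real harness would send the prompt
    # to an LLM API here. Stubbed with a canned answer for the sketch.
    return "def add(a, b):\n    return a + b\n"

def run_checks(code, test_code):
    # Verify the candidate: write code plus its tests to a temp file
    # and execute it; a zero exit code means all assertions passed.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=30)
        return result.returncode == 0
    finally:
        os.unlink(path)

def harness(requirement, test_code, max_iterations=3):
    # Core loop: generate, verify, and retry with feedback until the
    # checks pass or the iteration budget runs out.
    prompt = requirement
    for _ in range(max_iterations):
        code = run_agent_step(prompt)
        if run_checks(code, test_code):
            return code
        prompt = requirement + "\nThe previous attempt failed its checks; try again."
    return None  # budget exhausted without a passing candidate

accepted = harness("Write add(a, b) returning the sum.",
                   "assert add(2, 3) == 5")
```

The point of the loop is that the human specifies the requirement and the acceptance checks, while the agent iterates inside the guardrails; quality comes from the checks, not from trusting any single generation.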

Up until recently I kinda felt the scepticism was warranted, but after building my own harness that can autonomously produce decent-quality software (at least at toy-problem scale, granted), and getting hands-on with autoresearch by writing a set of skills for it (https://github.com/james-s-tayler/lazy-developer), I feel fundamentally different about software engineering than I did even a short while ago.

Look at the step change from Sonnet 4.5 to Opus 4.5 and what that unlocked, and consider that the rumoured Mythos model is apparently not just an incremental improvement but another step change. Then pair that with infrastructure for operating agents at scale, like https://github.com/paperclipai/paperclip, and the SOTA harnesses being written about on the frontier labs' blogs... I mean, you tell me what you think is coming down the pipe?



