pards 13 hours ago [-]
In my large enterprise world, AI adoption hasn't made it outside of the development teams - only developers have access to Github Copilot.
Code takes 6-12 months to make it from commit to production. Development speed was never the bottleneck; it's all the other processes that take time: infra provisioning, testing, sign-offs, change management, deployment scheduling etc.
AI makes these post-development bottlenecks worse. Changes are now piling up at the door waiting to get on a release train.
Large enterprises need to learn how to ship software faster if they want to lock in ROI on their token spend. Unshipped code is a liability, not an asset.
SlinkyOnStairs 12 hours ago [-]
> Development speed was never the bottleneck; it's all the other processes that take time: infra provisioning, testing, sign-offs, change management, deployment scheduling etc.
So much of Management (both mid and executive) still considers Software as if it were an assembly line; "We make software just like how Ford makes cars". Code as a product.
Which isn't to say that most software development isn't woefully inefficient, but the important bits aren't even considered. "The Work" is seen as being writing code, not the research that goes into knowing what code has to be written.
And for AI marketing, this is almost a videogame-esque weakspot. Microsoft proclaims "50% faster code!" and every management fool thinks "50% faster product; 50% faster money!"
> Large enterprises need to learn how to ship software faster if they want to lock in ROI on their token spend.
It's going to be a disaster once ROI is demanded. Right now everyone is fine with not measuring it; Investors are drunk on hype and nobody within the company actually wants to admit that properly measuring software development productivity is almost impossible.
But the hype won't last forever. Sooner or later investors will see the "$2M spend" and demand "$4M net profit", and that's not going to materialize.
Copilot and Claude won't be tackling the real bottlenecks. They're not going to dredge up decade old institutional knowledge, they won't figure out whether code looks bad because it is bad or because it solves a specific undocumented problem, they won't anticipate future uses.
Code just isn't the product. Not the real work. Really, if your codebase is in a healthy state, it's often a literally free output of the design and research processes. By the time you've refined "our procurement team finds the search hard to use" into a practical ticket, the React component for the appropriate search filters has basically already been written, writing up the code is just a short formality. Asking Copilot would turn a 10 minute job into a 5 minute job. Real impressive, were it not for the 6 hours of meetings and phone calls that went into it.
SoftTalker 7 hours ago [-]
"We make software just like how Ford makes cars".
People who say this kind of thing probably have no idea how Ford makes cars either. The assembly line is the last step. All the research, design, engineering, and testing happens before any sheet metal is stamped out. So the comparison might be more true than not, but unknowingly.
Ma8ee 4 hours ago [-]
Exactly. It's just that they mix up the steps. The last step, the assembly, is highly automated and usually very fast in software production, since it is done by a compiler (and the aptly named assembler). The people involved are doing engineering and design, which is much harder to control.
AnimalMuppet 4 hours ago [-]
Not even that. It's done by the CD duplicator (or, these days, by the web server).
AngryData 1 hour ago [-]
At least 90% of the time, whenever someone mentions Ford it is to spew out Ford PR garbage they once heard and took as real history instead of the marketing it really is. It should be held up as an example of how powerful well-designed PR is and the myths it generates.
Brian_K_White 7 hours ago [-]
It's not true at all. In software, the factory line is nothing but cp or httpd and neither costs nor produces any value. In cars, the factory line both costs and produces all the value.
jldugger 6 hours ago [-]
> the factory line both costs and produces all the value.
I think the point OP is trying to make is that manufacturing and design are separate steps with different workflows and expectations. And that the design step does have value, as without it your factory line has nothing particular to make or sell.
Nobody is sitting around Ford trying to make the clay modeling step faster or more error free, it's a design function. But there are hundreds of software execs out there trying to do exactly that. In part because cp and git and make and your other build tools that make up the factory line function are pretty much rock solid and cost optimized to nearly free.
saltcured 5 hours ago [-]
Wait, I thought it was the auto company financial services division that produces all the value.
The design, factory, supply chain, etc. is just the marketing arm for the loans...
B1FF_PSUVM 5 hours ago [-]
> In cars, the factory line both costs and produces all the value.
Does that apply to phones?
ambicapter 6 hours ago [-]
Really? So all the designers and engineers at Ford who don't have an iota of a car built by the time they're done with their work aren't producing any value?
thephyber 4 hours ago [-]
Marketing and finance are also very large components of cost and value, respectively.
It was a short pithy sentence, but it does have a kernel of truth to it.
Brian_K_White 6 hours ago [-]
The design is like the electricity to the factory. You need it or else you get no cars at all, but it's a small percentage of the total resources consumed and produces no value at all directly.
thfuran 12 hours ago [-]
>Sooner or later investors will see the "$2M spend" and demand "$4M net profit", and that's not going to materialize.
I think this is probably going to happen at the same time that the providers start really jacking up token prices to extract all the value they can.
daheza 8 hours ago [-]
I'm a manager, and the VPs are starting to ask how many story points we are getting with AI now. We do story points = number of days to implement. (I know this is not real agile, but just assume you are in the same position.)
I can't answer that question but plenty of other managers are fully ready to just give bogus numbers.
For my team, use of AI has indeed lowered the story point cost. The coding part of the story takes less work so we have started to lower the story point cost for stories that would previously cost more. Think of a 5SP to 3SP reduction.
We have increased the number of features being delivered but our number of story points delivered has remained static.
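The dynamic described above (more features shipped, flat point totals) can be sketched with hypothetical numbers:

```python
def sprint_summary(story_points):
    """Sum up a sprint: how many stories shipped, and their total SP.
    All numbers here are hypothetical, just to illustrate the effect."""
    return {"features": len(story_points), "points": sum(story_points)}

# Before AI assistance: three 5-point stories per sprint.
before = sprint_summary([5, 5, 5])       # {'features': 3, 'points': 15}

# After re-pointing that same class of work from 5SP down to 3SP:
# five stories fit in the sprint, but the point total is unchanged.
after = sprint_summary([3, 3, 3, 3, 3])  # {'features': 5, 'points': 15}
```

Measured in points delivered, nothing changed; measured in features, throughput went up by two-thirds. Which metric the VPs look at determines the answer they get.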
nradov 7 hours ago [-]
When management starts tracking improvements in story point productivity then the agile teams inflate their story point estimates. Sometimes this involves splitting user stories in ways that don't really make sense from a customer perspective just for the sake of having a place to tack on more points.
And I'm not opposed to using story points, they have some utility within an agile team or program. They just aren't a valid way of quantifying productivity changes.
at-fates-hands 2 hours ago [-]
A few years ago I was on a massive project to rebuild and redesign a major public facing portal. Our dev team was nails, cranking out features and components on a very tight time table. Several other teams were inflating their story point estimates so when the higher ups would get their weekly reports, our team was always in dead last for completing story points.
Our manager brought us into an all hands meeting and kind of read us the riot act because now we were on "Bob the executive" radar because it looked like we really weren't delivering much week by week. Had anybody actually looked at the amount of work we were doing and what we were shipping, it wouldn't be close.
Exactly as you predicted, we started over inflating our stories, creating Epics when they weren't needed, breaking out a single feature into a dozen or more stories. Over the next few weeks, we were all getting pats on the back for "really picking up the pace". When in reality, we were just doing the same thing we always did.
It just reinforced the idea that Agile had turned into a system that was easy to manipulate to create the illusion you were doing more than you really were. I imagine we're going to see a lot more of this as C-Suite folks start clamoring for ROI on the millions they're spending on tokens.
SoftTalker 7 hours ago [-]
> story points = number of days to implement
Some variant of this has been the case in every agile team I've ever worked on.
recursive 3 hours ago [-]
No one's ever been able to explain story points to me without saying something like "Story points are kind of like how long it will take to implement, except it's not that".
So what is it then? All the explanations and examples are in units of time, but with a disclaimer saying that the true nature of story points is not time-based, except for the fact that they can only be explained in terms of time.
tweetle_beetle 3 hours ago [-]
I once met someone who refused to engage with leadership using his team's story points as a direct measure of productivity. To make it harder to extract the data and compare against other teams, they moved to using names of animals to represent types of task associated with differing amounts of uncertainty.
I've also seen a supplier who was asked to provide some kind of tracking, where literally nothing existed. Their delivery team produced reports with story points per person, per task, per sprint. Every sprint, every person hit their target month after month after month. They were asked to stop.
Terr_ 3 hours ago [-]
IANAScrumlord, but ideally story points are like a foreign currency: It's both normal and healthy for exchange rates to constantly fluctuate, and every country (team) has its own units for capturing guesswork and confidence and quality/speed mix.
The managerial goal is to take near-past moving average rates (from completed tickets) and use them to forecast near-future expectations. 1.0 of Team Alpha's points might mean 4 hours this week... but anybody who shows up six months later expecting exactly the same rate is foolish, doubly-so if they expect it to be the same across teams, or after a big change in staff or tooling or project.
______
Other musings: Whenever a manager says "my current estimate of the rate is X pts/hr, use that when sizing", I feel it's a mistake. It kills off the intuition you really want to capture. Team members ought to be comparing expected tasks to past tasks.
Also, the goal of "accurate scheduling predictions" exists in conflict with "measure employee output". Trying to use your point-system for one generally harms the other.
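The "exchange rate" idea above can be sketched as a moving average over recently completed sprints (team name and all numbers are hypothetical):

```python
from collections import deque

def points_per_hour(sprint_history, window=3):
    """Moving-average 'exchange rate' from the last few completed sprints.
    Each record is (points_completed, hours_spent)."""
    recent = deque(sprint_history, maxlen=window)  # keep only the window
    pts = sum(p for p, _ in recent)
    hrs = sum(h for _, h in recent)
    return pts / hrs

# Hypothetical recent history for "Team Alpha":
history = [(20, 80), (25, 80), (18, 80)]
rate = points_per_hour(history)  # ~0.26 pts/hr over this window
# Re-derive the rate every sprint; six months from now it may
# legitimately be a different number for the same team.
```

The point of the moving window is exactly the currency analogy: the rate is a near-term forecast, not a fixed conversion constant to hold teams to.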
koyote 2 hours ago [-]
I always see SP as a combination of time and risk. I think a lot of people do not include risk in the estimate.
So a story might be estimated at 3SP to implement but there's a high risk that it would blow out (e.g. idea was not fully proven in a PoC, work is in an area that is historically underestimated, reliance on a different team, etc.), so we set it to 5SP to include that risk. Maybe 50% of the time it does get finished in what a normal 3SP would finish in, but at least we've covered the 50% of time it blows out.
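One way to read that 3SP-to-5SP bump is as an expected value over the blowout probability. (The 7SP blowout figure below is a hypothetical back-fill to make the arithmetic work, not from the comment.)

```python
def risk_adjusted_points(base, blowout, blowout_prob):
    """Blend the happy-path estimate with the blowout case,
    weighted by the estimated chance of blowing out."""
    return (1 - blowout_prob) * base + blowout_prob * blowout

# A 3SP story with a 50% chance of turning into 7SP of work
# prices out at 5SP, matching the bumped estimate above.
estimate = risk_adjusted_points(3, 7, 0.5)  # 5.0
```

Half the time the story finishes in the normal 3SP timeframe; the padding covers the other half.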
tardedmeme 4 hours ago [-]
I've always asked for it when joining a team to calibrate my story point estimates. At some teams 1 point is about a half hour task and at other teams it's a full day.
pydry 3 hours ago [-]
If you earnestly believe story points are a good measure of productivity then I'm afraid you have a lot to learn.
SlinkyOnStairs 11 hours ago [-]
Almost certainly. Software firms are pretty bad at self-evaluation and they're profitable enough that Capitalism won't force them to do it either.
Right now the subscriptions are still in the range of reasonable business expenses, but pretty soon they'll have to jump, and $200/month/seat subscriptions turning into $2000/month/seat subscriptions is going to get even very badly run companies to re-evaluate.
busterarm 11 hours ago [-]
It's worse than that. Developers themselves are drunk. They'll be cut off from tools right when they no longer understand the underlying code they're responsible for.
We're already here even. I know of a company that was doubling their Codex spend and hitting the cap week over week and finally they had enough and stopped increasing. Then they maxed out on credits and had a week of no Codex. A large percentage of the engineers loudly refused to work for the rest of the week. They were managing the Codex managing the codebase and were totally incapable of dealing with its output without it.
jimmypk 9 hours ago [-]
[flagged]
cdud3 8 hours ago [-]
Amen. We are still running highly unoptimized workflows in AWS, and nobody reviews why we spend so much money on that now, while it was peanuts when we did it all ourselves.
danaris 8 hours ago [-]
> They're not going to dredge up decade old institutional knowledge
Worse—they'll get the people who hold that knowledge laid off, and at least 50% of the institutional knowledge won't be documented anywhere that even could be fed to the LLM.
palmotea 8 hours ago [-]
> So much of Management (both mid and executive) still considers Software as if it were an assembly line; "We make software just like how Ford makes cars". Code as a product.
Which, it should be noted, is the dumbest idea ever. The Ford assembly line makes more-or-less identical copies of the same design. How do you do that with software? The cp command.
If someone thinks like that, they probably read some business book and either didn't understand the book, don't understand their own business, or are following some guru who has one of those problems.
DoctorOetker 8 hours ago [-]
Precisely, cars are more-or-less identical copies; at each position along the assembly line it's just one of a handful of variants of the step that needs to be executed.
Software is less like an assembly line and more like plumbing:
Some people design which type of pipe needs to be routed from here to there.
The implementor actually pipes the outputs of one function, in a variable, and then taps it off as an argument to another function.
Software development is like plumbing really, so a good manager of a pipeworks and plumbing company might actually make a good manager for software companies as well.
This is also why it's actually not so surprising that LLMs are mastering programming skills: it's essentially just being a plumber, and a lot of people are happy they no longer need to be a plumber. Physicists, engineers, scientists, ... they have much more complicated tasks compared to plumbers, programmers and code monkeys.
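The pipe-routing picture above is essentially function composition; a toy sketch with entirely hypothetical names:

```python
def fetch_orders(customer_id):
    # Stand-in for an upstream source (a DB, an API, ...).
    return [{"id": 1, "total": 40}, {"id": 2, "total": 75}]

def only_large(orders, threshold=50):
    # One pipe segment: filter the flow.
    return [o for o in orders if o["total"] >= threshold]

def summarize(orders):
    # The final fitting: aggregate what came through.
    return {"count": len(orders), "revenue": sum(o["total"] for o in orders)}

# The plumbing: route one function's output into the next as input.
report = summarize(only_large(fetch_orders(42)))
# {'count': 1, 'revenue': 75}
```

Designing *which* pipes to run where is the part the analogy leaves out, and the part the replies below push back on.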
AlotOfReading 6 hours ago [-]
An assembly line is plumbing too. And just like software, while there are a finite number of variants at any specific time, the plumbing is constantly being rearranged for various reasons. A line won't look the same in 6 months as it does today.
> Physicists, engineers, scientists, ... they have much more complicated tasks compared to plumbers, programmers and code monkeys.
I've sat next to the industrial engineers designing the lines and MechEs working in CAD. My software job wasn't all that different at a high level. We all wrote requirements, made bugfixes, and complained about the tier 1s. They usually spent more time visiting the lines in Asia/Mexico/Michigan/Canada. I just emailed the factory when I needed to fix something.
palmotea 6 hours ago [-]
> Software development is like plumbing really, so a good manager of a pipeworks and plumbing company might actually make a good manager for software companies as well.
No, wrong again. Some software development tasks are like plumbing, but that misses a lot. Your claim is sort of like saying that since the Wendelstein 7-X has wiring, the manager of an electrical contractor would be good to lead that project.
Plumbers and electricians more or less solve the same problems over and over with slightly different parameters, and because of the repetitiveness, they can do a good job by following (a hefty number) of rules of thumb (the building code). A software developer isn't going to go far just throwing design patterns at a problem (though many bad ones try).
QuercusMax 4 hours ago [-]
The plumber who has a robot who can make perfectly measured custom one-off tools and specially constructed piping runs inside your walls is going to have super powers compared to somebody who has to go to home depot and assemble a bunch of PVC pipes or whatever.
Just the other day I needed to make a calibration interface for a home automation app (pointing a dumb webcam at my washer and dryer so I can tell if they're done without running up and down two flights of stairs). I just wanted to be able to look at the whole scene and manually pick the ROI to extract and display on my home dashboard. So I asked the AI to build me a stupid little web UI where I can just click to select the ROI center, and what it built me in 10 seconds was perfect for my needs.
Was it pretty? Not really. Was it what I would have built myself? Not quite - but it solved the problem I had without me needing to remember or look up how to do all the specifics.
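The ROI-extraction half of a setup like this (not the commenter's actual code; the click-to-select UI is omitted) is only a few lines once you have the chosen center:

```python
import numpy as np

def crop_roi(frame, center, size):
    """Cut a square region of interest around a clicked (x, y) center,
    clamping the box so it stays fully inside the frame."""
    h, w = frame.shape[:2]
    half = size // 2
    cx = min(max(center[0], half), w - half)
    cy = min(max(center[1], half), h - half)
    return frame[cy - half:cy + half, cx - half:cx + half]

# A fake 480x640 grayscale frame; a real setup would grab webcam frames.
frame = np.zeros((480, 640), dtype=np.uint8)
roi = crop_roi(frame, center=(600, 20), size=100)
# roi.shape == (100, 100), with the box clamped away from the corner
```

The dashboard side then just displays `roi` (or a status derived from it) instead of the full frame.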
ninjagoo 1 hours ago [-]
> pointing a dumb webcam at my washer and dryer so I can tell if they're done without running up and down two flights or stairs). I just wanted to be able to look at the whole scene and manually pick the ROI to extract and display on my home dashboard. So I asked the AI to build me a stupid little web UI where I can just click to select the ROI center, and what it built me in 10 seconds was perfect for my needs.
The machines all beep when they're done.
A baby monitor in the machine area will accomplish the same thing :-)
But of course the project is much more fun ...
tardedmeme 4 hours ago [-]
Have you considered taping a sensor to the light, or measuring the electrical current flowing through the power cord? Both should be a bit more reliable. The idea of messing with mains power is scary at first but with basic precautions it's fine, and I think you can buy current meters with various interfaces if you aren't comfortable.
QuercusMax 4 hours ago [-]
Why would I buy power meters and mess around with indirect signals that don't measure what I want (how much time is left on the cycle) and instead just tell me whether it's running?
I already have an old webcam and raspi I'm not using, and they measure exactly what I care about.
ninjagoo 58 minutes ago [-]
> how much time is left on the cycle
Ooh, new requirement!
Set timer on phone when setting timer on machine...
Henchman21 8 hours ago [-]
> If someone thinks like that
So like 95% of business school graduates?
embedding-shape 13 hours ago [-]
> Large enterprises need to learn how to ship software faster
They haven't even learned that "less code is better" yet; I wouldn't hold my breath waiting for them to suddenly learn "more advanced" things like that before they learn the basics.
dawnerd 8 hours ago [-]
Exactly. Everyone keeps talking about how much velocity they have with AI, but no one's talking about quality or the bugs that come from it. Just how fast they can ship.
Ekaros 8 hours ago [-]
Or building what is actually needed or makes sense. Most efficient investment is one you do not have to make...
forinti 12 hours ago [-]
More code means more support and more maintenance. If your team is already overloaded or if it's going to be reduced because of AI, things are going to get tough.
lxgr 12 hours ago [-]
My bigger concern is actually that, if a company isn't careful, the bloat (complexity, amount of code, other artifacts etc.) will just balloon and largely cancel out any gains.
Feedback is often only considered once something is already on fire (financially, functionally, or literally).
nkrisc 11 hours ago [-]
That’s the game plan for the AI companies: once companies have massive codebases of critical AI generated code and a skeleton crew of prompt engineers they’re going to be locked in to the AI product to develop anything new.
They’re not even selling shovels, they’re selling subscriptions for shovels.
lxgr 9 hours ago [-]
I don't think there needs to be any intentional evil scheme for this dynamic to be worth considering and mitigating.
For example, there's also no cabal behind memory prices dropping (ignoring the development of the past months, of course), which in turn enabled web and game developers to use more memory and make their software non-viable on older devices.
darth_avocado 6 hours ago [-]
From the article:
> The whole thing dies if it turns into employee scoring
All ICs know this, and some management does, but I'm willing to put money on it 100% becoming employee scoring.
arkh 9 hours ago [-]
With the factory analogy: code is stock. Stock costs money and is a liability, hence Lean manufacturing.
_pdp_ 13 hours ago [-]
Yep.
I would argue that any sufficiently large system reaches a point where more code is in fact the opposite of what it needs.
Nutrition and calories are only useful up to a point, and then we have diminishing and later on negative returns.
Even though it is not the best analogy, because we are describing two different systems, it helps put a mental model around the fact that churning out more is often worth less.
Side note: I got feedback from a customer today that while our documentation is complete and very detailed, they find it too overwhelming. It turns out a few bullet points to get the idea across are better than a 5-page document. In hindsight it is obvious.
WorldMaker 7 hours ago [-]
> I would argue that any sufficiently large system reaches a point where more code is in fact the opposite of what it needs.
I have absolutely worked on code bases I would describe as "marbleized bricks" where the best thing I can do is carve out the statue they already contain. There's a great satisfaction in making PRs that mostly delete things, but the later result is a program that works faster, has fewer bugs/edge cases, is easier for the next person to debug.
The LLMs certainly can add more layers of marble. Companies often don't know how much more they need an artist with sculpting tools than a bricklayer.
razodactyl 12 hours ago [-]
Seeing this too. Machines are great at pumping out content.
Tl;dr's, quick references / QuickStarts / cheat sheets and FAQs are also some things they're great at generating.
yetihehe 12 hours ago [-]
Like in that comic strip[0], where one side uses AI to inflate his bullet points to make it look better and have more content in the email, then other side uses AI to summarize it to bullet points.
This probably happens a billion times a day. I shudder to think of the cost of it, especially knowing that LLMs aren't great at summarizing, nor are they flawless at expanding information.
Yep, it's 100% a theory of constraints issue. Any optimization not done at the bottleneck (post-development) only serves to worsen the bottleneck.
thisisit 8 hours ago [-]
In my large enterprise world, AI adoption today seems to have taken a turn for the worse.
Finance folks reached out asking if they could vibe code their own app using Copilot/Cursor/Claude for finance planning purposes. And because they know my management freezes whenever there are whispers of "our CFO said so", they even paraded that reasoning: "our CFO 'tested' Lovable and he is convinced and asking us to vibe code the app".
If that is not enough they ended with a nicely wrapped reasoning of "we need to try this to be sure that using vibe coded app can exist in enterprise finance with appropriate data security and maintainability".
And mind you, this is the reasoning at a company with more than $20 billion in revenue.
highfrequency 8 hours ago [-]
What’s wrong with the finance team (vibe) coding a janky prototype for planning?
thisisit 7 hours ago [-]
Because the next email is going to be "we demoed the app to our CEO and he loved it and he wants it in production". I have never seen the team back down from their ideas.
And also, I have been in similar-sounding scenarios multiple times. They talk big when everything is going smoothly and nothing is on the line. The day shit hits the fan, they will furiously message me on Teams and insist that I support them in finding the issues. So far it has mostly been about shitty design choices. This is a whole new level. They want to vibe code an app which will be used to plan and guide the company's direction for the next year.
JohnMakin 8 hours ago [-]
Prototypes/MVPs often become the production version.
auspiv 6 hours ago [-]
And their spreadsheets don't?
JohnMakin 6 hours ago [-]
[dead]
sandeepkd 7 hours ago [-]
This seems to be the default path which is encouraged/suggested lately: only the happy path until you acquire customers.
JohnMakin 7 hours ago [-]
I think it’s fairly normal at least in my career - rush to ship something, lots of “we’ll polish this later,” two years go by, get called into vp/cto/whoever’s office when the debt comes calling like “what the fuck why is this like this???” and I have to say “that ‘later’ we decided is now I guess”
sandeepkd 4 hours ago [-]
The script I have often seen played out is where the one doing the MVP gets rewarded and moves on with a promotion. The weight of completing and stabilizing the MVP falls on someone else who is not vocal enough in terms of influence. Ironically, the flashy MVP does not include monitoring, logging, security, edge cases, CI/CD, DR, or scaling, which is why vibe coding is getting so popular and everyone seems to be under the impression that engineers are not needed anymore.
KellyCriterion 5 hours ago [-]
...and are often still in place when the "magic guy who built it" left long time ago...
redsocksfan45 8 hours ago [-]
[dead]
kjkjadksj 8 hours ago [-]
Run for the hills. These sorts of businesses can get by with an old DOS app running in an emulator to manage sales and inventory. They don't care about maintainability or anything we care about. If it works, it works, and they will squeeze it to work as long as they possibly can. Which could very well be three decades or more.
nazcan 7 hours ago [-]
It may not be the biggest bottleneck, but if you can ship in a similar amount of time while reducing the number of engineers by 30%, that's a huge win.
And having fewer people involved means there is much less communication and alignment overhead.
Not to say it's a panacea.
soks86 7 hours ago [-]
That's an interesting take I don't see anyone else bringing up.
It would also, I would think, make it easier for the remaining engineers to earn a better living in the long run, and reduce human management effort.
This makes the most sense to me. So far AI, being fallible, can only augment humans, so you can have fewer humans do the same work (or tasks where accuracy can be less than 100%, like lower-level support calls/questions). Next comes the task of re-balancing the distribution of labor or teaching other departments to utilize AI.
To me that rings the most true because where AI saves me the most time is in never having a bug that takes more than a few hours to pinpoint, even if I'm looking in the wrong place, because with enough clues the AI will look in the right place before I think of doing so. Like finding a needle in a haystack. It doesn't suddenly make me 100x more productive, but it saves a lot of time on some time consuming tasks.
nazcan 6 hours ago [-]
The debugging improvements have been huge for me too. I was debugging some financial software, and while it took a few shots, just with access to my code and not to the database that showed the issue, it found a fairly complex problem.
mattmcknight 12 hours ago [-]
"release train" ... "learn how to ship software faster"
SAFe is poison.
zbentley 4 hours ago [-]
What do release trains have to do with SAFe? I've worked in a few places that have the notion of a release train (roughly a mutex for "changes being shipped and a burn-in window afterwards" plus some optional batching of changes based on risk level or urgency). Only one of them even gave lip service to SAFe; others had different methodologies/levels of devops for releases/"Agile"-ness (whatever that means).
Release trains seemed like a mechanical release implementation detail in all cases; not the product or requirement of a given SDLC or process brand.
dgellow 11 hours ago [-]
The Mythical Man Month should really be a mandatory reading for anyone working in software… and I don’t mean reading a Claude summary
Henchman21 8 hours ago [-]
I asked who hadn’t read that in an engineering meeting recently. No one had heard of it. Now, I have been in tech for my entire life. That book was given to me in 1993 as I was just starting out in college.
NO ONE in my meeting was familiar with the title or author.
I felt a little more impending doom upon realizing that.
Mashimo 10 hours ago [-]
Same here, but instead of the developers having access to Github Copilot, a select few devs have access to some internal proxy that goes to Amazon Bedrock, where we have "400 requests" per week to Claude Sonnet :))))
kj4211cash 12 hours ago [-]
We have a "two timelines" approach going on and I'm curious if others are seeing the same. There are official "Engineering-supported" services; there, development speed is not the bottleneck. Engineers demand clean requirements that take forever to show up. Testing and deployment scheduling also take forever post-development. Important people are so fed up that they've started hiring people to vibe code and develop services without going through Engineering. Code is shipped much faster here, but technical debt accumulates rapidly. The important people are beginning to hire Data Scientists who sit outside of the Tech org to manage the AI code. It's all very interesting.
samothrace 7 hours ago [-]
Sounds like your company has bad or missing business analysts and/or product owners. Someone's supposed to be working between the stakeholders and engineering to develop those requirements and commit resources for testing. These "important people" are re-inventing the wheel and will be mired just as bad or worse until they figure this out.
kj4211cash 47 minutes ago [-]
I agree with you in general. And have a good chuckle when the vibe coders get derailed by some roadblock that would've been obvious to a professional engineer. But it's a bit one-sided to say that we have bad or missing analysts and product. Even good product can't keep up with how fast AI allows you to go. At an established company, maybe you shouldn't go as fast as AI allows you to go, but try telling that to the "important people."
giancarlostoro 9 hours ago [-]
> Unshipped code is a liability, not an asset.
"Hey we found this bug and-"
"We already found it with Claude; we're still waiting for our next release."
Or worse, it's a bug that doesn't exist in prod, since the code keeps changing, and you won't know about the bug until it's out there, because there's one niche user with a niche use scenario everyone forgot or didn't even know existed, and he's going to somehow crash the entire system with your next deployment.
agloe_dreams 6 hours ago [-]
Meanwhile in the private equity world, they have realized that code is "piling up from 10Xing everyone's performance" and as a result they have solved it by just firing all QA, focusing only on speed to prod, and killing sign-offs, deployment scheduling, and code review. We are probably going to bankrupt ourselves from an idiotic mistake somewhere in here. But nobody will ever know until it happens. Don't take those gates for granted.
dev360 11 hours ago [-]
It's trickling into non-dev teams slowly. I'm consulting with a large fintech on enterprise-wide AI adoption at the moment, and I'm seeing the same parallels: you have power users who reap disproportionate rewards from it, and then you have the "tab complete" crowd who copy-paste things into the prompt.
This was a huge motivation behind me trying to design an AI automation platform that comes "batteries included". I also think a lot of orgs, even engineering orgs do not know how to configure basic things like Claude plugin repositories into their installs.
ge96 8 hours ago [-]
> Code takes 6-12 months to make it from commit to production.
That seems wild, niche/highly specialized field?
zbentley 4 hours ago [-]
Incredibly common outside of startups and low-risk small-to-medium sized webtech shops.
pards 8 hours ago [-]
Nope. Banking.
chrisss395 12 hours ago [-]
It's good to know your experience mirrors mine. Developers are moving faster, but the rest of the organization is holding them back because processes and decisions still rely on other parts of the org. Has anyone else observed the same?
Organizations "born in AI" appear to buck this trend for obvious reasons (no legacy org. to deal with). My two cents.
impjohn 9 hours ago [-]
I'd say there are bottlenecks within the developer processes without even considering org processes. Code review, release, post-release ceremonies. It feels like they absorb much of the productivity gained in the coding phase.
wildrhythms 10 hours ago [-]
But then how will all of the know-nothing management types get their fingers in the pie?
razodactyl 12 hours ago [-]
Especially when it waits a month and all the effort is either irrelevant or incompatible with latest changes that finally got through. So much token wastage to top off the recent chaos. Hopefully it improves just as fast as it materialised.
perarneng 3 hours ago [-]
This will eventually bubble up and get exposed and then things will start to roll fast.
TrackerFF 13 hours ago [-]
Which is why there's currently a gold rush of "Enterprise AI" startups which implement / offer agents to enterprise businesses.
butlike 10 hours ago [-]
Unshipped code can't break. How is it a liability and not an asset? The money maker is making money and the changes that potentially would interfere with that are held up at the gate. Seems like a good thing from a business perspective.
pards 9 hours ago [-]
It's an investment that is not generating returns. It's inventory sitting on the shelf requiring ongoing maintenance costs but generating no income.
ambicapter 5 hours ago [-]
You spent money making it, and now it's not making any money. It's pure liability (in the accounting sense, not the legal liability sense).
impjohn 9 hours ago [-]
Code gets stale really fast. Re-gathering context and re-aligning on old code is sometimes more painful than starting from scratch.
dawnerd 8 hours ago [-]
Also more chance for scope creep.
ericmcer 7 hours ago [-]
That's enterprise tech also?
I wonder what adoption is like at older non-tech companies.
The office next to mine is being used to teach a bunch of 20-30 year olds how to be insurance brokers using a powerpoint presentation. Copy paste the presentation into an LLM and you just replaced them all. It feels like... things might be kind of dark in 10-20 years if we just keep barreling down this road.
pards 6 hours ago [-]
It's scary. Our society is not set up to deal with mass unemployment.
QuercusMax 6 hours ago [-]
A lot of Bullshit Jobs[1] are going to be replaced by AI - it's not clear whether these "brokers" who can be trained with a powerpoint presentation are actually doing anything useful for society or even for the economy.
My dog gets excited and barks to let me know whenever someone is at the door. It might crush his spirit if he knew he was just a useless pet and his contribution was meaningless.
badc0ffee 5 hours ago [-]
If nothing else, it signals to people approaching your door that there is a dog they may not want to have to deal with.
__loam 2 hours ago [-]
Every line of code is a liability
themafia 5 hours ago [-]
> lock in ROI
On code you cannot possibly copyright. Yea they're all on the verge of "Locking in."
reactordev 11 hours ago [-]
Sounds like the typical ServiceNow paralysis. The “Mother May I” model.
nonameiguess 3 hours ago [-]
There are so many elements to this. I've worked in nearly every part of software orgs. Development, ops, professional services, pre-sales. There are bottlenecks everywhere. Faster shipping gets you nothing if your customers' procurement budgets don't increase and they're not buying anything. You can wow new customers and lure them in with shit you get out the door super quick that appears to work but falls flat after six months of usage, but you damage your reputation in the long run. So how do you guarantee your software will actually work after six months of usage? You have to test it by running it yourself for six months. There is no other way. No automated suite can exercise every single possible customer use case over a long period of time. It's a combinatorial problem.
Just yesterday I was in a meeting with a customer asking if we could make our FOSS virtualization platform work such that if you yank the root disk out of a server and put it in another one, everything will work with no hiccups. Well, provided it's exactly the same model and you're going to put it on the same network with all the same IP assignments, you've got a shot. I've actually tried to do this before for the hell of it, and as long as you have no other drives and everything else is exactly the same, I only needed to account for the MAC addresses of the NICs being different. I'm sure I could whip up something that scans for the predictable interface name and changes the old MAC stored in the NetworkManager configuration files (and wherever else it might happen to be) to the newly discovered one before making a DHCP request, and maybe that will work, but how certain can I really be? I can test on servers I have, and I don't have every possible combination of data center equipment all of our customers have. There is no feasible way to test every possibility. Having an LLM whip up the code for me instead of writing it myself doesn't change that.
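For illustration, the MAC-rewrite idea described above could be sketched roughly like this. This is a hypothetical sketch, not the commenter's actual tooling: it assumes the pinned MAC lives in a NetworkManager keyfile as a `mac-address=` line, and the helper name is made up.

```python
import re

def update_stored_mac(keyfile_text: str, new_mac: str) -> str:
    """Rewrite the pinned MAC in a NetworkManager keyfile's text so it
    matches the NIC actually present in the new chassis."""
    # Keyfiles pin the NIC with a line like "mac-address=AA:BB:CC:DD:EE:FF"
    return re.sub(
        r"(?m)^(mac-address=)[0-9A-Fa-f:]+$",
        lambda m: m.group(1) + new_mac.upper(),
        keyfile_text,
    )

# On a real system you'd read the live MAC from /sys/class/net/<iface>/address
# and rewrite each file under /etc/NetworkManager/system-connections/ before
# bringing the interface up and making the DHCP request.
sample = "[ethernet]\nmac-address=AA:BB:CC:DD:EE:FF\nmtu=1500\n"
print(update_stored_mac(sample, "11:22:33:44:55:66"))
```

Even with something like this in place, the commenter's point stands: the rewrite is the easy part, and testing it across every combination of hardware is what's infeasible.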
Ironically enough, that customer is making software for another customer, and their own requirement is that it has to run on actual hardware on an airplane, which they don't have. So they're working on little NUC clusters in their cubes and at their houses instead, because their company doesn't have extra true server racks for them to use and no budget to acquire them, which probably won't change any time soon given the spike in hardware prices. They're all using AI, but what good is it doing? They're spinning their wheels because they're targeting a runtime environment that doesn't exist and that they can't test on.
It's a weird folly of the Internet age that the largest companies in the software world are all web companies. Mostly, they're media companies in disguise. Their only real product is human attention and they sell it to advertisers. Tech is just the vehicle that allows them to deliver it. We've valorized their "ship as fast as possible" ethic, which maybe matters, maybe doesn't, but it was never the source of their value. Nobody spends ad money on Facebook and Google because of the quality or delivery speed of their software. It's the human users and data they've captured, which to be clear, software plays a huge role in, but it's not a model all software companies can follow. We don't earn revenue from half braindead doomscrollers wasting most of their day with a background drip of vaguely dopamine-boosting noise blasting into their senses while they leak every fact about their lives to media companies. Our customers have to make intentional decisions to spend money out of finite budgets.
There's another story on the frontpage right now of Coinbase laying off a bunch of its employees and using AI to write more code. Okay, great, but the best that can do is reduce labor expense. They only earn more revenue if consumers decide they want to buy more crypto and hold it in Coinbase. If Coinbase is using AI to write their software, so is everyone else, so that doesn't give them any kind of edge on quality or shipping speed. Their success is going to be determined overwhelmingly by whether or not people want to buy crypto, a broad market trend completely out of their control. No one in any business ever wants to admit this, but we're all at the mercy of these broader trends.
People are all over this thread citing Ford. Ford didn't decline because they couldn't ship fast enough. They declined because the market stopped wanting what they were making except their full-size pickups, and it's largely just Americans that want those. I don't blame them or think they did anything wrong exactly. People love to do these post-mortems contemplating a world in which someone like Ford accurately predicts every single shift in consumer sentiment that will ever happen and always stays ahead of the curve. It'll never happen. Everything that goes into style eventually goes out of style, and your ability to ship out-of-style shit faster won't help you.
You said you work for a bank and I'm honestly curious. What causes a customer to choose your bank over another? Do you think it has anything to do with software features? I'm lucky I even got a meeting with the customer I was with yesterday. He told me he loves our product and fought hard for it over a chief architect who wanted something else and made them do a long comparison study to prove our product met their needs better. Why did that chief architect prefer the other product? He plays golf with their CTO.
kakacik 10 hours ago [-]
Do you work in my company? :)
I kept saying this since Day 1 of LLMs: even a 99% reduction in development time means almost nothing in our company for the speed of delivery of whole projects. And we are introducing a generator of code that semi-randomly performs poorly under perf bottlenecks and fills the codebase with... sometimes questionable solutions. Sure, one has to check the results all the time, but then time is spent on code reviews, not much less than actual (way more fulfilling, rewarding and career-boosting) development.
Now I understand there are many more scenarios where gains are more realistic and sometimes huge, but it certainly ain't my current working place. So I use it sparingly to not atrophy my skillset but work estimates are so far the same and nobody questions that.
dakiol 8 hours ago [-]
If you're a regular engineer like me, there's no real upside to using AI in a company setting. They're boiling us. Of course, the HN elite (investors, execs, celebrities, and top-tier engineers) will say otherwise because "how can you be against innovation man?"
AI/LLMs aren't innovation the way TCP/IP, Linux, or Postgres were. To be clear: Claude/Codex/Gemini/Grok/whatever exist for profit, to squeeze the last drop of productivity out of you until there's nothing left, and then you're disposable (laid off).
If you like AI, use open source models, use them in your side projects.
asdfman123 7 hours ago [-]
1) The game is not ending, it's changing. AI can sling a lot of code but we still need engineers that actually understand what the hell is going on. That's always been the bottleneck. It could eliminate junior positions, but seniors are fine for now.
2) It's been a hard lesson for me to learn because I'm naturally a contrarian, but you are hired to do what management wants you to do. If you resist, your best bet is to hope they don't notice or care, but it's not going to change much.
herpderperator 5 hours ago [-]
If you eliminate juniors over the next few years, there will be no seniors for the future when the current seniors retire.
testudovictoria 4 hours ago [-]
The people for whom that's a problem don't understand this fact. Of the ones that do, there's upper management and/or shareholder pressure for profits now. It's a can that gets infinitely kicked down the road until they reach a dead end.
asdfman123 4 hours ago [-]
No, it's a "tragedy of the commons" problem, for lack of a less dramatic phrase.
If you ran a software company, would you want to train juniors who are slower than AI and much more expensive? Who would just jump ship in two years?
It wouldn't make sense to. "Someone" should do it: someone other than you.
patch_dev 54 minutes ago [-]
I see this take all the time, but hiring a junior/intern has never been great ROI, so I hear. Why did we ever do it in the past? It's not like it was ever likely that hiring a junior means getting an employee for life. Could it be that economic and shareholder pressures are requiring this rather than it being a logical thing?
ttoinou 1 hours ago [-]
Please go ahead and train juniors on your free time. Put your resources where your mouth is.
root_axis 3 hours ago [-]
They're making the bet that seniors won't be needed by then. I think it's a bad bet, but it makes sense to follow through if 40% of the economy is already being occupied by this tech.
sanderjd 5 hours ago [-]
Yep, I'm a fan of the current crop of AI tools because they are incredibly useful to me, but I have deep concerns about the pipeline problem.
Our_Benefactors 5 hours ago [-]
Then pay will go up again for those mid-level developers who still remain, and companies will again overhire and overtrain like we saw during COVID years. “We won’t have any seniors in ten years!!1!” is a handwringy problem that self solves by the free market.
sanderjd 5 hours ago [-]
Seems like the upside is that it makes the job way easier? What am I missing?
sdevonoes 4 hours ago [-]
I'm your CEO. I see you and the rest of your peers have doubled your productivity in the last 2 months because of Claude. Good job! Now since we don't really need to go that fast, I'll fire half of you so I and my investor friends can make more profit.
Now of course, you may think you are such a good engineer that companies will kill for you... perhaps that's true now, but it's not true for 90% of the engineers out there. And as the pool of engineers gets reduced, the chances of you not being as good as you thought go up. So the real question is: can we all still make a good living by not using LLMs? You know, support each other and fuck the higher-ups? No, we can't. We are full of ourselves, full of elitism (this is HN). We are rational folks, we believe in numbers, in data; we know what we deserve. Fuck the rest. The ones who win are the higher-ups, of course, not us.
saxelsen 3 hours ago [-]
In reality, I think it's more likely that the lay-offs will be when the marginal rate of growth slows down. Once executives see that growth doesn't change much when hiring, they stop hiring, and once they see that growth doesn't decrease much when firing, they start firing.
There's still an opportunity for engineers to eat their bosses lunch and just start their own company. It's never been easier to start a lower cost competitor.
Employment isn't a social law of nature: it's a transaction of money for "units of work", just like the business might have with other vendors. Governments should be making it easier to become a vendor.
turblety 3 hours ago [-]
> Good job! Now since we don't really need to go that fast, I'll fire half of you so I and my investor friends can make more profit.
Is this a thing? Are there companies out there that don't want to go faster?
Andrex 18 minutes ago [-]
The market can sneeze and suddenly there's a wave of hiring freezes, sounds plausible to me.
koonsolo 2 hours ago [-]
So now your competitors go twice as fast as you. Good luck with that.
recursive 2 hours ago [-]
Some of us still haven't figured out how to hold it right. So on average it doesn't make anything easier. Sometimes it works, and sometimes it just fails. Net effort change for me is about a wash. I know this is different from most people's experience, and I don't know if I just suck at using it. But I'm not generally inclined to use it much as a result.
cj 5 hours ago [-]
It seems like a lot of developers have philosophical disagreements with the direction of AI combined with fear of change and fear that AI makes them less competitive in the job market. I see people regularly boycotting or rejecting AI for a variation of these reasons, and it feels a lot like self-sabotage.
renegade-otter 3 hours ago [-]
My biggest challenge is to look productive while still having some time and focus left to be a good expert. After all - we are just code reviewers now, and you are a no good reviewer if you never get any shovel time yourself.
The juniors are eliminated and the seniors indulge in cognitive surrender because it feels good.
zbentley 4 hours ago [-]
What do you do all day at your job?
Serious question. I think the reason that there's such a disconnect among AI-for-work users about whether it's a panacea or bullshit accelerator is that different software developers have massively different duties and conceptions about what their job is or should be.
bonesss 5 hours ago [-]
There’s an engineering story of being abused by capitalists, but from an Executive perspective the whole thing strikes me as insane except for ‘next quarters bonus’.
Anyone remember what SCO did to the industry as it went under?
The part I still don’t get is where Enterprises are dumping internal ‘secrets’ (code, processes, customer needs, internal politics, leadership dreams), into the hands of startups and untrustworthy conglomerates. MS used to be famous for NDA and deal abuse.
I don’t believe for a second the LLM giants would be shy about training on corporate materials and lying about it. And if they start going under? This gold rush might have a long, ugly, tail.
graphememes 2 hours ago [-]
This is a really bad take. Many on Hacker News seem to have a very skewed idea of what a CEO thinks about their employees, or of why firings happen in the first place.
Quite honestly, the people being fired are the ones who are not adopting the technologies; if you're one of them, you're quite literally putting yourself in scope.
Just read the Coinbase news today. They are culling those who are not adopting the future because they get in the way of progress. They don't help, they don't push things forward, and they hold back those who do.
olsondv 13 hours ago [-]
The post hits the nail on the head with the messy middle. There is simply no motivation to develop this sort of intelligence loop as a dev who has their own responsibilities which their job depends on. Management can ask as nicely as they want, but I'm not going to selflessly share my productivity gains with the broader company for free. I might share a tool if it's useful. All the learning of how to wrangle AI or set up agents is better kept to myself if there is no recognition for sharing.
My company set up a “prompt of the week” award and brown-bag sessions to help spread adoption. We also have teams meant to develop these workflows. Clearly, they set these events up to play it off as their own productivity. Without a real (read “monetary”) incentive or job security, the risk and cost of spreading the knowledge falls squarely on the developer.
ravenstine 12 hours ago [-]
It kinda racks my brain how a lot of people don't think this way. For example, way before the current state of AI, I wrote my own CLI to make aspects of my job easier and easier to write scripts to automate; some colleagues have noticed my tool and said I should share it, and my diplomatically worded answer is no. I don't share it with anyone because of the negative return in both supporting it and everyone else being able to be as productive as I am. Moreover, leadership will not recognize my ingenuity as an asset, hence no added job security. No way am I going to help my company out of the goodness of my heart to be potentially let go anyway in the near future.
If developers are worried about their jobs with the way the market currently is, they should treat their personal workflows as trade secrets. My example was not specific to AI, but it applies just as much to AI workflows. In a worker's market, it was sometimes fun to share that kind of knowledge with an organization. In an employer's market, they can pay me if they want access to my personal choices.
stronglikedan 5 hours ago [-]
> I don't share it with anyone because of the negative return in both supporting it and everyone else being able to be as productive as I am.
That sounds like a toxic environment. Sharing those types of things is how I got the recognition to get ahead in my career and I have never once regretted it.
lobb-deep 5 hours ago [-]
At least at the Fortune 500 level, there are only toxic environments. And job security has never been weaker.
AngryData 3 hours ago [-]
Yeah, if there is no gain then employees shouldn't be giving any more than exactly what they were hired for. Most big companies are and should be treated as adversarial, because they won't think twice about dropping your ass, you are just a name in the HR departments computers to anyone you don't directly work with every single day. I think a lot of tech employees bought into all the bullshit because they made such good money and were for awhile uncommonly skilled. But their uncommon skill sets have become more and more common while the actual knowledge needed by individual employees has dropped. All the garbage conditions many game programmers and artists have to deal with? Yeah that is coming for the entire tech industry, and that isn't the low point, that is the shit pile just picking up speed. It should be obvious looking at almost every other industry after a few decades.
c-linkage 9 hours ago [-]
In my place of employment, anything I create while on company time or using company resources is the property of my employer.
So while it might be nice to say I won't share, boss-man can certainly make it so I must share.
mediaman 7 hours ago [-]
Ownership of the IP, as it were, is certainly true, but usually with these tools, most of the battle is documenting it, training people, answering questions, etc., and if you aren't motivated to do that it's very hard to make it happen.
Boss-man actually has a very difficult time turning legal theoretic right into actual deliverables.
AngryData 3 hours ago [-]
They can't force you to share what they don't know about or don't understand.
netrap 7 hours ago [-]
It's not about that, it's about the incentive...
MyHonestOpinon 10 hours ago [-]
Over the years, I too have developed ad hoc tools to make my job easier or faster. I don't hide them, but I do not share any since the tools are not really ready for that. I don't have them properly documented, other people would not understand how to use them, why and all the quirks. I suppose a lot of developers do the same.
alaudet 12 hours ago [-]
I sadly have to agree with this. In a collaborative "give and take" world sharing is good. In an environment that takes only, all you have left is your own intellectual property. It is your own most vital asset worth protecting. Shouldn't be like this, but it is.
pu_pe 10 hours ago [-]
I don't think this way because I like to collaborate. If a colleague can benefit from a tool I made I'm proud to save them time. I also think your attitude doesn't pass the golden rule: would you like to work on a team full of people like you?
TallGuyShort 10 hours ago [-]
I tend to agree with you - a rising tide lifts all boats and I want my team to be a rising tide. If I'm at a startup and I'm confident my tool is a good fit for what the rest of the team is doing and there's a genuine teamwork dynamic, oh absolutely I share things like this.
But when I've been stuck for a while in a dysfunctional team, I've definitely seen the flip side where other people will find ways to take a lot of credit for minor iterations on my work, where management will reward my productivity with high expectations and high pressure to continue the trajectory they perceive in a single idea, and when the tool becomes a support burden because too many people think it should solve all of their other problems too and I'm now perceived as being the owner of this thing they depend on.
libria 10 hours ago [-]
It does seem like a highly antagonistic way of working or perhaps I'm just naive.
If your only goal is to maintain a performance lead on your peers, you either need to 1) gain and keep an advantage or 2) find ways to actively put your coworkers at a disadvantage (or both). And if you're already doing 1) then 2) isn't a far stretch.
> would you like to work on a team full of people like you?
If their team is already like this, what choice do they have? It's a prisoner's dilemma where everyone else is defecting and I'm the sole cooperator.
IMO the onus for solving this is on the business owner, either through establishing a knowledge sharing culture or more comprehensive performance evaluation that rewards these innovations.
dogleash 9 hours ago [-]
> I don't think this way because I like to collaborate.
Nice passive aggressive dig!
Brian_K_White 6 hours ago [-]
I go completely the opposite direction. I stick my name right in the script and write a wiki page documenting it as clearly as I can manage. It becomes part of my value proposition to the company.
anonymars 12 hours ago [-]
What are your thoughts on open source? Seems like the same problem writ large
ravenstine 11 hours ago [-]
I love open source, but you are correct in identifying it as a very similar problem, though it's more a problem with software licensing than source code being publicly available. Usually the argument is made that FOSS ends up as free labor, which is true in a lot of ways, but I see FOSS devaluing software as a whole. When software is open and libre, that sends a psychological signal that the software isn't that valuable. There would still be FOSS in a world where even projects like React charged a licensing fee to big organizations, but in that case there would be more choice between YOLOing with free software or paying for quality software; as token expenses have proven, many companies could absolutely pay for the latter many times over. In terms of specifically open source, however, companies get a bit of a loophole in that their own employees (or LLM of choice) can be "inspired" by the source code and clone aspects of commercial software. This has the effect of devaluing the skill of individual software engineers to that of glorified script kiddies.
SoftTalker 7 hours ago [-]
> I wrote my own CLI to make aspects of my job easier
I mean, according to your employment agreement, that code is owned by your employer, since you wrote it as an employee for use at work. They could easily demand that you share it, if they knew it existed.
This just illustrates that smart people figure out their own productivity/time-saving shortcuts at work, and little scripts and tools like this are part of it. Happens all the time. Other employees don't, and just plod through whatever manual process they were trained to do.
ravenstine 5 hours ago [-]
Yeah, well, I challenge them to do that. In the meantime, I'll keep it to myself.
Our_Benefactors 5 hours ago [-]
It sucks to work with people like you, honestly. Prima donna types who overindex on their own personal paranoias instead of trying to succeed, grow, and excel along with the people around them. Quite literally not a team player.
DrammBA 10 hours ago [-]
It sucks to treat the workplace as adversarial, but we unfortunately have to as long as companies have the zero-sum mindset of "wow, everyone is so productive and we're achieving so much, why do we have so many people again?"
And I'm not a "at work we're a family!" guy, but I wish we could just be excellent at our jobs and share it with each other without worrying if I'm digging my own grave.
thfuran 12 hours ago [-]
>but I’m not going to selflessly share my productivity gains with the broader company for free.
If your employer is expecting that you selflessly share your time for free, you’re getting fucked. Most people are paid to do their job. They are, of course, then expected to work for their employers while on the clock.
olsondv 11 hours ago [-]
May not have been clear. My job is not AI development. I have features to deliver. The ask from employer is to add the AI knowledge sharing on top of it. They don’t pay for that. When layoffs come, it wouldn’t save me from missed deliverables.
LeCompteSftware 10 hours ago [-]
I refuse to use LLMs and don't have a job, so I'm just some guy.
What I find strange about this is that in 2020 nobody would be this openly cynical and selfish about, say, good Python idioms, a useful emacs configuration, git shortcuts, etc. This attitude of "your job is to deliver value for the customer, anything else is a distraction, and if you share your hard-earned value-delivery techniques with others then you are a sucker" - this is new, and very disconcerting.
I understand there's not much we can do to stop the cyberpunk dystopia, but do we have to leap in head-first?
TallGuyShort 10 hours ago [-]
> What I find strange about this is that in 2020 nobody would be this openly cynical and selfish about, say, good Python idioms, a useful emacs configuration, git shortcuts, etc.
I definitely saw people have concerns about vimrc files and their personal library of shell scripts well before 2020, and I've seen people early in their career get burned by sharing it too. They had a tool that made them productive, it got out of their hands, and suddenly they're getting negative feedback from someone who tried using it and it didn't meet their expectations, or it got checked into the repository and now the script they used at their last job too has their current job's copyright notice and license on it, and they're perceived as being petty for trying to claw back their own intellectual property because they didn't go to the trouble of slapping legalese all over their personal tools.
Izkata 3 hours ago [-]
Those are completely different from a tool you wrote yourself: Where do they get support when something goes wrong?
This mindset has always existed in the area we're talking about, and not because it's sharing something to speed up with. It's because we don't want to get stuck doing a second job supporting the tool.
I've built all sorts of random tools for myself over the years and haven't shared a single thing, but share the tips and tricks like your examples all the time.
hnthrow0287345 9 hours ago [-]
I wouldn't share that with a manager even if they asked. If management were tech competent, they could proactively find inefficiencies themselves and allocate time to it instead of letting developers do all of the thinking for them.
If they gave immediate raises or bonuses for stuff like this, then things would change.
CoffeeOnWrite 8 hours ago [-]
Do y'all non-sharers not have equity in your companies?
pesus 5 hours ago [-]
The average dev probably doesn't have any significant amount of equity in their company. The stock price at my company going up just means my quarterly checks are going to be $4 instead of $3.
ap99 9 hours ago [-]
These "secrets" that are being hidden are basically on the same level as gary tan's list of super uber powerful prompts.
None of it is actually that crazy that everyone else could think up.
What I've noticed in my own experience here is that even when I do share my own prompts/skills few people use them (or alternatively they were so basic that everyone already had their own version).
e.g. If someone doesn't care about xyz before AI, they probably won't after AI even if I serve them it on a silver platter.
9x39 8 hours ago [-]
Another angle I see is that AI tools start by benefiting the individual and the user captures the increased productivity (you could argue appearance thereof) in the form of slack time. Some tedium almost eliminated here, a problem handed off and crushed over there, and we've got an extra hour or four back in our day.
Does that person rationally go find more work to take on with that reclaimed time? Probably not unless it's their company or exceptional motivating circumstances exist.
r_lee 8 hours ago [-]
I see a lot of this talk on HN
yet I don't see anyone question whether management will be just as excited to see that less work is needed and that it'd just result in layoffs
9x39 6 hours ago [-]
Oh, they would be, but the benefits of AI aren't evenly spread like peanut butter. I subscribe to the 'ai as amplifier' POV, so the fast get faster, and generally productive people don't get squeezed or the scrutiny the laggards do IME.
Contrast to remote work where the benefit was extended to all regardless of performance, thus becoming a large target for management to cut.
I think the talk about management & capital demanding ROI will be the inflection point to watch, as a downstream effect could be AI haves & have-nots, depending on open weight models' competitiveness and local capability relative to the SOTA models.
alaudet 12 hours ago [-]
As a 3-year-retired Systems Analyst I feel bad for my younger colleagues. In 2023 I was one of the first on my team to use AI to untangle some legacy Perl code that did something mission critical, whose original author had long ago left and apparently didn't understand anything about actually commenting code or documentation. We were all in awe of this new technology that got us out of a bind. But more and more it looks less like a tool that is available to you and more like something that is being _done_ to you. Nobody asked for this.
At what point are inspiration and thought just devalued and worthless in the name of doing things instantly? The work has no soul.
woodydesign 13 hours ago [-]
Great article. The part that stood out to me is the shift in how organizations define work.
In the old model, performance and OKRs were anchored in disciplines, job titles, and role-specific expectations. In the AI era, those boundaries are starting to collapse. The deeper issue is psychological and organizational: people are constantly negotiating the line between “this is my job” and “this is not my responsibility.”
That creates a key adoption problem: what is the upside of being visibly recognized as an expert AI user? If people learn that I can do faster, better, and more cross-functional work, why would I reveal that unless the company also creates a clear system for recognition, compensation, or career growth?
dgellow 11 hours ago [-]
Eventually, whoever is responsible for fixing prod incidents and doing maintenance ends up with the ownership. And I agree that’s pretty messy in a world where agents are crossing those boundaries. Will the AI engineer with their horde of agents be responsible for keeping everything running? I really doubt it, but we will see
zbentley 4 hours ago [-]
> whoever is responsible to fix prod incidents and maintain has the ownership.
There's a mistaken assumption under there that businesses can identify who that statement describes.
Some can. But a lot of businesses cannot identify and reward, or support, or just not RIF, those people. Like a lot a lot. More than I'm comfortable lumping under statements like "well those are just bad places to work/places that should be shut down".
There's no punchline or counterproposal there; that's just my observation.
ohnei 12 hours ago [-]
If they create a system to compensate expert AI users, wouldn't that career have a built-in problem? Anyone enticed by the new career's existence could take that expert's advice on the company's particulars, integrate it with an approach that's a few weeks more modern, and thereby put the expert in the very role of domain expert being eliminated.
woodydesign 11 hours ago [-]
The part I push back on is the idea that expertise is easy to learn in just a few weeks.
Take Andrej Karpathy as an example. Even if I knew exactly what tools he uses and what his workflow looks like, I still would not be able to produce anything close to what he can produce in a few weeks. And he is not standing still either—he is evolving at the same time.
A lot of real expertise is not in the visible/system-able workflow. It is in someone’s experience, taste, judgment, and wisdom. You can copy the artifact, but you cannot easily copy the thinking behind it: the principles, the decision-making, and the ability to apply those principles across many different/subtle situations.
But I do agree with the concern behind the argument. People may worry that sharing what they know could weaken their own position. And the more uncomfortable question is about peers: if someone’s role can be “retired” because others absorbed their knowledge and skills, then it is hard not to ask, “Am I next?”
ohnei 10 hours ago [-]
Sure, but here you are not talking about the 100,000+ local firm experts for Windows networking or coding with agents; you are talking about the people who can rewrite the best advice and make those local experts out of date, where their small experiences probably don't make up for not having integrated X, Y, or Z yet.
ap99 12 hours ago [-]
Well, that's fine until your teammate does all of those things by default and gaps show up between them and the rest of the team.
cadamsdotcom 12 hours ago [-]
AI by itself isn’t that useful. An agent forgets and makes enough mistakes that you have to check all its work, which can be net productivity negative.
It really comes into its own when you treat it as a tool that can build other tools. For example, having it build tools that force it to keep going until its work reaches a certain quality, or runs compliance checks on its outputs and tells it where it needs to fix things. Then and only then, can you trust its work.
Right now most current roles & workflows are designed around wrangling the tools you’re given to do a certain job. In that regime AI can only slide in at the edges.
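The self-checking loop described above can be sketched in a few lines. This is a toy version where the agent call and the check runner are injected stand-ins, not any real agent API:

```python
# Sketch of a quality gate: the agent's work isn't trusted until the
# project's own checks pass. `run_agent` and `get_failures` are injected
# stand-ins for a real agent invocation and real test/lint runs.

def agent_with_gate(task, run_agent, get_failures, max_rounds=5):
    run_agent(task)
    for _ in range(max_rounds):
        failures = get_failures()
        if not failures:
            return True                                 # passed every check
        run_agent(f"These checks failed: {failures}. Fix them.")
    return False                                        # escalate to a human

# Toy demo: an "agent" that fixes one outstanding bug per invocation.
state = {"bugs": 2}

def fake_agent(prompt):
    state["bugs"] = max(0, state["bugs"] - 1)

def fake_failures():
    return ["tests"] * state["bugs"]
```

In practice `get_failures` would shell out to the test suite and linters; the point is that the loop, not the human, does the nagging.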
blitzar 13 hours ago [-]
> Where is the ROI for the 2 mio € we paid Anthropic last year?
The CEO has a youtube style platinum token plaque for their office.
dominictorresmo 12 hours ago [-]
It's just ass to work in this area now. In the company I work, the bosses let everyone use it, even non-developers. I really want to quit and work in another area but unfortunately where I live a beginning salary can't pay a rent and I'm getting old
romaniv 11 hours ago [-]
> "Where is the ROI for the 2 mio € we paid Anthropic last year?"
The bias in the assumptions here is absolutely bonkers.
Problem: GenAI is not generating any visible return on investment.
"Solution": rearrange your entire development organization around the technology and start inventing new tooling.
What's entirely obvious is that the point of such articles is not the stuff they purportedly discuss, but the normalization of assumptions those discussions are based on.
paodealho 11 hours ago [-]
LLMs can't fail, they can only be failed ... by you!
Terr_ 8 hours ago [-]
Hence OpenAI sinking $4b into "The Deployment Company", to pump up the number of Very Helpful Consultants who can offer to help your company overcome its tragic failure to adopt and buy, for a small fee...
ch_ase 12 hours ago [-]
It’s been helpful for me to look at the promise of AI by comparing with the dotcom boom. Lots of similarities.
But the internet was a simpler concept for businesses. Basically, it was: you can now sell to people from their computers. AI's promise is what? It can approximate reasoning about things? This is a much more challenging implementation puzzle to truly solve.
I don’t know that I’ve seen anything of real substance outside coding tasks yet.
threecheese 12 hours ago [-]
TIL my $company has used the same consultants as this guy. We started with Training and Champions, to Leadership/Lab/Crowd with a CoE/brown bags.
We are definitely struggling with the same issues author describes, but even worse the leaders down at the Crowd level have some perverse need to achieve reuse across their teams, rather than letting their Crowd experiment. One team does something interesting, we must stop and get that thing out to all teams in that group, so everyone “benefits”. This is a scarcity mindset, which made sense pre-AI where code was costly and ideas were more valuable.
At the same time, everyone not only has to do their work, they need to be 25% more efficient from AI (new KPIs), and so their own learnings slow to a halt, and the team with the cool idea has to give presentations instead of hacking.
Cthulhu_ 13 hours ago [-]
On the first part of the article, I believe it describes how individual productivity gains do not seem to translate to business / larger scale productivity. I think this is expected; individual developer productivity, code volume, LOC/day never was a valuable metric on a company scale. Number of delivered features might be one, but ultimately, revenue and customer growth etc are.
While I do believe higher developer productivity can lead to faster reacting to market forces or more A/B testing, that won't necessarily lead to a successful business. Because ultimately it rarely is the software that's the issue there.
lbrito 2 hours ago [-]
>This cannot become employee surveillance...If people believe the organization is measuring whether they used enough AI, they will game the signals.
In many midsize companies I've worked as an engineer, developing a growth plan is something I have been asked to do, and to share it with my manager. Then at the end of the quarter I go over the growth plan and the manager agrees or disagrees with my actual growth, and gives me a performance bonus befitting my success..
I propose employees create self-training byproducts as a result of any AI interaction. And then they also work with their Cuban manager to make sure that these self-training byproducts are a part of their growth plan. This can guarantee growth without losing that opportunity To interact with the intelligent AI system (on topics that are relevant to the company's short, mid, and long-term strategic advantage,).
jt654 13 hours ago [-]
This is a great article. It helps you realize that the feedback loop is the goal, but it won't just happen, and traditional methodologies don't really support it. Has anyone here found a good way to get teams in a company to focus on the loop instead of productivity hacks?
zidoo 12 hours ago [-]
Once people try to increase quality instead of speed, they will see how powerful LLMs are. Everything else is just a sales pitch by Nvidia and friends.
wongarsu 12 hours ago [-]
Even if LLMs write more buggy code they can still bring up software quality in the short to medium term by allowing you to clear out a lot of the backlog of bugs and UI issues that are known but never had enough priority to be fixed
Debugging and developing first fixes is also one of the spaces where current LLMs are the biggest force multipliers. Especially if you have reproduction cases the LLM can test on its own
But long-term it might look very different as more and more of the code becomes LLM written
MyHonestOpinon 10 hours ago [-]
Makes sense to me. I can see how LLMs can help you make better systems. I don't have a crystal ball, but I can see how focusing on speed (or more precisely, volume) can have a lot of unintended consequences.
raffael_de 11 hours ago [-]
My biggest gripe with language models is that technical and conceptual discussions which used to be led organically (with people having to think about what others wrote and decide what to reply themselves) now turned into AI slop avalanches with participants just copy pasting obviously generated text into the discussion. And those texts are always very long and super weird to respond to because they usually are overall correct enough so you can't just reject them but are flawed all over the place, missing the point, lacking depth where it would be important, skipping over important steps. This is a huge time waste. Funnily, many people have no idea how obvious it is that their texts are generated and rubbish at that.
woeirua 7 hours ago [-]
The hype is extreme right now and everyone is still trying to figure out how to use the tools. People who are further along the bleeding edge are trying to tear down all the process that we used to have to further improve velocity. After people go all the way to "dark factories" most companies will realize that they don't actually have any good ideas for what to build, and honestly never did. They've been coasting for years, someone else can replicate their product now and its just a race to the bottom. At that point, token budgets are going to collapse.
I'm staunchly pro-AI as a technology, but I do think the bubble is going to pop in the next year or two just because the business value won't materialize for most companies fast enough.
rob74 13 hours ago [-]
One more point I noticed: since AI adoption is being promoted by companies, collaboration between developers could suffer. Why wait for a more experienced developer to have the time to explain some aspect of the codebase to you (and at the same time confess your ignorance), when AI can do it right away in a competent-sounding way (and most of the time it will probably be right, too)?
rogerthis 13 hours ago [-]
That already happens here. I am an old dev who was the go-to guy for people with certain business and technical questions. Not anymore (which is part good, as I'm interrupted much less, and part bad, as sometimes they regard the wrong answer as truth).
cadamsdotcom 12 hours ago [-]
You could vibe yourself up an AMA tool where people can submit questions, an agent goes to work on them, then the question and agent answer sit in a queue waiting for you to provide a review and give your weigh-in.
delecti 11 hours ago [-]
Coworkers are demonstrating that they value immediacy (and possibly some combination of embarrassment about their question or social anxiety about asking someone) over accuracy. Not only does that tool still require rogerthis to review the question, losing the immediacy of an LLM, but it might even take longer before he gets around to the queue.
djeastm 2 hours ago [-]
I think this is what people are paid by the hour to do as AI trainers at Mercor, etc.
i_think_so 12 hours ago [-]
I'm pretty sure this is the best idea I've ever heard of for this technology. You should build that tool and it should become mandatory throughout the tech world.
Can we get some enabling legislation? A UN resolution perhaps?
cadamsdotcom 11 hours ago [-]
Despite the snark I’ll engage.
The “get an immediate agent answer then a human expert’s fast-follow” is I think a great idea for many domains - imagine if you could get legal advice this way; the agent will have already explained the basics and the human expert just has to provide corrections - way less typing by humans.
Also, the corrections are now documented and could become future grounding for the agent.
Terr_ 8 hours ago [-]
I expect the time-limited expert will actually end up being tasked with more pain per request.
They won't just need to understand what problem the requestor has (or thinks they have) but also validate that the "immediate" feedback wasn't subtly horribly wrong.
dogleash 7 hours ago [-]
> The “get an immediate agent answer then a human expert’s fast-follow” is I think a great idea for many domains
So, like what already happens when my boss asks claude something and I have to pick up the pieces. Except now it's everything he slops about the topic, not just the ones we discuss later?
i_think_so 10 hours ago [-]
Absolutely zero snark. I'm serious. (About the serious part; obviously not the joke part.)
> a great idea for many domains
I completely agree. This is a great idea. If you don't do something with it I'm stealing it. ;-)
b112 13 hours ago [-]
I think you hit the nail on the head, it's probably right, most of the time. Or, maybe 89% right, 91% of the time.
The more I use AI, the more I see mistakes. I've noticed others see these same mistakes, correct them, then when queried say "Oh, it gets it right all of the time!". No, having to point out "you got this wrong, re-write that last bit" isn't "getting it right". And it's not that the code is wrong overtly, it's subtle. Not using a function correctly, not passing something through it should (and the default happens to just work -- during testing), and more. LLMs are great at subtle bugs.
So moving forward with this isolation you mention, ensures that maybe the guy in the company, the 'answer guy' about a thing, never actually appears. Maybe, he doesn't even get to know his own code well enough to be the answer guy.
And so when an LLM writes a weird routine, instead of being able to say "No, re-write that last bit", you'll have to shrug and say "the code looks fine, right?", because you, and the answer guy, if he exists, don't know the code well enough to see the subtle mistakes.
skydhash 12 hours ago [-]
I noticed that when I was implementing a build pipeline for a project. My changes introduced a runtime bug (I had only tested that the thing was building), but then another developer broke the pipeline while fixing the runtime bug. While it was my failure to introduce the runtime bug, I don’t think I can publish a fix for a bug without investigating why the bug appeared in the first place. Code is all about assumptions and contracts, and if something that was working breaks, that means something else has changed and you need to be aware of it.
user34283 12 hours ago [-]
In a large codebase it‘s probably next to impossible to get people who fully understand the code to explain it to you with unerring accuracy.
AI can get a pretty good picture, near instantly, whenever you need it.
It’s not just competent-sounding, it is reasonably competent, and certainly very useful for tasks like that.
homeonthemtn 13 hours ago [-]
That's a valid point. Dev/team member isolation, not a great environment to build
reaperducer 12 hours ago [-]
Dev/team member isolation, not a great environment to build
Gone are the days of mandatory corporate "synergy" and after-work bar gatherings to promote "team building."
AI is showing people in the tech industry that they're just interchangeable cogs. AI is bringing the offshored Indian work environment to Silicon Valley.
cmiles8 12 hours ago [-]
There are some improvements on coding and speed of developers, but more broadly in the enterprise AI is just producing a lot of slop that folks are getting fed up with.
AI content has a look and feel people sense immediately.
It’s amazing to see how quickly things shifted from “wow this is so cool, AI is going to change everything” to folks calling out “you lazy bum, this just looks like some slop you threw together with AI… let’s get some real thinking please.”
We are firmly heading into “trough of disillusionment” territory on the hype cycle.
aykutseker 11 hours ago [-]
The Hub captures decisions humans made. It can't see the ones they didn't, which is most of what AI ships. You instrument the deliberate half and inherit the undeliberate one.
simoncion 13 hours ago [-]
> There is another pressure building underneath all this. AI usage will become more visibly metered. The current enterprise feeling of “everyone has access, don’t worry too much about the bill” will not hold forever, at least not in the form people are getting used to. ...
> I do not want to make this a cost panic story, that would be the least interesting way to think about “rented intelligence”. The question is not how to minimize token spend in the abstract, any more than the question of software delivery was ever how to minimize keystrokes.
If tokens were as cheap as keystrokes -that is, effectively free- then "How do we minimize token spend?" wouldn't be a question that anyone asks. It's because keystrokes are effectively free that you only ask "How do we minimize the number of keys pressed during the software development process?" if you're looking for an entertaining weekend project. If keystrokes cost as much per unit of work done as the -currently heavily subsidized- cost of tokens from OpenAI and Anthropic, you'd see a lot of focus on golfing everything under the sun all the damn time.
fallpeak 10 hours ago [-]
Tokens _are_ as cheap as keystrokes. A single keypress by a full-time SWE averages out to $0.005-$0.02 (depending on typing speed and TC). The relationship is obscured because the keystrokes are usually part of a fixed-price subscription plan but they absolutely have a cost. Prior to AI this was in fact a large reason everyone pontificated about concise programming languages and elegantly factoring problems and DRY and...
netcan 12 hours ago [-]
Path dependencies between invention and utilization... are complicated and hard to fathom.
Our mental models of developments like the industrial revolution, literacy, printing or suchlike tend to be a lot more straightforward than how things play out in practice.
When a bottleneck is eliminated... you tend to shortly find the next bottleneck.
Meanwhile, there is an underlying assumption everyone seems to make that "more software, more value" is the basic reality. But... I'm skeptical.
To do lists, wishlists, buglists and road maps may be full of stuff but...
Visa or Salesforce have already exploited all their immediate "more software, more money" opportunities.
The ones in a position to easily leverage AI are upstarts. They're starting with nothing. No code. No features. No software. With Ai, presumably, they can produce more software and make value.
Also... I think overextended market rationalism leads people to see everything as an industrial revolution...which irl is much more of an exception.
The networked personal computing revolution put a PC on every desk. It digitized everything. Do we have way better administration for less cost? Not really. Most administrations have grown.
Did law fundamentally change due to digital efficiency? No. Not really.
If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.
cyanydeez 10 hours ago [-]
>If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.
It's possible capitalism will drive all enterprise to terrible codebases.
OutOfHere 3 hours ago [-]
Link is dead.
cyanydeez 13 hours ago [-]
I think if these companies had first adopted local models with fewer tokens out, and the learners had gotten to watch the tokens get made, there'd be a lot more understanding.
i_think_so 12 hours ago [-]
> one team uses Copilot as autocomplete and calls it a day. Another team runs Claude Code in tight loops, with tests, reviews, and constant steering. A product owner suddenly prototypes real software instead of mocking screens in Figma. A senior engineer delegates a root-cause analysis to an agent and comes back to the valid solution in under an hour; this would’ve taken him two weeks without AI. A junior person produces polished code but has no idea which architectural assumptions got smuggled into the system. A support team quietly turns recurring tickets into workflow automation, because they know exactly where the work hurts and nobody in the Center of Excellence ever asked the right question.
This is just sales copy for various AI companies, laundered through an "influencer". It might as well be the CIA sending their article to be published in Daily Post Nigeria, so that the NYT can quote it as "sources".
The title is just clickbait. The rest of the content is fluffy bunnies and rainbows. It's all summed up as "continue to consume product, but remember to also do X". Sales copy + HBR MBA bait.
The closest thing to an honest, less-than-rosy example is the "junior person" who has no idea about the code they committed.
What about the "senior person" who has no idea about the code they committed? What about the CISO who doesn't understand that pasting proprietary documents willy nilly into the LLM's gaping maw might have legal/security/common sense implications, and that it is his job to set policy on such behavior? What about the middle manager who doesn't even try to retain the most experienced dev in the company because "we don't need the headcount anymore, now that Claude is so fast"? What about the company eating its own seed corn because every single junior position has been eliminated and there are no plans for the future anymore? What about the filesystem developer who fell in love with his chatbot girlfriend and is crashing out on Discord?
Oh wait, scratch that last one. He left the company and is crashing out on his own.
Carry on, then.
lbrito 2 hours ago [-]
>What about the filesystem developer who fell in love with his chatbot girlfriend
Fear not: he has a place to feel welcome and included!
Indeed. Any developer who has used Copilot knows you can't rely on it 100%.
The post's head image immediately bothered me. Copilot's strength is not in patching the SDLC but in speeding up the catching of typos and minor oversights. If you use it as an integral part of the SDLC, it causes problems immediately. So why posit the strawman? Marketing.
chrisjj 8 hours ago [-]
> The company may still learn almost nothing.
Not a problem if the hired "AI" now does that job. /i
Copilot and Claude won't be tackling the real bottlenecks. They're not going to dredge up decade old institutional knowledge, they won't figure out whether code looks bad because it is bad or because it solves a specific undocumented problem, they won't anticipate future uses.
Code just isn't the product. Not the real work. Really, if your codebase is in a healthy state, it's often a literally free output of the design and research processes. By the time you've refined "our procurement team finds the search hard to use" into a practical ticket, the React component for the appropriate search filters has basically already been written, writing up the code is just a short formality. Asking Copilot would turn a 10 minute job into a 5 minute job. Real impressive, were it not for the 6 hours of meetings and phone calls that went into it.
People who say this kind of thing probably have no idea how Ford makes cars either. The assembly line is the last step. All the research, design, engineering, and testing happens before any sheet metal is stamped out. So the comparison might be more true than not, but unknowingly.
I think the point OP is trying to make is that manufacturing and design are separate steps with different workflows and expectations. And that the design step does have value, as without it your factory line has nothing particular to make or sell.
Nobody is sitting around Ford trying to make the clay modeling step faster or more error free, it's a design function. But there are hundreds of software execs out there trying to do exactly that. In part because cp and git and make and your other build tools that make up the factory line function are pretty much rock solid and cost optimized to nearly free.
The design, factory, supply chain, etc. is just the marketing arm for the loans...
Does that apply to phones?
It was a short pithy sentence, but it does have a kernel of truth to it.
I think this is probably going to happen at the same time that the providers start really jacking up token prices to extract all the value they can.
I can't answer that question but plenty of other managers are fully ready to just give bogus numbers.
For my team, use of AI has indeed lowered the story point cost. The coding part of the story takes less work so we have started to lower the story point cost for stories that would previously cost more. Think of a 5SP to 3SP reduction.
We have increased the number of features being delivered but our number of story points delivered has remained static.
And I'm not opposed to using story points, they have some utility within an agile team or program. They just aren't a valid way of quantifying productivity changes.
Our manager brought us into an all-hands meeting and kind of read us the riot act, because now we were on "Bob the executive's" radar, since it looked like we really weren't delivering much week by week. Had anybody actually looked at the amount of work we were doing and what we were shipping, it wouldn't have been close.
Exactly as you predicted, we started over inflating our stories, creating Epics when they weren't needed, breaking out a single feature into a dozen or more stories. Over the next few weeks, we were all getting pats on the back for "really picking up the pace". When in reality, we were just doing the same thing we always did.
It just reinforced the idea that Agile had turned into a system that was easy to manipulate to create the illusion you were doing more than you really were. I imagine we're going to see a lot more of this as C-Suite folks start clamoring for ROI on the millions they're spending on tokens.
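The dynamic described above is easy to see with made-up numbers: the per-story cost drops, output rises, and the velocity chart barely moves.

```python
# Illustrative numbers only: feature output rises 60% while the velocity
# chart, which is all the executive sees, actually dips slightly.

def sprint_velocity(features, points_per_feature):
    return features * points_per_feature

before = sprint_velocity(10, 5)   # 10 features at 5 SP each -> 50 SP
after = sprint_velocity(16, 3)    # 16 features at 3 SP each -> 48 SP

print(before, after)              # 50 48: more shipped, a "worse" chart
```

Which is exactly why teams under point-based scrutiny stop lowering their estimates, and the metric quietly detaches from reality.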
Some variant of this has been the case in every agile team I've ever worked on.
So what is it then? All the explanations and examples are in units of time, but with a disclaimer saying that the true nature of story points is not time-based, except for the fact that they can only be explained in terms of time.
I've also seen a supplier who was asked to provide some kind of tracking, where literally nothing existed. Their delivery team produced reports with story points per person, per task, per sprint. Every sprint, every person hit their target month after month after month. They were asked to stop.
The managerial goal is to take near-past moving average rates (from completed tickets) and use them to forecast near-future expectations. 1.0 of Team Alpha's points might mean 4 hours this week... but anybody who shows up six months later expecting exactly the same rate is foolish, doubly-so if they expect it to be the same across teams, or after a big change in staff or tooling or project.
______
Other musings: Whenever a manager says "my current estimate of the rate is X pts/hr, use that when sizing", I feel it's a mistake. It kills off the intuition you really want to capture. Team members ought to be comparing expected tasks to past tasks.
Also, the goal of "accurate scheduling predictions" exists in conflict with "measure employee output". Trying to use your point-system for one generally harms the other.
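The forecasting half of that, reduced to its essentials (the window size and sprint numbers are arbitrary):

```python
# Trailing moving average of completed points: reasonable as a near-term
# forecast, useless as a cross-team or cross-quarter productivity measure.

def forecast_velocity(completed_points, window=3):
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

history = [21, 25, 23, 30, 28]        # SP completed in the last 5 sprints
print(forecast_velocity(history))     # (23 + 30 + 28) / 3 = 27.0
```

Anything beyond this, like comparing the number across teams or holding it fixed after a staffing change, is where the measure starts harming the prediction.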
So a story might be estimated at 3SP to implement but there's a high risk that it would blow out (e.g. idea was not fully proven in a PoC, work is in an area that is historically underestimated, reliance on a different team, etc.), so we set it to 5SP to include that risk. Maybe 50% of the time it does get finished in what a normal 3SP would finish in, but at least we've covered the 50% of time it blows out.
Right now the subscriptions are still in the range of reasonable business expenses, but pretty soon they'll have to jump, and $200/month/seat subscriptions turning into $2000/month/seat subscriptions is going to get even very badly run companies to re-evaluate.
We're already there, even. I know of a company that was doubling their Codex spend and hitting the cap week over week, and finally they had enough and stopped increasing. Then they maxed out on credits and had a week of no Codex. A large percentage of the engineers loudly refused to work for the rest of the week. They had been managing Codex, which managed the codebase, and were totally incapable of dealing with its output without it.
Worse—they'll get the people who hold that knowledge laid off, and at least 50% of the institutional knowledge won't be documented anywhere that even could be fed to the LLM.
Which, it should be noted, is the dumbest idea ever. The Ford assembly line makes more-or-less identical copies of the same design. How do you do that with software? The cp command.
If someone thinks like that, they probably read some business book and either didn't understand the book, don't understand their own business, or are following some guru who has one of those problems.
Software is less like an assembly line and more like plumbing:
Some people design which type of pipe needs to be routed from here to there.
The implementor actually pipes the outputs of one function, in a variable, and then taps it off as an argument to another function.
Software development is like plumbing really, so a good manager of a pipeworks and plumbing company might actually make a good manager for software companies as well.
This is also why it's actually not so surprising that LLMs are mastering programming skills: it's essentially just being a plumber, and a lot of people are happy they no longer need to be a plumber. Physicists, engineers, scientists... they have much more complicated tasks compared to plumbers, programmers, and code monkeys.
No, wrong again. Some software development tasks are like plumbing, but that misses a lot. Your claim is sort of like saying that since the Wendelstein 7-X has wiring, the manager of an electrical contractor would be good to lead that project.
Plumbers and electricians more or less solve the same problems over and over with slightly different parameters, and because of the repetitiveness, they can do a good job by following (a hefty number) of rules of thumb (the building code). A software developer isn't going to go far just throwing design patterns at a problem (though many bad ones try).
Just the other day I needed to make a calibration interface for a home automation app (pointing a dumb webcam at my washer and dryer so I can tell if they're done without running up and down two flights of stairs). I just wanted to be able to look at the whole scene and manually pick the ROI to extract and display on my home dashboard. So I asked the AI to build me a stupid little web UI where I can just click to select the ROI center, and what it built me in 10 seconds was perfect for my needs.
Was it pretty? Not really. Was it what I would have built myself? Not quite - but it solved the problem I had without me needing to remember or look up how to do all the specifics.
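The core of a calibration UI like that is just the geometry of turning a click into a crop rectangle; a minimal sketch of that math (function and parameter names are hypothetical, not from the actual app):

```python
def roi_from_click(cx, cy, roi_w, roi_h, frame_w, frame_h):
    """Turn a clicked center point into a crop rectangle, clamped to the frame.

    The user clicks the center of the washer's display; we derive
    (x, y, w, h) for the region to extract from each webcam frame,
    shifting the box inward if the click is too close to an edge.
    """
    x = min(max(cx - roi_w // 2, 0), frame_w - roi_w)
    y = min(max(cy - roi_h // 2, 0), frame_h - roi_h)
    return x, y, roi_w, roi_h

# Click near the left edge of a 640x480 frame; the 100x80 ROI is clamped.
print(roi_from_click(30, 240, 100, 80, 640, 480))  # (0, 200, 100, 80)
```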
The machines all beep when they're done.
A baby monitor in the machine area will accomplish the same thing :-)
But of course the project is much more fun ...
I already have an old webcam and raspi I'm not using, and they measure exactly what I care about.
Ooh, new requirement!
Set timer on phone when setting timer on machine...
So like 95% of business school graduates?
They haven't even learned that "less code is better" yet; I wouldn't hold my breath waiting for them to suddenly learn "more advanced" things like that before they learn the basics.
Feedback is often only considered once something is already on fire (financially, functionally, or literally).
They’re not even selling shovels, they’re selling subscriptions for shovels.
For example, there's also no cabal behind memory prices dropping (ignoring the development of the past months, of course), which in turn enabled web and game developers to use more memory and make their software non-viable on older devices.
> The whole thing dies if it turns into employee scoring
All ICs know this and some management does, but I’m willing to put money that it will 100% become employee scoring.
I would argue that any sufficiently large system reaches a point where more code is in fact the opposite of what it needs.
Nutrition and calories are only useful up to a point, and then we have diminishing and later negative returns.
Even though it is not the best analogy, because we are describing two different systems, it helps put a mental model around the fact that churning out more is often less.
Side note: I got feedback from a customer today that while our documentation is complete and very detailed, they find it too overwhelming. It turns out a few bullet points to get the idea across are better than a 5-page document. Now it is obvious.
I have absolutely worked on code bases I would describe as "marbleized bricks", where the best thing I can do is carve out the statue they already contain. There's a great satisfaction in making PRs that mostly delete things, where the result is a program that works faster, has fewer bugs/edge cases, and is easier for the next person to debug.
The LLMs certainly can add more layers of marble. Companies often don't know how much more they need an artist with sculpting tools than a bricklayer.
Tl;dr's, quick references / QuickStarts / cheat sheets and FAQs are also some things they're great at generating.
[0] https://marketoonist.com/2023/03/ai-written-ai-read.html
The Theory of Constraints - AI Era
[0] https://en.wikipedia.org/wiki/Theory_of_constraints
[1] https://www.goodreads.com/book/show/113934.The_Goal
[2] https://www.goodreads.com/en/book/show/17255186-the-phoenix-...
Finance folks reached out asking if they could vibe code their own app using Copilot/Cursor/Claude for finance planning purpose. And because they know my management freezes whenever there are whispers of "our CFO said so" they even paraded that reasoning - "our CFO "tested" Lovable and he is convinced and asking us to vibe code the app".
If that is not enough they ended with a nicely wrapped reasoning of "we need to try this to be sure that using vibe coded app can exist in enterprise finance with appropriate data security and maintainability".
And mind you, this is the reasoning at a company with more than $20 billion in revenue.
And also, I have been in similar-sounding scenarios multiple times. They talk big when everything is going smoothly and nothing is on the line. The day shit hits the fan, they will furiously message me on Teams and insist that I support them in finding the issues. So far it has mostly been about shitty design choices. This is at a whole new level. They want to vibe code an app which will be used to plan and guide the company's direction for the next year.
And having less people involved means there is much less communication and alignment.
Not to say it's a panacea.
It would also, I would think, make it easier for the 30% fewer engineers to earn a better living in the long run and reduce human management effort.
This makes the most sense to me. So far AI, being fallible, can only augment humans, so you can have fewer humans do the same work (or tasks where accuracy can be less than 100%, like lower-level support calls/questions). Next comes the task of re-balancing the distribution of labor or teaching other departments to utilize AI.
To me that rings the most true because where AI saves me the most time is in never having a bug that takes more than a few hours to pinpoint, even if I'm looking in the wrong place, because with enough clues the AI will look in the right place before I think of doing so. Like finding a needle in a haystack. It doesn't suddenly make me 100x more productive, but it saves a lot of time on some time consuming tasks.
SAFe is poison.
Release trains seemed like a mechanical release implementation detail in all cases; not the product or requirement of a given SDLC or process brand.
NO ONE in my meeting was familiar with the title or author.
I felt a little more impending doom upon realizing that.
"Hey we found this bug and-"
"We already found it with Claude, we're still waiting for our next release."
Or worse, it's a bug that doesn't exist in prod, since the code keeps changing, and you won't know about the bug until it's out there, because there's one niche user with a niche use scenario everyone forgot or didn't even know existed, and he's going to somehow crash the entire system with your next deployment.
This was a huge motivation behind me trying to design an AI automation platform that comes "batteries included". I also think a lot of orgs, even engineering orgs do not know how to configure basic things like Claude plugin repositories into their installs.
That seems wild, niche/highly specialized field?
Organizations "born in AI" appear to buck this trend for obvious reasons (no legacy org. to deal with). My two cents.
I wonder what adoption is like at older non-tech companies.
The office next to mine is being used to teach a bunch of 20-30 year olds how to be insurance brokers using a powerpoint presentation. Copy paste the presentation into an LLM and you just replaced them all. It feels like... things might be kind of dark in 10-20 years if we just keep barreling down this road.
1: https://en.wikipedia.org/wiki/Bullshit_Jobs
My dog gets excited and barks to let me know whenever someone is at the door. It might crush his spirit if he knew he is just a useless pet and his contribution is meaningless.
On code you cannot possibly copyright. Yea they're all on the verge of "Locking in."
Just yesterday I was in a meeting with a customer asking if we could make our FOSS virtualization platform work such that if you yank the root disk out of a server and put it in another one, everything will work with no hiccups. Well, provided it's exactly the same model and you're going to put it on the same network with all the same IP assignments, you've got a shot. I've actually tried to do this before for the hell of it and I only needed to account for the MAC addresses of the NICs being different, as long as you have no other drives and everything else is exactly the same. I'm sure I could whip up something that scans for the predictable interface name and changes the old MAC stored in the NetworkManager configuration files (and wherever else they might happen to be) and change them to the newly discovered one before making a DHCP request, and maybe that will work, but how certain can I really be? I can test on servers I have and I don't have every possible combination of data center equipment all of our customers have. There is no feasible way to test every possibility. Having an LLM whip up the code for me instead of writing it myself doesn't change that.
Ironically enough, that customer is making software for another customer, and their own requirement is that it has to run on the very hardware on an airplane, which they don't have. So they're working on little NUC clusters in their cubes and at their houses instead, because their company doesn't have extra true server racks for them to use and no budget to acquire them, which probably won't change any time soon given the spike in hardware prices. They're all using AI, but what good is it doing? They're spinning their wheels because they're targeting a runtime environment that doesn't exist that they can't test on.
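For what it's worth, the MAC-rewrite step described above could look something like this; a rough sketch only, assuming NetworkManager keyfile-style profiles (real profiles vary, and the relevant field may be `mac-address` or `cloned-mac-address`):

```python
import re

def rewrite_cloned_mac(keyfile_text, new_mac):
    """Replace a stale MAC pin in a NetworkManager keyfile's text.

    A sketch only: it rewrites any 'mac-address=' or
    'cloned-mac-address=' line to point at the newly discovered MAC,
    so a root disk moved into an identical chassis can still DHCP.
    """
    return re.sub(
        r"(?m)^(mac-address|cloned-mac-address)=.*$",
        lambda m: f"{m.group(1)}={new_mac}",
        keyfile_text,
    )

old = "[ethernet]\nmac-address=AA:BB:CC:DD:EE:01\n"
# Prints the keyfile with the MAC swapped to ...:02.
print(rewrite_cloned_mac(old, "AA:BB:CC:DD:EE:02"))
```

Which, as the comment says, still leaves the real problem untouched: you can't test it on every combination of data center equipment your customers own.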
It's a weird folly of the Internet age that the largest companies in the software world are all web companies. Mostly, they're media companies in disguise. Their only real product is human attention and they sell it to advertisers. Tech is just the vehicle that allows them to deliver it. We've valorized their "ship as fast as possible" ethic, which maybe matters, maybe doesn't, but it was never the source of their value. Nobody spends ad money on Facebook and Google because of the quality or delivery speed of their software. It's the human users and data they've captured, which to be clear, software plays a huge role in, but it's not a model all software companies can follow. We don't earn revenue from half braindead doomscrollers wasting most of their day with a background drip of vaguely dopamine-boosting noise blasting into their senses while they leak every fact about their lives to media companies. Our customers have to make intentional decisions to spend money out of finite budgets.
There's another story on the frontpage right now of Coinbase laying off a bunch of its employees and using AI to write more code. Okay, great, but the best that can do is reduce labor expense. They only earn more revenue if consumers decide they want to buy more Crypto and hold it in Coinbase. If Coinbase is using AI to write their software, so is everyone else, so that doesn't give them any kind of edge on quality or shipping speed. Their success is going to be determined overwhelmingly by whether or not people want to buy crypto, a broad market trend completely out of their control. No one in any business ever wants to admit this, but we're all at the mercy of these broader trends.
People are all over this thread citing Ford. Ford didn't decline because they couldn't ship fast enough. They declined because the market stopped wanting what they were making except their full-size pickups, and it's largely just Americans that want that. I don't blame them or think they did anything wrong exactly. People love to do these post-mortems contemplating a world in which someone like Ford accurately predicts every single shift in consumer sentiment that will ever happen and always stays ahead of the curve. It'll never happen. Everything that goes into style eventually goes out of style, and your ability to ship out-of-style shit faster won't help you.
You said you work for a bank and I'm honestly curious. What causes a customer to choose your bank over another? Do you think it has anything to do with software features? I'm lucky I even got a meeting with the customer I was with yesterday. He told me he loves our product and fought hard for it over a chief architect who wanted something else and made them do a long comparison study to prove our product met their needs better. Why did that chief architect prefer the other product? He plays golf with their CTO.
I kept saying this since Day 1 of LLMs: even a 99% reduction in development time means almost nothing for the speed of delivery of whole projects in our company. And we are introducing a generator of code that semi-randomly performs poorly when there are perf bottlenecks and fills the codebase with... sometimes questionable solutions. Sure, one has to check the results all the time, but then time is spent on code reviews instead, which is not much less than actual (far more fulfilling, rewarding, and career-boosting) development.
Now I understand there are many more scenarios where gains are more realistic and sometimes huge, but it certainly ain't my current working place. So I use it sparingly to not atrophy my skillset but work estimates are so far the same and nobody questions that.
AI/LLMs aren't innovation the way TCP/IP, linux, or postgres were. To be clear: claude/codex/gemini/grok/whatever exist for profit, to squeeze the last drop of productivity out of you until there's nothing left, and then you're disposable (laid off).
If you like AI, use open source models, use them in your side projects.
2) It's been a hard lesson for me to learn because I'm naturally a contrarian, but you are hired to do what management wants you to do. If you resist, your best bet is to hope they don't notice or care, but it's not going to change much.
If you ran a software company, would you want to train juniors who are slower than AI and much more expensive? Who would just jump ship in two years?
It wouldn't make sense to. "Someone" should do it: someone other than you.
Now of course, you may think you are such a good engineer that companies will kill for you... perhaps that's true now, but it's not true for 90% of the engineers out there. And as the pool of engineers gets reduced, the chances of you not being as good as you thought go up. So the real question is: can we all still make a good living by not using LLMs, by supporting each other and saying fuck the higher ups? No, we can't. We are full of ourselves, full of elitism (this is HN). We are rational folks, we believe in numbers, in data; we know what we deserve, fuck the rest. The ones who win are the higher ups, of course, not us.
There's still an opportunity for engineers to eat their bosses lunch and just start their own company. It's never been easier to start a lower cost competitor.
Employment isn't a social law of nature: it's a transaction of money for "units of work", just like the business might have with other vendors. Governments should be making it easier to become a vendor.
Is this a thing? Are there companies out there that don't want to go faster?
The juniors are eliminated and the seniors indulge in cognitive surrender because it feels good.
Serious question. I think the reason that there's such a disconnect among AI-for-work users about whether it's a panacea or bullshit accelerator is that different software developers have massively different duties and conceptions about what their job is or should be.
Anyone remember what SCO did to the industry as it went under?
The part I still don’t get is where Enterprises are dumping internal ‘secrets’ (code, processes, customer needs, internal politics, leadership dreams), into the hands of startups and untrustworthy conglomerates. MS used to be famous for NDA and deal abuse.
I don’t believe for a second the LLM giants would be shy about training on corporate materials and lying about it. And if they start going under? This gold rush might have a long, ugly, tail.
Quite honestly, the people being fired are the ones who are not adopting the technologies; if that's you, you're quite literally putting yourself in scope.
Just read about Coinbase today. They are culling those who are not adopting the future because they get in the way of progress. They don't help, they don't push things forward, and they hold back those who do.
My company set up a “prompt of the week” award and brown-bag sessions to help spread adoption. We also have teams meant to develop these workflows. Clearly, they set these events up to play it off as their own productivity. Without a real (read “monetary”) incentive or job security, the risk and cost of spreading the knowledge falls squarely on the developer.
If developers are worried about their jobs with the way the market currently is, they should treat their personal workflows as trade secrets. My example was not specific to AI, but it applies just as much to AI workflows. In a worker's market, it was sometimes fun to share that kind of knowledge with an organization. In an employer's market, they can pay me if they want access to my personal choices.
That sounds like a toxic environment. Sharing those types of things is how I got the recognition to get ahead in my career and I have never once regretted it.
So while it might be nice to say I won't share, boss-man can certainly make it so I must share.
Boss-man actually has a very difficult time turning legal theoretic right into actual deliverables.
But when I've been stuck for a while on a dysfunctional team, I've definitely seen the flip side: other people will find ways to take a lot of credit for minor iterations on my work; management will reward my productivity with high expectations and high pressure to continue the trajectory they perceive in a single idea; and when the tool becomes a support burden because too many people think it should solve all of their other problems too, I'm now perceived as being the owner of this thing they depend on.
If your only goal is to maintain a performance lead on your peers, you either need to gain and keep an advantage or find ways to actively make your coworkers disadvantaged (or both). And if you're already doing 1) then 2) isn't a far stretch.
> would you like to work on a team full of people like you?
If their team is already like this, what choice do they have? It's a prisoners dilemma where everyone else is defecting and I'm the sole cooperator.
IMO the onus for solving this is on the business owner, either through establishing a knowledge sharing culture or more comprehensive performance evaluation that rewards these innovations.
Nice passive aggressive dig!
I mean, according to your employment agreement, that code is owned by your employer, since you wrote it as an employee for use at work. They could easily demand that you share it, if they knew it existed.
This just illustrates that smart people figure out their own productivity/time-saving shortcuts at work, and little scripts and tools like this are part of it. Happens all the time. Other employees don't, and just plod through whatever manual process they were trained to do.
And I'm not a "at work we're a family!" guy, but I wish we could just be excellent at our jobs and share it with each other without worrying if I'm digging my own grave.
If your employer is expecting that you selflessly share your time for free, you’re getting fucked. Most people are paid to do their job. They are, of course, then expected to work for their employers while on the clock.
What I find strange about this is that in 2020 nobody would be this openly cynical and selfish about, say, good Python idioms, a useful emacs configuration, git shortcuts, etc. This attitude of "your job is to deliver value for the customer, anything else is a distraction, and if you share your hard-earned value-delivery techniques with others then you are a sucker" - this is new, and very disconcerting.
I understand there's not much we can do to stop the cyberpunk dystopia, but do we have to leap in head-first?
I definitely saw people have concerns about vimrc files and their personal library of shell scripts well before 2020, and I've seen people early in their career get burned by sharing it too. They had a tool that made them productive, it got out of their hands, and suddenly they're getting negative feedback from someone who tried using it and it didn't meet their expectations, or it got checked into the repository and now the script they used at their last job too has their current job's copyright notice and license on it, and they're perceived as being petty for trying to claw back their own intellectual property because they didn't go to the trouble of slapping legalese all over their personal tools.
This mindset has always existed in the area we're talking about, and not because it's sharing something to speed up with. It's because we don't want to get stuck doing a second job supporting the tool.
I've built all sorts of random tools for myself over the years and haven't shared a single thing, but share the tips and tricks like your examples all the time.
If they gave immediate raises or bonuses for stuff like this, then things would change.
None of it is actually that crazy that everyone else could think up.
What I've noticed in my own experience here is that even when I do share my own prompts/skills few people use them (or alternatively they were so basic that everyone already had their own version).
e.g. If someone doesn't care about xyz before AI, they probably won't after AI even if I serve them it on a silver platter.
Does that person rationally go find more work to take on with that reclaimed time? Probably not unless it's their company or exceptional motivating circumstances exist.
yet I don't see anyone question whether management will be just as excited to see that less work is needed and that it'd just result in layoffs
Contrast to remote work where the benefit was extended to all regardless of performance, thus becoming a large target for management to cut.
I think the talk about management & capital demanding ROI will be the inflection point to watch, as a downstream effect could be AI haves & have-nots, depending on open weight models' competitiveness and local capability relative to the SOTA models.
At what point is inspiration and thought just devalued and worthless in the name of doing things instantly. The work has no soul.
In the old model, performance and OKRs were anchored in disciplines, job titles, and role-specific expectations. In the AI era, those boundaries are starting to collapse. The deeper issue is psychological and organizational: people are constantly negotiating the line between “this is my job” and “this is not my responsibility.”
That creates a key adoption problem: what is the upside of being visibly recognized as an expert AI user? If people learn that I can do faster, better, and more cross-functional work, why would I reveal that unless the company also creates a clear system for recognition, compensation, or career growth?
There's a mistaken assumption under there that businesses can identify who that statement describes.
Some can. But a lot of businesses cannot identify and reward, or support, or just not RIF, those people. Like a lot a lot. More than I'm comfortable lumping under statements like "well those are just bad places to work/places that should be shut down".
There's no punchline or counterproposal there; that's just my observation.
Take Andrej Karpathy as an example. Even if I knew exactly what tools he uses and what his workflow looks like, I still would not be able to produce anything close to what he can produce in a few weeks. And he is not standing still either—he is evolving at the same time.
A lot of real expertise is not in the visible/system-able workflow. It is in someone’s experience, taste, judgment, and wisdom. You can copy the artifact, but you cannot easily copy the thinking behind it: the principles, the decision-making, and the ability to apply those principles across many different/subtle situations.
But I do agree with the concern behind the argument. People may worry that sharing what they know could weaken their own position. And the more uncomfortable question is about peers: if someone’s role can be “retired” because others absorbed their knowledge and skills, then it is hard not to ask, “Am I next?”
It really comes into its own when you treat it as a tool that can build other tools. For example, having it build tools that force it to keep going until its work reaches a certain quality, or runs compliance checks on its outputs and tells it where it needs to fix things. Then and only then, can you trust its work.
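A "keep going until the checks pass" harness of the kind described can be sketched generically; everything here (names, the feedback mechanism) is illustrative, not any particular agent framework's API:

```python
def run_until_passing(generate, checks, max_attempts=3):
    """Regenerate an agent's output until it clears all compliance checks.

    Hypothetical harness: 'generate(feedback)' produces a candidate,
    each check returns (ok, message), and failure messages are fed
    back into the next generation attempt.
    """
    feedback = []
    for _ in range(max_attempts):
        candidate = generate(feedback)
        failures = [msg for check in checks
                    for ok, msg in [check(candidate)] if not ok]
        if not failures:
            return candidate
        feedback = failures  # tell the agent what to fix next round
    raise RuntimeError(f"gave up after {max_attempts} attempts: {failures}")

# Toy example: the "agent" only adds a docstring on its second draft.
drafts = iter(["def f(): pass", 'def f():\n    """doc"""'])
has_doc = lambda c: ('"""' in c, "missing docstring")
print(run_until_passing(lambda fb: next(drafts), [has_doc]))
```

The real versions of such checks would be linters, test suites, or policy scanners rather than a docstring probe, but the loop shape is the same.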
Right now most current roles & workflows are designed around wrangling the tools you’re given to do a certain job. In that regime AI can only slide in at the edges.
The CEO has a youtube style platinum token plaque for their office.
The bias in the assumptions here is absolutely bonkers.
Problem: GenAI is not generating any visible return on investment.
"Solution": rearrange your entire development organization around the technology and start inventing new tooling.
What's entirely obvious is that the point of such articles is not the stuff they purportedly discuss, but the normalization of assumptions those discussions are based on.
But the internet was a simpler concept for businesses. Basically it was: you can now sell to people from their computers. AI's promise is what? It can approximate reasoning about things? This is a much more challenging implementation puzzle to truly solve.
I don’t know that I’ve seen anything of real substance outside coding tasks yet.
We are definitely struggling with the same issues author describes, but even worse the leaders down at the Crowd level have some perverse need to achieve reuse across their teams, rather than letting their Crowd experiment. One team does something interesting, we must stop and get that thing out to all teams in that group, so everyone “benefits”. This is a scarcity mindset, which made sense pre-AI where code was costly and ideas were more valuable.
At the same time, everyone not only has to do their work, they need to be 25% more efficient from AI (new KPIs), and so their own learnings slow to a halt, and the team with the cool idea has to give presentations instead of hacking.
While I do believe higher developer productivity can lead to faster reacting to market forces or more A/B testing, that won't necessarily lead to a successful business. Because ultimately it rarely is the software that's the issue there.
It already has; ship has sailed.
https://blog.pragmaticengineer.com/the-pulse-tokenmaxxing-as...
I propose employees create self-training byproducts as a result of any AI interaction. And then they also work with their human manager to make sure that these self-training byproducts are part of their growth plan. This can guarantee growth without losing the opportunity to interact with the intelligent AI system (on topics that are relevant to the company's short-, mid-, and long-term strategic advantage).
Debugging and developing first fixes is also one of the spaces where current LLMs are the biggest force multipliers. Especially if you have reproduction cases the LLM can test on its own
But long-term it might look very different as more and more of the code becomes LLM written
I'm staunchly pro-AI as a technology, but I do think the bubble is going to pop in the next year or two just because the business value won't materialize for most companies fast enough.
Can we get some enabling legislation? A UN resolution perhaps?
The “get an immediate agent answer then a human expert’s fast-follow” is I think a great idea for many domains - imagine if you could get legal advice this way; the agent will have already explained the basics and the human expert just has to provide corrections - way less typing by humans.
Also, the corrections are now documented and could become future grounding for the agent.
They won't just need to understand what problem the requestor has (or thinks they have) but also validate that the "immediate" feedback wasn't subtly horribly wrong.
So, like what already happens when my boss asks claude something and I have to pick up the pieces. Except now it's everything he slops about the topic, not just the ones we discuss later?
> a great idea for many domains
I completely agree. This is a great idea. If you don't do something with it I'm stealing it. ;-)
The more I use AI, the more I see mistakes. I've noticed others see these same mistakes, correct them, then when queried say "Oh, it gets it right all of the time!". No, having to point out "you got this wrong, re-write that last bit" isn't "getting it right". And it's not that the code is overtly wrong; it's subtle. Not using a function correctly, not passing something through that it should (and the default happens to just work -- during testing), and more. LLMs are great at subtle bugs.
So moving forward with this isolation you mention, ensures that maybe the guy in the company, the 'answer guy' about a thing, never actually appears. Maybe, he doesn't even get to know his own code well enough to be the answer guy.
And so when an LLM writes a weird routine, instead of being able to say "No, re-write that last bit", you'll have to shrug and say "the code looks fine, right?", because you, and the answer guy, if he exists, don't know the code well enough to see the subtle mistakes.
AI can get a pretty good picture, near instantly, whenever you need it.
It’s not just competent-sounding, it is reasonably competent, and certainly very useful for tasks like that.
Gone are the days of mandatory corporate "synergy" and after-work bar gatherings to promote "team building."
AI is showing people in the tech industry that they're just interchangeable cogs. AI is bringing the offshored Indian work environment to Silicon Valley.
AI content has a look and feel people sense immediately.
It’s amazing to see how quickly things shifted from “wow this is so cool, AI is going to change everything” to folks calling out “you lazy bum, this just looks like some slop you threw together with AI… let’s get some real thinking please.”
We are firmly heading into “trough of disillusionment” territory on the hype cycle.
> I do not want to make this a cost panic story, that would be the least interesting way to think about “rented intelligence”. The question is not how to minimize token spend in the abstract, any more than the question of software delivery was ever how to minimize keystrokes.
If tokens were as cheap as keystrokes (that is, effectively free), then "How do we minimize token spend?" wouldn't be a question that anyone asks. It's because keystrokes are effectively free that you only ask "How do we minimize the number of keys pressed during the software development process?" if you're looking for an entertaining weekend project. If keystrokes cost as much per unit of work done as the (currently heavily subsidized) cost of tokens from OpenAI and Anthropic, you'd see a lot of focus on golfing everything under the sun all the damn time.
Our mental models of developments like the industrial revolution, literacy, printing or suchlike tend to be a lot more straightforward than how things play out in practice.
When a bottleneck is eliminated... you tend to shortly find the next bottleneck.
Meanwhile, there is an underlying assumption everyone seems to make that "more software, more value" is the basic reality. But... I'm skeptical.
To do lists, wishlists, buglists and road maps may be full of stuff but...
Visa and Salesforce have already exploited all their immediate "more software, more money" opportunities.
The ones in a position to easily leverage AI are upstarts. They're starting with nothing. No code. No features. No software. With AI, presumably, they can produce more software and make value.
Also... I think overextended market rationalism leads people to see everything as an industrial revolution...which irl is much more of an exception.
The networked personal computing revolution put a PC on every desk. It digitized everything. Do we have way better administration for less cost? Not really. Most administrations have grown.
Did law fundamentally change due to digital efficiency? No. Not really.
If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.
It's possible capitalism will drive all enterprise to terrible codebases.
This is just sales copy for various AI companies, laundered through an "influencer". It might as well be the CIA sending their article to be published in Daily Post Nigeria, so that the NYT can quote it as "sources".
The title is just clickbait. The rest of the content is fluffy bunnies and rainbows. It's all summed up as "continue to consume product, but remember to also do X". Sales copy + HBR MBA bait.
The closest thing to an honest, less-than-rosy example is the "junior person" who has no idea about the code they committed.
What about the "senior person" who has no idea about the code they committed? What about the CISO who doesn't understand that pasting proprietary documents willy nilly into the LLM's gaping maw might have legal/security/common sense implications, and that it is his job to set policy on such behavior? What about the middle manager who doesn't even try to retain the most experienced dev in the company because "we don't need the headcount anymore, now that Claude is so fast"? What about the company eating its own seed corn because every single junior position has been eliminated and there are no plans for the future anymore? What about the filesystem developer who fell in love with his chatbot girlfriend and is crashing out on Discord?
Oh wait, scratch that last one. He left the company and is crashing out on his own.
Carry on, then.
Fear not: he has a place to feel welcome and included!
https://www.newsweek.com/inside-world-first-ai-dating-cafe-1...
Not a problem if the hired "AI" now does that job. /i