
Your Team Knows Things Your Systems Don't (And It's Costing You Every Day)

By D10X

Feb 16, 2026

Let me tell you about something that happened last month at a client's plant.

Their automated quality system flagged 47 parts on a Wednesday morning. Out of spec, clear as day. The production supervisor, Rahul, looked at the alert, walked over to the line, picked up one of the flagged parts, and said: "These are fine. Run them."

The quality engineer was new. She protested—the system says they're bad, the data doesn't lie.

Rahul explained: "After we run the aluminum housings, the measurement head gets a thin film buildup. The parts aren't out of spec. The sensor is reading wrong. Give it two hours, someone will clean the head, and this will stop."

He was right.

Here's the thing that bothers me: This has happened every week for eight months. Rahul knows it. The operators know it. But the system? Still sends the same alert. Still flags good parts. Still wastes everyone's time.

Your smart factory keeps making the same mistake because nobody taught it what Rahul knows.

We Keep Having the Same Conversations

I've been visiting manufacturing plants for the past year, and I hear variations of the same story everywhere.

"Our system is telling us to replace parts that don't need replacing."

"The AI predictions are technically accurate but operationally useless."

"We have three years of data and fifteen years of experience, and they never meet."

Last week, I watched a maintenance team troubleshoot a press that kept stopping mid-cycle. The diagnostic system said the hydraulic pressure was fine—technically true. The operator said, "It sounds different today."

The operator was right. There was a tiny air bubble in the line. Not enough to trip sensors. Enough to cause problems.

The data showed everything was normal. The human ear caught what wasn't.

How often does this happen in your plant? Where someone's gut, someone's ear, someone's experience catches what your ₹2 crore monitoring system misses?

The Invisible Daily Tax

You probably don't count these as costs because they're just... Tuesday.

Your quality team spends three hours every week investigating alerts that turn out to be nothing.

Your best technician gets interrupted six times a day because "the system can't figure this out, but Suresh can."

Your operators have learned which alerts to ignore and which to pay attention to—knowledge that exists nowhere except in their heads.

Let's do some napkin math on just the alert noise:

Say you get 120 system alerts a week. Roughly 70 of them are false positives—things your experienced team knows are normal.

Each one takes maybe 10 minutes to check and dismiss. That's 700 minutes a week. 46 hours a month. At a blended cost of ₹800/hour, you're spending ₹36,000 every month investigating things that aren't actually problems.

That's ₹4.3 lakhs a year. Just on alert fatigue. On one line.
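The napkin math above can be written out explicitly. This is a minimal sketch using the same illustrative figures (the article rounds down at each step, so the exact output here lands slightly higher):

```python
# Napkin math for alert-noise cost, using the illustrative figures above.
alerts_per_week = 120
false_positives_per_week = 70       # alerts experienced staff know are normal
minutes_per_alert = 10              # time to check and dismiss each one
blended_rate_inr_per_hour = 800
weeks_per_month = 4

minutes_per_week = false_positives_per_week * minutes_per_alert   # 700
hours_per_month = minutes_per_week * weeks_per_month / 60         # ~46.7
monthly_cost_inr = hours_per_month * blended_rate_inr_per_hour    # ~₹37,300
annual_cost_lakhs = monthly_cost_inr * 12 / 100_000               # ~₹4.5 lakhs
```

Change the false-positive count or the minutes-per-alert to match your own line and the rest follows.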

Now think about the flip side—the real problems your system doesn't flag because it doesn't know what your people know.

The Question That Got Me Thinking

I was talking to a plant manager in Chakan—let's call him Ashok. He said something that stuck with me:

"We're spending lakhs on systems that tell us what happened. But my operators can tell you what's about to happen. Why can't those two things talk to each other?"

Good question.

Why can't the system learn that when Machine 3 vibrates in that specific way, there's usually a coolant flow issue—not because the vibration sensor crossed a threshold, but because that's what your maintenance team has figured out over 50 troubleshooting sessions?

Why can't the system learn that parts from Supplier B need different handling, even though they're technically the same spec—because your operators have noticed the pattern over hundreds of batches?

Your people are doing pattern recognition all day long. Your systems are doing pattern recognition all day long. They're just not learning from each other.

What Happens When Someone Retires

You know you have someone like this.

Twenty years on the floor. When there's a weird problem, everyone calls him. He looks, listens, sometimes just touches something, and he knows.

Let's say he's 58 years old. Retiring in three years.

You could try to document what he knows. You could have him write procedures, train people, record videos.

But you know that's not going to capture it. Because what he has isn't a list of steps. It's intuition built over 10,000 shifts.

Here's what keeps me up at night: What if we could capture how he thinks while he's still around?

Not "check valve X when you hear sound Y"—that's a checklist.

But: "When you hear that sound, here are the six things he checks and why. Here's what he's ruled out first. Here's what combination of factors usually means it's serious versus not serious."

So when he retires, the next person isn't starting from zero. They're starting from "here's what 20 years of experience suggests you look at."

Not replacing the person. Scaling the wisdom.

A Different Kind of Smart

I think we've been thinking about "smart factories" wrong.

We've been asking: "How do we get better data? How do we deploy more AI?"

Maybe the better question is: "How do we help machines understand what our people already know?"

I saw this click into place at a plant in Manesar.

They had a persistent quality issue—defects would spike randomly, no pattern in the data. Engineers were pulling their hair out.

Someone finally said: "What if we just ask the operators to tell us when something feels off, even if they can't explain why?"

They set up the simplest possible thing—a tablet at each station. One question: "Did you notice anything unusual this shift?"

Took operators maybe 30 seconds to answer.

After a month, patterns emerged:

"Feels like parts are sliding more today" correlated with a specific supplier's packaging material that was slightly more slippery.

"Weird smell near the dryer" preceded adhesive failures by 6-8 hours.

"Line sounds different after lunch shift starts" mapped to a compressed air pressure dip when the canteen equipment kicked on.

None of this was in the sensor data. All of it was in human observation.

When they connected human observations with machine data, they cut that quality issue by 60% in two months.

Not with better sensors. With better listening.
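The "connect human observations with machine data" step doesn't need anything exotic. Here's a minimal sketch of the idea, assuming simple timestamped records; the record shapes, station names, and the 15-minute window are all hypothetical, not anything the Manesar plant actually used:

```python
from datetime import datetime, timedelta

# Hypothetical records: operator notes and sensor readings, both timestamped.
notes = [
    {"time": datetime(2026, 2, 16, 9, 40), "station": "dryer", "note": "weird smell"},
]
readings = [
    {"time": datetime(2026, 2, 16, 9, 35), "station": "dryer", "temp_c": 82.1},
    {"time": datetime(2026, 2, 16, 11, 0), "station": "press", "temp_c": 60.0},
]

def readings_near(note, readings, window=timedelta(minutes=15)):
    """Machine data from the same station within `window` of a human observation."""
    return [r for r in readings
            if r["station"] == note["station"]
            and abs(r["time"] - note["time"]) <= window]

matched = readings_near(notes[0], readings)
# `matched` now holds only the dryer reading taken 5 minutes before the note.
# Accumulate enough of these pairs and patterns like "weird smell precedes
# adhesive failures by 6-8 hours" become something you can actually query.
```

The point is that the join key is time plus location, which both sides already have; the human side just needs a 30-second way to get its half recorded.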

What This Could Look Like for You

I'm not going to pretend I have a perfect solution. But I think there's something here worth exploring.

What if your systems could learn from your people the way your people learn from experience?

Imagine next Monday morning:

Your system flags an anomaly on Line 2. But instead of just showing you a graph, it says:

"This temperature pattern appeared 4 times in the last year. Three times, Ramesh adjusted the coolant flow and it resolved in 20 minutes. Once, it was a faulty sensor—took 3 hours to diagnose. Here's what was different in each case."

You're not starting from scratch. You're starting from institutional memory.

Or imagine your weekly quality review:

Instead of going through 15 incidents one by one, the system says:

"Three incidents seem related—all involved parts from Supplier A's Tuesday deliveries, all flagged by operators before sensors caught them, all resolved with extended drying time. Worth investigating the supplier's process?"

You're not drowning in data. You're seeing patterns that actually matter.

Here's What We're Working On

We're building something we're calling AI connectors. Not the most exciting name, I know.

The basic idea: Create bridges between the data your systems collect and the knowledge your people have.

It works on top of what you already have—your MES, your sensors, your ERP. We're not asking you to rip anything out.

What it does:

  • Makes it stupidly easy for operators to share what they're seeing (20 seconds, not 20 minutes)
  • Connects those observations with machine data from that moment
  • Learns patterns that combine both human and machine intelligence
  • Surfaces insights when decisions are being made, not buried in a dashboard somewhere

We've been testing this with a few plants. Here's what we're seeing:

A team in Pune solved a recurring downtime issue in 2 days that used to take them a week—because the system connected what operators noticed with what sensors measured.

A plant in Nashik reduced false alerts by 65% in the first month—the system learned which alerts were noise and which were signal.

A supplier in Chakan is capturing troubleshooting knowledge from a senior tech who's retiring next year—not in a manual, but in a system that suggests what to check based on what he's learned over 2,000 repairs.

Is it perfect? No. Are we still learning? Absolutely.

But it feels like we're onto something.

The Money Conversation (Let's Be Real)

You're probably wondering what this costs.

Most digital transformation projects we see:

  • ₹3-7 crores upfront
  • 10-18 months to implement
  • Requires replacing or heavily modifying existing systems
  • ROI is theoretical for the first year

What we're talking about:

  • ₹50-90 lakhs to get started (works with what you have)
  • 2-3 months to prove value on one real problem
  • Monthly platform fee around ₹1-2 lakhs
  • ROI we're seeing: 8-12 month payback

One plant manager told me: "We spent ₹75 lakhs. Within 10 months, we'd saved ₹1.2 crores in reduced scrap and faster problem-solving on just two lines. I wish we'd started with four lines."
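Those numbers can be sanity-checked in a few lines. This sketch uses the figures from that quote plus a midpoint of the stated platform fee range; whether the quoted ₹75 lakhs already included fees is my assumption, not something the source specifies:

```python
# Illustrative payback check using the figures quoted above (lakhs of rupees).
upfront_lakhs = 75                 # the quoted starting spend
monthly_fee_lakhs = 1.5            # assumed midpoint of the ₹1-2 lakh fee
months = 10
savings_lakhs = 120                # ₹1.2 crores saved over those 10 months

total_cost_lakhs = upfront_lakhs + monthly_fee_lakhs * months   # 90
net_lakhs = savings_lakhs - total_cost_lakhs                    # 30, i.e. paid back
```

On these assumptions the spend is recovered inside the quoted 10 months, which is consistent with the 8-12 month payback range above.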

But honestly? The money isn't the interesting part.

The interesting part is watching a system actually get smarter. Watching alerts become useful instead of noise. Watching new operators get help from the collective experience of everyone who came before them.

I'm interested in understanding what's actually happening in your plant. Where the gaps are. Where people know things systems don't. Where you're spending money on problems that feel solvable but haven't been solved.

Maybe AI connectors are relevant. Maybe they're not. Maybe there's a different approach that makes more sense for you.

30 minutes. No deck. Just a conversation between people trying to solve real problems.