The 4 Ps: How to Read a Company Before It Reads You
A diagnostic framework for leaders entering new companies — or diagnosing the one they’re already in
I came into a company once that had a story about itself. Engineering and Product worked well together. Requirements were clear. Collaboration was strong. The leadership team was aligned and focused on growth.
It took about a day to see that everything was a number one priority.
That alone is a signal. But a single signal can be explained away — a rough week, a team under temporary pressure, a coincidence. What you need is a picture, and a picture requires more than one lens.
There is a narrow window when a new leader enters a company where they can see things that nobody else can. They have no historical context, which means they have no scar tissue — no accumulated tolerance for the things that have slowly become normal. They are not yet part of the system, which means the system has not yet shaped what they notice or what they are willing to say. And their feedback, precisely because of their newness, is often held in higher regard than the feedback of people who have been saying the same things for years and been ignored. That window closes. The system will begin to shape the new leader’s perception the moment they start building relationships, absorbing context, and learning what is and is not said in certain rooms. Failing to use that window deliberately and aggressively squanders the opportunity the role provides.
Over years of coming into broken systems — as a lead, an architect, a CTO — I developed a diagnostic framework I call the 4 Ps: Process, People, Projects, and Patterns. The premise is simple. The execution is not. You run all four simultaneously, from the moment you walk in, and you pay attention not to what each one tells you individually but to where they contradict each other. Every company has a version of itself it wants you to believe. The contradictions are where the actual truth lives — and the window for seeing them clearly is shorter than most leaders realize.
This piece walks through the framework in full — how each P works, what it shows you, what it can’t, and how to use them together to see what no single lens can reveal. It’s free and un-gated because this is the kind of thing that should be in more leaders’ hands.
Process: The What
Process leaves evidence. It is the most auditable of the four lenses and usually the first place the official story starts to crack.
In this company, the evidence was in Jira. What I found wasn’t a single bad sprint or an isolated delivery failure. It was a pattern so consistent it had become invisible to the people inside it. Story points climbing sprint over sprint. Tickets injected mid-sprint with nothing removed to make room. Commitments made at the start of a cycle bearing almost no relationship to what shipped at the end.
And underneath all of it, the detail that told me more than any other: several teams had quietly stopped running sprints. Nobody had decided to. The environment had made structure impossible, and they had adapted — drifting into a continuous stream of work that never stopped and never got prioritized, because prioritization would have required someone to say no, and no one had that permission.
Teams don’t abandon structure because they’re undisciplined. They abandon it because the environment has made discipline untenable. The adaptation is evidence. Read it.
How to Read Process
The most important thing Process shows you is not what broke — it’s what the system has learned to tolerate. The primary surfaces are wherever work is tracked and committed to. In engineering contexts that’s a project management tool — Jira, Linear, Shortcut, or equivalent. In other organizations it’s project plans versus delivery records, meeting commitments versus follow-through, stated priorities versus where time and budget actually went.
The tool changes. The diagnostic logic does not.
Pull at least six months of history and look at the gap between what was committed and what actually shipped. A consistent gap is not a planning problem. It is a signal that commitments have stopped being real — that the organization has learned that saying yes to everything is safer than saying no to anything. When teams sandbag estimates, it is because accuracy has been punished or ignored often enough that sandbagging became rational. That is not a character failing. It is an adaptation to an incentive environment.
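If your tracker can export ticket history, this read can be made mechanical. The sketch below is illustrative only, under an assumed export shape that is not any real tool’s schema: one row per ticket, recording its sprint, point value, whether it was part of the original commitment, and whether it shipped.

```python
# Sketch: reading the commitment-vs-delivery gap and mid-cycle scope
# injection from exported sprint data. The row fields here are assumed,
# not any tracker's actual schema.
from collections import defaultdict

def sprint_health(rows):
    """Summarize, per sprint: committed points, delivered points,
    and points injected after the sprint started."""
    stats = defaultdict(lambda: {"committed": 0, "delivered": 0, "injected": 0})
    for r in rows:
        s = stats[r["sprint"]]
        if r["committed_at_start"]:
            s["committed"] += r["points"]
        else:
            s["injected"] += r["points"]  # mid-sprint scope injection
        if r["shipped"]:
            s["delivered"] += r["points"]
    return dict(stats)

# Six months of history would come from a real export; two sprints of
# synthetic data show the shape of the signal.
rows = [
    {"sprint": "S1", "points": 5, "committed_at_start": True,  "shipped": True},
    {"sprint": "S1", "points": 8, "committed_at_start": True,  "shipped": False},
    {"sprint": "S1", "points": 3, "committed_at_start": False, "shipped": True},
    {"sprint": "S2", "points": 5, "committed_at_start": True,  "shipped": False},
    {"sprint": "S2", "points": 8, "committed_at_start": False, "shipped": True},
]

for sprint, s in sorted(sprint_health(rows).items()):
    gap = s["committed"] - s["delivered"]
    print(f"{sprint}: committed={s['committed']} delivered={s['delivered']} "
          f"injected={s['injected']} gap={gap}")
```

One sprint with a gap is noise. A persistent positive gap alongside a nonzero injection column, sustained across months, is the pattern: injected work displacing committed work, cycle after cycle.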
Mid-cycle scope injection is the cleaner diagnostic. When new work appears after a commitment has been made, ask what came out to make room for it. If the answer is nothing, the boundary is not a real boundary — it is a scheduling fiction the organization performs while actually running on a different system. The teams in this company had recognized that fiction and stopped performing it, which is why some had drifted into Kanban without naming it. They were not being undisciplined. They were being accurate about what the environment actually required.
Roadmap vagueness is worth examining for intent, not just quality. A vague roadmap is sometimes the result of poor planning culture. It is sometimes deliberate — a loose roadmap can be interrupted and redirected without creating visible commitment violations, because there was nothing specific enough to violate, no line to cross.

The diagnostic is not a single conversation. It is a pattern read across multiple sources over time: look at how the roadmap has changed across the last several quarters, note which items have remained consistently vague versus which have been specific, and look at what happened to the specific ones. If specific commitments were repeatedly broken and the roadmap gradually became less specific in response, the vagueness is an adaptation. If the roadmap has always been vague regardless of team or period, it is likely a planning culture problem. If specificity exists in some areas and not others, map it against which areas have had the most executive interference — the specific items that survived are often the ones that were protected by someone with enough authority to hold the line. The ones that drifted into vagueness tell you where that protection was absent.
What the Evidence Means
Ask a mid-level contributor — not a leader, but someone close to the work — what happens when a new request comes in mid-cycle. The answer decodes the actual operating system. “We have a conversation about what comes out” means the tradeoff structure exists and is used. “We figure it out” means the team absorbs the addition and the boundary is cosmetic. “We just do both” means the system has already encoded that pushback is not permitted and that questioning scope is more costly than delivering on it. The answer you get in the first thirty seconds, before the person has time to calibrate, is the most accurate one.
The absence of tradeoff conversations is the most critical finding. When scope is added without anything being removed, it means no one is accountable for the tradeoff decision. Either the person with authority to make that decision is not being asked, or they are defaulting to yes on everything, or they have actively communicated that the answer is always yes and the question should stop being asked. Each is a different problem with a different intervention, but all three produce the same observable outcome in the data.
Second-Order Effects
When scope injection becomes normalized, the organization loses the ability to make credible commitments at all. Planning ceremonies continue — because they are expected — but they stop producing alignment. Estimates are padded to absorb the inevitable additions. Delivery dates are treated as aspirational. And because commitments have stopped meaning anything, the people who can tell the most compelling story about why things are late begin to carry more organizational weight than the people closest to the work. This is how strong technical organizations get politically outmaneuvered from within. It does not happen overnight. It happens cycle by cycle, tolerance by tolerance, until the pattern is the culture.
Third-Order Effects
Organizations that have lost credible commitments eventually lose the ability to make strategic decisions at all. Every resource allocation becomes negotiable in the moment. Every priority is provisional. Leadership operates in a state of permanent tactical reactivity, responding to whatever is loudest rather than what is most important, because the mechanism that would allow them to distinguish between those two things — reliable delivery data — has been corrupted by the same dynamic that made the commitments meaningless in the first place.
What Not To Do
The temptation when Process reveals dysfunction is to address it as a methodology problem. Implement better tooling. Run an agile training. Hire a project manager. These interventions address the surface without touching the root. The absence of tradeoff conversations and the normalization of scope injection are not failures of methodology. They are rational adaptations to an incentive environment where saying yes is rewarded and saying no is costly. Improving the process in that environment produces better-documented chaos. The improvement will be absorbed by the system and the pattern will resume.
What Process Cannot Tell You
Process tells you the shape of the dysfunction and how deeply it has been adapted to. It does not tell you why the tradeoff conversations are not happening, who is driving the scope injections, or what happened to the people who tried to push back. Those answers live in People and Patterns.
People: The Why
Every conversation was polite. Measured. And exhausted.
The Engineering leaders had narrowed their focus almost entirely to technical concerns — code, architecture, the structure of the systems they were building. None of them were looking at process health, sprint dynamics, or the external pressures shaping how their teams operated. In another environment that might be a leadership gap. Here it read like a retreat — into the one domain the system couldn’t take from them.
Product was more openly exasperated. The kind of tired that doesn’t come from a bad quarter but from years of absorbing things you have no power to change. Teams, when left alone, were in reasonable spirits. But some teams were markedly more beaten down than others, and it tracked almost exactly to which part of the product they worked on and how much executive attention that area received.
The more I dug, the more a specific pattern emerged. Executive screaming fits were not uncommon. Not at other leaders — at individual contributors. In meetings. People had taken leaves of absence because of it. The behavior had been acknowledged, the right amount of lip service paid, and then everything continued exactly as before.
What struck me wasn’t just that it had happened. It was that no one thought it could be different. Not “I wish things were different.” Not even that. The possibility of change wasn’t in the language at all. It was a completely closed loop — self-reinforcing, self-sealing, with no exit except resignation or quitting. The org was heavily weighted toward people earlier in their careers, which was not accidental. Younger, less experienced people are less likely to name what’s happening to them, less likely to recognize it as abnormal, and more likely to absorb it as simply the way things are. That is not a coincidence. That is a feature.
When the screaming comes from the top, the silence coming from below isn’t weakness, and it isn’t even acceptance — it’s the inevitable conclusion of an abusive system.
How to Read People
The most diagnostic thing People shows you is not what people say — it’s the gap between what they say in groups and what they say alone. In a group setting, people calibrate their responses to what is safe. In a one-on-one, with enough trust established, they tell you what is actually true. The size of that gap tells you how costly truth-telling has become in this organization. A small gap means the public and private versions of reality are roughly aligned. A large gap means the organization has taught people that the public version must be managed, which means leadership is operating on a sanitized signal.
Watch where leaders have narrowed their focus and, just as importantly, where they have stopped looking. In this company, every Engineering leader had retreated into purely technical concerns. None of them were looking outward at process health or organizational dynamics. This was not a leadership deficiency in isolation. It is what happens when a leader concludes, through accumulated experience, that looking outward produces pain without producing change. The retreat is a rational response to an environment that has consistently punished engagement with things above their control. When you find this, you are not identifying a capability gap. You are reading evidence of what the system has rewarded.
Attrition among experienced mid-level contributors is the leading indicator most organizations ignore until it is too late. Junior contributors leave when they are unhappy. Experienced contributors leave when they have concluded that nothing will change. When mid-level attrition is elevated — particularly among people with 3 to 7 years of tenure — the organization has been signaling something for long enough that the people who recognized it decided to act on it. What remains is a population increasingly weighted toward people who either have not yet recognized the signal or have decided to absorb it. Neither group will change the system.
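The tenure-band read can be made concrete with nothing more than an HR export. The sketch below assumes an invented schema, one row per person with their tenure in years and whether they departed in the window you are examining; real field names will differ.

```python
# Sketch: departure rate by tenure band from an assumed HR export.
# The 3-to-7-year band is the leading indicator described above.

def attrition_by_band(people, bands=((0, 3), (3, 7), (7, 99))):
    """Return the departure rate for each tenure band."""
    out = {}
    for lo, hi in bands:
        cohort = [p for p in people if lo <= p["tenure_years"] < hi]
        left = sum(1 for p in cohort if p["left"])
        out[(lo, hi)] = left / len(cohort) if cohort else 0.0
    return out

# Synthetic population: mid-level attrition markedly elevated.
people = (
    [{"tenure_years": 1, "left": False}] * 8
    + [{"tenure_years": 2, "left": True}] * 2
    + [{"tenure_years": 4, "left": True}] * 5
    + [{"tenure_years": 5, "left": False}] * 5
    + [{"tenure_years": 9, "left": True}] * 1
    + [{"tenure_years": 10, "left": False}] * 4
)

for (lo, hi), rate in sorted(attrition_by_band(people).items()):
    print(f"{lo}-{hi} yrs: {rate:.0%} departed")
```

The absolute numbers matter less than the shape: a mid-band rate well above the junior and senior bands is the signature of people who knew what good looked like concluding that it was not coming.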
An organization heavily weighted toward early-career contributors is not inherently a problem. But when it coincides with the other signals — the exhaustion, the silence, the retreat — it is worth asking why. Less experienced people are less likely to name what is happening to them, less likely to have the context to recognize it as abnormal, and more likely to absorb dysfunction as simply the way things are. That demographic profile, combined with those other signals, describes conditions that are easier to maintain precisely because the people inside them do not yet have the frame of reference to push back.
What the Evidence Means
In a group setting, raise a real, visible problem and observe who speaks, who defers, and who looks at someone else before answering. Then have the same conversation privately with two or three of the people who were quiet. If the private conversation sounds completely different from the public one — if the quiet people have substantive, specific things to say when the room is empty — you have your answer. The organization has not merely failed to create safety. It has actively taught people that the group setting is not where real things get said.
The distinction between the resigned and the defeated matters enormously for what comes next. Resigned people have made a rational calculation: engagement is too costly given current conditions, so they have withdrawn. They still know what good looks like. They can articulate what should be different. If the conditions change, they can re-engage. Defeated people have gone further. They have lost the belief that change is possible regardless of conditions. They have internalized the system’s story about itself. Treating these two groups the same way will fail both. The resigned need evidence that conditions have changed before they will test them. The defeated need to see change actually happen — not promised, not planned, but real and visible — before they will believe it.
Second-Order Effects
When truth-telling becomes costly, the organization loses its early warning system. Problems that could have been addressed cheaply at the early signal stage are only visible at the later consequence stage — after they have compounded. Leaders who have eliminated the conditions for honest feedback will consistently be surprised by failures that were visible to everyone below them for months. The surprise is genuine, which makes it worse: the leaders are not lying when they say they did not see it coming. They could not see it, because the system filtered it out before it reached them.
Third-Order Effects
Organizations that have made truth-telling costly over a long enough period begin to select against truth-tellers. The people who thrive — who have adapted to the environment’s rules, made peace with its costs, learned how to perform alignment without producing it — become the unconscious template against which new candidates are evaluated. Over time, the organization becomes less capable of recognizing, retaining, or acting on the kind of thinking that would change it. This narrowing is gradual and becomes visible only in retrospect, usually after the people who could have changed things have already left.
What Not To Do
Do not attempt to surface truth in groups before establishing that truth-telling is safe individually. A town hall, an all-hands Q&A, or an anonymous survey deployed into an environment where people have learned that honesty is costly will produce either silence or performance. The silence you get in a broken organization is not the absence of things to say. It is the presence of a well-calibrated understanding of what is and is not safe to say in that room. Forcing a public forum before the private conditions have changed will reinforce the existing pattern.
Do not conflate the resigned with the defeated and deploy the same intervention for both. A compelling vision and a call to re-engage will land differently on someone who has withdrawn rationally versus someone who has stopped believing. The former may respond. The latter needs proof, not inspiration.
What People Cannot Tell You
People reveals what the system has cost and what it has taught people to do and not do. It does not reveal the full incentive architecture producing those outcomes — why the system is structured the way it is and who benefits from it staying that way. That requires Projects and Patterns.
Projects: The Real Priorities
Nothing ever ended.
There were no real project boundaries. No finish lines. No moments where the org had to stop, assess, and decide what came next. Everything was treated as the normal course of business, which meant nothing could ever be identified as an outlier — a problem, a thing that needed to stop before moving forward. The chaos didn’t have to end. There was no mechanism that required it to.
What there was instead: long-running architecture and code improvement efforts with no visible end state, and SOW projects — custom development work built in feature branches, hidden behind a proliferation of toggles and custom switches nobody outside engineering fully understood. The executive team wanted the revenue. Engineering delivered it. Nobody asked what it cost, because engineering didn’t have the vocabulary to make the cost legible, and the executive team didn’t have the vocabulary to hear it. Tech debt. Total cost of ownership. Architectural risk. These were not part of the conversation.
So engineering quietly adapted. Built the workarounds. Implemented the hacks. And in doing so, silently absorbed the true cost — the technical debt, the total cost of ownership, the architectural fragility accumulating beneath the surface. Not because they didn’t know the cost. Because surfacing it was more dangerous than swallowing it. The result was a codebase that appeared to be functioning until it didn’t, and when it didn’t — when executives or Sales encountered an area that was fragile, or heard that something would take far longer than expected — the surprise was genuine and the blame was misplaced. The people who had been quietly absorbing the tradeoffs were now explaining why a system nobody had invested in properly was behaving exactly as an underinvested system behaves. Those hacks became load-bearing — threaded so deeply into the codebase that removing them would require dismantling things the org depended on. The workaround had become the architecture.
That is exactly what had happened to the organization itself. People adapting to demands they couldn’t push back on. Workarounds becoming standard operating procedure. Informal structures hardening into permanent ones. The codebase and the org were doing the same thing, for the same reasons, through the same mechanism — because that is what systems do when the inputs they receive are never corrected. They encode them. They build on top of them. They eventually cannot tell the difference between the workaround and the foundation.
Conway’s Law says you ship your org structure — that the architecture of what you build mirrors the communication structure of the people who built it. There is truth in that. But what Organizational Physics reveals goes deeper: it is not just the org structure shaping the architecture. It is the incentive geometry — what gets rewarded, what gets punished, what gets silently absorbed — shaping everything. The codebase was not just a reflection of how the org was organized. It was a record of every incentive the org had acted on, every demand it could not refuse, every cost it had chosen to defer rather than confront. That is a different and more damning diagnosis, because it means fixing the org chart without addressing the incentive geometry will produce a different structure encoding the same dysfunction.
How to Read Projects
The question Projects answers is not “what is the company working on” — it is “what does the company actually value, as demonstrated by where it puts its people and its attention.” Those two things are frequently different. Stated priorities live in roadmaps and all-hands decks. Actual priorities live in who gets pulled for what, which projects get resourced when there is a conflict, and what work gets celebrated regardless of its strategic coherence.
Look at what is currently open and how long it has been open. Perpetually open work is not just a planning failure. It is evidence that the organization has decided — explicitly or by default — that completing things is less important than continuing things. When nothing ends, there is no forcing function for accountability. There is no moment at which the organization must confront the gap between what it committed to and what it delivered. The absence of endings is a structural decision, even when it was never consciously made.
Look at the ratio of core product work to custom or SOW work, and how that ratio has changed over time. An increasing proportion of custom work is not inherently a problem, but it requires active management of the technical and strategic cost it carries. When no one in the organization can answer what a given SOW costs in maintenance burden, architectural complexity, and roadmap displacement over the next 18 months, the company is making resource decisions without a complete picture of what those decisions are doing to the system underneath them.
Custom work hidden behind feature flags and toggles deserves particular attention. Engineering teams build these structures because they have to deliver something the architecture was not designed for and have no pathway to change the architecture. The toggle is the workaround that makes the impossible request deliverable in the short term. When toggles accumulate without being retired, the codebase becomes a record of every demand the organization could not say no to. It is worth noting that entire industries and product categories have been built to legitimize and productize this pattern — A/B testing platforms pressed into this role, feature-flag management tools like LaunchDarkly, and others — which gives it institutional cover and makes it substantially harder to identify as dysfunction. The existence of a professional tool for managing the workaround does not make the workaround strategic. In non-engineering contexts, look for the equivalent: the informal process that exists because the formal one doesn’t work, the workaround that everyone uses but nobody has documented, the exception that became the rule without anyone deciding it should.
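A flag audit does not require tooling. If you can list each flag with the date it was introduced (for example, from the git history of the file that defines it), a few lines surface the accumulation. Flag names, dates, and the 90-day threshold below are all invented for illustration.

```python
# Sketch: flag-age audit. Flags still open long past any reasonable
# experiment window are candidates for retirement, or evidence of
# workarounds hardening into architecture. All data here is invented.
from datetime import date

def stale_flags(flags, today, max_age_days=90):
    """Return (name, age_in_days) for flags older than max_age_days."""
    return sorted(
        (name, (today - introduced).days)
        for name, introduced in flags.items()
        if (today - introduced).days > max_age_days
    )

flags = {
    "customer_x_checkout_override": date(2022, 3, 1),
    "new_pricing_experiment":       date(2024, 1, 10),
    "sow_4471_custom_report":       date(2023, 6, 5),
}

for name, age in stale_flags(flags, today=date(2024, 2, 1)):
    print(f"{name}: open {age} days")
```

The names tell you as much as the ages. An experiment flag a few weeks old is the tool doing its job; a customer-specific override approaching its second birthday is a demand the organization could not refuse, now load-bearing.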
What the Evidence Means
In a planning or intake conversation, raise a question about how a proposed piece of work relates to the broader strategy, or how it could be structured to serve more than one customer or use case. If the room goes genuinely blank — not uncomfortable, but as if the question had not occurred to anyone — you have found the boundary of the strategic vocabulary the organization has been operating within. This is not a sign of incapable people. It is a sign of a system that has never required this kind of thinking. The question that produces blankness is the question the organization most needs to be asking.
The parallel between the technical architecture and the organizational architecture is the most important pattern Projects can surface. When the codebase is a record of every demand the org could not say no to, and the org itself has done the same thing — informal structures hardening, workarounds becoming policy, exceptions becoming standard — you are looking at a system that has been encoding its own dysfunction into both layers simultaneously. What you fix in one without addressing the other will not hold.
Second-Order Effects
When custom and project work accumulates without tracking its cost, the organization loses the ability to assess its own capacity accurately. Every new request is evaluated against a perceived available capacity that does not reflect the actual load being carried. This produces chronic underdelivery on committed work, which damages credibility, which produces more pressure, which produces more scope injection, which produces more underdelivery. The cycle is self-reinforcing and accelerates over time.
Third-Order Effects
As custom work grows relative to the core product or service, the company’s identity begins to drift without anyone deciding to drift it. The sales motion orients toward what can be customized. The support burden orients toward what has been customized. Engineering priorities orient toward maintaining what has been built. Leadership attention follows the revenue pressure. The core product atrophies not because anyone decided to deprioritize it, but because every individual decision made rational sense in its moment and the cumulative effect was never accounted for. When the company eventually tries to reorient toward the core, it discovers that both the technical architecture and the organizational muscle memory have been rebuilt around the custom work. Reversing that is not a sprint. It is a structural redesign, and it will take longer and cost more than anyone will want to admit when the decision is finally made.
What Not To Do
Do not address the absence of project discipline as a methodology problem. Better tooling, sprint training, and project management certifications will not fix a system where the root cause is that the people with authority to say no to scope are not using it — and are not using it for reasons that have nothing to do with methodology. Tooling improvements in a broken incentive environment produce better-documented chaos. The process improvement will be absorbed by the system and the underlying pattern will resume, now with more ceremony attached to it.
Do not attempt to address the custom work accumulation by auditing and cleaning up the technical debt without first changing the incentive structure that produced it. The technical debt is a symptom. If the incentive structure remains unchanged, the cleanup will be followed by another accumulation cycle.
What Projects Cannot Tell You
Projects shows you what the company actually values in practice and what the structural consequences of those values have been over time. It does not tell you why the incentive structures are designed the way they are, who benefits from them, or how durable they are. That requires Patterns.
Patterns: The System Underneath the System
Patterns is where the story stops being about symptoms.
The first thing Patterns revealed was the attempts. There had been people — mostly newer — who had tried to make things better. Some had carved out small pockets where the worst of the system hadn’t penetrated. Others had been more ambitious and paid for it. The system had worn them down through accumulated friction, until the gap between what they knew was possible and what they were willing to fight for had quietly closed. Some had forgotten, in practice, how to do things differently. They had learned to keep their heads down.
The ones still holding the line were holding it recently. The bubbles I could see were new. The older ones had already been burst. The system was actively working against the ones that remained.
The pattern that closed the case was subtler than the screaming. I was in a project intake meeting when I raised whether a particular SOW might serve other customers or be leveraged for an upsell, and whether that should be a factor in deciding to take it on. The room went quiet. Not uncomfortable quiet. Genuinely blank. Nobody had the language for what I was describing — not because they were incapable, but because the system had never required it. Revenue came from closing deals. What happened after was someone else’s problem.
Which brought me to Sales. Sales had mastered the only game being played: controlling the story first. When something went wrong, Sales complained loudest and earliest, so the executives heard their version before any other. It wasn’t manipulation. It was adaptation. Sales did precisely what their incentive structure rewarded. Close the deal. Hit the number. What the deal cost the product, the engineering team, or the long-term architecture was not in the commission structure and therefore not in the calculation.
That is not a Sales problem. That is an incentive design problem. A sales team rewarded purely for closed deals will, over time, sell things the company cannot sustainably deliver — fragmenting the product, inflating the SOW pipeline, hollowing out the core. Not out of malice, but because the structure makes it rational. The executives had built that structure. They were getting exactly what they’d designed for, and couldn’t see it.
That was the pattern. Not chaos. Not dysfunction. A system operating perfectly according to its actual incentives, which had nothing to do with the ones written on the wall.
How to Read Patterns
Patterns requires looking at the history of the system, not just its current state. The most important question is not what is happening now — the other three Ps have shown you that. The most important question is why it has persisted. The answer almost always lives in the incentive structures that have been allowed to operate, whether anyone designed them intentionally or not.
Start with the history of improvement attempts. Find out what happened to the last several people who tried to make substantive changes — not complaints, but structured, serious attempts to change how something worked. Were they supported? Were they promoted? Were they managed out after a period of sustained friction? Were they simply worn down until they stopped trying? The outcome of previous change attempts is the most accurate map of what the system will do to future ones. If every serious attempt eventually failed or was absorbed, the system has demonstrated its response function. Understanding that function before you decide how to engage with it is not optional.
Look at compensation and incentive structures, particularly in Sales and at the executive level. These structures are the clearest expression of what the company actually optimizes for, because they represent decisions made with real financial stakes attached. Every downstream outcome that follows from a given incentive structure is predictable in advance if you read the structure clearly. The SOW accumulation, the architectural fragmentation, the engineering teams absorbing custom work indefinitely — none of it is a surprise if you start from the incentive geometry and reason forward.
Watch who controls the narrative when things go wrong and how quickly they move to do so. The function that is most incentivized to protect its own story will consistently arrive first with that story. In this company, Sales had learned that the first version of events to reach the executive team was the version that stuck. That was not a character failing. It was a rational adaptation to a system where narrative control had become a survival mechanism. Who speaks first, loudest, and most confidently when things go wrong tells you which function has the most to protect and has learned how to protect it.
Watch the bubbles of resistance — the pockets where someone is trying to do things differently. Note how old they are, how healthy they are, and what happened to the ones that no longer exist. A new bubble that has not yet encountered serious resistance is different from one that has been holding for two years. The age and condition of the resistance tell you how far along the system is in processing it.
What the Evidence Means
The pattern that closed the case in this company was a room full of people who had no language for a basic strategic question about their own work. They were not incapable. They had simply never been required to think that way, because the system had never made that kind of thinking necessary or rewarded it. When you find the edge of the strategic vocabulary — the question that produces genuine blankness — you have found the boundary of what the system has been optimizing for. Everything outside that boundary has been allowed to atrophy.
Second-Order Effects
When incentive structures reward behavior that damages the company’s long-term health, the company will reliably produce that behavior at scale regardless of individual intentions or capability. Coaching a sales rep, running a leadership offsite, publishing new values — none of these change the behavior because none of them change the incentive. The behavior is rational given what the incentives reward. Individual interventions aimed at behavior that is structurally produced will be absorbed by the system and the behavior will resume.
Third-Order Effects
Companies that operate on misaligned incentives for long enough begin to select for people who are adapted to those incentives. The people who thrive become the unconscious template for hiring. The people who push back leave or are pushed out. Over time, the organization loses the internal diversity of perspective that would allow it to recognize and correct what it is doing. The system narrows into an increasingly stable version of itself — stable not because it is healthy, but because it has eliminated the friction that would force it to change. By the time this is visible from the outside, the internal correction capacity is often already gone.
What Not To Do
Do not present pattern findings to the people who built the system one data point at a time. A single finding shown in isolation to someone invested in the current state is a target — something to explain away, reframe, or dismiss with authority and confidence. The full picture, held together across all four Ps, is structurally harder to dismiss because the convergence of independent evidence from multiple lenses closes the exits that a single finding leaves open. Build the complete case before presenting any of it.
When you do present it, present the structural diagnosis — “this incentive structure is producing this outcome” — not the moral one. “Your culture is toxic” invites a defensive conversation about intent, which is a conversation you cannot win and which will not change anything. “Your Sales compensation structure is systematically incentivizing decisions that fragment the product” invites a structural conversation about design. These are not the same conversation, and only one of them leads to an outcome that changes something.
What Patterns Cannot Tell You
Patterns closes the case on why the system persists. What it cannot tell you is how much force will be required to change it, or whether the conditions for changing it exist. That assessment requires something the 4 Ps alone do not provide: a structural understanding of how incentive geometry propagates through organizations over time, how distortions compound and harden into architecture, and why certain interventions work while others get absorbed. That is the territory The Doctrine’s Organizational Physics was built to map. If the 4 Ps showed you what the system is, the Doctrine shows you the mechanics of why systems become that way — and what it actually takes to change them.
Running the 4 Ps: Reading the Story the Company Hasn’t Told You
The 4 Ps do not produce a map. They do not produce a checklist or a report. What they produce, when run correctly, is a reconstruction — a truth that no one in the organization has spoken aloud, assembled from evidence that is incomplete, contradictory, and often actively managed by the system you are trying to read.
This is why the simultaneity is not optional. If you audit Process first and draw conclusions before running People, the Process findings will shape what you look for in People conversations and bias what you hear. The story will degrade. Parts of it will hide. The system has a version of itself it wants you to see, and if you examine it one dimension at a time, you give it the opportunity to show you only the pieces that cohere. Running all four lenses at once forces the contradictions into view before the narrative can close around them.
What follows is a guide for structuring this work. The timeline is not a prescription — faster is better, but moving so fast that you miss or ignore things is worse. The appearance of investigation without interpretation is itself a signal the system will read and respond to.
Before You Start: Write Down the Official Story
Before you begin, document what you were told. The version of the company presented to you during the hiring process, in onboarding, in your first leadership conversations. Be specific. Write down the claims, not just the impressions.
This is not because you expect the official story to be accurate. It is because the gap between the official story and what the 4 Ps reveal is itself evidence. The places where the story diverges from reality tell you what the system most needs you to believe, which tells you where the most significant problems are likely to live. In this company, I was told Engineering and Product worked well together and requirements were clear. The Jira data, the conversations with exhausted Product managers, and the project intake meetings where no one had language for basic strategic questions told a different story in three different directions simultaneously. The gap was the diagnosis.
Open All Four Lenses at Once
Do not complete Process before starting People. Do not finish People before looking at Projects. Open all four lenses from the moment you arrive, even if your initial findings in each are shallow.
Early observations from one P will sharpen what you look for in the others. The exhaustion you notice in People on your first week will change what you look for in the process data. The roadmap vagueness you find in Process will change how you read the project intake conversations in your second week. The custom work accumulation in Projects will change how you listen when Patterns starts revealing who controls the narrative when things go wrong. The lenses are not independent. They inform each other continuously, which is why closing one before opening another degrades the picture.
Concrete examples of what this looks like in practice: you notice in People conversations that Engineering leaders have retreated entirely into technical concerns. You hold that observation and look at Process — you find that mid-sprint injections are chronic and tradeoff conversations are absent. You hold both and look at Projects — you find that custom SOW work is accumulating behind feature flags and no one is tracking the cost. You hold all three and look at Patterns — you find that Sales compensation is tied purely to closed deals with no adjustment for delivery cost, and that the executives see whatever story Sales tells first. None of these findings is conclusive alone. Together, they close the case.
Act While You Investigate
The 4 Ps are not a reason to freeze. A leader who appears to be investigating without acting sends a signal the system will read as either indecision or political maneuvering. Neither builds the credibility you need to get honest feedback.
Throughout the diagnostic process, make changes on the things you can change directly. If tradeoff conversations are not happening in your team’s planning process, start having them. If scope is being injected without accountability, install accountability. If your direct reports are not looking outward at process health because the environment has made it costly, make it safe — and visibly so. These actions serve a dual purpose: they improve the system incrementally, and they demonstrate that you are serious about making real change on actual issues, not performing investigation. The people who have been waiting to see whether you are different from every leader before you will begin to tell you more of the truth once they have evidence that the truth changes something.
What you should not do is make broad structural pronouncements before the picture is complete, or intervene in systems outside your direct authority before you understand how they work. Acting within your scope, on things you have direct evidence of, while the full picture develops is discipline. Acting outside your scope before you have the evidence to support it is how new leaders get absorbed by the system they were supposed to change.
Look for the Contradictions
The diagnostic power of the 4 Ps lives in where they contradict each other. A company that says Engineering and Product are well-aligned but whose process data shows chronic scope injection is telling you two different things. One of them is true.
For every contradiction you find, ask: which version is more consistent with the incentive structures? Which version would the people who benefit from the current state most prefer you to believe? The answer to the second question is usually the one that is false.
In this company, the official story was that Engineering and Product worked well together. The Process data showed chronic injection. The People conversations showed Product was openly exasperated and Engineering had retreated. Projects showed the custom work accumulating without strategic accounting. Patterns showed Sales controlling the narrative and executives unable to hear any story other than the revenue one. The contradictions between what I was told and what the four lenses showed were not ambiguous. They were a complete picture of a system operating perfectly according to incentives that had nothing to do with the ones on the wall.
Build the Complete Case Before Presenting Any of It
Resist presenting individual findings as you collect them. A single data point shown in isolation to someone invested in the current state is a target — something to explain away, reframe, or dismiss. The full picture, held together across all four Ps, is much harder to dismiss because the convergence of evidence from multiple independent lenses closes the exits.
The complete case requires three things. First, a clear statement of what is actually happening, grounded in specific evidence from Process and Projects. Second, a clear statement of what it is costing, grounded in People and the second and third-order effects visible across all four lenses. Third, a structural diagnosis — not a moral one — of why it persists, grounded in Patterns.
Acting on the Findings: Three Tracks
When the picture is complete, the findings sort into three tracks based on what is required to change them.
The first track is what you can change directly, within your own authority, without needing permission or partnership. Change these things. Change them clearly and explain why. This is where your credibility gets built or lost — not in the investigation, but in whether the investigation produces different outcomes for the people inside your system.
The second track is what requires partnership with peers — things that cross organizational boundaries or require coordination with other functions. Bring your findings to those peers as structural diagnoses, not accusations. “This incentive structure is producing this outcome” is a conversation. “Your team is causing this problem” is a conflict. One of these leads somewhere useful.
The third track is what requires systemic change at the executive level — things above your authority that are generating dysfunction downstream. These require escalation, and they require the full case, not individual complaints. Walk the findings through every leader who reports up through you — not just direct reports, but all leaders who carry reports and are part of the system. They will surface things you missed and test your conclusions against their own observations. Then bring the complete, tested case to executive leadership with confidence, not hedging. Show them what is going to be different in your organization, why, and how. Do not ask for permission to fix things in your own system. Do not frame your findings as concerns to be considered. Name what the system is doing, name what you are changing, and invite them to engage with the structural diagnosis. If you hedge, if you soften, if you frame your findings as preliminary and your recommendations as suggestions, you have done the work of the investigation and then handed its conclusions back to the system that produced the problem. The system will thank you for your insights and route around them. That is what systems do.
Be confident in your findings. Test them. And when you present them, present them as someone who has already decided to act — because you should have.
On Existing Frameworks, and Where This Differs
Most diagnostic tools are designed with a shared assumption: that the organization will cooperate with its own diagnosis. Culture surveys ask people to rate their experience. Gap analyses compare stated values to observed behavior. Listening tours invite people to share what they think. Each of these approaches hands the system an opportunity to present its best version of itself — and systems that have spent years encoding dysfunction into their architecture are very good at presenting a best version.
The 4 Ps are built for a different problem. They are not designed to surface what the organization is willing to tell you. They are designed to reconstruct what it hasn’t — from evidence that is incomplete, contradictory, and often actively managed by the system you are trying to read. The contradiction between what Process shows and what People says is not noise to be resolved. It is the finding. The gap between the official story and what the four lenses reveal is not a discrepancy to be explained away. It is the diagnosis.
This is not a claim that other frameworks are wrong, or that the 4 Ps supersede them. Most leadership frameworks address real problems. Many of them are useful. What they were not built for is the specific situation of a leader entering a new company with no historical context, a narrow window of clarity before the system begins to shape their perception, and an urgent need to understand what is actually happening — not the version the company has learned to present. The 4 Ps are a tool for that specific situation: rapid, simultaneous, adversarial diagnosis designed to surface the systemic forces influencing each other before the leader becomes part of the system themselves. Used alongside other frameworks, they make those frameworks more effective. Used as a substitute for observation and judgment, they will not work; nothing will.
This is also why the framework is qualitative and interpretive rather than quantitative and standardized. The most important things happening in any organization are not in the numbers. They are in the gap between the numbers and the truth — which requires judgment to find, not instruments to measure. That is not a limitation of the 4 Ps. It is the point.
Why the 4 Ps Work Together
Most diagnostic approaches fail because they are single-axis. Financial reviews look at numbers. Culture surveys look at sentiment. Process audits look at process. Each gives you a piece of the picture the company is willing to show you, filtered through the lens of whoever designed the diagnostic.
The 4 Ps produce triangulation. Process tells you what is happening. People tells you what it is costing. Projects tells you what the company actually values. Patterns tells you why it stays that way. When all four are run simultaneously, the contradictions between them become the most important data you have.
A company can maintain the fiction of good process in a survey while the project data tells a different story. A leader can perform alignment in a one-on-one while their team’s attrition pattern reveals something else. A project can be described as strategic while resource decisions show it is actually disposable. Patterns can be explained away in isolation while they become undeniable when held up against the other three.
The 4 Ps don’t just show you what is broken. They show you what is protecting the break — and who benefits from it staying that way. That is the only starting point worth having, because without it, every intervention you make will be aimed at a symptom while the system quietly routes around it.
One lens confirms the story the company wants you to believe. Four lenses held simultaneously show you the one it doesn’t.