A New Approach to Enterprise Data Management
Understanding not just where your data is,
but why it exists and what problems it solves
Keeshin Database Services, LLC
February 2026 — Updated Edition
Here's the problem: Most enterprise data tools tell you what data you have and where it flows. But they miss the most important questions: Why does this data exist? What does it mean to your business? And what problems is it supposed to solve?
This white paper introduces Organizational Data Flow Architecture (ODFA)—a new approach that maps data flows to organizational structure to reveal the business meaning and purpose behind your data. Instead of treating data as a purely technical concern, ODFA recognizes that data reflects how your organization actually works: the legal boundaries between subsidiaries, the operational flows between business units, and the daily decisions made by people in specific roles.
If you're a CFO, Chief Data Officer, Chief Compliance Officer, or CTO struggling to understand how data actually flows through your complex organization, this approach offers a fundamentally better way forward than what's currently available in the market.
While developing the kDS Data Source Discovery platform, I kept returning to a fundamental question: Are there any established methodologies that propose a "complete data flow topology map overlaid on organizational structure"?
After extensive research, I discovered something surprising—there isn't one. At least, not in the way that addresses the real challenges enterprise organizations face with data management.
Sure, there are related approaches. But each one misses a critical piece of the puzzle.
Data flow diagrams (DFDs) show processes, data stores, and flows, but they don't integrate corporate or legal hierarchy. They're purely technical, missing the organizational context that gives data meaning.
Data lineage tools from vendors like Collibra, Informatica, and Alation track data's journey from source to destination. But they don't map to organizational hierarchies or show which data flows cross legal entities.
RACI matrices map Responsible, Accountable, Consulted, and Informed roles to activities. They're useful for accountability, but they don't show data flow topology, just static responsibility assignments.
Modern data architecture frameworks introduce concepts of data "zones" and layers (cloud, edge, device), but they focus on technical deployment, not organizational structure.
What became clear is that we needed something new: a hybrid framework combining data lineage concepts (technical flow), functional organizational hierarchy (legal and operational structure), data governance (ownership and accountability), and business process mapping (who uses what data for what purpose).
So we built it. And that's what we're calling Organizational Data Flow Architecture.
Traditional data tools answer important questions about what data exists (data catalogs), where data flows (lineage tools), and how data transforms (ETL documentation). But they miss the crucial context of why the data flow exists (business purpose), what it means (organizational context), who depends on it (stakeholder impact), and what problems it solves (business value).
When you map data to organizational hierarchy, suddenly everything gains context. Let's walk through each level:
At the parent company level, data serves consolidated reporting, board governance, and investor relations. This is "official company data"—audited, regulated, public-facing. The stakes are highest here because this data represents the company to the outside world.
At the subsidiary level, data flows serve legal compliance, define regulatory boundaries, and manage liability protection. Data crossing subsidiary boundaries has profound legal implications—inter-company transactions, transfer pricing, data sovereignty requirements. A transaction moving from one subsidiary to another isn't just a technical data transfer; it's a legal event with tax and regulatory consequences.
At the business unit level, data represents actual business operations—P&L accountability, strategic initiatives, revenue and costs, customer relationships. This is where business strategy becomes visible in data patterns.
At the role level, data is created, validated, and understood by the people who work with it daily. This is the source of truth, where data quality is determined and business context is richest.
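The four levels above can be sketched as a simple tree in code. This is a minimal illustration, not the kDS schema; the class names, fields, and sample nodes are assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class OrgNode:
    """One node in the ODFA hierarchy: parent, subsidiary, business unit, or role."""
    name: str
    level: str                                        # "parent" | "subsidiary" | "business_unit" | "role"
    children: list = field(default_factory=list)
    data_assets: list = field(default_factory=list)   # data created or used at this node

    def add(self, child):
        self.children.append(child)
        return child

# Build the hierarchy top-down: Parent -> Subsidiary -> Business Unit -> Role.
parent = OrgNode("Newco Foods Corporation", "parent")
food_safety = parent.add(OrgNode("Food Safety", "subsidiary"))
quality_control = food_safety.add(OrgNode("Quality Control", "business_unit"))
qa_tech = quality_control.add(OrgNode("Quality Assurance Technician", "role"))
qa_tech.data_assets.append("batch inspection records")

def path_to_root(node, root):
    """Return the chain of nodes from the root down to `node`, or [] if absent."""
    if root is node:
        return [root]
    for child in root.children:
        chain = path_to_root(node, child)
        if chain:
            return [root] + chain
    return []

# Every data asset created at a role inherits the full organizational context above it.
print(" -> ".join(n.name for n in path_to_root(qa_tech, parent)))
```

The point of the sketch is the last line: once data is attached to a role, its subsidiary and parent-company context comes for free by walking up the tree.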
One of the most underappreciated dynamics in enterprise data management is that the relationship between organizational structure and data quality runs in both directions. Structure shapes data quality—and data quality shapes structure. Each reinforces or undermines the other in ways that can either drive excellence or entrench dysfunction.
Understanding this bidirectional relationship is not merely academic. It is essential for anyone attempting to improve data management in complex organizations, and it explains why purely technical approaches to data governance so often fall short.
Organizations with centralized data teams often achieve higher consistency and standardization, but struggle with responsiveness. The central team cannot possibly understand every domain deeply enough to catch quality issues at the source. Conversely, federated models where business units own their data can achieve better domain accuracy but frequently fragment standards, creating integration problems downstream. The practical sweet spot tends to be a "hub and spoke" model—central standards and infrastructure with domain ownership of actual data assets—but this requires sophisticated coordination that many organizations lack.
Functional silos create their own version of truth. Marketing defines "customer" differently than Sales, who defines it differently than Finance. Each department builds systems optimized for its own workflows. The result is not just technical debt—it is genuine semantic ambiguity about what fundamental business concepts actually mean. When Finance closes the books, they are working with fundamentally different concepts than the product team analyzing user behavior. This fragmentation degrades data quality because reconciliation happens manually, late, and incompletely. The longer data stays within silos, the more it drifts from shared organizational reality.
In most organizations, nobody actually owns data quality. IT owns systems. Business units own processes. Analysts own reports. But the data itself? It is an orphan. This diffusion of responsibility means data quality becomes everyone's problem and therefore no one's priority. High-quality data requires clear ownership—someone who feels real pain when data is wrong and has the authority to fix it. This accountability rarely maps cleanly to traditional functional structures, which is precisely why it so often goes unaddressed.
When centralized data cannot be trusted, organizations spawn shadow systems. Spreadsheets multiply. Teams build their own databases. What begins as a practical workaround ossifies into parallel infrastructure that further fragments the organization. The "official" system becomes decorative while the real operational data lives in a senior analyst's personal database. This is not merely a technology problem—it is a structural adaptation to systemic data unreliability, and it reinforces the very fragmentation that caused the quality problem in the first place.
Reliable, accessible data reduces coordination costs dramatically. Teams can make decisions autonomously because they trust the information they see. Fewer layers of management are needed to reconcile conflicting views of reality because there is actually a shared reality to reference. Organizations with excellent data platforms can operate with more distributed authority—but only if everyone believes the numbers. Trust in data is the prerequisite for organizational agility.
Which data gets attention reveals organizational priorities. Customer acquisition metrics that are pristine while customer retention data is a mess? That reflects what leadership actually cares about, regardless of stated strategy. Political power flows to whoever controls definitive data. The tension between IT-owned data warehouses and business-owned analytics platforms is not merely technical—it is a struggle over organizational authority. ODFA makes these dynamics visible, which is why it functions as a structural intervention as much as a technical tool.
One of the most instructive examples of the structure-data quality relationship is the chart of accounts. A chart of accounts is essentially a financial mirror of organizational structure—the way accounts are organized reflects how the business is structured, how costs are allocated, and how performance is measured across divisions, departments, and cost centers.
When organizational structure changes—a merger, a new division, a restructuring—the chart of accounts must follow. And that realignment is often painful, because financial data history no longer maps cleanly to the new structure. This creates a well-documented organizational pathology: companies resist restructuring partly because of the downstream impact on financial reporting, budgeting, and consolidation logic. The chart of accounts becomes a kind of organizational fossil record—reflecting decisions made years ago that now constrain how the business can evolve.
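The "fossil record" effect can be made concrete with segmented account codes. Everything below, including the segment layout and the entity codes, is a hypothetical illustration rather than any particular ERP's convention: after a restructuring, historical codes whose entity segment no longer exists stop mapping cleanly to the new structure.

```python
# Hypothetical segmented account codes: ENTITY-COSTCENTER-ACCOUNT.
history = [
    "S01-4100-5000",   # subsidiary S01, cost center 4100, expense account 5000
    "S02-4200-5000",
    "S03-4100-6100",   # S03 was merged into S01 in a restructuring
]

active_entities = {"S01", "S02"}   # post-restructuring entity list

def orphaned(codes, entities):
    """Historical codes whose entity segment no longer maps to the current org."""
    return [c for c in codes if c.split("-")[0] not in entities]

print(orphaned(history, active_entities))   # -> ['S03-4100-6100']
```

Reconciling such orphaned history back into the new structure is exactly the painful realignment work described above.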
This is a subtle but important dimension of data quality that traditional tools completely miss: data quality is not just about accuracy and completeness. It is about whether the data still reflects organizational reality. ODFA surfaces this structural-data interdependency explicitly—revealing not just what data exists, but whether the organizational architecture that generates it still makes sense for how the business actually operates today.
The companies that benefit most from the ODFA approach are typically those experiencing genuine pain from their current structure—growing fast enough that informal coordination breaks down, or complex enough that siloed approaches create expensive redundancies. The approach works best when there is already organizational recognition that the current approach is not sustainable.
What ODFA provides is not just a map of data flows. It is a map of organizational meaning—the implicit decisions, historical artifacts, and power dynamics embedded in how data actually moves through an enterprise. Making the implicit explicit is inherently disruptive, but it is the only path to data management that genuinely reflects and serves organizational reality.
Let's make this concrete with a worked example. Newco Foods is an illustrative food manufacturing company with three subsidiaries: Corporate Services, Food Manufacturing, and Food Safety. Each subsidiary has multiple business units, and each unit has roles that create and use data. The example is drawn from sample data that ships with the kDS platform, modeled on real organizational structures we've encountered.
Figure 1: Newco Foods organizational structure showing Parent → Subsidiaries → Business Units → Roles
Consider production quality data flowing through this organization. A traditional lineage tool would describe it in purely technical terms: "Quality metrics flow from production floor systems to quality database to executive dashboard."
Viewed through ODFA, the same data flow reveals its business meaning and the problems being solved:
The Quality Assurance Technician in the Quality Control business unit captures inspection data because they're ensuring product meets safety standards. This data represents the moment a batch passes or fails quality checks—a critical decision point for food safety.
Problem solved: Preventing unsafe products from reaching consumers and maintaining lot-level traceability for potential recalls.
The Quality Control business unit aggregates daily quality metrics for trend analysis and compliance reporting. This ensures the subsidiary maintains its quality certifications and meets regulatory requirements.
Problem solved: Maintaining FDA compliance, identifying quality trends before they become critical issues, and supporting continuous improvement initiatives.
When quality data moves from the Food Safety subsidiary to the Food Manufacturing subsidiary, it crosses a legal boundary. This isn't just data sharing—it's inter-company compliance coordination with implications for liability, insurance, and regulatory oversight. If Food Safety certifies a batch and Food Manufacturing ships it, both subsidiaries share legal responsibility.
Problem solved: Coordinating quality assurance across legal entities to ensure unified food safety standards while maintaining proper legal separation and liability management.
Finally, Newco Foods Corporation consolidates quality metrics for board reporting, investor communications, and corporate compliance. This becomes "official company performance data"—the basis for claiming food safety standards to customers, regulators, and investors.
Problem solved: Demonstrating enterprise-wide quality performance to external stakeholders, managing corporate reputation, and ensuring consistent messaging about food safety commitments.
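The legal-boundary step in this example is the kind of condition that can be checked mechanically once flows are mapped to the hierarchy. A minimal sketch, assuming flow records of (source role, target role, payload) and a role-to-subsidiary lookup; the roles besides the Quality Assurance Technician are hypothetical:

```python
# Which subsidiary each role belongs to (illustrative roles and assignments).
role_subsidiary = {
    "Quality Assurance Technician": "Food Safety",
    "Quality Manager": "Food Safety",
    "Plant Manager": "Food Manufacturing",
}

# Data flows as (source_role, target_role, payload).
flows = [
    ("Quality Assurance Technician", "Quality Manager", "daily quality metrics"),
    ("Quality Assurance Technician", "Plant Manager", "batch certification"),
]

def legal_boundary_crossings(flows, subsidiary_of):
    """Return flows whose source and target roles sit in different subsidiaries."""
    return [
        (src, dst, payload)
        for src, dst, payload in flows
        if subsidiary_of[src] != subsidiary_of[dst]
    ]

for src, dst, payload in legal_boundary_crossings(flows, role_subsidiary):
    print(f"inter-company flow: {payload!r} crosses "
          f"{role_subsidiary[src]} -> {role_subsidiary[dst]}")
```

The first flow stays inside Food Safety and is filtered out; the batch certification crossing into Food Manufacturing is flagged as an inter-company event needing compliance review.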
The organizational architecture mapping makes data management relevant to everyone in the enterprise—from the C-suite to operational teams. Here's why different leaders care:
The CFO cares about which data crosses subsidiary boundaries for inter-company accounting and tax optimization.
The Chief Compliance Officer focuses on food safety certifications crossing jurisdictions and on protecting sensitive formulation data.
The CTO needs to understand system dependencies across business units and where bottlenecks occur.
Chief Data Officers need clarity on who owns ingredient and allergen data when it crosses boundaries, which is especially critical for product recalls and regulatory audits.
Traditional lineage documentation reads: "Table A joins with Table B to create View C."
ODFA documentation reads: "Operations data from the Division level is aggregated into management reports at the Parent level because executives need visibility into business unit performance and investors require segment reporting under SEC rules, solving the business problem of transparent performance measurement and regulatory compliance."
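The contrast between the two statements can be made concrete as a data model: the technical fields are what lineage tools already capture, and the semantic fields are the ODFA additions. The field names and the sample values below are illustrative assumptions, not the kDS schema.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedFlow:
    source: str                 # technical: where the data comes from
    target: str                 # technical: where it goes
    org_path: str               # organizational context of the flow
    business_purpose: str       # why the flow exists
    problem_solved: str         # what business problem it addresses
    stakeholders: tuple         # who depends on it

flow = AnnotatedFlow(
    source="division_operations_db",
    target="parent_management_reports",
    org_path="Division -> Parent",
    business_purpose="executive visibility into business unit performance",
    problem_solved="segment reporting and transparent performance measurement",
    stakeholders=("executives", "investors"),
)

def describe(f):
    """Render a flow in the ODFA style: technical edge plus business meaning."""
    return (f"{f.source} -> {f.target} [{f.org_path}]: "
            f"exists for {f.business_purpose}; solves {f.problem_solved}")

print(describe(flow))
```

Dropping the semantic fields recovers the traditional "Table A to View C" statement; the point is that they travel with the edge rather than living in someone's head.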
This semantic layer answers questions traditional tools cannot: Why does this flow exist? What does the data mean to the business? Who depends on it, and for what purpose? What problem is it designed to solve?
Organizational Data Flow Architecture doesn't just map data flows—it maps the organizational meaning of data flows and the business problems they solve.
This transforms data discovery from a technical documentation exercise into a strategic data management capability. You're capturing not just what data exists, but:
What the data means to the business
Why the organization structured data flow this way
Who depends on this data for what business purpose
What problems this data is designed to solve
Where organizational structure itself creates—or destroys—data quality
It helps you prioritize data quality issues, determine what's critical versus nice-to-have, identify redundant sources, and make informed architecture investments.
Because this approach fills a gap in existing methodologies, Organizational Data Flow Architecture represents a new category in enterprise data management—one that speaks the language of business, not just technology.
It could change how enterprises approach data management entirely.
This is the key differentiation. Tools like Collibra and Alation, along with the big consulting firms, approach data from pure technology or pure governance perspectives—not from organizational architecture and business problem-solving. Organizational Data Flow Architecture bridges that gap.
Organizational Data Flow Architecture is more than a theory—it's a practical approach being proven in real enterprise environments through the kDS platform and its BETA program.
Each implementation teaches us more about how organizational architecture shapes data architecture, how business problems drive data requirements, and how making these relationships explicit transforms data management outcomes. The February 2026 update to this white paper reflects a deeper understanding of one of those relationships: the bidirectional dynamic between organizational structure and data quality. Structure is not a neutral backdrop for data—it is the primary driver of how data is created, fragmented, owned, and trusted. And data quality, in turn, shapes how organizations evolve, where informal power concentrates, and how effectively they can restructure themselves as business conditions change.
If you're working in enterprise data management and this approach resonates with challenges you're facing, we'd welcome your input. The approach is evolving, and the best insights will come from practitioners grappling with real data management challenges in complex organizations.
Keeshin Database Services, LLC developed the kDS Data Source Discovery App—an AI-powered platform that implements the Organizational Data Flow Architecture approach. Through structured interviews with subject matter experts across your organization, kDS maps data sources and flows to your organizational structure, revealing not just what data you have, but why it exists and what business problems it solves.
For more information about the kDS platform and the BETA program, visit www.keeshinds.com or contact us at [email protected].