Each engine analyses a different dimension of the NDIS ecosystem. Together, they create an integrity intelligence layer that no single-engine system can match.
Traditional fraud detection examines individual claims in isolation. Network Graph Analysis maps the entire NDIS ecosystem as a connected graph -- providers, participants, workers, locations, and the relationships between them. This reveals hidden patterns that are completely invisible at the transaction level.
The engine builds a continuously updating directed graph using NetworkX. Nodes represent providers, participants, workers, and locations. Edges are weighted by billing volume, service relationships, and employment links. Cycle detection algorithms identify closed-loop flows, while community detection reveals suspicious clusters.
Provider A pays Worker X. Worker X is also registered with Provider B. Provider B bills NDIS for services to Provider A's clients. The money flows in a circle -- this is invoice cycling, and it's invisible without graph analysis. CareIntegrity.AI detects this automatically by finding cycles in the provider-worker-participant graph.
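The cycle described above can be sketched with NetworkX, which the engine uses for its graph model. This is a minimal illustration, not the production schema -- the node IDs, edge labels, and relationship kinds are invented for the example.

```python
# Minimal sketch of closed-loop (invoice cycling) detection with NetworkX.
# Node names and edge "kind" labels are illustrative only.
import networkx as nx

G = nx.DiGraph()
# Money/service flows: Provider A pays Worker X, Worker X also works for
# Provider B, and Provider B bills for Provider A's clients.
G.add_edge("PRV-A", "WRK-X", kind="pays")
G.add_edge("WRK-X", "PRV-B", kind="works_for")
G.add_edge("PRV-B", "PRV-A", kind="bills_clients_of")

# Every simple cycle in the directed graph is a candidate closed-loop flow.
cycles = list(nx.simple_cycles(G))
print(cycles)
```

Each cycle found is surfaced for review; in this toy graph the single three-node loop PRV-A → WRK-X → PRV-B → PRV-A is detected.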
In the demo dataset alone, the Network Graph engine identified 12 closed-loop alerts and multiple shared-staff clusters -- patterns that would take human auditors months to uncover through manual investigation.
Fraudulent providers don't start fraudulent -- they drift. The Behavioural Drift Engine tracks each provider's "fingerprint" over time: billing patterns, session lengths, staffing ratios, geographic spread, and service mix. When a provider's behaviour changes faster than any legitimate business could, the engine flags it as structurally impossible.
A small therapy provider with 5 participants and 2 workers suddenly grows to 80 participants in 6 weeks -- but still has the same 2 workers. This is a structural impossibility. No legitimate provider can serve 40 participants per worker. The Behavioural Drift Engine catches this by comparing growth trajectories against workforce capacity.
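The capacity check at the heart of this example can be reduced to a simple ratio test. The 15-participants-per-worker ceiling below is an assumed illustrative threshold, not a figure from the engine.

```python
# Hedged sketch: flag caseload growth that outpaces workforce capacity.
# MAX_PARTICIPANTS_PER_WORKER is an illustrative threshold, not NDIS policy.
MAX_PARTICIPANTS_PER_WORKER = 15

def structurally_impossible(participants: int, workers: int) -> bool:
    """True when the claimed caseload exceeds any plausible staffing ratio."""
    return workers == 0 or participants / workers > MAX_PARTICIPANTS_PER_WORKER

print(structurally_impossible(5, 2))   # False: 2.5 participants per worker
print(structurally_impossible(80, 2))  # True: 40 participants per worker
```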
Every human has exactly 24 hours in a day. Every worker can only be in one place at a time. Every journey between locations takes time. The Time Budget engine enforces these immutable physical laws against claimed service delivery, revealing fabricated services that violate the laws of physics.
Worker WRK-0005 is billed for a 3-hour session in Parramatta ending at 2:00 PM, and another 3-hour session in Campbelltown starting at 2:10 PM. These locations are 35km apart. Even at highway speed, this journey takes 30+ minutes. The Time Budget engine flags this as a travel impossibility -- one of these sessions is fabricated.
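The travel-feasibility test in this example boils down to comparing the gap between sessions against the minimum driving time. A sketch, assuming a generous flat average road speed (the real engine would presumably use routing data):

```python
from datetime import datetime

# Illustrative travel-impossibility check. The 70 km/h average speed is an
# assumed, generous figure; a production system would use real routing times.
AVG_SPEED_KMH = 70

def travel_impossible(end_a: datetime, start_b: datetime, distance_km: float) -> bool:
    """True when the gap between sessions is shorter than the minimum travel time."""
    gap_hours = (start_b - end_a).total_seconds() / 3600
    needed_hours = distance_km / AVG_SPEED_KMH
    return gap_hours < needed_hours

# Session ends 2:00 PM in Parramatta; next starts 2:10 PM in Campbelltown, 35 km away.
end_a = datetime(2025, 3, 1, 14, 0)
start_b = datetime(2025, 3, 1, 14, 10)
print(travel_impossible(end_a, start_b, 35.0))  # True: 10-minute gap, ~30 minutes needed
```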
Every provider has a "DNA" -- a unique pattern of how they bill, who they serve, what services they deliver, and when they work. The Provider DNA engine converts each provider into a high-dimensional vector embedding, then uses PCA and distance metrics to detect mutations, cluster anomalies, and providers that suddenly change their fundamental nature.
A registered Occupational Therapy provider's DNA suddenly shifts -- their embedding shows a dramatic move toward SIL (Supported Independent Living) billing patterns within 2 months. Their service mix goes from 90% therapy to 70% SIL. This is a semantic role shift anomaly -- legitimate providers don't fundamentally change what they do overnight. This often indicates the provider has been taken over or is exploiting a new billing category.
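One simple way to quantify a shift like this is to represent the service mix as a vector and measure the distance between the "before" and "after" embeddings. The sketch below uses cosine distance on a hand-picked feature set; the service categories, mix figures, and 0.3 alert threshold are all illustrative assumptions, and the real engine's embedding is richer than a raw service mix.

```python
import numpy as np

# Hedged sketch: a provider's "DNA" as a normalised service-mix vector.
# A large month-over-month cosine distance signals a semantic role shift.
SERVICES = ["OT", "Psychology", "Speech", "SIL", "SupportWork"]  # illustrative features

def dna(mix: dict) -> np.ndarray:
    """Convert a {service: share} dict into a normalised vector."""
    v = np.array([mix.get(s, 0.0) for s in SERVICES])
    return v / v.sum()

def drift(before: dict, after: dict) -> float:
    """Cosine distance between two DNA vectors (0 = identical, up to 2)."""
    a, b = dna(before), dna(after)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

before = {"OT": 0.9, "SIL": 0.05, "SupportWork": 0.05}
after = {"OT": 0.2, "SIL": 0.7, "SupportWork": 0.1}
print(drift(before, after))  # well above an illustrative 0.3 alert threshold
```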
What does "normal care" actually look like? The Synthetic Simulation engine answers this by generating realistic care baselines for each participant based on their support needs level, disability type, and plan allocation. It then compares actual billing against these baselines to detect over-servicing, inflated frequency, and unnecessary service stacking.
A participant with "low" support needs (expected ~8 hours/week) is receiving 35 hours/week from a single provider -- 4x the expected baseline. They're simultaneously receiving Occupational Therapy, Psychology, Speech Therapy, and Support Work every day. The Synthetic Simulation engine flags this as service stacking -- there's no clinical reason for this intensity of concurrent services at a low support level.
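The baseline comparison in this example reduces to a ratio against expected hours for the participant's support level. The baseline hours and the 2x trigger below are assumed for illustration, not NDIS figures.

```python
# Illustrative over-servicing check against a synthetic baseline.
# Expected weekly hours per support level are assumed values, not NDIS policy.
EXPECTED_HOURS_PER_WEEK = {"low": 8, "medium": 20, "high": 35}
OVERSERVICE_RATIO = 2.0  # illustrative alert trigger

def over_serviced(support_level: str, actual_hours_per_week: float) -> bool:
    """True when actual billing is a multiple of the simulated care baseline."""
    baseline = EXPECTED_HOURS_PER_WEEK[support_level]
    return actual_hours_per_week / baseline >= OVERSERVICE_RATIO

print(over_serviced("low", 35))  # True: ~4.4x the ~8-hour baseline
print(over_serviced("low", 8))   # False: on baseline
```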
The most damaging NDIS fraud isn't individual -- it's organised. Provider cartels coordinate to maximise billing through shared staff, shared addresses, and circular referral patterns. The Collusion Detection engine uses graph community detection algorithms to identify these hidden networks, rendered as an interactive 3D collusion map.
The engine builds a weighted provider affinity graph. Edge weights combine shared staff (3x weight), shared locations (2x weight), and shared participants (1x weight). Greedy modularity community detection identifies tightly-connected clusters. High-density clusters with multiple shared resources are flagged as potential cartels.
Four providers (PRV-0003, PRV-0007, PRV-0009, PRV-0011) share 8 workers between them, operate from 3 common addresses, and bill the same 45 participants. The Collusion Detection engine identifies this as a provider cartel with 0.62 network density -- these providers are operationally the same entity hiding behind multiple registrations to maximise billing.
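The affinity-graph-plus-community-detection pipeline described above can be sketched with NetworkX's greedy modularity implementation. The shared-resource counts on each edge below are invented for the example; only the 3x/2x/1x weighting scheme comes from the description.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Sketch of the weighted provider affinity graph: shared staff (3x),
# shared locations (2x), shared participants (1x).
def affinity_weight(shared_staff: int, shared_locations: int, shared_participants: int) -> int:
    return 3 * shared_staff + 2 * shared_locations + 1 * shared_participants

G = nx.Graph()
# Illustrative cartel: four densely connected providers.
G.add_edge("PRV-0003", "PRV-0007", weight=affinity_weight(4, 2, 20))
G.add_edge("PRV-0007", "PRV-0009", weight=affinity_weight(3, 1, 15))
G.add_edge("PRV-0009", "PRV-0011", weight=affinity_weight(2, 2, 10))
G.add_edge("PRV-0003", "PRV-0011", weight=affinity_weight(3, 1, 18))
# Unrelated pair with a single shared participant relationship.
G.add_edge("PRV-0001", "PRV-0005", weight=affinity_weight(0, 0, 2))

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```

The four heavily interlinked providers land in one community, separate from the weakly linked pair -- that tight cluster is what gets flagged as a candidate cartel.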
Every single invoice is scored against multiple baselines simultaneously -- the provider's own history, the participant's pattern, peer group averages, workforce constraints, and geographic feasibility. The result is a composite fraud likelihood score that combines statistical deviation, network risk, and behavioural drift into a single actionable number.
Fraud Likelihood = (0.4 × Deviation) + (0.3 × Network Risk) + (0.3 × Behavioural Drift)
Invoice CLM-45892 charges $847 for a 9.5-hour session at $89/hour. The provider's average is 3.2 hours at $62/hour (hours z-score: +2.8 sigma). The participant normally receives 2-hour sessions (participant z-score: +3.1 sigma). The provider is also flagged by the Network Graph as part of a shared-staff cluster. Combined fraud likelihood: 78%. The fraud officer reviews the forensic evidence and issues a penalty with one click.
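A minimal sketch of the scoring, reading the formula as a weighted sum of components each normalised to [0, 1]. The component values below are hypothetical -- they are chosen only to illustrate how per-invoice signals roll up into a single likelihood.

```python
# Illustrative composite score: 0.4/0.3/0.3 weighted sum of normalised components.
def fraud_likelihood(deviation: float, network_risk: float, drift: float) -> float:
    """Combine normalised [0, 1] component scores into one likelihood."""
    return 0.4 * deviation + 0.3 * network_risk + 0.3 * drift

# Hypothetical component scores for an invoice like CLM-45892.
score = fraud_likelihood(deviation=0.9, network_risk=0.8, drift=0.6)
print(f"{score:.0%}")  # 78%
```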
While the 7 AI engines detect patterns automatically, experienced fraud officers often know specific red flags unique to their region, provider type, or current investigation. The Custom Rule Engine lets officers define their own detection rules with multiple conditions, operators, and AND/OR logic -- turning institutional knowledge into automated detection.
A fraud officer investigating SIL providers notices a pattern: providers billing exactly 8.0 hours at exactly $70/hour every day, across multiple participants. She creates a custom rule: hours == 8.0 AND rate == 70.0 AND service_type contains "SIL". The rule immediately finds 340 matching claims across 3 providers -- confirming a coordinated billing template being used to generate fraudulent invoices.
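A rule like the officer's can be modelled as a list of field/operator/value conditions combined with AND or OR logic. This is a minimal sketch of such an evaluator; the operator set, field names, and `Condition` type are illustrative, not the production rule schema.

```python
from dataclasses import dataclass

# Minimal sketch of a custom rule evaluator with AND/OR logic.
# Operators and field names are illustrative assumptions.
OPS = {
    "==": lambda a, b: a == b,
    "contains": lambda a, b: b in a,
}

@dataclass
class Condition:
    field: str
    op: str
    value: object

def matches(claim: dict, conditions: list, logic: str = "AND") -> bool:
    """Evaluate every condition against a claim and combine with AND/OR."""
    results = [OPS[c.op](claim[c.field], c.value) for c in conditions]
    return all(results) if logic == "AND" else any(results)

# The officer's rule: hours == 8.0 AND rate == 70.0 AND service_type contains "SIL".
rule = [
    Condition("hours", "==", 8.0),
    Condition("rate", "==", 70.0),
    Condition("service_type", "contains", "SIL"),
]
claim = {"hours": 8.0, "rate": 70.0, "service_type": "SIL - Daily Living"}
print(matches(claim, rule))  # True
```

Running the rule over the claims table is then a filter: every claim where `matches` returns True joins the 340-claim result set.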