Learn how to apply quantitative screening filters systematically in transfer pricing benchmarking. This guide covers industry codes, independence indicators, size thresholds, and a practical filter sequence for narrowing a broad company universe to a defensible candidate set.
Borys Ulanenko
CEO of ArmsLength AI

Before launching an external-company database search, check whether reliable internal comparables exist. The OECD's typical process reviews internal comparables before turning to external sources, and EU JTPF guidance states that internal comparables should be preferred where they exist and meet the comparability standard. The search for comparables follows the delineation of the transaction and does not drive it.
Quantitative screening filters narrow a database of thousands of companies to a manageable set of potential comparables before manual review. The key filters are: industry codes (NACE/SIC/NAICS), geographic location, independence indicators, size thresholds, financial data availability, and profitability requirements.
Apply filters in a logical sequence from broad to narrow—starting with industry and geography, then independence and status, then size and profitability. Document all criteria and the number of companies remaining at each step (per EU JTPF transparency recommendations). A well-executed quantitative screen yields a manageable set of candidates for qualitative review—often a few dozen companies—from which practitioners finalize a smaller, higher-quality set after manual review.
Use this as a checklist of the most common quantitative filters. Exact thresholds depend on the tested party and data universe—what matters is that each filter is economically motivated and traceable.
| Filter Category | What it screens | Common patterns (illustrative) | What tax authorities look for |
|---|---|---|---|
| Industry classification | Wrong line of business | Start with specific codes (e.g., 4-digit NACE), broaden only if needed | Evidence codes match real activities (not just labels) |
| Geography | Different market conditions | Start local; broaden to region if local data is sparse | Rationale for scope and any broadening steps |
| Independence | Non-independent (controlled) companies | BvD A/B included; C/D excluded; handle “U” explicitly | Clear independence logic and consistent application |
| Status / going concern | Inactive, bankrupt, distressed shells | Active only; exclude bankruptcies/liquidations | A defensible “going concern” screen |
| Data availability | Missing years/fields for your PLI | Require minimum years + required fields for the chosen PLI | Consistent year set; no silent gaps |
| Size / scale | Material scale differences | Revenue/employee bands anchored to tested party | Why scale matters + why thresholds are reasonable |
| Intangible / R&D intensity | Companies with materially different intangible profiles | R&D-to-revenue or intangible-assets-to-total-assets ratios | Whether intangible intensity distorts margin comparisons |
| Export-sales intensity | Companies with materially different market orientation | Export-to-total-revenue ratio anchored to tested party | Rationale for why export mix affects comparability |
| Inventory / working-capital profile | Companies with different capital structures | Inventory-to-revenue or working-capital-to-revenue ratios | Consistency of capital structure with tested party |
| Diagnostic ratios | Outliers or functionally different companies | Ratios such as OPEX/revenue, COGS/revenue used as refinement tools | Evidence that diagnostic ratios improve reliability |
| Special situations | Start-ups, bankrupt, or liquidating companies | Exclude companies in first years of operation, under insolvency, or winding down | A defensible screen for non-going-concern entities |
| Profitability pattern | Persistent distress or extreme outcomes | Exclude all-year loss-makers; investigate one-off losses | Avoid “winner-only” bias; document loss-maker logic |
Quantitative screening is the first step in a comparables search. Its purpose is to filter a database universe—often containing millions of companies—down to a few dozen or hundred candidates that warrant individual examination.
Why It Matters: Without systematic quantitative filters, you'd either (a) review thousands of companies manually, which is impractical, or (b) apply filters arbitrarily, which is indefensible. Quantitative screening ensures efficiency and consistency.
The process uses objective, measurable criteria that databases can apply automatically: industry codes, geography, independence indicators, size thresholds, data availability, and profitability.
After quantitative filters, the remaining companies undergo manual (qualitative) screening—reviewing business descriptions, functional profiles, and financial details to confirm true comparability.
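The sequential screen described above can be sketched as a simple pipeline that logs the pool size after each step (a minimal sketch; the company records and field names such as `nace` and `indep` are hypothetical illustrations, not a database schema):

```python
# Illustrative sketch: apply quantitative filters in sequence and record the
# number of companies remaining after each step, for the search strategy report.
companies = [
    {"name": "A", "nace": "4652", "country": "DE", "indep": "A", "revenue_m": 30},
    {"name": "B", "nace": "4652", "country": "DE", "indep": "C", "revenue_m": 40},
    {"name": "C", "nace": "4652", "country": "FR", "indep": "B", "revenue_m": 8},
    {"name": "D", "nace": "2620", "country": "DE", "indep": "A", "revenue_m": 12},
]

filters = [
    ("Industry: NACE 4652",   lambda c: c["nace"] == "4652"),
    ("Geography: DE/AT/CH",   lambda c: c["country"] in {"DE", "AT", "CH"}),
    ("Independence: BvD A/B", lambda c: c["indep"] in {"A", "B"}),
    ("Revenue: EUR 5M-100M",  lambda c: 5 <= c["revenue_m"] <= 100),
]

pool = companies
log = [("Starting pool", len(pool))]
for label, keep in filters:
    pool = [c for c in pool if keep(c)]
    log.append((label, len(pool)))  # counts feed the documentation appendix

for label, n in log:
    print(f"{label}: {n} companies remaining")
```

Keeping the counts in a structured log, rather than only in the database session, is what makes the funnel reproducible later.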
Industry filters ensure only companies engaged in comparable economic activities are considered. They rely on standardized classification codes.
| System | Origin | Digits | Coverage |
|---|---|---|---|
| NACE Rev.2 | European Union | 4 | EU-standard; used in Amadeus, Orbis |
| SIC | United States | 4 | U.S. Standard Industrial Classification |
| NAICS | North America | 6 | U.S./Canada; more modern than SIC |
| ISIC | United Nations | 4 | International standard; less common in TP databases |
Most databases support multiple systems. Orbis, for example, allows searching by NACE Rev.2, NAICS, US SIC, and proprietary BvD sectors.
Best practice: Use the most specific code that captures the tested party's activity—typically 4-digit level.
Using 4-digit codes yields a homogeneous set performing very similar functions. Broader 2-digit codes introduce "false positives"—companies in the same sector but with different economics.
When to Broaden: Only expand to 3-digit or 2-digit codes if the specific code yields insufficient results. If NACE 46.52 returns only 3 companies in your target geography, step back to 46.5 before abandoning the search.
Companies have one primary code (main business) and may have secondary codes (other activities). Filter by primary code initially—this ensures the company's core business matches your tested party. If results are too narrow, you can include secondary codes, but expect more manual vetting.
Industry codes aren't perfectly precise. They're often self-reported or assigned by analysts and may be outdated. A company's registered code doesn't always reflect its actual current business. The EU JTPF notes industry classification inconsistencies and recommends combining codes with other elements.
Best practice: Codes are a starting point; validate with keywords and qualitative review of business descriptions. Don't rely on code granularity alone.
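The primary-code-first approach with a secondary-code fallback can be sketched as follows (the `MIN_CANDIDATES` threshold and field names are illustrative assumptions, not database conventions):

```python
# Hypothetical sketch: filter on the primary NACE code first, and only add
# secondary-code matches if the primary screen is too narrow.
MIN_CANDIDATES = 10  # illustrative threshold for "too narrow"

def screen_by_code(companies, code):
    primary = [c for c in companies if c["primary_nace"] == code]
    if len(primary) >= MIN_CANDIDATES:
        return primary, "primary only"
    # Broaden: accept companies listing the code as a secondary activity.
    # As noted in the text, these need extra manual vetting.
    broadened = [c for c in companies if code in c.get("secondary_nace", [])]
    return primary + broadened, "primary + secondary (needs extra vetting)"

companies = [
    {"name": "A", "primary_nace": "4652", "secondary_nace": []},
    {"name": "B", "primary_nace": "4651", "secondary_nace": ["4652"]},
]
pool, note = screen_by_code(companies, "4652")
print(len(pool), note)
```

Recording which branch was taken (primary only vs. broadened) preserves the audit trail for why secondary-code companies entered the pool.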
Geographic filters limit the search to companies operating in relevant markets. Market differences—competition, cost levels, consumer behavior—can significantly affect profitability.
Start with the tested party's country. Many jurisdictions prefer local comparables where available, though explicit legal requirements vary:
| Jurisdiction | Preference |
|---|---|
| Poland | Domestic comparables preferred where reliable; regional/EU benchmarks commonly used when domestic data is insufficient (no explicit legal "local-first" rule) |
| India | No explicit legal preference for domestic over foreign comparables; choice depends on the tested party and facts. In practice, bypassing available local comparables often requires stronger justification |
| Japan | Domestic comparables are preferred in Japan's framework; foreign comparables may still be used where they are more reliable or where domestic data is insufficient |
| EU general | Pan-European acceptable, especially if local data is sparse; EU JTPF notes pan-European searches shouldn't be rejected solely for being non-domestic |
If a single-country search yields insufficient comparables (a very small set may justify broadening when it is not sufficiently reliable for the facts and method), expand to the region.
Whatever scope you choose, document why:
"A local search in [Country X] yielded only 3 independent companies after all filters. Therefore, the search was broadened to include [Country Y and Z], which share similar economic conditions and wage structures with [Country X]."
For detailed guidance, see our Regional vs Local Comparables Guide.
Independence filters exclude companies that are part of multinational groups. This is critical—comparables analysis relies on observing uncontrolled results. A subsidiary's financials might be influenced by transfer pricing, making them unsuitable comparables.
Bureau van Dijk databases (Orbis, Amadeus) use a standardized indicator:
| Code | Meaning | Include? |
|---|---|---|
| A | No shareholder owns >25% | ✅ Yes |
| B | No shareholder >50%, but at least one 25-50% | ✅ Yes |
| C | Majority owned (>50%) | ❌ No |
| D | Direct subsidiary (>50% direct ownership) | ❌ No |
| U | Unknown ownership | ⚠️ Case-by-case |
The 50% threshold balances data availability with comparability. Applying stricter independence criteria (e.g., requiring no shareholder >25%) can materially reduce the available sample—sometimes to the point where a meaningful analysis becomes difficult.
Why 50%? A company with a 40% minority shareholder may remain in the initial pool under a >50% starting-point test, subject to further review of governance structure and related-party influence. Some jurisdictions apply stricter thresholds (e.g., less than 25% ownership).
Companies with "U" status have no ownership information available. A common conservative approach is to exclude them by default—they could be undisclosed subsidiaries. If your sample is critically small, you might include U-rated companies but investigate each manually. Document your reasoning.
Bottom line: Independence is a core screening criterion for external company comparables. In BvD terms, analysts typically include A/B and exclude C/D, subject to local rules and the specific facts of the analysis.
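A minimal sketch of the include-A/B, exclude-C/D convention, with "U" routed to manual review rather than silently included or excluded (the decision labels are illustrative):

```python
# Sketch of the independence screen using BvD indicator codes, as summarized
# in the table above.
def independence_decision(code: str) -> str:
    if code in {"A", "B"}:
        return "include"    # no majority shareholder
    if code in {"C", "D"}:
        return "exclude"    # majority owned / direct subsidiary
    if code == "U":
        return "review"     # unknown ownership: document the reasoning
    raise ValueError(f"Unknown BvD independence indicator: {code!r}")

assert independence_decision("A") == "include"
assert independence_decision("C") == "exclude"
assert independence_decision("U") == "review"
```

Making "U" a distinct outcome forces an explicit, documented choice instead of an accidental default.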
Size filters exclude companies that are dramatically larger or smaller than the tested entity. Scale differences affect profitability—larger companies may have economies of scale; very small companies may be volatile or unsustainable.
These ranges are frequently used in practice; adjust based on tested party characteristics and local norms:
| Filter | Common Ranges | Purpose |
|---|---|---|
| Minimum revenue | €1M - €10M | Exclude micro-enterprises |
| Maximum revenue | 5-10x tested party's revenue | Exclude companies with different scale economics |
| Employees | >10 employees (optional) | Exclude shell companies or dormant entities |
The tested party's size should guide your thresholds. For a €25M distributor, a band of roughly €5M-€100M might be reasonable; if the tested party is €500M, you might set a €50M minimum and no maximum.
Size filters significantly reduce candidate numbers. If moving from €1M to €5M minimum drops your pool from 200 to 30, that's fine. If it drops from 30 to 3, reconsider.
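One way to anchor revenue thresholds to the tested party's scale, assuming the 5-10x pattern from the table above (the 5x factor and €1M floor are illustrative assumptions, not rules; document why your chosen band improves comparability):

```python
# Illustrative: derive a revenue band from the tested party's revenue.
def revenue_band(tested_party_revenue_m: float,
                 scale_factor: float = 5.0,
                 floor_m: float = 1.0):
    """Return (lower, upper) revenue thresholds in EUR millions."""
    lower = max(floor_m, tested_party_revenue_m / scale_factor)
    upper = tested_party_revenue_m * scale_factor
    return lower, upper

lo, hi = revenue_band(25.0)  # EUR 25M tested party
print(f"Revenue filter: EUR {lo:.0f}M - EUR {hi:.0f}M")
```

Tying both ends of the band to the same tested-party figure makes the threshold easy to defend: it moves with the facts instead of being a free-floating number.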
These filters ensure companies have sufficient data for analysis. There's no point including a company if it lacks the financial metrics you need.
Common practice: Require at least 3 years of financial data where available. Transfer pricing analysis often uses multi-year data to understand business cycles and improve comparability—though this is context-dependent, not a universal minimum. Hungary's new transfer pricing decree (45/2025 NGM), published on 23 December 2025, applies mandatorily for tax years starting in 2026 and may be applied early for 2025. As a main rule, the review period covers the three years preceding the year under review, with data available for all three years.
For TNMM/CPM analysis, you need the fields that feed your chosen PLI for each year of the analysis period: typically revenue and operating profit (EBIT), plus operating expenses or assets depending on the indicator.
Set database criteria to exclude companies with null values in critical fields for your analysis years.
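A sketch of the availability check (the field and year names are hypothetical; adapt them to your database's schema and your chosen PLI):

```python
# Sketch: require a full set of years with non-null values in the fields the
# chosen PLI needs. Companies failing the check are dropped before analysis.
REQUIRED_FIELDS = ["revenue", "ebit"]   # e.g. for an operating-margin PLI
REQUIRED_YEARS = [2021, 2022, 2023]

def has_sufficient_data(company: dict) -> bool:
    for year in REQUIRED_YEARS:
        record = company.get("financials", {}).get(year)
        if record is None:
            return False  # missing year entirely
        if any(record.get(f) is None for f in REQUIRED_FIELDS):
            return False  # a required field is null for this year
    return True

complete = {"financials": {2021: {"revenue": 10, "ebit": 1},
                           2022: {"revenue": 11, "ebit": 1},
                           2023: {"revenue": 12, "ebit": 2}}}
gap = {"financials": {2021: {"revenue": 10, "ebit": None},
                      2022: {"revenue": 11, "ebit": 1},
                      2023: {"revenue": 12, "ebit": 2}}}
print(has_sufficient_data(complete), has_sufficient_data(gap))
```

An explicit check like this avoids the "silent gaps" that tax authorities look for: every exclusion for missing data is deterministic and reproducible.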
Entity-level (unconsolidated) financials are often preferable where they better match the tested party, but consolidated accounts should not be automatically excluded at the quantitative stage. If consolidated comparables remain in the pool, accept them only where comparability can be demonstrated; consolidated accounts involving more than 10 subsidiaries should generally be excluded.
Profitability filters screen out companies with problematic financial patterns. However, they're contentious—excluding all loss-makers can introduce upward bias.
| Filter | Description | Risk |
|---|---|---|
| All-year loss-makers | Exclude if losses in every year of analysis period | Reasonable |
| Any-year loss-makers | Exclude if loss in any single year | Too strict—may bias results |
| Multi-year average negative | Exclude if 3-year average profit < 0 | Reasonable |
| At most 1 loss year in 3 | Allow companies with one bad year | Balanced approach |
Bias Risk: Excluding all loss-makers can create upward bias—you're essentially selecting only "winners." The Italian Supreme Court (Decision No. 19512, 2024) ruled that potentially comparable entities cannot be excluded from the comparability analysis solely because they have low profits or losses—the exclusion must be based on substantive economic analysis, not automatic rules.
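The "balanced" rules from the table, excluding only persistent loss-makers and flagging one-off losses for investigation rather than automatic rejection, can be sketched as follows (the thresholds are policy choices, not rules):

```python
# Sketch: loss-maker screen that avoids "winner-only" bias by investigating,
# rather than excluding, companies with a single loss year.
def loss_screen(ebit_by_year: list[float], max_loss_years: int = 1) -> str:
    loss_years = sum(1 for e in ebit_by_year if e < 0)
    if loss_years == len(ebit_by_year):
        return "exclude"      # losses in every year of the analysis period
    if loss_years > max_loss_years:
        return "exclude"      # more loss years than the chosen policy allows
    if loss_years > 0:
        return "investigate"  # one-off loss: cyclical or extraordinary?
    return "include"

assert loss_screen([2.0, 1.5, 3.0]) == "include"
assert loss_screen([-1.0, 2.0, 3.0]) == "investigate"
assert loss_screen([-1.0, -2.0, -0.5]) == "exclude"
```

Returning an "investigate" outcome, instead of a binary keep/drop, mirrors the substantive-analysis requirement in the case law cited above.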
These are policy choices, not universal rules; adjust them based on case-specific analysis consistent with OECD guidance.
For detailed guidance on handling loss-makers, see our Loss-Making Comparables Guide.
Apply filters in a logical order, from broad to narrow; making the broadest, most relevant cuts first maximizes efficiency. Other sequences can be equally defensible if documented and consistently applied.
This sequence ensures that when you apply profitability filters, you're evaluating companies of similar scale in the same industry—not mixing apples and oranges.
Note on Sequence: The EU JTPF supports a step-based approach with transparent documentation of criteria and outcomes, but does not canonize any specific ordering. What matters is consistency and traceability—your sequence may vary depending on database functionality and tested party facts.
An accept/reject matrix is the simplest way to make your screening defensible. It shows that companies were included or excluded based on predefined criteria, not outcome-driven judgment.
| Company | Stage (Quant/Manual) | Key reason | Decision | Evidence / note |
|---|---|---|---|---|
| Company A | Quant | Independence | Reject | BvD indicator C (majority owned) |
| Company B | Quant | Data availability | Reject | Missing EBIT for an analysis year |
| Company C | Manual | Functional mismatch | Reject | Description indicates manufacturing, not distribution |
| Company D | Manual | Accept | Accept | Functionally similar; no disqualifying issues found |
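A matrix like the one above is easiest to maintain as a structured decision log that can be exported to the documentation appendix (a minimal sketch; the field names are illustrative):

```python
# Sketch: record each accept/reject decision with its stage, reason, and
# evidence, so the matrix can be regenerated for the benchmarking study.
decisions = []

def record(company, stage, reason, decision, evidence):
    decisions.append({"company": company, "stage": stage,
                      "reason": reason, "decision": decision,
                      "evidence": evidence})

record("Company A", "Quant", "Independence", "Reject",
       "BvD indicator C (majority owned)")
record("Company D", "Manual", "Functionally similar", "Accept",
       "No disqualifying issues found")

# Render one pipe-delimited matrix row per decision
for d in decisions:
    print(" | ".join(d.values()))
```

Because every rejection carries a reason and evidence, the log demonstrates that inclusion was criteria-driven rather than outcome-driven.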
Symptom: Zero or near-zero companies remain after screening.
Cause: Too many restrictive criteria stacked together—narrow industry code AND single country AND strict size AND no losses ever.
Fix: Relax filters progressively. Check which filter eliminated the most companies and consider whether it's too strict.
Symptom: Hundreds or thousands of companies remain, requiring impossible manual review.
Cause: Broad 2-digit industry code, no size filter, no independence filter.
Fix: Apply sensible thresholds. Add revenue minimum, ensure independence is checked.
Symptom: Companies in the set don't actually do what the tested party does.
Cause: Relying on codes that don't match actual activities, or codes that were accurate years ago but not now.
Fix: Verify code by reading descriptions of companies in that category. Consider multiple codes if the tested party spans activities.
Symptom: Final comparables include subsidiaries of large groups.
Cause: Database didn't have independence filter applied, or filter was set incorrectly.
Fix: Always apply independence filter. For public companies, manually check for parent ownership.
Symptom: Revenue cutoff of exactly €7.3M with no explanation.
Cause: Setting thresholds to achieve a desired outcome rather than based on comparability rationale.
Fix: Every threshold should be tied to tested party characteristics or standard practice. Document reasoning.
This illustrative example shows how filters progressively narrow the candidate pool. Actual counts vary by database version, year, and specific filter settings.
Tested Party: German limited-risk distributor of electronic components, €25M annual revenue.
Database: Orbis/Amadeus
| Step | Filter Applied | Companies Remaining (Illustrative) |
|---|---|---|
| 0 | Starting pool: NACE 46.52 worldwide | ~3,500 |
| 1 | Geography: Germany, Austria, Switzerland | ~800 |
| 2 | Independence: A and B only (no >50% owner) | ~350 |
| 3 | Status: Active, incorporated before 2020 | ~330 |
| 4 | Data availability: ≥3 years (2021-2023) | ~200 |
| 5 | Revenue: €5M - €100M | ~80 |
| 6 | Profitability: Not loss-making all 3 years | ~60 |
| Output | Proceed to manual screening | ~60 |
Thresholds should be set to improve comparability, not to force an outcome. The easiest way to defend them is to document the rationale for each threshold and its impact on the candidate pool. In the illustrative funnel above, the €5M-€100M revenue band alone cut the pool from roughly 200 to 80 companies.
That reduction is normal. But if a single threshold wipes out almost everything, you should reconsider whether it’s too strict (or whether an earlier filter was too narrow). Your audit defense is the narrative: “we set thresholds to align scale and ensure data completeness, and we monitored the impact at each step.”
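The "monitor the impact at each step" advice can be sketched with the illustrative funnel counts from the table above (the 75% flag threshold is an arbitrary illustration, not a standard):

```python
# Sketch: flag filter steps that remove a disproportionate share of the pool,
# using the illustrative funnel counts from the worked example.
funnel = [
    ("Starting pool (NACE 46.52 worldwide)", 3500),
    ("Geography: DE/AT/CH",                   800),
    ("Independence: A/B only",                350),
    ("Status: active, pre-2020",              330),
    ("Data: >=3 years",                       200),
    ("Revenue: EUR 5M-100M",                   80),
    ("Profitability: not all-year losses",     60),
]

for (prev_label, prev_n), (label, n) in zip(funnel, funnel[1:]):
    drop_pct = 100 * (prev_n - n) / prev_n
    flag = "  <- check if too strict" if drop_pct > 75 else ""
    print(f"{label}: {prev_n} -> {n} ({drop_pct:.0f}% removed){flag}")
```

A per-step drop report like this surfaces exactly the troubleshooting symptom described earlier: one filter quietly eliminating almost everything.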
"The comparables search was conducted using Orbis (accessed December 2025). Companies were selected based on NACE Rev.2 code 46.52 (wholesale of electronic equipment), limited to DACH region to ensure market comparability. Only independent companies (BvD Independence A or B) with at least 3 years of financial data were included. Revenue thresholds of €5M-€100M were applied to align with the tested party's scale (€25M). Persistent loss-makers (losses in all three years) were excluded. The quantitative screening yielded 60 companies for manual review."
Apply filters from broad to narrow: industry → geography → independence → status → data availability → size → profitability. This sequence ensures maximum efficiency—you eliminate large swaths of irrelevant companies early (different industries, subsidiaries) before fine-tuning on size and profitability. Independence and data availability are objective, easy-to-apply filters that should come before more subjective decisions.
Many practitioners aim for a few dozen companies for manual review, from which they finalize a smaller, higher-quality set. The reliability of the final set matters more than hitting a target count—there is no required minimum or maximum number of comparables (EU JTPF explicitly states that even one or two comparables may be acceptable in some cases). If your set after quantitative filters is very small, consider broadening geography or industry code if doing so would improve reliability. If you have more than 200, tighten size thresholds or add stricter criteria—manual review of hundreds is impractical and suggests under-filtering.
A = No shareholder owns >25% (truly independent). B = No shareholder >50%, but one owns 25-50% (minority-held, still acceptable). C = Majority owned by a shareholder >50% (subsidiary—exclude). D = Direct subsidiary with >50% direct ownership (exclude). Standard practice: include A and B, exclude C and D. The U (Unknown) category requires case-by-case judgment.
Use the system native to your database and jurisdiction. For European searches (Amadeus, Orbis Europe), NACE Rev.2 is standard. For U.S. searches (Compustat), SIC or NAICS work well—NAICS is more modern. Most global databases support multiple systems and can cross-reference. The key is using 4-digit specificity in whatever system you choose.
There is no fixed OECD or EU turnover rule. Revenue or employee thresholds should be anchored to the tested party's scale and business model, and justified by how they improve comparability. For example, for a €25M tested party, a range such as €5M-€100M might be reasonable—but treat such figures as illustrations, not defaults. Always document the rationale: explain why your chosen thresholds improve comparability for your specific facts.
First, verify your filters aren't too restrictive—check if any single filter caused a dramatic drop. Then consider: (1) expanding geography to a broader region, (2) including a related industry code, (3) lowering minimum revenue, (4) accepting companies with one loss year instead of zero. Document any broadening: "Local search yielded insufficient comparables; expanded to pan-European as permitted under OECD guidelines when local data is sparse."
No. Automatically excluding all loss-makers is too aggressive and can bias results upward; potentially comparable entities should not be rejected solely for having losses. Exclude only persistent loss-makers (losses in all years of the analysis period) and investigate single-year losses to determine whether they reflect normal business cycles or extraordinary circumstances. Document your analysis for each loss-maker considered.
Create a search strategy report listing: (1) database used and search date, (2) each filter criterion with threshold, (3) rationale for each threshold, (4) number of companies remaining after each filter, and (5) final count proceeding to manual review. This documentation should allow someone to replicate your search. Many practitioners include this as a dedicated section in the benchmarking study or as an appendix.
The OECD Transfer Pricing Guidelines provide general guidance on comparable selection and the screening process.