
Passive Attack Surface Mapping from a Single Domain: How It Works

May 30, 2025
|
by Cyber Analyst

➤Summary

Understanding your digital exposure is essential. One of the most overlooked sources of risk is your external attack surface — the sum of all internet-facing assets attackers could exploit. A well-designed attack surface tool should address this by performing passive reconnaissance starting from a single domain name.

Unlike active scanning tools, this approach gathers intelligence without touching any of the target’s infrastructure directly. It relies entirely on public sources such as certificate transparency logs, WHOIS records, and internet-wide search engines to build a comprehensive map of an organization’s footprint.

Objective of an attack surface mapping tool

Starting with just a main domain (e.g., example.com), the goal is to:

  • Discover all possible subdomains and related IP addresses.
  • Identify IP ranges associated with the organization.
  • Collect detailed metadata on exposed services without sending a single packet to the target systems.
  • Link all findings together to provide a coherent picture of what is visible to the internet — and to potential attackers.

Why Client Feedback and Traditional Vulnerability Scanning Aren’t Enough

Many organizations assume they already know the full extent of their external assets — or believe that running vulnerability scanners like Nessus is sufficient for managing cyber risk.

Unfortunately, that’s rarely the case.

Why relying on client input alone falls short

When you ask a client “Which domains and servers do you own?”, the answer is often:

  • Incomplete — Many systems are forgotten, undocumented, or unknown to security teams.
  • Outdated — DNS entries change, cloud infrastructure expands, and staging environments multiply.
  • Inaccurate — The person providing the input may not be aware of external vendors, legacy systems, or developer-deployed tools.

Common blind spots include:

  • Shadow IT: Employees deploy SaaS tools, dev environments, or third-party services without formal approval.
  • Mergers & acquisitions: Domains from acquired entities may still be online — and vulnerable.
  • Cloud mismanagement: DevOps teams expose services in AWS, Azure, or GCP that never get reported.
  • DNS delegation and vendor sprawl: Subdomains are managed externally with no central visibility.

Why Nessus (and other scanners) aren’t enough

Vulnerability scanners like Nessus are essential — but they only work once you know what to scan.
They require a list of targets (IP addresses, domains, or ranges) to begin with.

If you miss part of your infrastructure during asset discovery, those assets will remain unscanned and unprotected. That’s where an attack surface monitoring tool comes in. Passive attack surface enumeration provides:

  • A hacker’s perspective: What an attacker would see by simply observing public data.
  • Speed and scale: Can be run in minutes across many domains — no risk, no waiting for scan windows.
  • No intrusion: No packets are sent to target systems. There’s no risk of breaking anything or triggering alerts.
  • Inventory validation: Highlights unknown or forgotten assets the client didn’t mention.

Once an attack surface monitoring solution has mapped the full attack surface passively, the resulting inventory should become the input for active vulnerability scanning tools like Nessus. In the end, both steps are needed.

Step-by-Step Breakdown of an attack surface detection solution

Step 1: Discover Subdomains

The process begins with subdomain enumeration using a variety of passive methods:

  • Query APIs like C99, SecurityTrails, and VirusTotal for known subdomains.
  • Scan Certificate Transparency logs to identify domains with issued SSL certificates.
  • Use DNSDumpster scraping to retrieve indexed DNS records.
  • Extract CNAME records to detect third-party services and SaaS dependencies (shadow IT).

This results in a broad map of possible FQDNs used by the organization.
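
As a rough illustration, Certificate Transparency alone already yields a lot: the public crt.sh JSON endpoint can be queried without an API key (SecurityTrails, VirusTotal, and C99 work similarly but require keys). A minimal sketch in Python:

```python
import requests

def ct_subdomains(domain: str) -> set[str]:
    """Return subdomains of `domain` seen in issued certificates (via crt.sh)."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value may contain several names separated by newlines
        for name in entry.get("name_value", "").splitlines():
            name = name.lstrip("*.").lower()
            if name.endswith(domain):
                names.add(name)
    return names

print(sorted(ct_subdomains("example.com")))
```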

Step 2: Resolve to IP Addresses

Each discovered domain is then resolved to its IP address using DNS. Entries that no longer resolve are discarded, so only currently active hosts are carried into the next steps.
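
A minimal resolution step, using only the Python standard library (dnspython would give more control over record types and resolvers), might look like this:

```python
import socket

def resolve(hostname: str) -> set[str]:
    """Return the IPv4/IPv6 addresses a hostname currently resolves to."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return set()  # stale entry: drop it from the active inventory
    return {info[4][0] for info in infos}

print(resolve("www.example.com"))
```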

Step 3: Map Ownership with WHOIS and RDAP

IP ownership is validated with:

  • WHOIS queries (including Swiss RDAP for .ch and .li).
  • WHOAPI to retrieve structured registrant metadata.

This confirms which IPs are controlled by the organization, and gathers registrant names, countries, and ISP information.
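
As a sketch, the public rdap.org bootstrap service can route an IP lookup to the responsible registry. The fields below (handle, name, country, startAddress, endAddress) are standard RDAP network attributes, though real responses vary by registry:

```python
import requests

def ip_ownership(ip: str) -> dict:
    """Return basic ownership details for an IP via RDAP (rdap.org bootstrap)."""
    resp = requests.get(f"https://rdap.org/ip/{ip}", timeout=30)
    resp.raise_for_status()
    data = resp.json()
    return {
        "handle": data.get("handle"),
        "name": data.get("name"),       # network / organization name
        "country": data.get("country"),
        "range": f"{data.get('startAddress')} - {data.get('endAddress')}",
    }

print(ip_ownership("203.0.113.10"))  # placeholder IP from a documentation range
```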

Step 4: Expand to Full IP Ranges

Once individual IP addresses have been identified, the next step is to expand the scope by identifying entire IP ranges (CIDRs) that may belong to the same organization.

This is done by querying Regional Internet Registries (RIRs) — such as ARIN (North America), RIPE NCC (Europe), APNIC (Asia-Pacific), LACNIC (Latin America), and AFRINIC (Africa) — using the organization name or other WHOIS registration fields.

Why this works: When companies own their own IP space (especially larger ones, universities, or ISPs), they typically register IP blocks directly with a regional registry. These registrations are public and include:

  • The organization name
  • Contact information (email, phone)
  • The size and range of the IP block
  • Sometimes specific use-cases (e.g., hosting, corporate use)

By searching for the organization name in RIR databases, it’s possible to find additional subnets beyond the ones already discovered — potentially revealing:

  • Legacy infrastructure
  • Remote offices
  • Unused but still routable IPs
  • Services spun up outside the core IT visibility

This can significantly increase coverage of the external attack surface.
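
As an illustration, the sketch below assumes the RIPE database REST search endpoint and its JSON object layout; ARIN, APNIC, LACNIC, and AFRINIC expose similar but not identical search interfaces:

```python
import requests

def ripe_inetnums(org_name: str) -> list[str]:
    """Return inetnum ranges whose RIPE DB records mention `org_name`."""
    resp = requests.get(
        "https://rest.db.ripe.net/search.json",
        params={"query-string": org_name, "type-filter": "inetnum", "flags": "no-referenced"},
        timeout=30,
    )
    resp.raise_for_status()
    ranges = []
    for obj in resp.json().get("objects", {}).get("object", []):
        for attr in obj.get("attributes", {}).get("attribute", []):
            if attr.get("name") == "inetnum":
                ranges.append(attr.get("value"))
    return ranges

print(ripe_inetnums("Example Corp"))
```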

When it does not work

In many modern IT environments, however, organizations don’t manage their own IP space. Instead, they:

  • Host everything in the cloud (AWS, Azure, GCP)
  • Use shared hosting or CDNs
  • Outsource infrastructure to MSPs or external IT providers

In these cases, the IP addresses used by the company will belong to the provider, and WHOIS/RIR entries will reflect the provider’s organization — not the client.

For example:

  • A web server hosted on AWS will show Amazon as the owner in RIPE/APNIC/ARIN.
  • A small business website hosted by a marketing agency will show the agency or hoster.

This means IP-range expansion will not reveal anything further unless the organization has its own assigned IP blocks.

How an attack surface detection tool should handle this

To avoid false positives, an attack surface enumeration tool needs to check:

  • If the WHOIS registrant matches the expected organization
  • If the name is clearly that of a hosting provider or cloud service (via a predefined list)
  • If ownership patterns are shared by many unrelated domains (a sign of shared hosting)

Only confirmed organizational blocks are used to expand the footprint.
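
A simplified version of that check might look like the following sketch; the provider list is illustrative, not exhaustive:

```python
# Illustrative (not exhaustive) list of hosting/cloud providers whose IP space
# should not be attributed to the client organization.
KNOWN_PROVIDERS = {"amazon", "aws", "microsoft", "azure", "google", "cloudflare",
                   "akamai", "ovh", "hetzner", "digitalocean"}

def is_org_owned(registrant: str, org_name: str) -> bool:
    """True only if the WHOIS registrant looks like the target organization."""
    registrant = registrant.lower()
    if any(provider in registrant for provider in KNOWN_PROVIDERS):
        return False                        # shared hosting or cloud provider space
    return org_name.lower() in registrant   # conservative: require a name match
```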


Step 5: Reverse Lookup and Activity Check

With IP ranges known:

  • Reverse DNS identifies active, named services.
  • This highlights which segments of a range are actually in use, which is especially valuable in large blocks (e.g., a /16).

No scanning is performed — only passive reverse DNS lookups.
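
A minimal sweep of a discovered range could be as simple as a PTR lookup per address; only DNS queries are sent, never packets to the hosts themselves:

```python
import ipaddress
import socket

def reverse_sweep(cidr: str) -> dict[str, str]:
    """Map IP -> PTR name for addresses in `cidr` that have a reverse DNS entry."""
    named = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            named[str(ip)] = socket.gethostbyaddr(str(ip))[0]
        except OSError:
            continue  # no PTR record (or lookup failed): likely unused address
    return named

print(reverse_sweep("198.51.100.0/28"))
```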

Step 6: Threat Intelligence Enrichment

Without touching the targets, an attack surface mapping tool should then query Shodan, ZoomEye, and similar platforms via API for each IP:

  • Open ports
  • CVEs and exposed services
  • TLS configs and banners
  • Technologies (Apache, Nginx, RDP, Elasticsearch, etc.)

This gives deep, contextual visibility into exposed services without generating alerts or legal risk.
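
As a sketch, each IP can be enriched via the Shodan host endpoint (an API key is required; ZoomEye and similar services work the same way with different URLs). The field names reflect Shodan's typical host response and may vary:

```python
import os
import requests

SHODAN_KEY = os.environ["SHODAN_API_KEY"]  # assumes a key is set in the environment

def enrich(ip: str) -> dict:
    """Pull open ports, known CVEs, and product banners for one IP from Shodan."""
    resp = requests.get(
        f"https://api.shodan.io/shodan/host/{ip}",
        params={"key": SHODAN_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    host = resp.json()
    return {
        "ports": host.get("ports", []),
        "vulns": host.get("vulns", []),
        "products": [svc.get("product") for svc in host.get("data", []) if svc.get("product")],
    }
```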

Step 7: Crawl Public Web Pages for More Domains

Before initiating any crawl, an attack surface tool should first perform a lightweight check via threat intelligence APIs (e.g., Shodan, ZoomEye) to verify whether port 80 (HTTP) or 443 (HTTPS) is open on the resolved IP address. A quick TCP connect check on ports 80/443 is also recommended as confirmation. Only if a domain has one of these ports exposed — indicating that a web service is actually running — should the crawler proceed.
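
A minimal version of that pre-crawl TCP check, with a short timeout so it stays lightweight, could look like this:

```python
import socket

def has_web_service(ip: str, ports=(80, 443), timeout: float = 3.0) -> bool:
    """Return True if a TCP connection succeeds on any of the given web ports."""
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            continue  # port closed, filtered, or host unreachable
    return False
```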

Crawl Logic

For eligible domains, the crawler should passively download:

  • HTML content
  • JavaScript files
  • Meta tags
  • Internal and external links

From this, it extracts:

  • Additional subdomains referenced in scripts, links, or AJAX calls
  • Mentions of 3rd-party services (e.g., APIs, analytics, CDNs)
  • Forgotten or shadowed interfaces like /admin, /staging, or internal dashboards

This step can often uncover development portals, non-indexed endpoints, or old tools that were not listed in DNS or certificate transparency logs.
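
As a rough sketch, the extraction step can be little more than a hostname regex applied to the downloaded HTML and JavaScript, filtered to the organization's root domain (content rendered only client-side will be missed, as discussed below):

```python
import re

# Simple hostname pattern; good enough for links, script sources, and AJAX URLs.
HOSTNAME_RE = re.compile(r"[a-z0-9][a-z0-9.-]*\.[a-z]{2,}", re.IGNORECASE)

def extract_hosts(page_text: str, root_domain: str) -> set[str]:
    """Return hostnames found in `page_text` that belong to `root_domain`."""
    hosts = (h.lower() for h in HOSTNAME_RE.findall(page_text))
    return {h for h in hosts if h == root_domain or h.endswith("." + root_domain)}
```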

Challenges and Limitations

Crawling is not always straightforward:

  1. Client-Side Rendering
    Many modern web apps (especially single-page applications or SPAs) load content dynamically using JavaScript. In such cases, a simple HTTP GET request may return an empty or minimal HTML shell, while the real content is rendered on the client using frameworks like React, Angular, or Vue.

To extract information from such pages, a headless browser (e.g., Puppeteer or Playwright) would be required — but that would introduce complexity and increase runtime.

  2. Obfuscated or Minified JavaScript
    Links or subdomains may be buried inside minified or obfuscated code, making them difficult to extract with basic regex or string parsing.
  3. Redirections and Blocking
    Some sites redirect traffic based on geolocation or user-agent, or require JavaScript to trigger navigation. Others may present CAPTCHAs or block automated tools entirely.
  4. Crawl Depth and Scope
    To avoid uncontrolled crawling (and accidental overreach), the tool should limit (see the sketch after this list):

    • Depth (e.g., only follow 1–2 link levels)
    • Scope (stay within the same domain)
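
A minimal crawl loop that enforces both limits might look like this:

```python
from urllib.parse import urljoin, urlparse
import re
import requests

LINK_RE = re.compile(r'href=["\'](.*?)["\']', re.IGNORECASE)

def bounded_crawl(start_url: str, root_domain: str, max_depth: int = 2) -> set[str]:
    """Return same-domain URLs reachable within `max_depth` link levels."""
    visited, frontier = set(), [(start_url, 0)]
    while frontier:
        url, depth = frontier.pop(0)
        host = urlparse(url).hostname or ""
        in_scope = host == root_domain or host.endswith("." + root_domain)
        if url in visited or depth > max_depth or not in_scope:
            continue  # depth and scope limits keep the crawl contained
        visited.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for link in LINK_RE.findall(html):
            frontier.append((urljoin(url, link), depth + 1))
    return visited

print(bounded_crawl("https://www.example.com/", "example.com"))
```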

Step 8: Detect Shared Hosting and Other Owned Domains

For each discovered IP, an attack surface mapping tool should:

  • Perform Reverse IP lookup: what other domains are hosted on the same IP?
  • Use WHOIS to check if they belong to the same organization.
  • If yes, add them to a list of “Other domains likely owned”.

Then perform a Reverse WHOIS lookup (based on email or registrant name):

  • Identify all domains registered by the same organization.
  • Feed them back into the pipeline — repeat all steps above for each domain.

This recursive enrichment helps uncover multi-brand infrastructures, acquisitions, or shared cloud assets.
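
Structurally, this recursion can be expressed as a simple bounded loop. The helper functions below are placeholders: reverse_ip() and reverse_whois() would call a lookup service (reverse WHOIS usually requires a commercial API), and map_domain() stands for Steps 1 to 7 applied to a single domain:

```python
# Placeholders for the lookups described above; real implementations would call
# external services (e.g., a reverse IP / reverse WHOIS provider).
def map_domain(domain: str) -> set[str]:
    return set()   # Steps 1-7 for one domain; returns the IPs attributed to it

def reverse_ip(ip: str) -> set[str]:
    return set()   # other domains hosted on the same IP

def reverse_whois(domain: str) -> set[str]:
    return set()   # domains sharing the same registrant name or email

def is_same_org(domain: str) -> bool:
    return True    # WHOIS registrant matches the target organization

def map_attack_surface(seed: str, max_rounds: int = 3) -> set[str]:
    """Recursively attribute domains to the organization, starting from one seed."""
    confirmed, queue = set(), {seed}
    for _ in range(max_rounds):                    # bound the recursion
        for domain in list(queue - confirmed):
            confirmed.add(domain)
            for ip in map_domain(domain):
                queue |= {d for d in reverse_ip(ip) if is_same_org(d)}
            queue |= set(reverse_whois(domain))
    return confirmed
```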

Conclusion: Key Advantages of a Passive Approach

  • Stealth: No packets sent to target infrastructure. No logs, no alerts.
  • Broad Coverage: Leverages multiple public data sources.
  • Zero-risk: Legal and compliance-friendly.
  • Automation Ready: Can scale across entire client portfolios.
  • Accuracy: Uncovers assets the client may not even know they have.
💡 Do you think you're off the radar?

Most companies only discover leaks once it's too late. Be one step ahead.

Ask for a demo NOW →