This guide is currently under development, and I greatly welcome any suggestions or feedback at reaper.gitbook@gmail.com

Passive Reconnaissance

This OSINT workflow focuses on systematic, non-interactive information collection. Use passive sources only (public records, archives, certificates, search engines, code repos). Do not access systems or services directly unless explicitly authorized.

Flow: Target Identification → Domain Intelligence → Infrastructure Enumeration → Technology Stack Analysis → Human Intelligence Gathering → Digital Asset Discovery → Analysis

Rules

  • Passive only: no active scans or probes.

  • Record sources and timestamps for every item.

  • Verify sensitive findings across multiple independent sources before acting on them.
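
A minimal way to keep that log, assuming a tab-separated findings.tsv file; the helper name and record format are illustrative, not part of any tool:

# append a timestamped, source-attributed finding (UTC, ISO 8601)
log_finding() {
  printf '%s\t%s\t%s\n' "$(date -u +%FT%TZ)" "$1" "$2" >> findings.tsv
}
log_finding "crt.sh" "api.target.com observed in CT logs"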


1. Target Identification

Establish organizational context and initial scope.

Representative steps / commands:

whois target.com               # domain registration
whois 198.51.100.0             # IP/net allocation
# corporate registries: search SEC EDGAR, Companies House, and local registry sites manually

Manual checks: annual reports, press releases, merger history, parent/subsidiary relationships.
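
A small sketch for pulling the most useful registration fields out of whois output; field labels vary between registrars, so the pattern below is a best-effort assumption:

# extract common registration fields; labels differ by registrar
whois target.com | grep -Ei 'registrant|registrar|organi[sz]ation|creation date|expir'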

2. Domain Intelligence

Map DNS and domain relationships using passive sources.

Representative checks:

# dig takes one record type per query, so loop over the core types
for rr in A AAAA MX NS SOA; do dig +short target.com "$rr"; done
curl -s "https://crt.sh/?q=%25.target.com&output=json" | jq .   # certificate transparency (%25 = URL-encoded %)

Sources: crt.sh, SecurityTrails, PassiveTotal, DNSDumpster, Wayback Machine. Extract TXT records (SPF/DKIM/DMARC) and historical DNS data.
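
As a sketch, the crt.sh JSON can be reduced to a deduplicated hostname list (this assumes the current response shape, an array of objects with a name_value field), with the TXT records pulled alongside:

# unique hostnames from certificate transparency
curl -s "https://crt.sh/?q=%25.target.com&output=json" \
  | jq -r '.[].name_value' | tr '[:upper:]' '[:lower:]' | sort -u

# mail-policy TXT records (SPF, DMARC)
dig +short target.com TXT
dig +short _dmarc.target.com TXT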

3. Infrastructure Enumeration

Collect IP, ASN, and hosting details without active probing.

Representative steps:

  • Resolve IPs via public DNS and CDN records (dig).

  • Query RIR whois for ASN and allocation details.

  • Use public passive IP datasets (SecurityTrails, Censys, Shodan passive data).

Identify hosting providers, CDNs, and geographic footprints from passive feeds and certificate issuer metadata.
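
One way to map a resolved address to its ASN is Team Cymru's IP-to-ASN whois service; the sketch below assumes that service's query syntax and sends requests to public infrastructure only, never to the target:

# resolve once via public DNS, then map the address to ASN and allocation
ip=$(dig +short target.com A | head -n1)
whois -h whois.cymru.com " -v $ip"   # AS number, prefix, registry, allocation date
whois "$ip"                          # RIR netblock record and contact details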

4. Technology Stack Analysis

Identify web technologies and software stacks using passive, public information.

4.1 Web Technology Fingerprinting (CLI)

  • Use Wappalyzer CLI to detect tech stacks:

# Scan a single URL
wappalyzer https://target.com

# Scan multiple URLs from a file
wappalyzer -i targets.txt -o results.json

  • Detectable items include:

    • CMS and web frameworks (WordPress, Joomla, Django, etc.)

    • JavaScript libraries and versions

    • Server software (Apache, Nginx, IIS)

    • Analytics, marketing, and CDN services
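
WhatWeb is a common alternative if Wappalyzer is unavailable; it is shown here as a substitute tool, not part of the original workflow, and like Wappalyzer it does fetch the page it fingerprints:

# verbose fingerprint of a single site
whatweb -v https://target.com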

4.2 Third-Party Services

  • From public content and CLI output, identify integrated services:

    • Payment processors (Stripe, PayPal)

    • Cloud hosting (AWS, Azure, Google Cloud)

    • Analytics and support platforms (Google Analytics, Zendesk)
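
A rough, offline way to spot these integrations is to grep a saved or archived copy of the homepage for well-known service hostnames; the pattern list is illustrative, not exhaustive, and page.html is a hypothetical local copy:

# count references to known third-party service domains
grep -Eio 'stripe\.com|paypal\.com|googletagmanager\.com|google-analytics\.com|zendesk\.com|cloudfront\.net|amazonaws\.com' page.html \
  | sort | uniq -c | sort -rn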

4.3 Version Identification

  • Determine software versions for vulnerability research:

    • Meta tags, JavaScript/CSS file names, and HTTP headers

    • Use these indicators to prioritize exploit research without scanning live systems
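
Version strings can often be recovered from archived copies instead of the live site. A sketch using the Wayback Machine's availability API (a public endpoint); the grep pattern for generator tags and ?ver= query strings is a heuristic:

# locate the most recent archived snapshot
snap=$(curl -s "http://archive.org/wayback/available?url=target.com" \
  | jq -r '.archived_snapshots.closest.url')

# scan the snapshot for version hints in meta tags and asset URLs
curl -s "$snap" | grep -Eio '<meta name="generator"[^>]*|\?ver=[0-9][0-9.]*' | sort -u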

5. Human Intelligence Gathering

Build an employee and role map from public profiles.

Representative sources:

  • LinkedIn: names, titles, org charts, tenure.

  • GitHub/GitLab: repos mentioning company domains or keys.

  • Twitter, conference pages, speaker bios, technical blogs.

Notes: extract common email formats and leadership contacts for prioritization.
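
To turn a name list into candidate addresses for cross-checking, a small permutation sketch (names.txt, one `first last` pair per line, is a hypothetical input file):

# emit common corporate address formats: first.last, flast, firstlast, last.first
while read -r first last; do
  for addr in "$first.$last" "${first:0:1}$last" "$first$last" "$last.$first"; do
    echo "${addr}@target.com"
  done
done < names.txt | tr '[:upper:]' '[:lower:]'

Validate candidates against addresses already visible in public sources (commit metadata, PGP keyservers, published contact pages) rather than probing the mail server.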

6. Digital Asset Discovery

Search engines and repository searches for exposed assets and secrets.

Google dork examples:

site:target.com inurl:admin
site:target.com filetype:env "DB_PASSWORD"
site:target.com intitle:"Index of /" "Parent Directory"
site:target.com inurl:api/v1 OR inurl:swagger

Repo search examples (GitHub):

org:target-company "target.com"
"user:employee" "api_key" OR "secret"

Also check public S3/Blob listings, archived backups, and public bucket indexes via passive discovery tools.
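
For the bucket checks, a guessed name can be tested for public listability with a single request to the storage provider rather than the target (the bucket name here is hypothetical):

# a public bucket returns a ListBucketResult XML document; otherwise an AccessDenied error
curl -s "https://target-company-backups.s3.amazonaws.com/" | head -n 5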
