Passive Enumeration
Passively gathering public information is legal if done for legitimate purposes (audits, authorised pentesting, defensive intelligence). Do not use these techniques against targets without authorisation.
1. Define Target & Scope
What or who are you looking for? You can search for domains, company names, email addresses, social media profiles, people, employees, or services, among other things.
One of the first tools you can use is WHOIS, which allows you to gather registration information about a domain, such as the owner, registrar, registration date, and contact details. You can also use RDAP (Registration Data Access Protocol), which is the modern replacement for WHOIS.
whois lucafacchini.com
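For RDAP, a plain HTTP client is enough. This is a minimal sketch that assumes curl and jq are installed and uses the rdap.org bootstrap redirector (any RDAP client would work):
# RDAP returns JSON; rdap.org redirects the query to the registry's authoritative RDAP server
curl -sL https://rdap.org/domain/lucafacchini.com | jq .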
To find useful information, you can also view old snapshots of the website via the Wayback Machine. Remember, everything can be useful.
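If you prefer the command line, the Wayback Machine exposes a CDX API; this sketch assumes curl is available and simply lists a few archived captures of the domain:
# List up to 10 archived snapshots (timestamp + original URL)
curl -s "https://web.archive.org/cdx/search/cdx?url=lucafacchini.com&fl=timestamp,original&limit=10"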
2. Website Reconnaissance
Before starting any further information gathering, it’s important to gather information about the website’s technologies, domains, and subdomains. Two key files can help with this:
robots.txt – Visit http://example.com/robots.txt. This file is typically present on most websites and is meant for search engines. It lists pages that should or shouldn't be indexed, which can reveal hidden or restricted areas of the site.
sitemap.xml – Visit http://example.com/sitemap.xml (note: it may not always exist). This file provides a hierarchical map of the website, including all pages. If certain pages aren't visible on the front-end, they might still appear here.
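Both files can be fetched directly from the command line; a 404 simply means the file isn't there:
# Neither request requires authentication; replace example.com with your target
curl -s http://example.com/robots.txt
curl -s http://example.com/sitemap.xml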
I also recommend, before moving further, using dedicated fingerprinting tools to discover the technologies being used on those websites.
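For example, WhatWeb is one widely used fingerprinting tool; this sketch assumes the whatweb CLI is installed and uses its lowest aggression level to keep the footprint minimal:
# Identify the web server, CMS, frameworks, and libraries in a single request
whatweb -a 1 lucafacchini.com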
3. IP, DNS, Domains & Subdomains
This step involves collecting information about a target’s network and domain structure. You identify the IP addresses used by the target, explore DNS records to find name servers, mail servers, and other services, and discover all associated domains and subdomains.
DNS Enumeration
You can perform passive DNS enumeration using a combination of command-line interface (CLI) tools and online web-based services. Useful CLI tools include dig, host, and dnsrecon.
There are some really useful tools online, such as:
ViewDNS.info (my favourite)
theHarvester (very useful and complete)
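theHarvester also runs from the command line; here is a minimal sketch (available data-source names vary between versions, so check theHarvester -h first):
# Collect hosts and email addresses for the domain from the crt.sh data source
theHarvester -d lucafacchini.com -b crtsh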
Host Command
host lucafacchini.com # Get the IP addresses
host -t MX lucafacchini.com # Get mail servers (MX)
host -t NS lucafacchini.com # Get name servers (NS)
host -t TXT lucafacchini.com # Get TXT Records
Dig Command
dig lucafacchini.com # Basic query to get A record (IP address)
dig MX lucafacchini.com # Get mail servers (MX)
dig NS lucafacchini.com # Get name servers (NS)
dig TXT lucafacchini.com # Get TXT Records
dig -x 192.0.2.1 # Reverse DNS lookup
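To query several record types in one pass, a short shell loop over dig works (a sketch assuming a POSIX shell and dig installed):
# Print the common record types one after another
for t in A AAAA MX NS TXT SOA; do
  echo "== $t =="
  dig +short $t lucafacchini.com
done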
DNSRecon Command
dnsrecon -d lucafacchini.com # Standard enumeration for DNS records
dnsrecon -d lucafacchini.com -t axfr # Attempt zone transfer (if misconfigured)
Domains & Subdomains
You can still use WHOIS to discover domains and subdomains, but in most cases you should also use tools that will help you find more.
If you haven't already, use DNSDumpster.
Search for subdomains that appear in Certificate Transparency logs via crt.sh (see the sketch after this list).
Search on passive DNS services, such as VirusTotal, SecurityTrails, Shodan, and Censys.
Use Google Dorks / Google Hacking. Search for the following: site:*.lucafacchini.com. For more detailed Google Dorks, you can view the Google Hacking Database (GHDB).
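For Certificate Transparency, crt.sh exposes a JSON endpoint you can query directly; this sketch assumes curl and jq are installed (%25 is a URL-encoded % wildcard):
# Pull every certificate name matching *.lucafacchini.com and dedupe the hostnames
curl -s "https://crt.sh/?q=%25.lucafacchini.com&output=json" | jq -r '.[].name_value' | sort -u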
You can also use automated tools, such as:
theHarvester (very useful and complete)
4. Extract Metadata
Metadata extraction involves retrieving embedded data within files which can provide insights into the document's author, creation date, software used, and more.
Using Google Dorks, search on the Internet: site:example.com filetype:pdf, where pdf represents the file type. Remember that you should really try to find every file type, such as .doc, .xls, and .ppt, anything.
Extract metadata from files: once files are collected, run them through a dedicated metadata extraction tool.
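One widely used option is exiftool; this sketch assumes it is installed and that the collected files sit in a hypothetical ./downloads directory:
# Recursively dump author, creation date, software, GPS, and other embedded fields
exiftool -r ./downloads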