HydraRecon – All-in-One, Fast, Easy Recon Tool

HydraRecon is an automated tool, developed in Python, that performs information gathering and crawls the links associated with a target domain. It can gather a list of subdomains, take a screenshot of each subdomain, and more. Its crawling module collects associated links and JavaScript file URLs, and also fetches the contents of the target domain's robots.txt file. HydraRecon is an all-in-one tool for reconnaissance and crawling, and it is available for free as open source on GitHub.

Note: Make sure you have Python installed on your system, as this is a Python-based tool. See: Python Installation Steps on Linux.

Installation of HydraRecon Tool on Kali Linux OS

Step 1: Use the following command to clone the tool into your Kali Linux operating system.

git clone https://github.com/aufzayed/HydraRecon.git

Step 2: Now use the following command to move into the tool's directory. You have to be inside this directory in order to run the tool.

cd HydraRecon

Step 3: You are now in the HydraRecon directory. Install the tool's dependencies using the following command.

sudo pip install -r requirements.txt

Step 4: All the dependencies are now installed on your Kali Linux operating system. Verify the setup by printing the tool's help menu with the following command.

python3 hydrarecon.py -h

Working with HydraRecon Tool on Kali Linux OS

Example 1: Basic recon module

python3 hydrarecon.py --basic -d w3wiki.net

In this example, we use the basic recon module, which gathers basic data about the target domain, such as subdomains and screenshots.

We get a list of subdomains fetched from various public sources on the internet.
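
To illustrate the idea, here is a minimal sketch of passive subdomain enumeration using the public crt.sh certificate-transparency logs, one kind of source such tools query. This is an illustration of the technique, not HydraRecon's actual implementation:

import requests

def fetch_subdomains(domain):
    """Query crt.sh certificate logs for names under the given domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value may hold several names separated by newlines
        for name in entry.get("name_value", "").splitlines():
            if name.endswith(domain):
                names.add(name.lstrip("*."))
    return sorted(names)

for sub in fetch_subdomains("w3wiki.net"):
    print(sub)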

HydraRecon also takes a screenshot of each subdomain to check whether it is live on the internet.

Two of the screenshots captured by the tool are shown above.
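
As a rough illustration of the liveness check that goes with screenshotting, a recon tool can probe each subdomain over HTTPS and HTTP and keep only the hosts that respond. The sketch below shows the general approach and is an assumption, not HydraRecon's exact logic:

import requests

def is_live(host, timeout=5):
    # Try HTTPS first, then HTTP; any response counts as live.
    for scheme in ("https", "http"):
        try:
            requests.get(f"{scheme}://{host}", timeout=timeout)
            return True
        except requests.RequestException:
            continue
    return False

subdomains = ["www.w3wiki.net", "mail.w3wiki.net"]
live_hosts = [h for h in subdomains if is_live(h)]
print(live_hosts)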

Example 2: Crawl module

python3 hydrarecon.py --crawl -d w3wiki.net

In this example, we use the crawl module, which crawls the target and fetches its associated URLs, the contents of the robots.txt file, and more.

The tool fetches the URLs associated with w3wiki.net from the internet.

We get the list of links associated with w3wiki.net.
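
Under the hood, a crawl module of this kind typically downloads a page and extracts its anchor and script tags. The following sketch uses requests and BeautifulSoup to show that idea; it is illustrative and not HydraRecon's own crawler:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl_page(url):
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    # Anchor tags give the associated links; script tags give JS file URLs.
    links = {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}
    scripts = {urljoin(url, s["src"]) for s in soup.find_all("script", src=True)}
    return links, scripts

links, scripts = crawl_page("https://w3wiki.net")
print(f"{len(links)} links, {len(scripts)} script files")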

Finally, the tool crawls the contents of the robots.txt file.
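
Fetching robots.txt amounts to a plain HTTP GET against a well-known path at the site root. A minimal sketch of that step (illustrative, not HydraRecon's code):

import requests

def fetch_robots(domain):
    # robots.txt always lives at the root of the site.
    resp = requests.get(f"https://{domain}/robots.txt", timeout=10)
    return resp.text if resp.status_code == 200 else None

contents = fetch_robots("w3wiki.net")
if contents:
    print(contents)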