Working with the SourceWolf Tool on Kali Linux
Example 1: Simple Usage
python3 sourcewolf.py --url http://w3wiki.org/wp-admin
In this example, we test only a single directory on the target domain w3wiki.org. The server returned a 500 status code, which indicates a generic server-side error.
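A minimal sketch (not SourceWolf's actual code) of what such a single-URL probe does: request the page and interpret the HTTP status code. The helper names here are illustrative.

```python
# Sketch of a single-URL probe: fetch a page and classify its status code.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def status_class(code):
    """Map an HTTP status code to its response class."""
    classes = {2: "success", 3: "redirect", 4: "client error", 5: "server error"}
    return classes.get(code // 100, "unknown")

def probe(url):
    """Return the status code for a URL, or None if unreachable."""
    try:
        return urlopen(url, timeout=5).status
    except HTTPError as e:
        return e.code          # 4xx/5xx responses still carry a status code
    except URLError:
        return None            # DNS failure, refused connection, etc.

# A 500 response, as in the example above, is a generic server error.
print(status_class(500))  # server error
```

In the walkthrough above, `probe("http://w3wiki.org/wp-admin")` would have returned 500, i.e. a server error.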
Example 2: Brute force
python3 sourcewolf.py -b http://w3wiki.org/FUZZ
1. In this example, we brute-force directories on the w3wiki.org domain, using either the default or a custom word list.
2. The screenshot below shows the results along with the server response status for each tested directory.
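The `FUZZ` keyword in the URL acts as a placeholder. A hedged sketch of the idea, with a tiny inline word list standing in for a real one:

```python
# Sketch of the FUZZ placeholder: each wordlist entry is substituted
# into the URL template to produce the list of targets to request.
template = "http://w3wiki.org/FUZZ"
wordlist = ["admin", "login", "images"]   # stand-in for a real wordlist

targets = [template.replace("FUZZ", word) for word in wordlist]
for t in targets:
    print(t)
```

Each generated URL (e.g. `http://w3wiki.org/admin`) is then requested and its status code recorded.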
Example 3: Verbose
python3 sourcewolf.py -b http://w3wiki.org/FUZZ -v
1. In this example, we print the results in more detail, using the -v flag for verbose mode.
2. The screenshot below shows the results in real time: every directory tested is listed in the terminal together with the status code returned by the server.
Example 4: Wordlist
python3 sourcewolf.py -b http://w3wiki.org/FUZZ -w /usr/share/wordlists/dirb/common.txt
1. In this example, we use a custom word list, specified with the -w flag.
2. The screenshot below shows the command using the custom word list.
3. The screenshot below shows the results of the fuzzing; we then open the http://w3wiki.org/About URL, whose status code is 200 (OK).
4. The screenshot below shows the About page opened in a web browser.
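A word list such as dirb's common.txt is simply one candidate path per line. A sketch of how such a file is consumed, demonstrated here with a tiny stand-in file rather than the real `/usr/share/wordlists/dirb/common.txt`:

```python
# Sketch of loading a wordlist file: one candidate per line,
# skipping blank lines and comments.
import os
import tempfile

def load_wordlist(path):
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

# Write a tiny stand-in wordlist instead of using the real dirb file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("# comment line\nAbout\nadmin\n\nimages\n")

words = load_wordlist(tmp.name)
os.unlink(tmp.name)
print(words)  # ['About', 'admin', 'images']
```

Each loaded word would then be substituted for `FUZZ` in the target URL, exactly as in the brute-force example.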
Example 5: Output
python3 sourcewolf.py -b http://w3wiki.org/FUZZ -w /usr/share/wordlists/dirb/common.txt -o ok
1. In this example, we save the results to disk for later use, passing the name of the output directory with the -o flag.
2. The screenshot below shows the results of the scan.
3. The screenshot below shows the new sub-directories, named after the status-code classes; each stores the pages that returned that class of response.
4. The screenshot below shows the 2xx directory, which contains all web pages for which the server responded with 200.
5. The screenshot below shows one of the files with status code 200; it stores the page's HTML and JavaScript.
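The output layout described above can be sketched as follows. This is an illustrative reimplementation, not SourceWolf's own code, and the crawl result used is hypothetical:

```python
# Sketch of the output layout: responses grouped into sub-directories
# named after their status-code class (2xx, 3xx, 4xx, 5xx).
import os

def save_response(out_dir, path, status, body):
    """Write a page body under <out_dir>/<Nxx>/ based on its status code."""
    bucket = os.path.join(out_dir, "%dxx" % (status // 100))
    os.makedirs(bucket, exist_ok=True)
    fname = os.path.join(bucket, path.strip("/").replace("/", "_") or "index")
    with open(fname, "w") as f:
        f.write(body)
    return fname

# Hypothetical crawl result, not real server output.
saved = save_response("ok", "/About", 200, "<html>About page</html>")
print(saved)  # e.g. ok/2xx/About
```

A page that returned 200 thus ends up inside the `2xx` sub-directory, matching the structure shown in the screenshots.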
SourceWolf – A CLI Web Crawler Tool in Linux
Web crawling is the process of indexing data on web pages using a program or automated script. Such scripts are known by several names, including web crawler, spider, and spider bot, often shortened to crawler. Manual crawling consumes a lot of time when the target scope is large. SourceWolf is an automated script written in Python that crawls the directories of a domain and records the status code returned for each. This helps a tester quickly identify pages that respond with 200 or 301. SourceWolf is an open-source, free-to-use tool and supports custom word lists for brute-forcing. Its output handling is convenient: results are stored in a main output directory containing separate sub-directories for each status-code class.
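The core step a crawler repeats at every page can be sketched in a few lines: parse the HTML and collect the link targets to request next. A real crawler would fetch the HTML over HTTP; the markup below is hard-coded for illustration:

```python
# Minimal link-extraction sketch: collect href targets from a page,
# which a crawler would then queue for its next round of requests.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hard-coded sample page; a crawler would download this instead.
html = '<a href="/About">About</a> <a href="/wp-admin">Admin</a>'
parser = LinkCollector()
parser.feed(html)
print(parser.links)  # ['/About', '/wp-admin']
```

Repeating this fetch-and-extract loop over every discovered link, while recording each response's status code, is essentially what tools like SourceWolf automate.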