Welcome back, my aspiring cyberwarriors!
When attempting to hack/pentest a website, it can be extremely useful to enumerate the parameters of its various pages. These parameters appear in URLs for pages and files of many types: php, css, js, woff, png, svg, jpg, and others. Each of these parameters might indicate a particular vulnerability such as SQL injection, XSS, LFI, and others. Once we have discovered the parameters, we can then test each of them for vulnerabilities. This can be particularly useful in bug bounty hunting.
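To illustrate with some purely hypothetical URLs, a numeric id parameter is a natural candidate for SQL injection testing, a search parameter is a classic place to probe for XSS, and a page or file parameter may point to LFI:

https://targetsite.com/product.php?id=22 (test for SQL injection)

https://targetsite.com/search.php?q=widgets (test for XSS)

https://targetsite.com/index.php?page=about.html (test for LFI)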
There are a number of tools we can use to spider a site, such as OWASP ZAP or WebScarab, but such tools can be very noisy and offer little stealth. Any security device or security engineer worth their overpaid salary will likely notice the additional traffic and rapid requests. One way to avoid this detection is to scan the archive of the website at archive.org (as you know, archive.org maintains a repository of snapshots of websites going back many years). Of course, these archives will not be identical to the live site, but they will likely have enough in common to minimize false positives while not alerting the site owner.
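ParamSpider gathers these historical URLs from the Wayback Machine rather than from the live site. If you want to see the kind of raw data it works with, you can query archive.org's public CDX API yourself with curl (a quick illustration only; the exact query the tool builds may differ):

kali > curl "https://web.archive.org/cdx/search/cdx?url=tesla.com/*&fl=original&collapse=urlkey&limit=10"

This returns a sample of the original URLs archive.org has captured for the domain, which is exactly the raw material a parameter-mining tool needs.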
There is an excellent tool called ParamSpider for finding these parameters in archived websites. Let's take a look at it in this tutorial.
Step #1: Download and Install ParamSpider
Our first step is to download and then install paramspider. We can use git clone to clone it into our system.
kali > sudo git clone https://github.com/devanshbatham/ParamSpider
Now, navigate to the new directory ParamSpider and list the contents of the directory.
kali > cd ParamSpider
kali > ls -l
Note the requirements.txt file. We can use that file to install all of this tool's requirements with pip:
kali > pip install -r requirements.txt
Now, we are ready to run paramspider!
Step #2: Start ParamSpider
Now that we have paramspider installed, let’s check its help screen.
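If the script follows the standard Python argparse conventions (as most tools of this kind do), the -h switch will display it:

kali > python3 ./paramspider.py -h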
In its simplest form, paramspider syntax is simply the command plus the -d (domain) switch followed by the domain name. For instance, to scan Elon Musk's tesla.com, we simply enter:
kali > sudo python3 ./paramspider.py -d tesla.com
When you hit enter, you will see paramspider scanning the tesla.com site in archive.org, looking for a variety of parameters within the URLs.
As you can see above, paramspider found 148,192 unique URLs on tesla.com and Elon Musk doesn't even know we have been scanning them!
To view these results, we can simply use the more command followed by the name of the output file (by default, the output file is placed in the output directory with the domain name and a .txt extension). In this case, we can simply enter:
kali > more output/tesla.com.txt
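Since the output is just a text file of URLs, you can slice it with the usual command-line tools. For example, to pull out only the php pages, or, if your version of paramspider replaces parameter values with a placeholder such as FUZZ, to count the parameterized URLs, you might enter:

kali > grep "\.php" output/tesla.com.txt

kali > grep -c "FUZZ" output/tesla.com.txt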
In some cases, we may not want to view ALL the parameterized URLs. For instance, we may not want to see those ending in php, jpg, png, aspx, etc. This will help narrow your focus to fewer parameters to test. We can exclude certain extensions by using the -e switch:
kali > sudo python3 ./paramspider.py -d tesla.com -e php,aspx,jpg,png
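Alternatively, if you have already run a scan, you can filter the existing output file rather than scanning again. A grep -v with an extended regular expression strips out the same extensions:

kali > grep -Ev "\.(php|aspx|jpg|png)" output/tesla.com.txt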
Summary
In web app hacking or bug bounty hunting, it can be very advantageous to find parameters that are often vulnerable to a particular type of attack. With a tool like paramspider, we can enumerate and list these URLs and then use them for testing at a later time without raising any alarm bells with the site's owners. Although far from perfect, this approach can provide us with some insight into potential vulnerabilities as well as vulnerabilities that have been mitigated (if a particular type of vulnerability existed in one section of the site, it is very likely that similar vulnerabilities exist elsewhere).
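As a sketch of that later testing (the URL below is purely hypothetical; substitute a parameterized URL from your own output with the placeholder replaced by a real value), a discovered parameter could be handed straight to a scanner such as sqlmap to check it for SQL injection:

kali > sqlmap -u "https://targetsite.com/product.php?id=22" --batch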
ParamSpider is one more valuable tool in the web app hacker/pentester/bug bounty hunter's toolbox!