Web Recon (Bug Bounty)
Prerequisites
GitHub - devanshbatham/ParamSpider: Mining URLs from dark corners of Web Archives for bug hunting/fuzzing/further probing
GitHub - jaeles-project/gospider: Gospider - Fast web spider written in Go
GitHub - tomnomnom/waybackurls: Fetch all the URLs that the Wayback Machine knows about for a domain
GitHub - hakluke/hakrawler: Simple, fast web crawler designed for easy, quick discovery of endpoints and assets within a web application
GitHub - dwisiswant0/galer: A fast tool to fetch URLs from HTML attributes by crawl-in.
GitHub - tomnomnom/qsreplace: Accept URLs on stdin, replace all query string values with a user-supplied value
GitHub - lc/gau: Fetch known URLs from AlienVault's Open Threat Exchange, the Wayback Machine, and Common Crawl.
GitHub - 003random/getJS: A tool to fastly get all javascript sources/files
GitHub - tomnomnom/anew: A tool for adding new lines to files, skipping duplicates
Make sure all the tools are available in the PATH:
export PATH=/home/USER/go/bin:$PATH
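Most of the tools are Go-based and can be installed with go install; httpx from projectdiscovery is also needed by the scripts below. The module paths are the ones published at the time of writing and may have changed, so check each repo if a command fails:
go install github.com/projectdiscovery/httpx/cmd/httpx@latest
go install github.com/jaeles-project/gospider@latest
go install github.com/tomnomnom/waybackurls@latest
go install github.com/hakluke/hakrawler@latest
go install github.com/dwisiswant0/galer@latest
go install github.com/tomnomnom/qsreplace@latest
go install github.com/lc/gau/v2/cmd/gau@latest
go install github.com/003random/getJS@latest
go install github.com/tomnomnom/anew@latest
# ParamSpider is Python: clone devanshbatham/ParamSpider and install it per its README (flags differ between versions)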
findAllURLs.sh
cookie='Cookie: a=1;'
file_path='domains.txt'
for domain in $(cat "$file_path"); do
    echo "[+] $domain"
    # Crawl the live site (sitemap, robots.txt and subdomains included)
    echo "$domain" | httpx -silent | gospider -c 10 -q -r -w -a --sitemap --robots --subs -H "$cookie" >> urls.txt
    # Mine parameterized URLs from web archives
    paramspider -d "$domain" --output ./paramspider.txt --level high > /dev/null 2>&1
    cat paramspider.txt 2>/dev/null | grep http | sort -u | grep "$domain" >> urls.txt
    rm paramspider.txt 2>/dev/null
    # Historical URLs from OTX, the Wayback Machine and Common Crawl
    gau "$domain" >> urls.txt
    waybackurls "$domain" >> urls.txt
    # Crawl with hakrawler and pull URLs out of HTML attributes with galer
    echo "$domain" | httpx -silent | hakrawler >> urls.txt
    echo "$domain" | httpx -silent | galer -s >> urls.txt
done
# Keep well-formed URLs only, deduplicate and normalize query strings
cat urls.txt | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | sort -u | qsreplace -a > temp1.txt
mv temp1.txt urls.txt
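domains.txt is one domain per line and everything found is appended to urls.txt. A minimal run, assuming the script above is saved as findAllURLs.sh:
printf 'example.com\napi.example.com\n' > domains.txt
chmod +x findAllURLs.sh
./findAllURLs.sh
wc -l urls.txt   # rough idea of how much was collected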
getAllJS.sh
domain=TARGET
cookie='Cookie: a=1;'
# Grab the .js URLs already present in urls.txt
cat urls.txt | grep "\.js" | grep "$domain" >> js_urls.txt
# Extract script sources from every URL, strip query strings and append only the new ones
sort -u urls.txt js_urls.txt | getJS --timeout 3 --insecure --complete --nocolors -H "$cookie" | grep "^http" | grep "$domain" | sed "s/\?.*//" | anew js_urls.txt
# Keep only live files (drop 304/404) and store their responses under source_code/
httpx -silent -l js_urls.txt -H "$cookie" -fc 304,404 -srd source_code/ >> temp
mv temp js_urls.txt
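js_urls.txt now holds the live JavaScript URLs and source_code/ the downloaded responses (the exact layout is decided by httpx -srd). A quick check, reusing the recursive endpoint grep from further below on the saved sources:
wc -l js_urls.txt
grep -rEo "(/[a-zA-Z0-9_.-]+)+" source_code/ 2>/dev/null | cut -d: -f2 | sort -u > js_paths.txt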
Proxy through Burp
httpx -http-proxy http://127.0.0.1:8080 -l urls.txt
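The same flag works for the JavaScript list, so Burp's site map ends up with both sets of requests:
httpx -http-proxy http://127.0.0.1:8080 -l js_urls.txt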
Find directories
grep '/\w[^ ]\w*' $target_file
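For example, a slight variation of that grep turns the matches into a list of first-level directory names (the target_file value is just a placeholder, point it at any saved response or JS file):
target_file=source_code/index.txt
grep -o '/\w[^ "]*' "$target_file" | cut -d/ -f2 | sort -u > dirs.txt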
JS deobfuscation
https://lelinhtinh.github.io/de4js/
Fuzz over discovered endpoints
Extract endpoints from the URLs file:
grep '/\w[^ ]\w*' "$target_file" | cut -d '"' -f4 | tee -a endpoints.txt
Extract endpoints from a folder, searching every file recursively:
grep -rEo "(/[a-zA-Z0-9_.-]+)+" . 2>/dev/null | cut -d: -f2 | sort | uniq > paths.txt
grep -rPoh "['\"]\K\/[^'\"]+" . 2>/dev/null
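Both outputs can feed the endpoints.txt used below; anew (from the prerequisites) keeps it deduplicated:
cat paths.txt | anew endpoints.txt > /dev/null
grep -rPoh "['\"]\K\/[^'\"]+" . 2>/dev/null | sort -u | anew endpoints.txt > /dev/null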
Combine domains with endpoints
for domain in $(cat domains.txt); do
    for endpoint in $(cat endpoints.txt); do
        echo "http://$domain$endpoint" | tee -a new_urls.txt
    done
done
Find new URLs
httpx -l new_urls.txt -fr -fc 404 -sc -td -cl -silent -o httpx_new_urls.txt
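The -sc/-td/-cl columns end up in the output file, so take only the first column (the URL in httpx's default output) before merging the live hits back into the master list with anew:
cut -d ' ' -f1 httpx_new_urls.txt | anew urls.txt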
Methodman
Right-click > Send to methodman
API wordlists
Actions & Objects
GitHub - chrislockard/api_wordlist: A wordlist of API names for web application assessments
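One way to put such a wordlist to use is the combine loop above with the wordlist in place of endpoints.txt; the api_words.txt name and the /api/ prefix here are only placeholders, adjust them to the target:
for domain in $(cat domains.txt); do
    for word in $(cat api_words.txt); do
        echo "https://$domain/api/$word"
    done
done | httpx -silent -fc 404 -sc -o api_hits.txt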