Burp Suite User Forum


import known URLs into sitemap

McGuire | Last updated: Apr 29, 2016 03:06PM UTC

I have access to the directory structure of the application I am scanning with Burp, and I can see URLs that a scanner or spider would never find but that are still accessible to anyone who guesses them or obtains the source code. I know I can visit these URLs one by one, but I'd like to import a list of URLs into Burp and have them added to the site map.

PortSwigger Agent | Last updated: May 03, 2016 07:49AM UTC

We have this feature request captured in our backlog. In the meantime, you could use a script to request all your URLs using curl, and pipe the requests through Burp as its proxy. Alternatively, if you run into this a lot, you could create a short extension to take a list of URLs and add them all to the site map via the API.
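As a rough sketch of the first approach (assuming the default proxy listener on 127.0.0.1:8080 and a urls.txt file with one URL per line), a short Python script using the requests library can replay each URL through the proxy so it ends up in the site map:

# Sketch: replay a list of URLs through Burp's proxy listener so each
# response is recorded in the Proxy history / site map.
# Assumes the default listener on 127.0.0.1:8080 and a urls.txt file.
import requests

PROXIES = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

with open("urls.txt") as f:
    for url in (line.strip() for line in f if line.strip()):
        try:
            # verify=False because Burp re-signs TLS with its own CA
            requests.get(url, proxies=PROXIES, verify=False, timeout=10)
        except requests.RequestException as exc:
            print("failed: %s (%s)" % (url, exc))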

Burp User | Last updated: Mar 01, 2017 09:20AM UTC

Hey all, just stumbled onto this problem, and while I don't have time mid-test to write an extension, I did have time to adapt a previously written dirty Python 2.7 script to do it. Set passive scanning to in-scope items only in Burp; as curl requests each URL in the list through the proxy, Burp adds it to the site map. I'll copy it in below for anyone else to use, and will have a bash at a proper extension in the future. Bear in mind there are a few things in the script that aren't really used, because it was adapted from something else and I haven't tidied it up, just got the sucker working. The tidy version is on my GitHub. When calling the script, give it the text file with your list of URLs as the first argument (sys.argv[1]), and make sure you've exported valid session cookies from your browser into cookie.txt first. You will get lots of red errors as the error handling isn't really fit for purpose, but you can remove that yourself. Did I mention it was dirty? :)

import subprocess
import sys
import time

# ANSI colour codes for terminal output
clear = "\x1b[0m"
red = "\x1b[1;31m"
green = "\x1b[1;32m"
cyan = "\x1b[1;36m"

# File of URLs to replay, one per line
infile = open(sys.argv[1], "r")
iplist = infile.readlines()

for ip in iplist:
    single = ip.strip()
    # Request the URL with curl through Burp's proxy, sending the exported session cookies
    cmd = ["curl", "-k", "-b", "cookie.txt", "-x", "http://127.0.0.1:8080", single]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    time.sleep(5)  # quick time hack to ensure the live target is not overwhelmed
    print("%s{+} Running: %s%s" % (cyan, cmd, clear))
    while True:
        output = proc.stdout.readline()
        if "Unauthorised" in output or proc.poll() is not None:
            print("%s{!} Process errored... %s%s" % (red, cmd, clear))
            print("%s{!} Moving on... %s" % (red, clear))
            break
        if output:
            print(output.strip())
    print("%s{+} Next target... %s" % (cyan, clear))
    rc = proc.poll()

Burp User | Last updated: May 03, 2017 03:12PM UTC

Hi Ian, could you link your GitHub? Thanks.

Burp User | Last updated: Aug 07, 2017 12:40PM UTC

@Brian sure - https://github.com/meta-l

Burp User | Last updated: Oct 26, 2017 01:03PM UTC

Just found an even easier answer - use dirb (with your known wordlist or not) and point it at your proxy. It populates the sitemap as it finds directories.
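For example (the target, wordlist path, and proxy address below are just placeholders for your own setup), dirb's -p option sends its requests through the given proxy, so every hit lands in Burp's site map:

dirb https://target.example/ /usr/share/dirb/wordlists/common.txt -p http://127.0.0.1:8080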

Burp User | Last updated: Apr 22, 2019 02:16AM UTC

So, late to this show, but there's a dude who actually wrote a proper extension that parses your list and pushes a GET request for each URL. Saved me a bunch of time. https://github.com/SmeegeSec/Burp-Importer/blob/master/BurpImporter.py
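For anyone curious how that works under the hood, here is a minimal Jython sketch (not the Burp-Importer code itself) of the legacy Extender API calls such an importer relies on: buildHttpRequest(), makeHttpRequest(), and addToSiteMap(). The urls.txt path is just a placeholder, and a real extension would add a UI tab rather than reading a hard-coded file on load.

# Minimal Jython sketch of importing a URL list into Burp's site map
# via the legacy Extender API. Load as a Python extension in Burp.
from burp import IBurpExtender
from java.net import URL

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("URL list importer (sketch)")
        # Hard-coded path for illustration only
        with open("urls.txt") as f:
            for line in f:
                if line.strip():
                    self._import(line.strip())

    def _import(self, url_string):
        url = URL(url_string)
        use_https = url.getProtocol() == "https"
        port = url.getPort() if url.getPort() != -1 else (443 if use_https else 80)
        service = self._helpers.buildHttpService(url.getHost(), port, use_https)
        request = self._helpers.buildHttpRequest(url)
        # Issue the request so the site map entry carries a real response,
        # then add the result to Burp's site map.
        response = self._callbacks.makeHttpRequest(service, request)
        self._callbacks.addToSiteMap(response)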
