Burp Suite User Forum


Crawler API for Burp

Randy | Last updated: Feb 03, 2022 07:49PM UTC

Hi, I am using the current version of Burp Suite Professional. Currently, I am running Burp headless to scan our application, but I want to use the crawler to find paths without having to provide them in the site map. I am looking at https://portswigger.net/burp/extender/api/, but I only find doActiveScan and doPassiveScan. Does the Burp Extender API have a way to configure the crawler and start it?

Hannah, PortSwigger Agent | Last updated: Feb 07, 2022 09:15AM UTC

Hi. To trigger a scan from Burp Suite, you would need to use the REST API. You can enable/disable this from "User options > Misc > REST API". Once you've set up the service, you can browse the API documentation and interact with the API at your service URL. Using the REST API means that you can configure scans and trigger them without interacting with Burp's GUI.
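As a rough illustration, starting a scan could look something like the following Python sketch. This assumes the default service URL of http://127.0.0.1:1337 with no API key configured; adjust both, and the target URL, to your own setup.

import requests

# Base URL of the Burp REST API service (User options > Misc > REST API).
# If you have configured an API key, it becomes part of the path,
# e.g. "http://127.0.0.1:1337/<api-key>".
BASE_URL = "http://127.0.0.1:1337"

# Start a scan of the given URL (crawl and audit by default).
response = requests.post(
    f"{BASE_URL}/v0.1/scan",
    json={"urls": ["http://example.com"]},
)

# A successful request returns 201 Created; the new task's ID is
# reported in the Location header.
response.raise_for_status()
print("Scan task:", response.headers.get("Location"))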

Randy | Last updated: Feb 10, 2022 04:02PM UTC

Hi, I looked into the REST API and saw that it defaults to 'crawl and audit', but do you know if I can scan with a 'crawl only' option? Also, where can I find the REST API documentation? There is not much detail in the API doc that Burp serves at startup. I want to find out how to provide a 'crawl strategy' configuration to the scan.

Hannah, PortSwigger Agent | Last updated: Feb 14, 2022 08:37AM UTC

Hi. The REST API only offers the option to perform both a crawl and audit. If you'd just like to perform a crawl, you will need to provide a scan configuration that has all audit checks disabled.

The REST API has an interactive interface you can use to help you build scans. It also shows you the curl command that is issued when you send your request.

To add a scan configuration, you can either use a preset "Named configuration" (which can be found in Burp's configuration library) or provide a string containing a scan configuration that has been exported from Burp. To export a scan configuration, set up your configuration as you would like it (Dashboard > New scan > Scan configuration) and select "Save to library". From there, you can open your configuration library (Burp > Configuration library), select the scan configuration, and export it. This saves a JSON file, whose contents you can then provide to the REST API.
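As a sketch, providing an exported configuration to the REST API could look something like this in Python. The file name "crawl_only_config.json" is a placeholder, and the "scan_configurations"/"CustomConfiguration" field names should be verified against the interactive API documentation at your service URL.

import requests

BASE_URL = "http://127.0.0.1:1337"  # adjust to your service URL

# Read the scan configuration previously exported from Burp's
# configuration library (placeholder file name).
with open("crawl_only_config.json") as f:
    crawl_only_config = f.read()

response = requests.post(
    f"{BASE_URL}/v0.1/scan",
    json={
        "urls": ["http://example.com"],
        "scan_configurations": [
            {"type": "CustomConfiguration", "config": crawl_only_config}
        ],
    },
)
response.raise_for_status()
print("Scan task:", response.headers.get("Location"))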

dan | Last updated: Aug 30, 2023 10:03AM UTC

Hello, is there a way to stop a started scan/crawl task via the REST API?

import time
import requests

BASE_URL = "http://127.0.0.1:1337"  # REST API service URL
HEADERS = {"Content-Type": "application/json"}

def start_scan(target_url):
    scan_config = {
        "urls": [target_url],
        "scan_configuration_id": "default"
    }
    response = requests.post(f"{BASE_URL}/v0.1/scan", headers=HEADERS, json=scan_config)
    if response.status_code == 201:
        print(f"Scan started successfully for {target_url}!")
        return response.json()
    else:
        print(f"Error starting scan for {target_url}:", response.text)
        return None

def stop_scan(scan_id):
    response = requests.delete(f"{BASE_URL}/v0.1/scan/{scan_id}", headers=HEADERS)
    if response.status_code == 200:
        print(f"Scan {scan_id} stopped successfully!")
    else:
        print(f"Error stopping scan {scan_id}:", response.text)

if __name__ == "__main__":
    # Start a scan on "http://example.com"
    scan_details = start_scan("http://example.com")
    if scan_details:
        scan_id = scan_details['scan_id']
        print("Scan ID:", scan_id)
        # Wait for a period (e.g., 10 seconds) before stopping the scan
        time.sleep(10)
        # Stop the scan
        stop_scan(scan_id)

It starts the scan but doesn't stop it after 10 seconds.

Ben, PortSwigger Agent | Last updated: Aug 30, 2023 11:25AM UTC

Hi, the functionality available in the Burp REST API is limited to what is described in the API documentation, which is available via the service URL, as previously mentioned. There is no mechanism to stop an initiated scan via the REST API.
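If the aim is simply to avoid blocking on a long-running task, one alternative is to poll the task's status until it reaches a terminal state rather than trying to cancel it. A rough Python sketch follows, assuming the task ID taken from the Location header when the scan was created; treat the "scan_status" field name and its values as assumptions to be checked against your service's API documentation.

import time
import requests

BASE_URL = "http://127.0.0.1:1337"  # adjust to your service URL

def wait_for_scan(task_id, poll_interval=10):
    """Poll a scan task until it reaches a terminal state."""
    while True:
        response = requests.get(f"{BASE_URL}/v0.1/scan/{task_id}")
        response.raise_for_status()
        # "scan_status" is an assumed field name; check the API docs.
        status = response.json().get("scan_status")
        print("Current status:", status)
        if status in ("succeeded", "failed"):
            return status
        time.sleep(poll_interval)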

dan | Last updated: Aug 30, 2023 11:41AM UTC

Hmmm, OK. Can I pause it, or limit it in the config somehow to run for only 1 minute instead of an hour? It also does an audit, which I don't need.

Ben, PortSwigger Agent | Last updated: Aug 30, 2023 01:27PM UTC

Hi, there is a 'Maximum crawl time' setting that can be used to limit the crawl to a certain time period in minutes (this is set to 150 by default). If you are creating your configuration within the UI and then exporting it, this setting can be found in the 'Crawl Limits' section of the crawl configuration.
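For illustration, once exported, the limit could be tightened programmatically before the configuration is passed to the REST API. In this Python sketch, the "crawler" / "crawl_limits" / "maximum_crawl_time" key names are taken from a sample export and may vary between Burp versions, so verify them against your own exported file; the file name is a placeholder.

import json
import requests

BASE_URL = "http://127.0.0.1:1337"  # adjust to your service URL

# Load an exported configuration and tighten the crawl time limit.
# Verify these key names against your own exported JSON file.
with open("crawl_only_config.json") as f:
    config = json.load(f)

config["crawler"]["crawl_limits"]["maximum_crawl_time"] = 1  # minutes

response = requests.post(
    f"{BASE_URL}/v0.1/scan",
    json={
        "urls": ["http://example.com"],
        "scan_configurations": [
            {"type": "CustomConfiguration", "config": json.dumps(config)}
        ],
    },
)
response.raise_for_status()
print("Scan task:", response.headers.get("Location"))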
