Burp Suite User Forum


Host-level BChecks only run once per host

Andres | Last updated: Jun 20, 2023 12:01AM UTC

Hi, I tried experimenting with the new BChecks feature in Burp 2023.6. It's a nice addition. I found that host-level BChecks only run once per host, which according to the documentation might be intentional. From the example host check documentation: "This check enables Burp Scanner to see whether the target application exposes a Git directory. It is an example of a per-host check (_that is, a check that runs once for each host scanned_)." However, I would expect the checks to run again when starting a new scan for the same host. If you delete an issue found by a host-level check and then run the scan again, it should discover the issue again (as long as it's still present), but it doesn't, because none of the checks are performed again. Currently, the workaround is to close and reopen Burp, after which the scanner will run the checks again, but that shouldn't be necessary. To me, this is a bug that should be fixed: all the checks should run again when starting a new scan.

Dominyque, PortSwigger Agent | Last updated: Jun 20, 2023 07:09AM UTC

Hi, Thank you for your input. This is the intended behavior: a host-level check runs only once so that duplicate issues aren't reported. The Burp Professional scanner works the same way: once an issue has been reported, it will not be reported again during that project. However, I will discuss this with the developers.

Andres | Last updated: Jun 21, 2023 01:45AM UTC

Hi, Thanks for the response. Duplicate issues should be resolved similarly to how extensions resolve them in the `consolidateIssues` method, so that shouldn't be a problem. There is a distinct difference between host-level and request-level checks, though. Request-level checks always run again, i.e. the request is actually performed. Running a host-level scan on the same host will not actually scan anything (i.e. no requests are performed) if the host has already been scanned once. This is problematic for many reasons, mainly that you can never be sure whether your custom BChecks actually executed at some point, so you could be missing vulnerabilities. Another problem is that if a vulnerability is fixed and you delete the issue, you can't scan again to verify the fix without reopening Burp first. When I start a new scan, I expect Burp to actually scan instead of doing nothing.

Dominyque, PortSwigger Agent | Last updated: Jun 21, 2023 07:16AM UTC

Hi Andres, You raise very valid points, and I will present them to the developers. Thank you for your input.

Andres | Last updated: Jun 22, 2023 05:46AM UTC

Hi, Slightly off-topic, but some additional feedback regarding BChecks.

1. I just want to run my BChecks and don't care about crawling, but when clicking "New Scan" on the Dashboard, the only options are "Crawl and audit" and "Crawl". There should be a third option, "Audit (only)". I know you can right-click a request in the Proxy or Repeater, select Scan, and then there's an option to "audit selected items", but I want to be able to enter URLs to scan manually.

2. There should be a better solution to this:

----------
run for each:
    potential_path = "metrics"

define:
    base_path = {base.request.url.path}

given request then
    if {base_path} matches "/$" then
        send request called check:
            method: "GET"
            path: `{base_path}{potential_path}`
    else then
        send request called check:
            method: "GET"
            path: `{base_path}/{potential_path}`
    end if
----------

Namely, the if condition that prevents a double slash (e.g. https://example.com/api//metrics) or a missing slash (e.g. https://example.com/apimetrics), depending on the entered URL to scan (https://example.com/api/ vs https://example.com/api).
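The if/else above works around a general path-joining pitfall. The same idea in a minimal Python sketch, purely for illustration (the helper name and inputs are not part of the BCheck language):

```python
# A sketch of the path-joining pitfall the BCheck's if/else works
# around; join_path is an illustrative helper, not a BChecks feature.

def join_path(base_path: str, suffix: str) -> str:
    """Join a URL path and a suffix with exactly one slash between them.

    Note: rstrip("/") removes *all* trailing slashes, slightly more
    aggressive than the BCheck's matches "/$" test.
    """
    return base_path.rstrip("/") + "/" + suffix

# Naive concatenation fails one way or the other:
#   "/api/" + "metrics" -> "/api/metrics"  (ok)
#   "/api"  + "metrics" -> "/apimetrics"   (missing slash)
print(join_path("/api/", "metrics"))  # /api/metrics
print(join_path("/api", "metrics"))   # /api/metrics
```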

Dominyque, PortSwigger Agent | Last updated: Jun 22, 2023 08:53AM UTC

Hi Andres, Thank you for that feedback. I have now passed it over to the development team.

Hannah, PortSwigger Agent | Last updated: Jun 22, 2023 03:58PM UTC

Hi Andres, We'll be publishing a new roadmap soon with more details, but we wanted to let you know that we do have additional functionality coming to improve the development and testing workflow for BChecks.

Andres | Last updated: Jun 22, 2023 10:40PM UTC

Hi, Not that there was any confusion on your part, but I just want to clarify one thing for other readers, since the title and my first post are not that clear. Host-level checks absolutely should run once per host; that's their main difference from request-level checks. The issue is that host-level checks do not run again with a new scan, even though request-level checks do when scanning the exact same requests/URLs. As an example, when entering the following URLs to scan:

- https://example.com
- https://example.com/api
- https://another.com
- https://another.com/api

A host-level check should run once for https://example.com and once for https://another.com, while request-level checks should run for all four URLs. It works correctly as described, but only for the first scan of these hosts. When starting a new scan with the exact same URLs, all the request-level checks run, but the host-level checks do not. Thank you Dominyque and Hannah. Looking forward to the roadmap and further updates to BChecks.
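The intended scoping can be sketched in Python, purely to illustrate the counting (this is not Burp's internal logic):

```python
# Request-level checks fire once per entered URL; host-level checks
# fire once per unique host. Illustrative sketch only.
from urllib.parse import urlparse

urls = [
    "https://example.com",
    "https://example.com/api",
    "https://another.com",
    "https://another.com/api",
]

hosts = {urlparse(u).netloc for u in urls}
print(sorted(hosts))  # ['another.com', 'example.com'] -> two host-level runs
print(len(urls))      # 4 -> four request-level runs
```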

Andres | Last updated: Jul 11, 2023 01:33AM UTC

Hi, After some more testing, the above example in my last post actually does not work correctly as described even on the first scan. That's a Burp Scanner issue rather than a BChecks issue. When entering the above four URLs to scan, Burp Scanner actually will not audit https://example.com/api and https://another.com/api unless the crawler can crawl to those URLs from the main domain, which is often not the case. In some cases, there is nothing to crawl as there's only an API and no frontend. This behavior is unacceptable, especially when dealing with BChecks. Burp Scanner should make a request to each entered URL and these requests should be given as input to BChecks. This is related to the need for an "Audit (only)" option for the scanner like I wrote in a previous post. I tried to perform the scan I wanted by leveraging the "Audit selected items" option instead, but it was not possible, because not all the items I wanted were available to select, as they cannot be found by the crawler. It might be possible to select the items after manually performing the requests via the Repeater first, but this is tedious and shouldn't be necessary.

Hannah, PortSwigger Agent | Last updated: Jul 13, 2023 09:48AM UTC

Hi Andres, Could you drop us an email at support@portswigger.net with some more information, including some screenshots of your scan configuration? Are you entering these URLs in the "URLs to scan" box on the scan configuration wizard, or are you using a detailed or advanced scope configuration?

Andres | Last updated: Jul 15, 2023 02:59AM UTC

Hi, I found what caused the issue. Yes, I'm just entering these URLs in the "URLs to scan" box. The issue was that I limited the crawler to the bare minimum: I unchecked everything in the crawling configuration and set the maximum request count to 1. With an unlimited crawler, it does work correctly. So the issue comes back to the need for an "Audit (only)" option. I want the scanner to audit only the entered URLs, without any additional crawling.

Hannah, PortSwigger Agent | Last updated: Jul 18, 2023 08:30AM UTC

Hi Andres, As you mentioned, the "Audit selected items" option is available for auditing without crawling. This does require the request to already be in Burp, for example by proxying the site through Burp, sending a request in Repeater, or auditing a previously crawled item. In order to audit an item, the Scanner needs to know the base request to send, which is why you need an existing item to audit.

Andres | Last updated: Sep 19, 2023 01:15AM UTC

Hi, The main issue of host-level checks only running on the first scan seems to be fixed in v2023.10.2 EA (it might have been fixed in an earlier version already, I'm not sure). Has there been any update to the BCheck language to improve the code I posted above that prevents the double slash? I've tried "appending path:" instead of just "path:", but it didn't prevent the double slash issue.

Michelle, PortSwigger Agent | Last updated: Sep 19, 2023 03:27PM UTC

Hi, The latest EA version (2023.10.2) is the version where the fix for the host-level checks issue was released. It will be coming to the stable release channel soon. As for the double slashes, we haven't made any changes to the language. This might be one where it's best to use a regular expression; I'll have a chat with the team and let you know.

Michelle, PortSwigger Agent | Last updated: Sep 20, 2023 08:06AM UTC

Hi, We've had a look at this, and you can achieve what you need by using a regular expression, for example:

----------
metadata:
    language: v1-beta
    name: "Test BCheck"
    description: "Test case"
    author: "Test"

run for each:
    potential_path = "metrics2"

given request then
    send request called check:
        method: "GET"
        replacing path: `{regex_replace(base.request.url.path, "/$", "")}/{potential_path}`
----------

I hope this helps. Please let me know if you have any further questions.
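The regex_replace call strips a single trailing slash before appending the suffix. For anyone wanting to verify the transformation locally, the equivalent in Python (the function name is illustrative):

```python
# The same transformation as the BCheck's regex_replace, expressed in
# Python; build_path is an illustrative helper, not a BChecks feature.
import re

def build_path(base_path: str, potential_path: str) -> str:
    # Strip a single trailing slash (if any), then append exactly one.
    return re.sub("/$", "", base_path) + "/" + potential_path

print(build_path("/api/", "metrics2"))  # /api/metrics2
print(build_path("/api", "metrics2"))   # /api/metrics2
```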

Michelle, PortSwigger Agent | Last updated: Sep 20, 2023 08:14AM UTC

Hi Sorry, the indents disappeared when I posted that, so they will need to be added back in.

Andres | Last updated: Sep 20, 2023 11:17PM UTC

Thank you Michelle, that works and is much shorter and cleaner. :) As this is a pretty common requirement in my opinion, it might still be worth changing the behavior of "appending path" to normalize the path by default, and perhaps introducing a new keyword like "appending raw path" for the cases where path normalization is not desired.

Michelle, PortSwigger Agent | Last updated: Sep 21, 2023 02:13PM UTC

Hi, I've passed on your feedback. We do have some other changes in the pipeline that we think should help with this.
