Burp Suite User Forum

Burp scanner paused for unknown reasons

floyd | Last updated: Sep 27, 2018 11:55AM UTC

In Burp 2.0.07beta, the crawl & scan can sometimes pause. The message in the Dashboard reads: "Paused due to error: X consecutive audit items have failed.", where X is a number (10 by default the first time it occurs). I'm testing a regular WordPress website that doesn't use cookies at all and has no login mechanism. When that error message shows up in the log, I would like to know what went wrong. So here is what I did:

1. I checked the web page (from the same IP/location) and the website works fine.
2. The website also behaves exactly as before.
3. When I force a resume of the task, the scanner only sends a single GET request to the web root (/) and the server responds nicely with the correct start page, as always. The scanner then pauses again and prints the same message (consecutive audit items failed) in the Event log.
4. I checked with Wireshark and there seems to be no connection reset when I resume the scanner (there might have been one at some point, but there is for sure none when I resume the scanner).

My problems:

1. I don't know what is wrong with the scanner. Does it open another TLS connection and get a connection reset? According to Wireshark, no. What is the reason Burp stops?
2. The Custom Logger plugin doesn't show me anything more than that one request.
3. I miss the old "Alerts" tab.
4. Burp doesn't talk to me. Is it possible to get more detailed logs/feedback?

I would really appreciate some options to get more insight into the crawl/scanner feature, what its reasoning is and why it stopped. Usually when I look at the requests it is sending it looks super weird, and it seems to do the entire job multiple times (for the last crawl I did, it sent every request four times).
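
A minimal sketch of the kind of out-of-band check described in steps 1-3, assuming a reachable HTTPS target (the hostname below is a placeholder): replay GET / outside Burp and look at the status code and any redirect.

```python
# Sketch of the manual sanity check described above: replay GET / outside
# Burp and confirm the server still answers. The hostname is a placeholder.
import requests  # third-party: pip install requests

TARGET = "https://example-wordpress-site.test/"  # hypothetical target

resp = requests.get(TARGET, timeout=10, allow_redirects=False)
print(resp.status_code, resp.headers.get("Location", ""), len(resp.content))
```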

Liam, PortSwigger Agent | Last updated: Sep 27, 2018 12:16PM UTC

Thanks for your message, Floyd.

1. That error message indicates that, when auditing the application, Burp was unable to get a response from the server for 10 consecutive requests (at which point it considers the scan a failure). From what you have indicated in your message it sounds like the website is still accessible, but did you check this during the scan? It might be that the application struggled with the amount of traffic the scanner sent. If that is the case, you may be able to use the Resource Pool settings to throttle the scanner: https://portswigger.net/burp/documentation/desktop/dashboard/task-execution-settings. Additionally, you can increase the number of errors before Burp pauses: https://portswigger.net/blog/handling-application-errors-during-scans
2. Try using Logger++. It's also worth noting that we plan to make logging a core Burp feature.
3. The new logging feature should enhance how we are able to deal with alerts. The Event log on the Dashboard should provide everything the old Alerts tab did.
4. For now the best option is to use Logger++. We do have additional logging features planned in our development backlog, but unfortunately we can't provide an ETA.

Please let us know if you need any further assistance.
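
Until logging became a core feature, another way to see exactly what the scanner sends was a few lines of Jython against the legacy Extender API, which was current at the time of this thread. The sketch below is illustrative only (the extension name is arbitrary, and it is not an official PortSwigger extension); it simply prints the URL of every Scanner request to the extension's output pane.

```python
# Jython sketch for Burp's legacy Extender API: log the URL of every request
# issued by the Scanner tool. Illustrative only, not an official extension.
from burp import IBurpExtender, IBurpExtenderCallbacks, IHttpListener

class BurpExtender(IBurpExtender, IHttpListener):
    def registerExtenderCallbacks(self, callbacks):
        self._callbacks = callbacks
        self._helpers = callbacks.getHelpers()
        callbacks.setExtensionName("Scanner request logger (sketch)")
        callbacks.registerHttpListener(self)

    def processHttpMessage(self, toolFlag, messageIsRequest, messageInfo):
        # Only log outgoing requests made by the crawl/audit engine.
        if messageIsRequest and toolFlag == IBurpExtenderCallbacks.TOOL_SCANNER:
            info = self._helpers.analyzeRequest(messageInfo)
            self._callbacks.printOutput(str(info.getUrl()))
```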

Burp User | Last updated: Sep 27, 2018 02:12PM UTC

Hi Liam, thanks for the quick response.

1. Yes, the website was available during the scan. I wonder which 10 requests failed. I know I can change these settings, and I did, but that doesn't help.
2. Hehe, I used Logger++ first and then thought I'd use PortSwigger's "Custom Logger" extension as well; both show nothing more than that one GET request, which seems to indicate that everything is fine with the server but that the Burp scanner is misbehaving.
3. Ok, good to know the Alerts warnings are shown in the Event log.
4. I did. Ok, no problem, it's just that I can't provide a more detailed error report than what I already have.

cheers, floyd

PortSwigger Agent | Last updated: Sep 28, 2018 09:15AM UTC

Hi Floyd, you can capture the incomplete requests by using Flow instead of Logger++ and enabling "Include incomplete requests" in its configuration. There are also configurations for "Never stop crawl due to application errors" and "Never stop audit due to application errors", which should get you working again. Longer term, I think we need to make the pausing a bit less sensitive. Please let us know if you need any further assistance.

Burp User | Last updated: Oct 01, 2018 07:04AM UTC

Hi Paul, I tried Flow now as well, but I don't get any more information than before. The scanner just seems to send GET requests to / and the response is the normal/valid response; then the scanner stops. I think there is nothing I can do here other than wait for logging to become part of the core feature set. cheers, floyd

PortSwigger Agent | Last updated: Oct 01, 2018 07:07AM UTC

Hi Floyd, did you enable "Include incomplete requests" in Flow? There's an options button next to the filter bar that includes this. You may also want to use the configuration "Never stop crawl due to application errors" or "Never stop audit due to application errors". Yes, by default the crawler does everything four times; this helps it determine which changes are due to actions we've performed during the crawl and which are due to something external. It also restarts from the entry point frequently. There is quite a lot of complexity involved in dealing with stateful apps. People here have worked really hard on it, and it's really cool, although I hope we've not made it too complex. Please let us know if you need any further assistance.
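
A toy illustration of that repeat-and-compare idea (not Burp's actual crawl algorithm; the URL is a placeholder): fetch the same page several times and check whether the response varies on its own, which is the kind of baseline a crawler needs before attributing changes to its own actions.

```python
# Toy illustration of "repeat and compare": fetch the same URL several times
# and report whether the response is stable or varies by itself.
import requests  # third-party: pip install requests

URL = "https://example-wordpress-site.test/"  # hypothetical target

bodies = [requests.get(URL, timeout=10).text for _ in range(4)]

if all(body == bodies[0] for body in bodies):
    print("Response is stable: later differences are likely caused by our own actions.")
else:
    print("Response varies on its own: treat differences seen during the crawl with care.")
```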

Burp User | Last updated: Oct 01, 2018 08:19AM UTC

Just another data point: I'm currently scanning the exact same website from the exact same IP address, but this time with Burp 1.7.37, and there is not a single alert in the Alerts window and Logger++ looks good. Just to make sure, I also tried Burp 2.0.7 at the same time, and the problem is still there.

Burp User | Last updated: Oct 01, 2018 12:46PM UTC

Yes, I enabled "Include incomplete requests", and I'm still only seeing one single GET / request. I switched to the "Never stop" configurations and it started running, at first producing 1000 errors. Then it ran a little, produced another 800 errors, and then again, this time with 2000 errors. Next it started using about 3 GB of RAM, then all of the CPU, and the UI got stuck. Then some more RAM, until it hit a "Failed to allocate memory" limit. Eventually it finished with 36'000 requests and 5'000 errors. I totally get it, the new crawler sounds wonderful. The problem is that I usually need a tool that either simply works very well (Burp 1) or that tells me what is wrong when it doesn't work (the logging feature). The new crawl/scan feature is currently a bit too much of a black box for me. I think that's it for now. Thanks for the help!

PortSwigger Agent | Last updated: Oct 01, 2018 12:53PM UTC

Hi Floyd, You are a hero! Thanks for doing that. The crawler guys are going to get on this ASAP. I can't offer you a cash bounty, but I will see about sending you some swag. Awesome work :)

Burp User | Last updated: Oct 08, 2018 04:18PM UTC

So I debugged this one for you. Your crawler sends "Accept-Language: en" during the crawl phase, but the scanner doesn't send it during the audit phase. On the website I was testing, the crawler sent a GET / and got a 302 to /en/, but the scanner sent a GET / and got a 200 OK with the default German website in it... So when the scanner starts, it thinks a 200 OK must be an error (sic!) and stops the scan.
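
The mismatch described here is easy to reproduce outside Burp. The sketch below (hostname is a placeholder) sends GET / once with the crawler-style Accept-Language header and once without it, and prints the differing responses:

```python
# Reproduce the crawler/scanner mismatch: one request with "Accept-Language: en"
# (crawler-style) and one without it (scanner-style). Placeholder hostname.
import requests  # third-party: pip install requests

URL = "https://example-wordpress-site.test/"

with_lang = requests.get(URL, headers={"Accept-Language": "en"},
                         timeout=10, allow_redirects=False)
without_lang = requests.get(URL, timeout=10, allow_redirects=False)

print("crawler-style:", with_lang.status_code, with_lang.headers.get("Location", ""))
print("scanner-style:", without_lang.status_code)
```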

Burp User | Last updated: Oct 09, 2018 08:52AM UTC

Hi Paul, thanks, it was bugging me and I just wanted to know :) . Don't worry about cash; all money-related topics in life are a nuisance. But thanks for the kind words, and I'm really looking forward to the swag. Btw, you should also look at the "Cache-Control: max-age=0" header the scanner seems to be sending but the crawler doesn't. And they also send the headers in different orders; I have only seen about two web applications where that matters (usually where the clients are not just regular browsers), but they exist (and probably do not comply with any standards). I think if you really want to detect when something error-ish occurs, the crawler and scanner need to send the exact same request to determine whether the web site has started to behave differently. Or, to put it another way, you can only expect constant responses when the exact same requests are sent. cheers, floyd
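
On the header-order point: most HTTP client libraries reorder or add headers, so a raw socket is the simplest way to check whether a particular server cares about order. The sketch below (placeholder hostname, plain HTTP for brevity; wrap the socket with ssl for HTTPS) sends two requests that differ only in header order and prints the status line of each.

```python
# Check whether header *order* changes a server's response by sending two
# otherwise identical raw requests. Placeholder hostname, plain HTTP only.
import socket

HOST, PORT = "example-wordpress-site.test", 80

def send_raw(header_lines):
    request = "GET / HTTP/1.1\r\n" + "\r\n".join(header_lines) + "\r\n\r\n"
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        sock.sendall(request.encode())
        # First line of the response is the status line.
        return sock.recv(4096).decode(errors="replace").splitlines()[0]

order_a = ["Host: " + HOST, "Accept-Language: en",
           "Cache-Control: max-age=0", "Connection: close"]
order_b = ["Host: " + HOST, "Cache-Control: max-age=0",
           "Accept-Language: en", "Connection: close"]

print("order A:", send_raw(order_a))
print("order B:", send_raw(order_b))
```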

PortSwigger Agent | Last updated: Oct 09, 2018 08:55AM UTC