Burp Suite User Forum

Requests showing -1 status and response length

Hardik | Last updated: Jul 20, 2020 06:12PM UTC

Requests are not returning any response after executing. When I installed Logger++, it shows the requests with a -1 status and response length.

Liam, PortSwigger Agent | Last updated: Jul 21, 2020 12:15PM UTC

Is the application accessible in your browser? Which Burp tool is sending the requests? Do you see any error messages in the Event log?

Hardik | Last updated: Jul 22, 2020 05:11PM UTC

This usually happens when I set the concurrent request limit to a higher value.

Michelle, PortSwigger Agent | Last updated: Jul 23, 2020 09:23AM UTC

If you increase the concurrent request limit, that can cause some sites to stop responding to all requests, as the systems protecting them see the traffic as an attack. For those sites, it is necessary to throttle requests so that they keep responding to everything the Scanner sends. Please let us know if you need any further assistance.
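Purely to illustrate the idea of throttling (in Burp itself this pacing is configured in the scan settings, not in user code), here is a minimal, generic Java sketch that caps concurrency and adds a fixed delay between requests. The URL, delay, and limits are placeholders, not Burp's internals:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class ThrottledRequests {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(2);  // small worker pool
            Semaphore inFlight = new Semaphore(2);                   // at most 2 concurrent requests

            for (int i = 0; i < 10; i++) {
                inFlight.acquire();
                pool.submit(() -> {
                    try {
                        HttpRequest req = HttpRequest.newBuilder(URI.create("https://example.com/")).build();
                        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
                        System.out.println("status: " + resp.statusCode());
                    } catch (Exception e) {
                        System.out.println("request failed: " + e.getMessage());
                    } finally {
                        inFlight.release();
                    }
                });
                Thread.sleep(500);  // fixed delay between request submissions
            }
            pool.shutdown();
        }
    }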

Ilguiz | Last updated: Sep 22, 2020 07:39AM UTC

Nope, Intruder works fine, sending thousands of requests in a breeze. A scan, however, stops at 10 or 25 requests no matter how many threads, delays, or reduced analysis options I allocate. I disabled all extensions. I restart Burp on every configuration change, and I delete and re-create the "audit selected items" scan task against a single unauthenticated request. The "Estimating time remaining..." message becomes dangerously annoying as it never progresses; I wish there were more progress reporting than that. Update: after a few minutes my scan progressed from 25 to 80 requests, then stopped again. Is this related to using Burp Collaborator? Is there a way to turn its usage off?

Ilguiz | Last updated: Sep 22, 2020 07:58AM UTC

Downgrading to 2020.5.1 worked (I could get past 80 requests). With 2020.6 and later, the scan stopped at the first 25 requests.

Michelle, PortSwigger Agent | Last updated: Sep 22, 2020 09:48AM UTC

Hi, can you tell us a bit more about the site/request you are trying to scan when you see these issues, please? Is the site still responsive when this happens? Does this happen with all sites/requests, or just this one? You mentioned you start the scan on an unauthenticated request; does the site also have some requests that are authenticated? If so, does the same issue occur on the authenticated requests? If you have the Logger++ extension installed so you can see the requests being sent by the Scanner, does the scan pause at the same stage each time? If you're happy to share some details of the scan but would prefer to share them directly, you can email them to support@portswigger.net.

Ilguiz | Last updated: Sep 24, 2020 12:28AM UTC

We observed some instability with my macOS client's hostname resolver using a curl built by Homebrew. The "host" utility works flawlessly and quickly, but curl throws a name resolution error after --max-time 10 every few tries. Perhaps Java relies on the same resolver mechanism on macOS? The other, unrelated, issue occurs even in Burp 2020.5.1: proxying a UI service serving static JavaScript chunks starts spinning on one of the two chunks, raising CPU consumption to 400%. This happens with all tasks, including the passive audit, removed.
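To check whether the JVM's resolver shows the same intermittent failures as the Homebrew curl, a small self-contained test along these lines could help; the hostname and iteration count are placeholders:

    import java.net.InetAddress;

    public class ResolverCheck {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "example.com";  // placeholder hostname
            // Note: the JVM caches successful lookups by default, which can mask intermittent failures.
            for (int i = 0; i < 20; i++) {
                long start = System.nanoTime();
                try {
                    InetAddress[] addrs = InetAddress.getAllByName(host);
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    System.out.println("try " + i + ": resolved to " + addrs[0].getHostAddress() + " in " + ms + " ms");
                } catch (Exception e) {
                    System.out.println("try " + i + ": resolution failed: " + e);
                }
                Thread.sleep(1000);  // pause between attempts
            }
        }
    }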

Michelle, PortSwigger Agent | Last updated: Sep 24, 2020 11:23AM UTC

Could you send a separate email to support@portswigger.net for each of these two issues so we can look into them in more detail with you, please?

For the issue with scanning, can you let us know the following:
- Whether you are using the installed version of Burp or launching the jar file from the CLI. If you are using the jar file, which version of Java do you use?
- Does this happen with just one site or with different ones?
- If you add the hostname under Project options -> Connections -> Hostname resolution, do you see the same issues?

For the issue with proxying a UI service, can you let us know these details:
- The version of Burp used, the OS, and the machine specifications
- Whether the default tasks were turned off before you started to proxy
- Whether this happens with one specific site
- Whether you are able to identify specific requests which cause the CPU to spike, and if so, whether you are happy to share details of these with us

Ilguiz | Last updated: Oct 28, 2020 05:32PM UTC

After some back-and-forth, I tend to think this was due to my employer routing outgoing HTTP requests via Zscaler. The latter causes 307 redirects from time to time, especially when request headers change. The Zscaler man-in-the-middle service (whose CA certificate is deployed to employees' laptops and must also be injected into every JRE's cacerts store) is aimed at human browser users, who would see a stern warning when trying to access anything not work-related. The Zscaler service showed some unexplained 30-second pauses even before generating the 307 redirect. I am not sure whether disabling the "follow redirects" checkbox in the scan configuration would bypass the infinite wait in Burp scans; that would need extra testing with the checkbox disabled and the latest Burp version. If my theory that Zscaler causes the infinite wait in Burp is right, there would be little recourse for properly pentesting our applications, with or without bypassing the infinite waits.
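As a side note on the cacerts point: one way to confirm from Java that the corporate CA actually made it into a given JRE's truststore is to load the cacerts keystore and list its entries. A minimal sketch, assuming the default truststore location and the default "changeit" password:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import java.util.Enumeration;

    public class ListTruststore {
        public static void main(String[] args) throws Exception {
            // Default JRE truststore; adjust the path if javax.net.ssl.trustStore is overridden.
            String path = System.getProperty("java.home") + "/lib/security/cacerts";
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(path)) {
                ks.load(in, "changeit".toCharArray());  // default cacerts password
            }
            Enumeration<String> aliases = ks.aliases();
            while (aliases.hasMoreElements()) {
                String alias = aliases.nextElement();
                Certificate cert = ks.getCertificate(alias);
                if (cert instanceof X509Certificate) {
                    System.out.println(alias + " -> " + ((X509Certificate) cert).getSubjectX500Principal());
                }
            }
        }
    }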

Ilguiz | Last updated: Nov 02, 2020 05:53PM UTC

Burp may also wait indefinitely (until the targeted endpoint's timeout) if I am using Token Extractor. If the extension substitutes an element of a request body, and the element's length changes, the extension's failure to update Content-Length results in under- or over-filling the HTTP message body. https://github.com/NetSPI/BurpExtractor/issues/6
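For reference, the bookkeeping that goes missing here is small: whenever a body substitution changes the payload length, Content-Length has to be recomputed from the new body before the request is sent. A minimal standalone sketch of that idea in plain Java, operating on a raw request string (not the extension's actual code):

    import java.nio.charset.StandardCharsets;

    public class FixContentLength {
        // Replace a token in the body and recompute Content-Length to match the new body.
        static String substituteToken(String rawRequest, String oldToken, String newToken) {
            int split = rawRequest.indexOf("\r\n\r\n");
            String headers = rawRequest.substring(0, split);
            String body = rawRequest.substring(split + 4).replace(oldToken, newToken);
            int newLength = body.getBytes(StandardCharsets.UTF_8).length;
            headers = headers.replaceAll("(?mi)^Content-Length: \\d+$", "Content-Length: " + newLength);
            return headers + "\r\n\r\n" + body;
        }

        public static void main(String[] args) {
            String request = "POST /api HTTP/1.1\r\n"
                    + "Host: example.com\r\n"
                    + "Content-Length: 17\r\n"
                    + "\r\n"
                    + "token=OLD_VALUE_1";
            System.out.println(substituteToken(request, "OLD_VALUE_1", "A_MUCH_LONGER_NEW_VALUE"));
        }
    }

In a Burp extension, the legacy Extender API's IExtensionHelpers.buildHttpMessage(headers, body) rebuilds a message and sets Content-Length from the supplied body, which should avoid this kind of mismatch.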

Ilguiz | Last updated: Dec 26, 2020 09:29PM UTC

The token extractor appears correlated. If I do not enable ("turn on") the extraction/substitution rules before starting the scan, and only enable them once the scan has started, I can see the scan proceeding well. This makes me think that the token extractor breaks the agreement between the Content-Length header and the actual content length if enabled before the scan (probably by substituting an empty value), and that this deficiency can be worked around by letting it find new tokens first and only then enabling substitution.

Michelle, PortSwigger Agent | Last updated: Jan 04, 2021 12:08PM UTC