Burp community forum

Estimating time taken for Application security testing

Karthik | Last updated: Aug 17, 2015 01:17PM UTC

Though this isn't strictly related to Burp Suite, I thought of posting here so that someone could share their thoughts. I would like to do some kind of estimation of the time taken to scan a website using Burp Suite. I will be testing websites against the OWASP Top 10. From Burp Suite, we can identify the number of static/dynamic URLs, and the total and unique number of parameters in a website. The number of insertion points, and the tests selected under active and passive scan, will also contribute to the time taken. With that said, what are all the factors we can include to arrive at a time estimate for performing a security assessment? Since the estimation has to be done before we start testing, and the number of URLs/parameters in a website will only be known at later stages, is there any way that we can do the estimation? I would like to do this estimation to convince my client about the time needed for the assessment. For example, if my client asks me to perform an assessment of 10 websites in 'n' days, I should be in a position to tell them, with proof/estimation, that it will take 'X' time. Could someone share their thoughts?

PortSwigger Agent | Last updated: Aug 17, 2015 01:27PM UTC

This is a good question. In case you aren't aware, if you map all of the application's content in Burp (either by manually browsing or through automated spidering), you can use the Target Analyzer function (under Engagement tools on the site map context menu) to get the number of static and dynamic pages, and numbers of parameters. Another key factor in a manual test will be the nature and complexity of the business logic flows within the application, and the extent of access controls (number of functions to test, number of different privilege types and levels).
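(Editor's note: as an illustration of how the Target Analyzer numbers could feed a time estimate, here is a rough back-of-envelope model. The inputs mirror what Target Analyzer reports; the weights such as checks per insertion point, requests per check, and average response time are purely illustrative assumptions, not figures from Burp.)

```python
def estimate_scan_hours(unique_params,
                        checks_per_insertion=30,   # assumed number of active checks
                        requests_per_check=5,      # assumed requests issued per check
                        seconds_per_request=0.5):  # assumed average response time
    """Crude lower bound on automated scan time, treating each unique
    parameter as one insertion point (real counts usually differ)."""
    requests = unique_params * checks_per_insertion * requests_per_check
    return requests * seconds_per_request / 3600.0

# Example: a site where Target Analyzer reports 1200 unique parameters
hours = estimate_scan_hours(unique_params=1200)
print(round(hours, 1))  # prints 25.0
```

A model like this only covers automated scanning; the manual effort for logic flows and access controls would be estimated separately.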

Burp User | Last updated: Aug 18, 2015 05:03AM UTC

Hi Dafydd, thanks for the answer. I am aware of the functionality you mentioned for getting static and dynamic pages/parameters. The parameters shown there are obtained after spidering/crawling the website. Often, the number of parameters displayed after spidering/crawling is different from the 'insertion points' that we see under the Scanner. If I am right, scan time is proportional to the insertion points, and there is no direct relation between the parameters obtained after spidering and the insertion points determined during the scanning process. Regarding the second point, the nature/complexity of the logic flow: if I am right, this is a qualitative factor and not quantifiable. Again, even when we are aware of the number of functions and privileges, this would break down to the number of parameters/insertion points that we are going to test. Please correct me if I am wrong. Is there any kind of documentation available for this?

PortSwigger Agent | Last updated: Aug 18, 2015 08:29AM UTC

Insertion points are created for various features of a scanned request, based on your insertion point configuration. There is normally a broad correlation between the number of request parameters and the number of insertion points, but these can diverge in various cases: for example, if a parameter value contains a nested data structure into which Burp will place additional insertion points. Aspects like business logic flows and access controls are normally much less correlated with the literal numbers of URLs and parameters. The effort required can still be quantified, and this is normally done as part of preparing a client proposal.
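(Editor's note: to illustrate why insertion points can exceed the raw parameter count, here is a simplified sketch. It is not Burp's actual algorithm; it just shows how a single parameter whose value is a JSON-encoded structure can contribute one insertion point per nested leaf value.)

```python
import json

def count_leaves(node):
    """Count the leaf values inside a parsed JSON structure."""
    if isinstance(node, dict):
        return sum(count_leaves(v) for v in node.values())
    if isinstance(node, list):
        return sum(count_leaves(v) for v in node)
    return 1

def count_insertion_points(params):
    """One insertion point per parameter value, plus one per leaf
    inside any value that parses as a nested JSON structure."""
    total = 0
    for value in params.values():
        total += 1  # the parameter value itself
        try:
            total += count_leaves(json.loads(value))
        except (ValueError, TypeError):
            pass  # plain scalar value, no nested structure
    return total

# Two request parameters, but the second carries a nested structure
params = {"q": "shoes", "filter": '{"color": "red", "size": {"min": 7, "max": 10}}'}
print(count_insertion_points(params))  # prints 5
```

Under this toy model, a crawl would report 2 parameters, while the scanner would see 5 insertion points, which is the kind of divergence described above.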

Burp User | Last updated: Aug 24, 2015 07:49AM UTC

Thanks Dafydd for your points. Based on your response, we have to consider business logic flows and access controls. Let's take a business logic flow: making a purchase on an e-commerce site. A standard flow for making a purchase involves: (1) browsing the website/catalog for products; (2) adding the product to the shopping cart; (3) filling in details like name, address, contact number and other related details; (4) making the payment and getting confirmation. However, the number of parameters involved in achieving the flow (making a purchase) might differ from one website to another, and other functionality (like adding a discount code, or a message to the shipper) may or may not be present on every website. So, in short, the same functionality, making a purchase, will be handled differently by each website. With that said, will a time estimate made for this application logic on one website be the same for a different website that also has order processing? For example, will the time taken to test the "making a purchase" functionality on eBay be the same as for Amazon?

PortSwigger Agent | Last updated: Aug 24, 2015 10:55AM UTC

Estimating the effort required to test functionality like this normally involves an informed review of the application's functionality, and a judgement based on your experience of testing similar applications in the past. Different features of different applications will often lead to different assessments of the required effort, regardless of numbers of URLs or parameters.
