The Burp Suite User Forum was discontinued on the 1st November 2024.

Burp Suite User Forum


Analysing a token in hex format with sequencer

Roswitha | Last updated: Jun 02, 2017 01:54PM UTC

Analysis of a token in hex format that is 4 bytes in total length, for example: AB FF 81 4E. When I load a series of tokens into Sequencer, it interprets the token length as 8, which is not the case: AB is one byte, FF is one byte, and so on. How can I instruct Burp how many bytes the token consists of, and that for example "AB" is one byte and not two? Thank you in advance and kind regards, Roswitha MacLean
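(A quick illustration of the length question, with a made-up token value: Sequencer receives the token as text, so each byte arrives as two hex characters.)

```python
# The token as text, the way Sequencer captures it (illustrative value).
token_text = "ABFF814E"

# The same value decoded to raw bytes.
token_bytes = bytes.fromhex(token_text)

print(len(token_text))   # 8 hex characters
print(len(token_bytes))  # 4 bytes
```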

PortSwigger Agent | Last updated: Jun 02, 2017 02:16PM UTC

The token length is the raw token length, so it is 8 in your case. What I think you're interested in is the number of effective bits of entropy. You can see this in the bar chart that Sequencer generates when you click "Analyse now". I usually make sure to capture at least 100 tokens, and use the 1% significance level. If you require any further assistance, please send us a screenshot of the Sequencer bar chart.
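(To illustrate the idea of effective entropy, here is a toy per-position Shannon estimate over a sample of equal-length tokens. This is only a sketch with invented sample values, not Sequencer's actual battery of statistical tests.)

```python
import math
from collections import Counter

def char_entropy_bits(tokens):
    """Estimate Shannon entropy (in bits) of each character position
    across a sample of equal-length token strings."""
    length = len(tokens[0])
    per_position = []
    for i in range(length):
        counts = Counter(t[i] for t in tokens)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        per_position.append(h)
    return per_position

# Positions that never vary contribute 0 bits; a fully random
# hex character can contribute up to 4 bits.
sample = ["ABFF814E", "ABFF92C1", "ABFF03D7", "ABFF6A20"]
print(char_entropy_bits(sample))  # first four positions -> 0.0
```

A real analysis needs a much larger sample than four tokens, which is why at least 100 captures are recommended above.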

Burp User | Last updated: Jun 06, 2017 03:47PM UTC

Yes, that is correct. I am interested in the number of effective bits of entropy. How would the data be loaded into the analyzer? In binary or in hex? Since my seed data is originally in hex, would I need to convert it from hex format to binary before analysing the data? Or would I use the original format, i.e. 0xAA 0xFF 0x81 0x4E, when loading the data to indicate that it is in hexadecimal format? I tried both the hex format and the same data converted to binary before loading it into Sequencer, and received slightly different results. I am trying to investigate the reason for that, whilst being unsure whether a conversion to binary is necessary before analysis.

PortSwigger Agent | Last updated: Jun 07, 2017 07:05AM UTC

You don't need to convert the data. Sequencer uses character-level and bit-level analysis to estimate the entropy in the token, whatever form it is in. It doesn't surprise me that you get slightly different results, but they should be close, as long as you use enough requests - at least 100, and 1000 is better. If they are significantly different, please send a screenshot of the bar chart for both the hex and binary input, and we'll take a look.
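(A toy estimate, with invented token values, of why the two inputs can score slightly differently: as hex text the alphabet is 16 symbols per character, while as raw bytes it is 256 symbols per byte, so the same sample is sliced at different granularity. Sequencer's internal tests are more sophisticated than this sketch.)

```python
import math
from collections import Counter

def shannon_bits_per_symbol(stream):
    """Shannon entropy estimate, in bits per symbol, of a symbol stream."""
    counts = Counter(stream)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Illustrative tokens (made up for this sketch).
hex_tokens = ["ABFF814E", "1C02D9A0", "77E4B35F"]

# Character-level view: 16 possible symbols, at most 4 bits each.
hex_stream = "".join(hex_tokens)
bits_per_token_hex = shannon_bits_per_symbol(hex_stream) * len(hex_tokens[0])

# Byte-level view of the same data: 256 possible symbols, at most 8 bits each.
byte_stream = b"".join(bytes.fromhex(t) for t in hex_tokens)
bytes_per_token = len(byte_stream) // len(hex_tokens)
bits_per_token_bytes = shannon_bits_per_symbol(byte_stream) * bytes_per_token

# The two estimates are usually close but rarely identical, especially
# with small samples, because each view buckets the data differently.
print(bits_per_token_hex, bits_per_token_bytes)
```

This is consistent with the advice above: with enough captured tokens, both views should converge on similar entropy estimates.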

Burp User | Last updated: Jun 07, 2017 09:12AM UTC