Exploring the Burp Suite API


With the release of Burp Suite Professional 2.0 came the addition of a REST API. This post will show how to interact with the API in a browser, as well as introduce a Python tool I wrote, burp_scanwalker.py, that utilizes the API to automate active scans.

Setting up the API

To configure the API, navigate to User options → Misc → REST API, then select the checkbox to start the service.

By default, the service runs on http://127.0.0.1:1337, and requires you to generate an API key to use it. To generate an API key, select New, give the key a name, copy the key to the clipboard, and click OK. Note that the API key is only shown on initial generation, so if you don't copy it to the clipboard or you lose it, you'll have to delete the key and generate a new one (which really isn't a big deal).

Once you have the API key, you can start using the API. The API is self-documenting, so to understand how to use it you can just browse to the documentation located at http://127.0.0.1:1337/<your API key>/v0.1/ (omit the API key if you checked the box to allow access without a key).
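
If you'd rather poke at the API from a script than a browser, the same check is only a few lines with the requests library. This is just a sketch; the key and port below are placeholders for whatever you configured:

import requests

API_KEY = "YOUR_API_KEY"  # placeholder for the key generated above
BASE_URL = "http://127.0.0.1:1337/{}/v0.1/".format(API_KEY)

# Requesting the base URL returns the self-documenting API page;
# anything other than a 200 usually means a bad key or URL.
resp = requests.get(BASE_URL)
print(resp.status_code)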

Endpoints

As mentioned on the PortSwigger blog, the initial release of this API supports launching vulnerability scans and obtaining the results, so there are currently three endpoints that we can interact with. To see what the requests should look like, simply click on the row, which brings up a modal allowing you to configure the required parameters, see the request as it would be made via curl, and send the request:

If you choose to send the request, the results will appear as well:

The request and response above simply retrieve the scan issue definitions, including the issue ID, name, and description. This is the same information you can find in the GUI under Target → Issue definitions.
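
As a quick sketch, pulling those same definitions from Python might look like the following. I'm assuming the knowledge_base/issue_definitions endpoint and the field names shown in the API documentation, so treat the exact names as placeholders:

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "http://127.0.0.1:1337/{}/v0.1".format(API_KEY)

# Retrieve every issue definition (ID, name, description, etc.)
resp = requests.get(BASE_URL + "/knowledge_base/issue_definitions")
if resp.ok:
    for issue in resp.json():
        # Field names are taken from the API docs; adjust if your version differs.
        print(issue.get("issue_type_id"), issue.get("name"))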

The Scan Endpoint

Perhaps a more interesting endpoint is the /scan/ endpoint, where we can programmatically start Burp Active scans. To list the required parameters, click the highlighted word ‘Scan’ in the Parameters column. This will bring up the following modal:

A URL is required, and there are multiple options that you can specify with additional parameters. To see exactly how these parameters are used, click anywhere on the row to configure the parameters.

URLs and Scope

While only a URL is required, adding a scope is a good idea, since the scan will also crawl the site. If you do not set a scope, the crawler may start parsing external links and scanning sites you don't have authorization to scan. To stay within the scope of the URL I want to scan, I can set a simple scope definition, using the same URL that I am scanning as my scope rule. In this example, I'll scan an application on my local network, 192.168.126.131/dvwa/:

You can get more granular by choosing AdvancedScopeDef instead of SimpleScopeDef, which allows a regex to define the scope, and you can also exclude hosts from the scope. The strings used for the scope options are equivalent to what you can configure in the GUI under Target → Scope → Target Scope.
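
Putting that together, a scan request with a simple scope might look like the sketch below. The field names (urls, scope, SimpleScopeDef, include/rule) follow the API modal, but treat the exact structure as an assumption and compare it against the documentation served by your own instance:

import requests

API_KEY = "YOUR_API_KEY"
SCAN_URL = "http://127.0.0.1:1337/{}/v0.1/scan".format(API_KEY)

# Scan the DVWA instance and keep the crawler inside that URL.
data = {
    "urls": ["http://192.168.126.131/dvwa/"],
    "scope": {
        "type": "SimpleScopeDef",
        "include": [{"rule": "http://192.168.126.131/dvwa/"}],
        # An "exclude" list can be added here to keep hosts out of scope,
        # and an AdvancedScopeDef uses regex-based rules instead.
    },
}

resp = requests.post(SCAN_URL, json=data)
# A 201 means the scan started; the task_id comes back in the Location header.
print(resp.status_code, resp.headers.get("Location"))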

Application Logins

Another new feature in Burp 2.0 is the ability to configure multiple application logins for the scanner. This functionality is also included in the API, and it is very simple – just enter a username and a password:

The scanner seems pretty intelligent about submitting the correct parameters for form-based authentication; however, I noticed that this application login feature does not work with header-based authentication, such as Basic Auth. If you want to use Basic Auth or another header authentication scheme, I recommend starting another instance of Burp, having it listen on a different port, and using that new instance as an upstream proxy. You can then configure the upstream proxy to perform match and replace on the headers, or use platform authentication if you want to do something like NTLMv2.
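
In the request payload, the logins are just another field. A sketch, assuming the application_logins field and the username/password keys shown in the modal:

data = {
    "urls": ["http://192.168.126.131/dvwa/"],
    "scope": {
        "type": "SimpleScopeDef",
        "include": [{"rule": "http://192.168.126.131/dvwa/"}],
    },
    # Credentials the crawler will submit to login forms it discovers.
    "application_logins": [
        {"username": "admin", "password": "password"}
    ],
}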

Scan Configurations

Burp now allows you to customize and save scan configurations, then specify configurations to use during a scan. To view the available configurations, go to Burp → Configuration library:

In the API, you specify the name of the configuration:
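
For example, a payload fragment referencing a saved configuration might look like this. I'm assuming the scan_configurations field and NamedConfiguration type from the API modal; the name just has to match an entry in your configuration library exactly:

data = {
    "urls": ["http://192.168.126.131/dvwa/"],
    # Reference a saved entry from Burp → Configuration library by name.
    "scan_configurations": [
        {"type": "NamedConfiguration", "name": "Audit checks - light active"}
    ],
}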

Resource Pools

We now have the option to create resource pools, which control how many concurrent requests are made as well as the delay between requests. The API defaults to the Default resource pool; however, you can create a new pool by navigating to Dashboard → New scan → Resource pool.

Here, I’ve created a new pool, Throttled Pool, that I can then specify by name in the API.
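
In the payload, that appears to be just the pool name as a string (an assumption based on the modal; the name must match the pool created in the GUI):

data = {
    "urls": ["http://192.168.126.131/dvwa/"],
    # Use the custom resource pool created under Dashboard → New scan → Resource pool.
    "resource_pool": "Throttled Pool",
}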

Scan Callback

Lastly, you can configure a scan callback URL, which will be sent information about the scan. For example's sake, I set up a Netcat listener and configured it as the callback URL.
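
In the scan request, the callback is one more field in the payload. A sketch, assuming the scan_callback field from the modal and a placeholder port for my Netcat listener:

data = {
    "urls": ["http://192.168.126.131/dvwa/"],
    # Burp will POST progress and results for this scan to the listener below.
    "scan_callback": {"url": "http://127.0.0.1:9999"},
}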

When I start the scan the callback URL starts receiving traffic:

Assigning a callback URL is not the only way to obtain the scan results. Instead, you can use the /scan/ endpoint and specify the task_id of the scan. To find the task_id of an item, you can: 1) look at the Burp Dashboard tab; 2) note the value of the Location header that is returned when you start the scan; or 3) configure a callback URL and note the task_id in the body of the request.

Dashboard tab:

Location header:

Callback:

You can query this endpoint while the scan is running and check the scan_status field to track the progress of the scan. Querying a task_id not only returns the status, but also includes the scan metadata (requests made, errors, events, etc.) as well as the detailed issues that were discovered.

This can be pretty cumbersome depending on the number of issues detected for a particular task, so you can also pass parameters that work like filters to help you gather only the information you need. The after parameter takes an issue event ID and only returns issues that occurred after it. For example, if your scan has ten issues numbered 1-10, specifying 5 for the after parameter would return only issues 6-10. The issue_events parameter takes another number that further limits how many issues come back. For example, if you wanted to see issues 6-8, you would use http://127.0.0.1:1337/<api-key>/v0.1/scan/<task_id>?after=5&issue_events=3.
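
Here's a small sketch of polling a task with those filters from Python. The response field names (scan_status, issue_events, issue) follow what's described above; treat the exact structure as an assumption:

import requests

API_KEY = "YOUR_API_KEY"
TASK_ID = "1"  # taken from the Dashboard, the Location header, or the callback body
BASE_URL = "http://127.0.0.1:1337/{}/v0.1".format(API_KEY)

# Only return issue events after event ID 5, and at most 3 of them (i.e. issues 6-8).
resp = requests.get(BASE_URL + "/scan/" + TASK_ID,
                    params={"after": 5, "issue_events": 3})
status = resp.json()
print(status.get("scan_status"))  # e.g. crawling, auditing, succeeded
for event in status.get("issue_events", []):
    print(event.get("issue", {}).get("name"))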

Interaction Using Python

I decided to write a tool that leverages the API to scan a list of hosts. The tool, burp_scanwalker.py, is available on my GitHub, and lets you supply a single URL, multiple URLs read from a file, or an IP range (which is expanded into multiple URLs that are checked for web services on multiple ports), all of which are then scanned with Burp.

The majority of the code deals with command-line options, parsing and normalizing URLs, and threading; however, I'd like to briefly go over the code that tests the connection to the API and kicks off a scan request.

Testing connection to the API

This function accepts the URL of the API, which should include the API key. It then uses the requests library to make a GET request to the API URL, storing the response in the resp variable. I set verify=False on the assumption that the API might also be available over HTTPS (it is not), so the flag isn't strictly necessary. The response is checked to make sure a 200 OK was received (if resp.ok:), in which case the function returns True. Any other response code indicates an issue with the API key or URL, while an exception likely indicates a connection error of some sort.
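
The actual code is on GitHub; as a rough sketch of what this function does (not a verbatim copy), it looks something like this:

import requests

def test_connection(api_url):
    """Return True if the Burp REST API responds with 200 OK at api_url."""
    try:
        # verify=False was included in case the API were served over HTTPS;
        # it currently isn't, so the flag is harmless but unnecessary.
        resp = requests.get(api_url, verify=False)
        if resp.ok:
            return True
        # Any other status code usually means a bad API key or URL.
        return False
    except requests.exceptions.RequestException:
        # A connection error of some sort (refused, timeout, etc.).
        return False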

Starting a Scan

Starting a scan is simple as well. This function accepts the API URL and the URL that should be scanned. The connection to the API is tested first. If the API is available, the payload of the request is populated in the data variable. This script doesn't dive deep into the options that are configurable via the API; instead it assigns the URL passed to the function as the URL to be scanned and sets the scope to that same URL. The API expects the payload to be JSON, so when the request is made, the data variable is passed as JSON (json=data). If the response code is 201, the API successfully started a scan and returns the task_id in the Location header of the HTTP response. I store that value in the scan_id variable, which is then returned and printed to the terminal.
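
The source on GitHub has the authoritative version; here is a rough sketch of the function (the payload fields and URL joining are simplified assumptions, not a verbatim copy of the script):

import requests

def start_scan(api_url, target_url):
    """Start a Burp active scan of target_url and return its task_id, or None."""
    # test_connection is the helper sketched in the previous section.
    if not test_connection(api_url):
        return None
    # Minimal payload: scan the target and keep the crawl scoped to it.
    data = {
        "urls": [target_url],
        "scope": {
            "type": "SimpleScopeDef",
            "include": [{"rule": target_url}],
        },
    }
    resp = requests.post(api_url + "scan", json=data, verify=False)
    if resp.status_code == 201:
        # The task_id is returned in the Location header.
        scan_id = resp.headers.get("Location")
        print("[+] Scan started for {} (task_id: {})".format(target_url, scan_id))
        return scan_id
    return None

This is what it looks like when it runs: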

On the left is a terminal window, which prints the task_id of each scan. On the right is Burp, showing the scans with the corresponding task_id on the Dashboard tab.

You can use the -h switch to see all of the options, but here are some usage instructions to get you started with using the tool:

Scan a single URL

python3 burp_scanwalker.py -u http://example.com -k <api_key>

Scan URLs in a file

python3 burp_scanwalker.py -uf urls.txt -k <api_key>

Scan a range of IP addresses

python3 burp_scanwalker.py -r 192.168.0.0/24 -k <api_key>

or

python3 burp_scanwalker.py -r 192.168.0.0-255 -k <api_key>

Scan URLs in a file and specify a proxy (just for the API call, not the whole scan)

python3 burp_scanwalker.py -uf urls.txt -k <api_key> -pr 127.0.0.1:8080

Feel free to look at the code to see how all of the functions are tied together. This script works in Python 3 and 2.7, and I plan on updating it occasionally to allow the user more control over the scan, such as using a callback URL or a scan configuration.

Closing Thoughts

The addition of the API to Burp is very cool, and hopefully they will continue to update the functionality so that more of Burp can be controlled via an API. That being said, I would definitely like to see HTTPS as an option in the future.
