Saturday, December 29, 2007

Logout via Javascript with OnBeforeUnload

One sure-fire way to protect users from CSRF attacks is to minimize the window of time that a user is logged in. Current CSRF mitigation strategies focus on adding a token to each form and link, and on timing out the session after the user has been inactive for a relatively short window of time.

However, any third-party site that can exploit a weakness in the Same Origin Policy can break through these defenses (such as the iframe SOP hack we saw in the past). In addition, the web world is moving toward technologies that allow cross-site requests on purpose, through Flash, JavaScript and other mash-up-enabling technologies.

Not all users are kind enough to explicitly press the logout link or button when they are done using your site. There are three situations that we can trap via JavaScript to force the user to log out without requiring additional action on the part of the user.

1) The user simply types in or browses to a new URL in a single-tabbed environment without explicitly logging out.
2) The user closes the tab or window without pressing the logout link or button.
3) The user switches to a new tab while remaining logged in on the previous tab.

The following code sample will allow a programmer to trap events 1) and 2) reliably in IE 6/7 and Firefox 2. It's trivial to fire off the logout event, especially if your logout server code will allow a GET request.
<html>
<body onbeforeunload="dothis();">

<script type="text/javascript">
// Fired when the page is about to unload (navigation away, or tab/window
// close). Replace the alert with a synchronous request to your logout URL.
function dothis() {
    alert('logmeout ajax event');
}
</script>

</body>
</html>
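
If your logout code does accept a GET, the server side can be as small as a session-invalidating servlet. Here's a minimal sketch; the /logout mapping and the JSESSIONID cookie name are assumptions (JSESSIONID is the servlet container default), and this is illustrative rather than production code:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class LogoutServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Invalidate the server-side session, if one exists.
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate();
        }
        // Expire the session cookie on the client as well.
        // (Cookie name assumed to be JSESSIONID, the container default.)
        Cookie cookie = new Cookie("JSESSIONID", "");
        cookie.setMaxAge(0);
        cookie.setPath("/");
        response.addCookie(cookie);
        response.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}
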
The third situation, when a user changes browser tabs, is a much more difficult event to trap, since it does not fire onbeforeunload or any similar event. It may also harm the user experience: changing tabs is not necessarily a situation where the user actually wants to log out. Nonetheless, to accomplish this task, you will need to work with the window's onblur event. However, this event is very chatty; just changing a tab will fire onblur five times in Firefox 2.0. You can play with code such as:

<script type="text/javascript">
// Guard flag so the logout only fires once, even though
// a single tab change fires onblur several times.
var logout = false;

function dothis() {
    if (logout == false) {
        // Note: alert() blocks, so queued blur events can still
        // re-enter here before the flag is set below.
        alert('logmeout ajax event');
        logout = true;
    }
}

// Wire the handler to the window's onblur event.
window.onblur = dothis;
</script>

But Firefox 2 will still fire the alert twice. You will need to test and expand upon this code for each unique browser.

Logging out via JavaScript is by no means a complete CSRF mitigation, but it is an excellent defense-in-depth measure to add to your current mitigation strategy.

Monday, December 24, 2007

12 Steps To Application Security

There are several holdouts in the industry who wish to trump the term "Application Security" with the term "Software Security." My Christmas wish is that we standardize on the term "Application Security," because I think it's a more realistic term to describe the industry that helps organizations design, develop, deploy, assess, maintain, retire and build procedures around Applications in a way that protects them from external and internal threats.

1) We must admit that we have a problem and that the security posture of our enterprise Applications is becoming unmanageable.

Let me start by saying that most code is insecure. This is not necessarily getting better, but seems to be getting worse.

Those of us who are professional programmers were never taught to write secure code in school. Even Michael Howard, who is hiring the best and brightest out of the world's top universities, clearly says that these star graduates have no idea how to write a secure application.

2) We must believe that a power greater than ourselves (Application Security Service Vendors) can help us restore sanity.

It would be effective to re-write all of the world's applications in an environment that embraces best-of-breed Application Security methodologies. But the truth is that most CIOs have several thousand applications under their responsibility that are largely insecure. It's already built. It's already in production. We low-level programmers are tasked with writing more code, faster and faster, to keep the business moving, since the business depends on our work more every day. We simply do not have the luxury of time or budget to rewrite all of those applications. So we are stuck with having to secure applications after the fact. This is a reality check for those in the industry who conjecture that "Software Security" is a better term because "Application Security" implies protection of software after it is built.

3) We must make a decision to turn our will and our lives over to Application Security excellence.

I also feel that the term "Software Security" is a dangerous position that both polarizes the industry and blames the coder. Software Security implies lines of code. Although, at the end of the day, individual lines of code need to be written using best practices (input validation, output encoding, proper access control, etc.), that is only a small part of the entire picture. Individual coders cannot solve the problem alone.

4) We must make a fearless inventory of the security posture of our current applications.

We cannot just run Fortify, Spi, Cenzic and Watchfire and be secure. We cannot prove that an application is secure by any predicate mathematical proof. So what do we do? We (at times) slap up a WAF to stop the bleeding. We bring in pen testers, conduct code reviews and run tools for the most critical apps.

5) We must admit to a higher power (our CIO), to ourselves and to other coders the exact nature of our wrongs.

We wage political warfare in our organizations to ensure that the "C" level, the project managers, the infrastructure teams, the architects and the low-level programmers are all on the same page about Application Security. Not to mention incident response. Legal issues. Risk analysis - which really has nothing to do with software, but measures the financial impact on a business.

6) We must be entirely ready to be re-trained to remove all these defects in how we develop applications.

We re-train software engineers as quickly as possible. We start growing a dedicated internal AppSec team to conduct these reviews in-house in a more cost effective way.

7) We must humbly ask our Vendor to help us remove our shortcomings.

There are so many activities around securing an application that do not involve lines of code - and do NOT involve software - that it seems myopic to me to use the term "Software Security".

8) We must make a list of all applications that are insecure, and become willing to make amends to them all.

No tool will answer the question of the state of our Application Security posture. It takes a village - and often several villages - to even achieve measurement of our current posture! Most CIOs have "no clue" where they are today in terms of Application Security excellence.

9) We make direct amends to our insecure Applications wherever possible by fixing the underlying code, except when it would harm the organization by spending too much to do so.

It is not cost effective to spend $100 to re-code an application that protects $10 worth of data. We need outside help to do proper risk analysis - and that measurement needs to combine not just engineering but also non-technical business expertise, which has little to do with Software.

10) We continue to take inventory of the security posture of our applications and when we are wrong we promptly admit and fix it.

Depending on a vendor alone will not set you free. The best-of-breed vendors encourage building AppSec teams internally - the best Vendors help accelerate your organization toward Application Security independence. Continuing education is a great deal cheaper than re-education. Internal pen-test expertise is a great deal cheaper than bringing in a service vendor. Using the right tools effectively is a great deal more cost effective than the shotgun approach of using whatever tool was sold to your CIO. The right Vendor will help you get there fast without disrupting the organization.

11) Through continued education and the study of industry best practices, we try to embrace that philosophy in all of our day-to-day engineering activities.

Once we have the knowledge, we must start building all applications with security in mind and in practice from the first few days of each application's conceptual birth.

12) Having had an awakening as a result of these steps, we carry this message to other engineers, and practice these principles in all our affairs as we build new applications.

Software implies the programs that run a computer.

Application implies a solution to a problem - in the enterprise we are talking about delivering data securely.

And I think those of us who use the term "Application Security" do so because it is not the software that we are trying to fix - it's the solution to a business need that we are trying to make more robust.

Thursday, December 20, 2007

Hash Migration Strategies

I've had several engineers ask me recently about how to migrate a very large number of users from an old unsalted MD5 hash to SHA-512.

I can think of two main strategies:

1) Rolling migration: weaker security, stronger user experience.
a) Add a new column to your USER table that will hold the 512-bit SHA-512 digest (128 characters if hex-encoded), plus a column for the per-user salt.
b) Every time a user logs in, first check to see if the SHA-512 column is empty.
c) If empty, verify the password through the old MD5 hash. If that login is successful, rehash to SHA-512 and clear the MD5 value.
d) If the SHA-512 column is not empty, verify the password via SHA-512 (preferably with per-user salts and multiple iterations of the hash). A sketch of this check follows below.
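
Here's a minimal sketch of what the rolling-migration login check might look like in Java. The User class, the save() call and the iteration count are hypothetical placeholders; adapt them to your own persistence layer and security policy.

import java.security.MessageDigest;
import java.security.SecureRandom;

// Sketch of the rolling-migration check described above.
public class PasswordMigrator {

    static class User {
        String md5Hash;      // legacy unsalted MD5 (hex), cleared after migration
        String salt;         // per-user salt (hex), null until migrated
        String sha512Hash;   // salted, iterated SHA-512 (hex), null until migrated
    }

    public boolean authenticate(User user, String password) throws Exception {
        if (user.sha512Hash == null) {
            // Legacy path: verify against the old unsalted MD5 hash.
            if (!hex(digest("MD5", password.getBytes("UTF-8"))).equals(user.md5Hash)) {
                return false;
            }
            // On success, rehash to salted/iterated SHA-512 and clear the MD5 value.
            byte[] saltBytes = new byte[16];
            new SecureRandom().nextBytes(saltBytes);
            user.salt = hex(saltBytes);
            user.sha512Hash = iteratedSha512(user.salt + password);
            user.md5Hash = null;
            save(user);
            return true;
        }
        // Migrated path: verify via salted, iterated SHA-512.
        return iteratedSha512(user.salt + password).equals(user.sha512Hash);
    }

    private String iteratedSha512(String input) throws Exception {
        byte[] d = digest("SHA-512", input.getBytes("UTF-8"));
        for (int i = 0; i < 1000; i++) {   // iteration count is an arbitrary example
            d = digest("SHA-512", d);
        }
        return hex(d);
    }

    private static byte[] digest(String algorithm, byte[] data) throws Exception {
        return MessageDigest.getInstance(algorithm).digest(data);
    }

    private static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    private void save(User user) { /* persist the updated columns */ }
}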

2) Mass migration: stronger security, weaker user experience.
a) Email users (in blocks of 10,000) that their password will be expiring soon.
b) At login time, do the same as a rolling migration, except also force the user to change their password upon successful login.
c) If a user does not change their password within a limited amount of time, lock their account and force a customer service interaction in order to re-open it - giving that user 1 hour to change their password or be locked out again.
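
And a minimal sketch of the lock-and-grace-period logic from the mass migration. Again, the User fields, the view names returned and save() are hypothetical placeholders:

// Sketch of the mass-migration gate applied after a successful login.
public class MassMigrationGate {

    // 1 hour grace period after customer service re-opens an account.
    static final long GRACE_PERIOD_MS = 60L * 60L * 1000L;

    public String afterSuccessfulLogin(User user) {
        if (user.passwordChangeDeadline != 0
                && System.currentTimeMillis() > user.passwordChangeDeadline) {
            // Deadline missed: lock until customer service re-opens the account.
            user.locked = true;
            save(user);
            return "accountLocked";
        }
        // Force a password change before the user may proceed.
        return user.mustChangePassword ? "forcePasswordChange" : "home";
    }

    // Called by customer service when re-opening a locked account.
    public void reopenAccount(User user) {
        user.locked = false;
        user.passwordChangeDeadline = System.currentTimeMillis() + GRACE_PERIOD_MS;
        save(user);
    }

    static class User {
        boolean locked;
        boolean mustChangePassword;
        long passwordChangeDeadline; // 0 = no deadline set
    }

    private void save(User user) { /* persist the updated fields */ }
}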

Saturday, December 8, 2007

Input Validation Rant

When should we do input validation in J2EE applications?

I can think of three scenarios, each with its own trade-offs.

1) "Let's just skip validation inside the application, and apply a few J2EE filters before we deploy. "

This is the path I've been forced down in the past. I'm not a fan. It's not fair to be in a situation where the coder has the responsibility, but not so much the power. J2EE filters, while still being Java code, are external to the core app. I think of J2EE filters as part of the configuration layer, not integrated deep into the app itself.
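
For concreteness, here's a minimal sketch of what such a bolted-on filter might look like. The single catch-all SAFE pattern is a hypothetical placeholder - far too coarse for a real app, which is exactly part of my complaint:

import java.io.IOException;
import java.util.Enumeration;
import java.util.regex.Pattern;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Blanket whitelist filter applied at deployment time (scenario 1).
public class WhitelistFilter implements Filter {

    // Hypothetical catch-all whitelist; real apps need per-field rules.
    private static final Pattern SAFE = Pattern.compile("^[a-zA-Z0-9 .,@_-]*$");

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        Enumeration names = request.getParameterNames();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            String[] values = request.getParameterValues(name);
            for (int i = 0; i < values.length; i++) {
                if (!SAFE.matcher(values[i]).matches()) {
                    // Reject the whole request on the first bad parameter.
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_BAD_REQUEST);
                    return;
                }
            }
        }
        chain.doFilter(req, res);
    }

    public void destroy() {}
}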

Now, there are occasions where adding a filter (such as Eric Sheridan's CSRFGuard) is completely external to the app. The programmer never even needs to think about this kind of vulnerability if CSRFGuard is deployed. However, validating a form element to ensure that it's a proper email address really seems like programmer responsibility to me, while adding a configuration filter like CSRFGuard to modify all forms by adding form keys really seems like the platform's responsibility, not the programmer's. When are we going to see work like CSRFGuard and the OWASP ESAPI project integrated deeper into J2EE, Sun?!

2) "Let's just start using Struts XML ActionForm configuration, have programmers completely skip doing any kind of validation, and have a AppSec regex professional work with our architect to set up configuration."

This has significant benefits, and I'm a fan of this methodology for big teams. But do not be lulled into a false sense of security just because you might have your input validation dialed in: strong input validation does not protect you from security design flaws and a host of other attack vectors. Still, Struts input validation configuration at the XML level can be very powerful if done completely across the entire app (each and every form element). But you had better have some serious regex experience in-house, and a regex expert who is very much willing to take the time to learn the application as deeply as the folks who wrote it.
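
For reference, the Struts Validator configuration for a single form might look roughly like this; the form and field names are hypothetical, and you should check your Struts version's validator docs for the exact syntax:

<form-validation>
  <formset>
    <form name="registrationForm">
      <!-- Declarative whitelist rules, one per form element. -->
      <field property="email" depends="required,email"/>
      <field property="zipCode" depends="required,mask">
        <var>
          <var-name>mask</var-name>
          <var-value>^\d{5}$</var-value>
        </var>
      </field>
    </form>
  </formset>
</form-validation>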

3) "Let's do whitelist validation inside our controllers' dispatchers the moment we get data from the request."

This is my favorite, because I'm a manicoder.

[rant]
With the exception of Dinis Cruz, everyone in the industry is blaming the coders: http://reddevnews.com/features/article.aspx?editorialsid=2386 (Thank you, Dinis.) Yes, we are often the scapegoat (baaaaaaaaah!), being asked to write code faster, cram more functionality in, and get it done before some arbitrary date passes. And we have wonderful people like Alan Paller "expressing frustration with the fact that everything on the [SANS Institute Top 20 Internet Security] vulnerability list is a result of poor coding, testing and sloppy software engineering."

Thanks, Alan; but when are executives like you going to really invest the time, energy, money, training, Q/A resources and longer development cycles to truly allow us manicoders to engineer secure applications? Blaming the coder is an easy way out; Application Security policy, money and time need to come from the top down. And this is a very tough sell when all you get out of it is insurance and assurance that are still very difficult to prove correct mathematically. If you have programmers in your org who are writing insecure code, I conjecture that we need to look at the "C-level" and see how much they truly care about this topic, and take note of whether they are willing to commit to the cost and time necessary to win the battle of secure code.

We can't just blame the likes of Alan; even Gartner is telling the "C-level" that "developers need to take more responsibility" (http://news.zdnet.co.uk/security/0,1000000189,39291194,00.htm), thereby taking responsibility off the hands of the C's. Again, so unfair, when even Michael Howard at Microsoft, with an almost unlimited hiring budget, says that even the best and the brightest minds coming out of college have "no idea" how to write secure applications: http://searchsoftwarequality.techtarget.com/qna/0,289202,sid92_gci1283745,00.html?track=sy280&asrc=RSS_RSS-25_280

Let's kick it up another notch.

Right now, coders with security awareness are the "high priests" of software engineering groups. It does not have to be this way, but that is the truth in most organizations. AppSec knowledge is not integrated well into most organizations yet. And sadly, those coders who do have solid AppSec awareness and ability need to apply best-practice security guidelines **IN OPPOSITION TO UPPER MANAGEMENT'S DESIRE TO DEPLOY CODE FAST**.

If you really want to put the responsibility for AppSec into the hands of me, the coder, then we cannot depend on external configuration to lock down our apps. If you really want me to add IDS-type logging deep within the bowels of my code, then you need to empower me with the training, tools and time to do so. This AppSec squeeze-play from the C-level needs to end.
[/rant]

Ok, back to input validation. I want control over my application at the earliest possible point where user input enters my code. I want to make sure strong whitelist validation is applied right at that point of entry. I want to empower an auditor to easily dig through my code, look for every situation where we call request.getParameter and the like, and see whitelist validation applied right there and then, without having to dig through 10 other files or some elaborate platform technology to ensure proper validation is being done.
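
A minimal sketch of what I mean. The controller, field names and regex patterns are hypothetical placeholders; the point is simply that the whitelist sits right next to request.getParameter:

import java.util.regex.Pattern;
import javax.servlet.http.HttpServletRequest;

// Whitelist validation at the point of entry (scenario 3).
public class AccountController {

    private static final Pattern EMAIL =
            Pattern.compile("^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,6}$");
    private static final Pattern ACCOUNT_ID = Pattern.compile("^\\d{1,12}$");

    public void dispatch(HttpServletRequest request) {
        // An auditor can see the whitelist applied right where each
        // parameter is read - no digging through other files.
        String email = requireMatch(request.getParameter("email"), EMAIL, "email");
        String accountId = requireMatch(request.getParameter("acct"), ACCOUNT_ID, "acct");
        // ... proceed with validated values only ...
    }

    private String requireMatch(String value, Pattern whitelist, String field) {
        if (value == null || !whitelist.matcher(value).matches()) {
            throw new IllegalArgumentException("Invalid input for field: " + field);
        }
        return value;
    }
}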

Thanks kindly for reading this far. For more information, contact Aspect Security for all of your AppSec training, assurance and acceleration needs! :)