I just finished giving my new talk on the care and feeding of your developers in a security culture at DerbyCon, and I wanted to lay out some of the thoughts I presented in written words as well.
There is a solvable set of problems causing the divide between infosec and development. Almost all of it comes down to risk-averse infosec practitioners and risk-accepting developers seeing the same issue from different sides. For instance, infosec may see a feature like framability as a risk (due to UI redress attacks), where developers see it as a nice thing to offer users. Even when a security flaw is found, differences abound. Infosec views and reports the flaw as a vulnerability, whereas to developers it is just a defect. Severity is an issue too: infosec inflates the severity of questionable vulnerabilities for legal or headline-grabbing reasons, while developers just don't see the urgency. This language gap is exacerbated by the divergent personality types in infosec and development.
That's not all, though. Infosec fails to understand the basics of how a developer lives their life. The Software Development Lifecycle, or SDLC, be it a more traditional document-driven process or a newer agile-style one, rules the developer's day-to-day existence. Fixing non-customer-impacting bugs just doesn't have a slot in that process. Information security also needs to understand their products' underlying platform, along with its strengths and weaknesses.
So now that we have a grip on the problem, what do we do?
First, we automate. Some things in development just need to be automated. Testing, deployment, and confirmation, for instance, are three areas where good, well-vetted automation makes it a whole lot easier to focus on the genuinely hard things. Give dev and QA the tools to automate their jobs, and teach them to use those tools. When it comes time for application vulnerability analysis, put your static code analysis in the QA build and let it focus on what it does best; same with the dynamic analysis. Don't make someone push a button; just automate it.
But some things a human does well. You can't run a report from an automated scanner and just throw it over the wall to the devs. Find the vulnerabilities that matter, focus on those, speak about them in terms of defects, and add them to bug tracking.
There are other things that humans do too, like pentesting and code review. This is the human side of static and dynamic analysis. It must be understood that a grasp of the platform you are testing, and the business case in question, makes all the difference. A report, especially one from someone outside the company with an imperfect understanding of your environment, will be of no use to the developers at all.
A third thing that humans do well is apply foresight. Use of tools like the OWASP Proactive Controls or Application Security Verification Standard will make the transition into a security culture so much easier because the developer can see a path to more secure software. They look ahead - way better for a dev than looking behind.
So what should you do today? Well, remember that devs like spicy food and expensive beer. Get your dev staff together over a few beers and don't just train - teach! A pattern that has worked well for me is to start with a 'test yourself' approach, then talk about secure coding, and then tie it down with security principles. Or start with the principles. Or start with the coding - just make sure to hit for the cycle. There are very few appsec specialists out there, so it behooves us all to make a few. Giving developers the tools, and then teaching them how to use them, is more likely to make a new appsec person than just about anything.
We're in this fight together, folks, take it or leave it. The end goal is to make usable AND secure software, in a reasonable amount of time and under a reasonable budget. All of us do different things well, so if we agree to sit around the table and communicate, we can all get the job done. Remember that security isn't a developer's first concern, and it shouldn't be. Give them the tools, teach them how to use them, and provide reasonable, valuable analysis of the apps, and you'll see things get better soon.
So a year ago, while debugging a SQL statement in an identity system, I jotted a stupid joke into Twitter.
It has garnered some popularity, for some reason.
I probably should use this time to discuss the ins and outs of software testing, how it integrates with security, and why there is such a response to the joke. But that's all been done.
Instead, I'd just like to take a moment to marvel at the insane power of social media. I mean, that joke has touched over four MILLION people. That's a lot. And it's still going! If I go right now and look at my notifications, 42 more folks have retweeted it.
I have had to turn off my notifications on all devices, otherwise everything buzzes constantly. It's nuts!
But take a second and compare this 15 seconds of fame to the larger issue. Take Ahmed Mohammed, who went from arrested at school to a White House invite in what, 36 hours? The whole internet stood up! I couldn't believe how aligned my timeline was. But now we start to learn that there might be two sides to that story, and that it might have all been a setup. How about that? Someone playing the Social Network? Like a fiddle? Say it ain't so!
With great power comes great responsibility, but what if that power is distributed? And anonymous? Who bears the responsibility? There are some things you just "don't do," but they get done all the time. Someone - someone anonymous, in the network - doxxes someone who the Social Network has decided is worth contempt and then WHOOPS, we were wrong. But now their life is in shambles, and the horde moves on to the next worthy adversary.
I don't really have a solution, but having had a taste of the immense power of the network in a very small way (seriously, 610 replies!) I can just imagine what it would have been like if one of my more off-color or politically or morally charged posts caught the interest of the horde.
Be careful what you share. Make sure your family does as well. You never know what's gonna catch on.
This has been quite a year of community. I have been honored to present at a load of user groups and OWASP meetups this year, and I still have quite a few to go. Here are some of the talks I have given so far this year:
- Weaving Security into the SDLC and Developer Security Training
- Weaving Security into the SDLC and Cracking and Fixing REST Services
The rest of the year looks to be just as eventful, and I am very much looking forward to it!
- Developers: Care and Feeding (September 25-27)
Hope to see you at one of these awesome events!
Yesterday, Troy Hunt posted a very well written article showing how account enumeration can cause information disclosure. Essentially, in an attempt to be useful, your site inadvertently tells an attacker who is and isn't a user of your site.
For instance, if you use email address as the username, your site has the option, when a login fails, of telling the user whether the email address or the password was incorrect. If you are specific - "Email address not found" or "Password incorrect" - then account enumeration is possible. I can send my list of 144 million email addresses with a password of 'asdf' to your login page using the ZAP Fuzzer. Those responses that said the password was incorrect mean the email WAS correct, and I have a list of valid accounts.
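Outside of ZAP, the same sorting is easy to script. Here's a minimal sketch; the two error strings are the hypothetical ones from the example above, and `responses` stands in for whatever (email, body) pairs you captured during the fuzzing run:

```python
def classify_login_response(body: str) -> str:
    """Classify a login-failure body using the two hypothetical messages
    from the example above."""
    if "Password incorrect" in body:
        return "valid-account"   # the email WAS found; only the password failed
    if "Email address not found" in body:
        return "no-account"
    return "unknown"

def enumerate_accounts(responses):
    """responses: iterable of (email, body) pairs captured from the login page.
    Returns the emails the site just confirmed as real accounts."""
    return [email for email, body in responses
            if classify_login_response(body) == "valid-account"]
```

The point isn't the ten lines of Python; it's that the site's own helpfulness does all the work for the attacker.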
As an aside, 20% of user accounts use one of the 100 most popular passwords. If I can bypass your account lockout procedure with timing or parameter tampering and send those 100 passwords to each of the legitimate accounts I have enumerated, I will statistically gain access to one in five. I have tried this four times, and it has worked all four times.
So, anyway, I am here to tell you that simply making sure you post a generic 'Your credentials are incorrect' message isn't enough. Troy found a really subtle indicator in his post, and to find those I recommend sending each response - both the failed-username case and the failed-password case - to a comparer for a bitwise comparison. I have found stylesheets that differ slightly, an extra linefeed, all kinds of things. But with the prevalence of hashing, I have discovered something even more interesting.
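That comparison is simple to automate. A minimal sketch, assuming you've captured the two raw response bodies as bytes:

```python
def diff_responses(resp_a: bytes, resp_b: bytes):
    """Byte-for-byte comparison of two login-failure responses.
    Returns the list of (offset, byte_a, byte_b) mismatches plus the
    length difference. An extra linefeed or a slightly different
    stylesheet link is exactly the kind of subtle indicator to hunt for."""
    mismatches = [(i, a, b)
                  for i, (a, b) in enumerate(zip(resp_a, resp_b))
                  if a != b]
    length_delta = len(resp_a) - len(resp_b)
    return mismatches, length_delta
```

Any nonzero length delta, or any mismatch at all, means the two failure paths are distinguishable, and that's enumeration.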
"But Hashing, Bill? I thought hashing passwords was a good thing!!"
You are right.
Let's look at a login procedure. In the POST action, you have something that looks a little like this:
user = LookupUser(username)
if user then:
    if Hash(password) == user.hash then:
        LogIn(user)
Follow me? If the user exists then hash the password and compare. Seems like a pretty sensible model.
There is a problem, and I discovered it because I usually run tests with F12 tools loaded up. When I passed in the wrong username, the response took about 100 milliseconds. If I passed in the right username and the wrong password, the response took around 500 milliseconds. The time it took to hash that password resulted in account enumeration.
So I tossed my list of email addresses against it with a password of "password". Those that took 100 milliseconds I discarded. Those that took 500 milliseconds were valid accounts.
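The usual fix is to make both paths cost the same: hash a dummy credential when the lookup fails, and compare digests in constant time. A minimal sketch with Python's standard library (the user record shape and helper names are assumptions for illustration, not anyone's production code):

```python
import hashlib
import hmac
import os

# Precomputed once at startup: a throwaway hash whose verification costs
# the same as a real one, so a missing user doesn't return faster.
_DUMMY_SALT = os.urandom(16)
_DUMMY_HASH = hashlib.pbkdf2_hmac("sha256", b"dummy-password", _DUMMY_SALT, 100_000)

def verify_login(user, password: str) -> bool:
    """user is None when the lookup failed; otherwise it carries .salt and
    .hash (both bytes) - a hypothetical record shape for this sketch."""
    if user is None:
        salt, expected = _DUMMY_SALT, _DUMMY_HASH
    else:
        salt, expected = user.salt, user.hash
    # Always pay the full hashing cost, even for unknown usernames.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # compare_digest is constant-time; an unknown user still fails, but
    # only after doing the same amount of work as a known one.
    return user is not None and hmac.compare_digest(candidate, expected)
```

With this shape, the 100 ms versus 500 ms tell disappears: both the wrong-username and wrong-password paths do one full hash before answering.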
What's the takeaway from this? Well, first, get an application vulnerability analysis from a qualified auditor; these things are hard to find. Second, very carefully review the code for your login procedure. For the average attacker, this is the only page available to attack. Don't give away the freebies.
Ninja edit: OK, NOW it is live!
I'm so proud to announce that you can find my application security training on Wintellect Now!
My first course is my Developer's Guide to Security, now titled 'Writing Secure Applications, Part 1: Threats, Principles, and Fundamentals'.
Soon, I will put up the rest of my training for application developers, including threat analysis and remediation for injection flaws, information disclosure, data protection and more.
If you have enjoyed my training at CodeMash or elsewhere in the past, you should certainly check out this growing series. If you haven't heard me before, give it a try! All you can get is more secure out of the bargain.
Monday, at BSides Columbus, I premiered a new talk about using the OWASP Application Security Verification Standard as the basis for a secure SDLC, or a software security test plan, or a code review guide, or anything else your company needs to get off the starting blocks with regard to application security. I think the talk was well received, and I was asked to put a synopsis on 'paper' for reference.
A system for verifying security controls
The purpose of the ASVS is to provide a standard of communication between software vendors and customers. The customer can ask 'How secure are you?', the vendor can answer 'THIS secure,' and everyone is on the same page.
By nature, the ASVS is platform independent and free of technical detail. It is simply a listing of security controls, subcategorized by topic and ordered by relative difficulty to implement. This lends itself tremendously well to supporting the development of an application security platform for any software - not just for communication with tool vendors.
Embedded security principles
The ASVS is tightly integrated with two projects that are core to OWASP: the Top 10 and the Security Principles Project. The Top 10 is nothing new, but the integration of security principles into the core of a security program is strong sauce that isn't easy to make. Using the ASVS to help you integrate the core principles into your program brings a lot of value, virtually for free.
- Defense in Depth
- Positive Security Model
- Fail Securely
- Principle of Least Privilege
- Separation of Duties
- Avoid Security by Obscurity
- Do Not Trust the Client
Four verification levels
Since the ASVS is designed to let the software vendor inform the customer as to 'how secure' they are, it makes sense that there would be 'levels' to the standard. These include:
- Level 0 (Cursory) indicates that the application has undergone some type of certification.
- Level 1 (Opportunistic) indicates that the application adequately defends against security vulnerabilities that are easy to discover.
- Level 2 (Standard) indicates that the application adequately defends against prevalent application security vulnerabilities whose existence poses a moderate to serious risk.
- Level 3 (Advanced) indicates that the application adequately defends against all advanced application security vulnerabilities.
Thirteen verification requirements
The thirteen verification requirements, and their sub-points, represent some of the best thinking I have ever seen on distilling application security principles into actionable items without specifying a platform. They are the core of the standard, and should be mapped to your individual needs. From there you can build a secure SDLC, a test plan, whatever you need.
- Authentication
- Session management
- Access control
- Input handling
- Cryptography at rest
- Error handling and logging
- Data protection
- Communication security
- HTTP security
- Malicious controls
- Business logic
- Files and resources
- Mobile security
Going forward, I recommend a 5 step plan for getting the ASVS installed in your development process:
- Approach Management, and tell them you have a plan for application security
- Determine your starting level. I recommend Level 1.
- Match the requirements to your software - the hardest part. Go point by point and figure out where in your software you need to implement changes based on the requirements.
- Assign responsibility to development staff, even if you have to break out Microsoft Project.
- Implement. Pull the trigger.
At the latest OWASP meeting in Columbus, we got set up to crush some bugs in ZAProxy, the OWASP attack proxy project. ZAP is written in Java, and the project is run by Simon Bennetts and sponsored by the Mozilla Foundation.
So for the dev crowd, ZAP is an attack proxy. Attack proxies are pentesting tools, used to observe the raw HTTP requests and responses in a web application. It sits between your browser and the web server and allows you to interact with the traffic. It works just like Fiddler, except instead of the value added tools revolving around debugging, they revolve around security.
To get set up, we followed the Building OWASP ZAP using Eclipse IDE for Java document, version 3.0. I strongly recommend you start here, even if you have other Java or Subversion tools installed. I did and I am so happy. One thing if you are on Windows 8 – run Eclipse as Administrator. Just like with Visual Studio, you’ll have trouble if you don’t.
Once I followed that document, I could build ZAP. Awesome. Now I needed something to do. Off to the bug list I went. Simon has things organized well, so you can do as I did and search for ‘idealfirstbug’ to get some bugs that are good to start with. I picked Issue 1145, which was a simple tooltip. Good for me, because as you probably know, I am not a Java programmer.
Alright, I need to find that text. In Eclipse, I went to Search –> Search and clicked on the File Search tab. Then I entered the faulty text, ‘Show Tab Names and Tab Icons’ and searched. Aaaaand, I got 27 responses. That’s not good at all.
But wait! They are all properties files! And they are different languages. Guess there are only so many ways to say "Tab Names" or something. Anyway, I figure I can search for the property name ‘showNames’ and see about the logic that uses it, right?
Bingo! Searching for that gets me to MainToolbarPanel.java. Right there in the JToggleButton setup, the two properties are being set, just the reverse of how the logic works.
[sourcecode language='java']
btnShowTabIconNames = new ZapToggleButton();
[/sourcecode]
I switched the two property names, showIcons and showNames, and bang, the bug is squashed.
Now I have to get it into the code base. This is where Subclipse really shines: the ability to quickly create patches. If you come from a forking GitHub world, patches might be foreign, but they are a really simple, straightforward, manageable way to submit changes to an open source project using Subversion.
First, though, you have to find the file you are working on in the Package Explorer. This is where I discovered my new favorite button, the Link With Editor button. It syncs the view in the Package Explorer with the code view. Now the file I am editing is highlighted.
Seriously, new favorite button.
Anyway, right-click on the changed file and select Team –> Create Patch from the context menu. The Create Patch dialog will confirm which files you are building a patch for. I’d advise saving to a file, and then click Next. Then select Project as the scope (Workspace is just for multi-project workspaces, which is rare) and click Finish. That’s it, you’re done.
At that point, I went back to Google Code and added a comment to the bug, with the patch file attached. I’m not a committer on the project, so that’s what you do – just like a pull request in git. Hopefully, by the time you read this, the bug will be closed, and all will be right with that toggle button, at least.
When setting up an EC2 instance or configuring a profile, you have the choice to set the Region and Availability Zone. If you were wondering how that mattered, you aren’t alone.
What’s the difference between Region and Availability Zone?
Regions are actual physical locations of Amazon computers. While they would like us to think of the cloud as some magical server in the sky, in reality there are big buildings all over the world full of servers. The Regions, shown in Table 1, are the actual physical locations of these servers.
Table 1: Regions and Availability Zones
Availability zones are isolated areas inside regions that are designed to protect against failures in other availability zones. They have separate power, cooling and physical space.
Why should I pay attention?
Amazon is designed for global access. Their web site is global, and their servers are global. If you are using AWS you have the option to support a truly global architecture as well.
There are requirements that may cause you to carefully consider the location of your servers. These requirements are why you have the ability to choose your region.
There are a number of privacy laws on the books – especially in Europe and the U.S. – that restrict the passing of government data outside the bounds of a region. AWS supports this, implicitly in the regional settings, and explicitly with GovCloud.
GovCloud is a physically restricted cloud service that is designed to explicitly prevent data from leaving the borders of the US. When building a governmental web application in the U.S. that’s probably your best path.
Regions implicitly segregate data, too. While regions are connected via the open internet, if you select the EU for your S3 instance, that’s where your data will be stored.
Guarding against failure
AWS does go down. It isn’t very common, but it happens. Regions and Availability Zones were created to protect against just such an event. You must, however, architect your application to take advantage of them.
Regions are not duplicated among themselves by default. In order to create a truly fault-tolerant application, you must set up something like the Cross Region Read Replicas in Amazon RDS.
Plain old bits on the wire
Of course there is a much more straightforward reason for the correct management of regions: the actual physical distance between your application and your customers. Those bits still have to travel the wire, so make sure your application is close to the folks that use it.
Under some circumstances, your content might be restricted to users only living in one geographic area. For instance, some content can’t – by law – be exported outside of the European Union.
I for one was surprised that the Region can’t be used for this restriction. CloudFront offers geographic restriction, but it works on a country-by-country basis, and you set it up separately from the region.
How to make a plan
Long before you set up that EC2 instance, take careful stock of your situation. Consider your user location, your needs for fault tolerance, and the legal landscape of your application. Then you can map out your regions to make the best use of the AWS servers.
Yesterday, I refurbished an old joke for a current problem I had faced. This is 'edge case' testing: posting values to a system that really don't belong there. It came to mind because of a problem I had encountered in a system I was working on earlier. There is a procedure that accepts a FirstName and a LastName and generates an Active Directory UserName from them. The original code read:
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(50),
@LastName as VARCHAR(50),
@output VARCHAR(8) OUTPUT
And the new code read:
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(7),
@LastName as VARCHAR(7),
@output VARCHAR(8) OUTPUT
The engineer who altered the code was just trying to make it better. It was written very loosely for a stored procedure, and he simply tightened it up. That's not a problem, but the front-end designers didn't know, and most importantly, all of our test names were under 7 characters. We would never have found this in testing.
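Boundary-length test data is cheap to generate, and it would have caught this. A minimal sketch (hypothetical helpers, not the actual system under test):

```python
def boundary_names(limit: int = 7):
    """Generate test names at, just under, and well over a length limit -
    the lengths most likely to expose a silent truncation."""
    return ["A" * n for n in (1, limit - 1, limit, limit + 1, limit * 2)]

def truncates(name: str, limit: int = 7) -> bool:
    """Mimic what a VARCHAR(limit) parameter silently does to an
    over-long value: anything past the limit is lost."""
    return len(name) > limit

# A fixture of only short names ("Bob", "Kim") never trips the over-limit
# cases - which is exactly how the VARCHAR(7) change slipped through.
```

Feed names like these through the UserName procedure and compare input to output; any silent truncation shows up immediately instead of in production.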
As it turns out, there are a lot of people who are all about this. The replies to my tweet over the last 24 hours covered a lot of ground, but by far were those that wanted to push the edge testing to the max - and I love it.
Oh, and then, of course, it was surprising how many people like a little violence with their testing. Some QA engineers piped in with thoughts on structured testing. A lot of security folks follow me, so a lot of application pentesting tweets showed up as well; if you are a developer and don't recognize those, shoot me an email. And some of the business people had some not-so-nice things to say about the QA process, most of which I agree with. But what, of course, really happens? And that's kind of the crux of it. Making your testing as real-world as possible is an important part of QA. Don't let anyone tell you otherwise. Be it unit testing, integration, QA, or pentesting, assuring that all tests push the edges of what happens in the real world will make your software better. And your Twitter timeline!
Amazon offers a large selection of security products that help with compliance, privacy and data protection. IAM, intra-VM encryption and a swath of other products help make your users and your auditors breathe easier. There is still the problem of key storage. CloudHSM brings a reliable solution to that problem.
Exactly what is CloudHSM
CloudHSM is a dedicated hardware security appliance in the Amazon cloud that provides secure key storage and cryptographic operations to a specific user.
A hardware appliance
Most of Amazon Web Services is based on virtualization. Virtualization allows a software-only instance of something – like a server, router, or switch – to be created within a larger computing infrastructure. CloudHSM is not virtualized – it is a standalone piece of hardware that only you have access to.
Specifically, CloudHSM is a Luna SA HSM appliance from Safenet. The Luna SA is Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+ standard compliant.
A storage in the Amazon cloud for your encryption keys
CloudHSM provides a cryptographic partition for the storage of keys related to your AWS infrastructure. For instance, if a particular application requires a key to access a database stored in S3, it can retrieve that key from the hardware appliance.
How does it help with compliance?
Various regulatory agencies have very strict requirements when it comes to encryption.
Separation of concerns
With most AWS systems, Amazon has credentials to the underlying server that could allow an administrator access to the data. Not so with the CloudHSM. Amazon has administrative credentials that would allow them to repurpose the device, but those credentials cannot be used to retrieve the keys on the device. That privilege is only for the client user.
Simply put, PCI has remarkably strict key management standards. CloudHSM is on the list of AWS services validated for the 2013 PCI DSS compliance package. Specifically, just using CloudHSM in your key storage program will meet the requirements of PCI DSS 3.5 and 3.6.
In order to meet the HIPAA requirements for storage of personal medical data, data at rest must be encrypted. This previously required a local storage component for personally identifiable information, significantly slowing any cloud initiative. Adding CloudHSM to the mix allows for data at rest within the Amazon cloud to be safely encrypted and still meet the key storage requirements of HIPAA.
What do I need to know?
There are always a few caveats to any new technology and CloudHSM is no different.
You need to have a VPC
CloudHSM doesn’t work on the open cloud. You’ll need to be using a Virtual Private Cloud to make it all come together. Fortunately, a VPC is very easy to set up, and you might already be using one. It is part of the package for a number of AWS suite systems.
It is possible to use CloudHSM with your custom applications
You bet! Many of the AWS applications can use keys from CloudHSM. EBS volume encryption and S3 object encryption are two that have the most obvious benefit for custom applications.
CloudHSM helps with security compliance
A reliable hardware appliance, well implemented, will help with your security compliance. Getting CloudHSM configured and integrated requires some effort, but the end result is as secure as your own data center.