Taking it to the people

This has been quite a year of community. I have been honored to present at a load of user groups and OWASP meetups this year, and I still have quite a few to go. Here are some of the talks I have given so far this year:

CodeMash 2.0.1.5 - Developer Security Training
Cleveland OWASP - Cracking and Fixing REST Services
Central Ohio Infosec Summit - Developer's Guide to Pentesting
The State of Security Podcast - Working with Developers 
Arena Tech Night - Why the Web is Broken
Columbus ISSA - Weaving Security into the SDLC
CodePaLOUsa - Weaving Security into the SDLC and Developer Security Training
CircleCityCon - Developer Security Training
BSides Cleveland - Why the Web is Broken
Great Lakes Area .NET User Group - Developer's Guide to Pentesting
Converge Detroit - Weaving Security into the SDLC and Cracking and Fixing REST Services
Pittsburgh OWASP - Cracking and Fixing REST Services

The rest of the year looks to be just as eventful, and I am very much looking forward to it!

Central Ohio .NET User Group - Developer's Guide to Pentesting (August 27)
DerbyCon - Developers: Care and Feeding (September 25-27)
DogFoodCon - Why the Web is Broken (October 7-8)
OSU CyberSecurity Day - Developer Security Training

Hope to see you at one of these awesome events!



Timing attacks in account enumeration

Yesterday, Troy Hunt posted a very well-written article showing how account enumeration can cause information disclosure. Essentially, in an attempt to be helpful, your site inadvertently tells an attacker who is and isn't a user of your site.

For instance, if you use email addresses as usernames, your site has the option at login to tell the user whether it was the email address or the password that was incorrect. If you are specific - "Email address not found" or "Password incorrect" - then account enumeration is possible. I can send my list of 144 million email addresses, each with a password of 'asdf', to your login page using the ZAP Fuzzer. Every response that says the password was incorrect means the email address WAS correct, and I have a list of valid accounts.

As an aside, 20% of user accounts use one of the top 100 most popular passwords. If I can bypass your account lockout procedure with timing or parameter tampering and send those 100 passwords to each of the legitimate accounts I have enumerated, I will statistically gain access to one in five. I have tried this four times, and it has worked all four times.

So, anyway, I am here to tell you that simply making sure you post a 'Your credentials are incorrect' message isn't enough. Troy found a really good, subtle indicator in his post, and to find those I recommend sending each response - both the failed username and the failed password - to a comparer for a bitwise comparison. I have found slightly different stylesheets, an extra linefeed, all kinds of things. But with the prevalence of hashing, I have discovered something even more interesting.

"But Hashing, Bill? I thought hashing passwords was a good thing!!"

You are right.

But.

Let's look at a login procedure. In the POST action, you have something that looks a little like this:

checkCreds(username, password)
    user = LookupUser(username)
    if user then:
        hash = hashPass(password)
        if hash = user.hash then:
            makeSession(user)

Follow me? If the user exists then hash the password and compare. Seems like a pretty sensible model.

There is a problem, and I discovered it because I usually run tests with F12 tools loaded up. When I passed in the wrong username, the response took about 100 milliseconds. If I passed in the right username and the wrong password, the response took around 500 milliseconds. The time it took to hash that password resulted in account enumeration.

So I tossed my list of email addresses against it with a password of "password". Those that took 100 milliseconds I discarded. Those that took 500 milliseconds were valid accounts.
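The fix is to make both code paths pay the same cost. Here is a minimal sketch of the idea (a toy hash and hypothetical helper names, not any audited codebase): hash a throwaway password even when the user lookup fails, and compare digests in constant time.

```python
import hashlib
import hmac

# Toy user store; in the vulnerable app, LookupUser hit the database.
USERS = {"alice@example.com": hashlib.sha256(b"salt" + b"correct horse").hexdigest()}

# Precomputed hash of a throwaway password, same algorithm as the real hashes.
DUMMY_HASH = hashlib.sha256(b"salt" + b"dummy").hexdigest()

def hash_pass(password):
    # Stand-in for the site's real (slow) password hash, e.g. bcrypt.
    return hashlib.sha256(b"salt" + password.encode()).hexdigest()

def check_creds(username, password):
    stored = USERS.get(username)
    # Always hash, even for unknown users, so response time doesn't leak
    # whether the account exists.
    candidate = hash_pass(password)
    expected = stored if stored is not None else DUMMY_HASH
    # compare_digest avoids an early-exit string comparison.
    return hmac.compare_digest(candidate, expected) and stored is not None
```

The structure is what matters: the unknown-username path and the wrong-password path now do the same amount of work.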

What's the takeaway from this? Well, first, get an application vulnerability analysis from a qualified auditor; these things are hard to find. Second, very carefully review the code for your login procedure. For the average attacker, this is the only page available to attack. Don't give away the freebies.

Some exciting news - now find my Application Security training on Wintellect Now

Ninja edit: OK, NOW it is live!

I'm so proud to announce that you can find my application security training on Wintellect Now!

My first course is my Developer's Guide to Security, now titled 'Writing Secure Applications, Part 1: Threats, Principles, and Fundamentals'.

Soon, I will put up the rest of my training for application developers, including threat analysis and remediation for injection flaws, information disclosure, data protection and more.

If you have enjoyed my training at CodeMash or elsewhere in the past, you should certainly check out this growing series. If you haven't heard me before, give it a try! All you can get out of the bargain is more secure.

Using the OWASP ASVS for secure software development


Monday, at BSides Columbus, I premiered a new talk about using the OWASP Application Security Verification Standard as the basis for a secure SDLC, or a software security test plan, or a code review guide, or anything else your company needs to get off the starting blocks with regards to application security. I think the talk was well received, and I was asked to put a synopsis on 'paper' for reference.

A system for verifying security controls
The purpose of the ASVS is to provide a standard of communication between software vendors and customers. The customer can ask 'How secure are you?', the vendor can answer 'THIS secure,' and everyone is on the same page.


By nature, the ASVS is platform independent and free of technical detail. It is simply a listing of security controls, subcategorized by topic and ordered by relative difficulty to implement. This lends itself tremendously well to supporting the development of an application security platform for any software - not just for communication with tool vendors.

Embedded security principles

The ASVS is tightly integrated with two projects that are core to OWASP: The Top 10 and the Security Principles Project. The Top 10 is nothing new, but the integration of security principles into the core of a security program is strong sauce that isn't easy to make. Using the ASVS to help you integrate the core principles into your program brings a lot of value, virtually for free.
  • Defense in Depth
  • Positive Security Model
  • Fail Securely
  • Principle of Least Privilege
  • Separation of Duties
  • Avoid “Security by Obscurity”
  • Do Not Trust the Client
Four verification levels
Since the ASVS is designed to let the software vendor inform the customer as to 'how secure' they are, it makes sense that there would be 'levels' for the standard. These include:
  • Level 0 (Cursory) indicates that the application has undergone some type of certification.
  • Level 1 (Opportunistic) indicates that the application adequately defends against security vulnerabilities that are easy to discover.
  • Level 2 (Standard) indicates that the application adequately defends against prevalent application security vulnerabilities whose existence poses a moderate to serious risk.
  • Level 3 (Advanced) indicates that the application adequately defends against all advanced application security vulnerabilities.

Thirteen verification requirements
The thirteen verification requirements - and their sub-points - represent some of the best thinking I have ever seen on distilling application security principles into actionable items without specifying a platform. These are the core of the standard, and should be mapped to your individual needs. From there you can build a secure SDLC, a test plan, whatever you need.

  • Authentication
  • Session management
  • Access control
  • Input handling
  • Cryptography at rest
  • Error handling and logging
  • Data protection
  • Communication security
  • HTTP security
  • Malicious controls
  • Business logic
  • Files and resources
  • Mobile security
Going forward
Going forward, I recommend a 5 step plan for getting the ASVS installed in your development process:
  1. Approach Management, and tell them you have a plan for application security
  2. Determine your starting level. I recommend Level 1.
  3. Match the requirements to your software - the hardest part. Go point by point and figure out where in your software you need to implement changes based on the requirements.
  4. Assign responsibility to development staff, even if you have to break out Microsoft Project.
  5. Implement. Pull the trigger.
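As an illustration of step 3, the requirement-to-software mapping can start as something very plain: a list of ASVS requirement areas with an owner and a status. A Python sketch (the owners and statuses here are hypothetical):

```python
# Hypothetical starting point: map ASVS requirement areas to owners and status.
asvs_plan = [
    {"area": "Authentication",     "owner": "login team", "status": "done"},
    {"area": "Session management", "owner": "login team", "status": "gap"},
    {"area": "Input handling",     "owner": "api team",   "status": "gap"},
]

def open_items(plan):
    """Return the requirement areas that still need work assigned."""
    return [row["area"] for row in plan if row["status"] != "done"]
```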


Crushing bugs in OWASP ZAProxy

At the latest OWASP meeting in Columbus, we got set up to crush some bugs in ZAProxy, the OWASP attack proxy project. ZAP is written in Java, and the project is run by Simon Bennetts and sponsored by the Mozilla Foundation.

So, for the dev crowd: ZAP is an attack proxy. Attack proxies are pentesting tools used to observe the raw HTTP requests and responses in a web application. The proxy sits between your browser and the web server and allows you to interact with the traffic. It works just like Fiddler, except that instead of the value-added tools revolving around debugging, they revolve around security.

To get set up, we followed the Building OWASP ZAP using Eclipse IDE for Java document, version 3.0. I strongly recommend you start here, even if you have other Java or Subversion tools installed. I did and I am so happy. One thing if you are on Windows 8 – run Eclipse as Administrator. Just like with Visual Studio, you’ll have trouble if you don’t.

Once I followed that document, I could build ZAP. Awesome.  Now I needed something to do. Off to the bug list I went. Simon has things organized well, so you can do as I did and search for ‘idealfirstbug’ to get some bugs that are good to start with. I picked Issue 1145, which was a simple tooltip. Good for me, because as you probably know, I am not a Java programmer.

image

Alright, I need to find that text. In Eclipse, I went to Search –> Search and clicked on the File Search tab. Then I entered the faulty text, ‘Show Tab Names and Tab Icons’ and searched. Aaaaand, I got 27 responses. That’s not good at all.

But wait! They are all properties files! And they are in different languages. Guess there are only so many ways to say ‘Tab Names’ or something. Anyway, I figure I can search for the property name ‘showNames’ and look at the logic that uses it, right?

Bingo! Searching for that gets me to MainToolbarPanel.java. Right there in the JToggleButton setup the properties are being set, just the reverse of how the logic works.

[sourcecode language='java' ]
btnShowTabIconNames = new ZapToggleButton();
btnShowTabIconNames.setIcon(new ImageIcon(MainToolbarPanel.class.getResource("/resource/icon/ui_tab_icon.png")));
btnShowTabIconNames.setToolTipText(Constant.messages.getString("view.toolbar.showIcons"));
btnShowTabIconNames.setSelectedIcon(new ImageIcon(MainToolbarPanel.class.getResource("/resource/icon/ui_tab_text.png")));
btnShowTabIconNames.setSelectedToolTipText(Constant.messages.getString("view.toolbar.showNames"));
setShowTabIconNames(Model.getSingleton().getOptionsParam().getViewParam().getShowTabNames());
[/sourcecode]

 

I switched the two property names, showIcons and showNames, and bang, bug squashed.

Now I have to get it into the code base. This is where Subclipse really shines – the ability to quickly create patches. If you come from a forking GitHub world, patches might be foreign, but they are a really simple, straightforward, manageable way to submit changes to an open source project using Subversion.

First, though, you have to find the file you are working on in the Package Explorer. This is where I discovered my new favorite button, the Link With Editor button. It syncs the view in the Package Explorer with the code view. Now the file I am editing is highlighted.

image

Seriously, new favorite button.

Anyway, right click on the changed file and select Team –> Create Patch from the context menu. The Create Patch dialog will confirm which files you are building a patch for.

image

I’d advise saving to a file, and then click Next. Then select Project as the scope (Workspace is just for multi-project workspaces, which is rare) and click Finish. That’s it, you’re done.

At that point, I went back to Google Code and added a comment to the bug, with the patch file attached. I’m not a committer on the project, so that’s what you do – just like a pull request in git. Hopefully, by the time you read this, the bug will be closed, and all will be right with that toggle button, at least.

Why Geography Matters When Using Amazon Web Services

When setting up an EC2 instance or configuring a profile, you have the choice to set the Region and Availability Zone. If you were wondering how that mattered, you aren’t alone.

What’s the difference between Region and Availability Zone?

Regions are actual physical locations of Amazon computers. While they would like us to think of the cloud as some magical server in the sky, in reality there are big buildings all over the world full of servers. The Regions, shown in Table 1, are the actual physical locations of these servers.

Table 1: Regions and Availability Zones

Code            Location             Region          Availability Zones
us-west-1       Northern California  US West         6 zones shared with us-west-2
us-west-2       Oregon               US West         6 zones shared with us-west-1
us-east-1       Virginia             US East         5 zones
ap-northeast-1  Tokyo                Asia Pacific    2 zones
ap-southeast-1  Singapore            Asia Pacific    2 zones
ap-southeast-2  Sydney               Australia       2 zones
eu-west-1       Ireland              EU              3 zones
sa-east-1       Sao Paulo            South America   2 zones

Availability zones are isolated areas inside regions that are designed to protect against failures in other availability zones. They have separate power, cooling and physical space.

Why should I pay attention?

Amazon is designed for global access. Their web site is global, and their servers are global. If you are using AWS you have the option to support a truly global architecture as well.

There are requirements that may cause you to carefully consider the location of your servers. These requirements are why you have the ability to choose your region.

Legal considerations

There are a number of privacy laws on the books – especially in Europe and the U.S. – that restrict the passing of government data outside the bounds of a region. AWS supports this, implicitly in the regional settings, and explicitly with GovCloud.

GovCloud is a physically restricted cloud service that is designed to explicitly prevent data from leaving the borders of the US. When building a governmental web application in the U.S. that’s probably your best path.

Regions implicitly segregate data, too. While regions are connected via the open internet, if you select the EU for your S3 instance, that’s where your data will be stored.

Guarding against failure

AWS does go down. It isn't very common, but it happens. Regions and Availability Zones were created to protect against just such an event. You must, however, architect your application to take advantage of them.

Regions are not duplicated among themselves by default. In order to create a truly fault-tolerant application, you must set up something like the Cross Region Read Replicas in Amazon RDS.

Plain old bits on the wire

Of course there is a much more straightforward reason for the correct management of regions: the actual physical distance between your application and your customers. Those bits still have to travel the wire, so make sure your application is close to the folks that use it.
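The effect is easy to reason about with a rough latency table. A toy Python sketch (the round-trip times are made up for illustration, not measurements):

```python
# Illustrative round-trip times in milliseconds from a hypothetical user base.
latency_ms = {
    "us-east-1": 40,
    "eu-west-1": 110,
    "ap-northeast-1": 180,
}

def closest_region(latencies):
    """Pick the region with the lowest observed round-trip time."""
    return min(latencies, key=latencies.get)
```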

Geographic restrictions

Under some circumstances, your content might be restricted to users only living in one geographic area. For instance, some content can’t – by law – be exported outside of the European Union.

I for one was surprised that the Region can't be used for this restriction. CloudFront offers geographic restriction, but it works on a country-by-country list basis, and you set it up separately from the region.

How to make a plan

Long before you set up that EC2 instance, take careful stock of your situation. Consider your users' location, your needs for fault tolerance, and the legal landscape of your application. Then you can map out your regions to make the best use of the AWS servers.

On Testing

Yesterday, I refurbed an old joke for a current problem I had faced. This is 'edge case' testing: posting values to a system that really don't belong there. It came to mind because of an issue I had encountered in a system I was working on earlier. There is a procedure that accepts a FirstName and a LastName and generates an Active Directory UserName from them. The original code read:
 
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(50),
@LastName as VARCHAR(50),
@output VARCHAR(8) OUTPUT
AS
BEGIN
 
And the new code read:
 
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(7),
@LastName as VARCHAR(7),
@output VARCHAR(8) OUTPUT
AS
BEGIN
 
The engineer who altered the code was just trying to make it better. It was written very loosely for a stored procedure, and he simply tightened it up. That's not a problem in itself, but the front-end designers didn't know, and - most importantly - all of our test names were under 7 characters. We would never have found this in testing.
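A boundary-value test would have caught it. A Python sketch, with a hypothetical make_username helper standing in for the stored procedure's truncation behavior:

```python
def make_username(first_name, last_name):
    # Mirrors the tightened procedure: each part is silently truncated,
    # and the output is capped at 8 characters.
    return (first_name[:7] + last_name[:7])[:8]

# Short names - the only kind we had in test data - hide the truncation.
assert make_username("Jo", "Li") == "JoLi"

# A name longer than 7 characters exposes it immediately.
assert make_username("Christopher", "Smith") == "ChristoS"
```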
 
As it turns out, there are a lot of people who are all about this. The replies to my tweet over the last 24 hours covered a lot of ground, but by far the most common were those that wanted to push the edge testing to the max - and I love it.

Surprising how many people like a little violence with their testing. Some QA engineers piped in with thoughts on structured testing. A lot of security folks follow me, so a lot of application pentesting tweets showed up as well; if you are a developer and don't recognize those payloads, shoot me an email. And some of the business people had some not-so-nice things to say about the QA process, most of which I agree with.

And that's kind of the crux of it. Making your testing as real-world as possible is an important part of QA. Don't let anyone tell you otherwise. Be it unit testing, integration, QA or pentesting, assuring that all tests push the edges of what happens in the real world will make your software better. And your Twitter timeline!

How the AWS CloudHSM Eases the Pain of Security Audits

Amazon offers a large selection of security products that help with compliance, privacy and data protection. IAM, intra-VM encryption and a swath of other products help make your users and your auditors breathe easier. There is still the problem of key storage. CloudHSM brings a reliable solution to that problem.

Exactly what is CloudHSM?

CloudHSM is a dedicated hardware security appliance in the Amazon cloud that provides security key storage and cryptographic operation to a specific user.

A hardware appliance

Most of Amazon Web Services is based on virtualization. Virtualization allows for a software-only instance of something – like a server, router, or switch – to be created within a larger computing infrastructure. Cloud HSM is not virtualized – it is a standalone piece of hardware that only you have access to.

Specifically, CloudHSM is a Luna SA HSM appliance from SafeNet. The Luna SA is Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+ compliant.

Storage in the Amazon cloud for your encryption keys

CloudHSM provides a cryptographic partition for the storage of keys related to your AWS infrastructure. For instance, if a particular application requires a key to access a database stored in S3, it can retrieve that key from the hardware appliance.

How does it help with compliance?

Various regulatory agencies have very strict requirements when it comes to encryption.

Separation of concerns

With most AWS systems, Amazon has credentials to the underlying server that could allow an administrator access to the data. Not so with the CloudHSM. Amazon has administrative credentials that would allow them to repurpose the device, but those credentials cannot be used to retrieve the keys on the device. That privilege is only for the client user.

PCI

Simply put, PCI has remarkably strict key management standards. CloudHSM is on the list of AWS services validated for the 2013 PCI DSS compliance package. Specifically, just using CloudHSM in your key storage program will meet the requirements of PCI DSS 3.5 and 3.6.

HIPAA

In order to meet the HIPAA requirements for storage of personal medical data, data at rest must be encrypted. This previously required a local storage component for personally identifiable information, significantly slowing any cloud initiative. Adding CloudHSM to the mix allows for data at rest within the Amazon cloud to be safely encrypted and still meet the key storage requirements of HIPAA.

What do I need to know?

There are always a few caveats to any new technology and CloudHSM is no different.

You need to have a VPC

CloudHSM doesn’t work on the open cloud. You’ll need to be using a Virtual Private Cloud to make it all come together. Fortunately a VPC is very easy to set up, and you might already be using one. It is part of the package for a number of AWS suite systems.

Can you use CloudHSM with your custom applications?

You bet! Many of the AWS applications have the capability to use keys from CloudHSM. EBS volume encryption and S3 object encryption are two that have the most obvious benefit for custom applications.

CloudHSM helps with security compliance

A reliable hardware appliance, well implemented, will help with your security compliance. Getting CloudHSM configured and integrated requires some effort, but the end result is as secure as your own data center.

We want YOU at the CodeMash Security Track

Earlier this year I was asked by the incomparable Rob Gillen to manage the Security track at CodeMash 2.0.1.5. That's a pretty big deal, seeing as how developer outreach is such a big deal for the security community.

CodeMash is as well known for its awesome content as it is its awesome community. The vibe there is just beyond compare, and it's because so many awesome people get together in one place and just relax.

If you, gentle reader, have application security research that you would like to present, I would ask you to put in a submission on the CodeMash Call For Speakers. I'd like to build a value-filled track. This is a great chance to get in front of 2000 developers from all over the midwest, and talk the good talk.

Cracking and Fixing REST APIs

REpresentational State Transfer, or REST, is more of a force on the web than most think. It is essentially a Web Service implementation of the HTTP protocol that runs the entire world wide web. I’m not here to talk about REST, though; others have done that better than I could.

I’m here to talk about breaking REST.

When pentesting, I see the same pattern over and over. Organizations that went with a Service Oriented Architecture in the 2005 time frame had all of their business logic available as services in 2010 when the mobile boom hit. To make sure the iPhone app had the same functionality as the web app, they pushed those services through the DMZ without sufficient testing.

In this post, I’ll cover some of the common vulnerabilities that I find in REST APIs, and how to fix them. There are three main messages I want to get across: REST can be attacked like the rest of the web, REST can be attacked in special ways, and REST has special architectural considerations.

REST Can be attacked like the rest of the web

A REST API isn’t much different from a website. You start with a URL:

https://www.googleapis.com/language/translate/v2?key=sfo37ehf3olvmo8&source=en&target=de&q=Hello%20world


And then you get some markup back. Unlike normal web sites, however, we get JSON back rather than HTML.

{
    "data": {
        "translations": [
            {
                "translatedText": "Hallo Welt"
            }
        ]
    }
}

This means that we can use all of the attack vectors that one would use on a normal website. They might look a little different, but they end with essentially the same result.

Injection

SQL Injection and Cross Site Scripting (browser injection) are possible because the parameters of a REST API call are what we would usually think of as directories in a normal web request. As long as we remember to check those path segments the way we would check any other input, the standard defenses apply.

Parameters themselves can be tested too, if they are used by the API. You never know how they might be used.

https://www.googleapis.com/language/translate/v2?key=sfo37ehf3vmo8&source=en
&target=de&q=Hello%20world%3Cimg%20src%3D%27%23%27%20onerror%3Dalert(1)%20%2F%3E

 

How can this be fixed? On the SQL side, parameterized queries, as usual. XSS is a little tougher, but really it just takes the same output encoding techniques that one would use for a usual website.
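For reference, here is what a parameterized query looks like in Python's sqlite3 module; every database driver has an equivalent placeholder syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # The ? placeholder keeps the value out of the SQL text entirely, so
    # "' OR '1'='1" is treated as a weird name, not as executable SQL.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```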

Information Disclosure

REST responses use the same header format as regular web browser responses. Leaving unneeded information in those headers leads to information leakage:

HTTP/1.1 200 OK
Date: Thu, 07 Aug 2014 17:09:34 GMT
Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.7l DAV/2 PHP/5.2.15
Last-Modified: Mon, 07 May 2012 17:58:32 GMT
ETag: "1a273c-37e-4bf7604a7e200"
Accept-Ranges: bytes
Content-Length: 894
MS-Author-Via: DAV
Keep-Alive: timeout=15, max=500
Connection: Keep-Alive

The same is true of error messages. REST services should use HTTP response codes, and avoid pushing default web server errors to the clients, like this:

image

Authentication

Authentication in REST is a bit of a pain. When there is No Human Involved, you can’t just use a username and password in an SSL-protected POST. There has to be something at play that is automated.

One thing you don’t want to do is put the secret key right in the URL. Even under SSL this is a bad idea.

image

What you do want to do is look into HMAC.

image

HMAC, or Hashed Message Authentication Code, is a process where the client and server both know a public key and a secret key. The client creates a request, concatenates it with the secret key, and hashes it. Then the request (without the secret key, of course) is sent along with the hash. The server then accepts the request, uses the public key to look up the secret key, concatenates the request and key, and hashes. If the server’s hash matches the one the client sent, the request is authentic.
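A minimal sketch of that flow using Python's hmac module (the key store is hypothetical, and a production scheme would also sign a timestamp to prevent replay attacks):

```python
import hashlib
import hmac

# Server-side lookup: public key (API key id) -> secret key.
SECRETS = {"client-1": b"s3cret"}

def sign(secret, request_body):
    # Client side: hash the request together with the secret key.
    return hmac.new(secret, request_body, hashlib.sha256).hexdigest()

def verify(public_key, request_body, signature):
    # Server side: look up the secret, recompute, compare in constant time.
    secret = SECRETS.get(public_key)
    if secret is None:
        return False
    return hmac.compare_digest(sign(secret, request_body), signature)
```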

Session Management

Session Management is hard in web development, but the server just has to know a little about you to give you a smooth browsing experience. This isn’t true for REST API calls. There is simply no reason to keep a session alive.

image

Just authenticate every time. It will save you so many headaches.

REST Can Be Attacked In Special Ways

Bad enough that all of the usual tricks work with REST, but there are special attacks and weaknesses.

Management of Secrets

The API key that effectively acts as a password needs to be protected, and this is very hard when you are running a JavaScript-only application. For instance, in a Windows 8 app or an Angular.JS site, you might just end up leaving your keys hanging out in the open:

twitterTimeout = 20000;
var twitterClientSecret = "kXFKUW9t2spHa3zgJtYX77aaRKfT1swvF9yfFC2tX34";
var twitterConsumerKey = "3NgwT8Xc0BcHJtH60h4cvw";
var twitterAppsUrl = "https://twitter.com/settings/applications";
(function () {
    "use strict";
    var roamingSettings = Windows.Storage.ApplicationData.current.roamingSettings;

    WinJS.Namespace.define("Twitter", {
        Service: WinJS.Class.define(function Twitter_ctor() {

So what are we going to do? We need to exchange the secret key for a single session token on the server, then write that to the JavaScript. Facebook does this very well. When the client app requests the login page, the server generates a unique token based on information sent in the request. The information used is always something the server knows, something the client knows, and something both know. So, for example, the server can generate a unique key based on the user agent + current time + a secret key. The server generates a hash from this information and then stores a cookie containing only the hash on the client machine.
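A simplified Python sketch of that derivation (a real implementation would also expire tokens and track them server-side):

```python
import hashlib

SERVER_SECRET = b"server-only-secret"  # something only the server knows

def session_token(user_agent, issued_at):
    # Something the client knows (user agent), something both know (time),
    # something the server knows (secret), hashed into one opaque token.
    material = user_agent.encode() + str(issued_at).encode() + SERVER_SECRET
    return hashlib.sha256(material).hexdigest()
```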

image

CSRF

CSRF is a big topic that is handled very well by OWASP. I’ll let them explain it if you aren’t familiar, but you need to know that if your REST API depends on the site’s session cookies, it is completely and totally susceptible to CSRF. Just don’t use state on your REST APIs.

Unused HTTP Verbs

REST uses the HTTP verbs, usually GET, PUT, POST and DELETE, as the action words in your API’s domain language. Trouble is, there are a lot of HTTP verbs, and we probably aren’t going to use them all. That said, if you had to open up PUT and DELETE on your server, you might have opened up others (like, all of them).

Once others are opened, if they aren’t specifically in your authorization configuration, then we can use a verb like HEAD and bypass authentication and perhaps get a token:

telnet www.example.com 80
HEAD /admin/pageIwannaSee.aspx HTTP/1.1

So what do we do? Turn off the unused verbs, or include them in your configuration.
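In application code the check is a one-liner. A framework-agnostic Python sketch (the dispatch function and handler shape are hypothetical):

```python
# Only the verbs this API actually uses; everything else is rejected.
ALLOWED_METHODS = {"GET", "POST", "PUT", "DELETE"}

def dispatch(method, handler):
    """Reject any request whose verb is outside the allowlist."""
    if method.upper() not in ALLOWED_METHODS:
        return 405, "Method Not Allowed"
    return handler()
```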

Direct Object Reference

Direct object reference is not specific to REST but it is particularly relevant to REST. For instance, take a look at a call to the Facebook API at https://graph.facebook.com/v1.0/1138975845:

{
  "id": "1138975845", 
  "first_name": "Mary", 
  "gender": "female", 
  "last_name": "Loaiza", 
  "link": "https://www.facebook.com/mary.loaiza.921", 
  "locale": "es_LA", 
  "name": "Mary Loaiza", 
  "updated_time": "2014-06-08T01:32:22+0000", 
  "username": "mary.loaiza.921"
}

And then check out the next integer, https://graph.facebook.com/v1.0/1138975846

{
  "id": "1138975846", 
  "first_name": "Chase", 
  "gender": "male", 
  "last_name": "Krywaruchka", 
  "link": "https://www.facebook.com/chase.krywaruchka", 
  "locale": "en_US", 
  "name": "Chase Krywaruchka", 
  "updated_time": "2013-09-17T03:17:00+0000", 
  "username": "chase.krywaruchka"
}

With parameters in the URL at this level, you need to be especially careful to use unique identifiers, like GUIDs perhaps, for your object references. Otherwise you run the risk of someone downloading your entire user list – not to give you any ideas.
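Generating non-sequential identifiers is cheap. A Python sketch using the uuid module:

```python
import uuid

users = {}

def create_user(name):
    # uuid4 is random: knowing one user's id tells you nothing about
    # the id that was handed out before or after it.
    user_id = str(uuid.uuid4())
    users[user_id] = {"name": name}
    return user_id
```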

Mass Assignment Vulnerability

The Mass Assignment Vulnerability is a special flaw in ActiveRecord, a database access methodology commonly used in REST APIs. For instance, take this Person object, right from Uncle Bob:

image

It is possible, in many languages, to instantiate a new Person in such a way that it just sucks in all of the correctly named form fields. However, a malicious user can change the client side JavaScript to set other fields that might not be on the form:

params[:person] = { isFlaggedForAudit: true}


This is because ActiveRecord will autogenerate the underlying class on the API side, and it won’t distinguish between fields that the user should be able to set and those the user shouldn’t have write access to. For publicly accessible classes, it’s recommended that you explicitly exclude sensitive fields, as shown.

[Bind(Exclude = "IsFlaggedForAudit")]
public class User
{
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public int NumberOfDependents { get; set; }
  public bool IsFlaggedForAudit { get; set; }
}

REST has special Architectural Considerations

Aside from the general good practices and specific code protection, there are a few overriding considerations that should go into planning an API.

Carefully Consider Your Authentication

Consider the audience for your API and then plan for authentication early. If you have a small number of discrete users, consider using digital certificates for authentication. They are a pain to set up, but it doesn’t get more secure. If you have a public audience, then look into HMAC. It is supported on most platforms.

Treat your API Keys like PKI

If you go with HMAC, make sure your user base understands to treat their secret key like a private key in a PKI environment. It is literally the key to the kingdom. I can’t even begin to tell you how often I have found the secret key in the URL (not secret) or in the comments of a JS file (also not secret).

Treat Your URL Like A Method

In REST, the URL is your method signature. Plan them that way. Name them intelligently, and make sure they aren’t leaking information that shouldn’t be in there.

image

Burp even has a special tool for fuzzing REST style parameters. Just because we don’t have the parameter names doesn’t mean they can’t be fuzzed.

Treat Your API Like A Web Site

Finally, don’t assume that because your web site was tested, that you can just go and expose previously internal services as external REST APIs. The API needs to be tested and reviewed separately. Treat it like a site of its own.

Building a tool for REST testing

I am working on a BURP plugin for checking for several of these. You can find the project on Google Code. If you’d like to be in on the project, please let me know and you can take a piece and work on it.

Who the heck is Bill Sempf?

Bill Sempf

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.
