Application Security Weekly for March 25

HSTS tracking beats even incognito mode in browsers, and it is more and more often used by advertisers.  In the most recent edition of macOS, Safari has two mitigations in place for this issue.  Let's hope other browsers follow suit shortly.


Here's a really good writeup by the researcher who discovered an XML External Entity vulnerability in Windows Remote Assistance.


Dropbox and Netflix join the growing group of large technology organizations promising not to sue white hat security researchers.


Here's another application vulnerability analysis procedure, well written and organized.

A new blog series: Application Security Weekly

No, I haven't given up on my OTHER blog series about application vulnerability assessment, but an opportunity opened up to start publishing my client newsletter on my blog.  It's usually just four stories about appsec that I think are particularly important this week.  Not even a lot of commentary - but if you only have so much time to absorb appsec news, this could be a great way to fit some in.

Enough chatting; this week's stories:


Any authenticated user on a Samba 4 Active Directory domain can change any other user's password via LDAP.  A patch is available.


As we all surmised, there was an app that leveraged Open Graph to download profiles from Facebook for the purpose of crafting election advertising.

I spend a lot of time talking about the Facebook Open Graph; here I am three years ago at Cleveland BSides:


Abusing Certificate Transparency logs to get subdomains from an HTTPS website:


A nice primer on breaking encryption from MalwareBytes:


Happy hunting!

Live Webinar: Come talk Application Vulnerability Analysis with me and WintellectNOW

I'll be doing a live webinar on Application Vulnerability Analysis on February 8 at 2PM EST - a month from today - and it will be a lot of fun! East coast folks can hang out in the afternoon and hack some stuff; west coast people can start their afternoon early.  Europe - you are on your own.  India - you should be sleeping.

Here is the link:

We'll talk about a few principles and then work through using an attack proxy to look for insecure direct object references, forced browsing, injection vulnerabilities, and whatever else comes to mind.

Thanks to WintellectNOW for having me on.

Hope to see you there!

Day 6 of C# Advent - Coding for an encrypted service

Welcome to the 6th day of the C# Advent! Let's encrypt some malware.

That sounds horrible, but in security testing, sometimes you have to use the tools of the bad guys to make sure you aren't likely to be susceptible to their attacks. There are many such tools, but a new one - one that takes instructions from an HTTP web server - was developed by Dave Kennedy of TrustedSec. Called TrevorC2, it is a Python command and control server with a variety of clients.  The attacker installs the client on the target workstation and has the server deliver commands - in this case, over HTTP.  And one of the clients is in C#.  And the traffic is encrypted with AES.

Wait, what?  AES?  Really?  

Yes, really.  Companies look for certain strings that are common in command and control traffic moving across their networks.  So, the attackers encrypt things! How do we find it then?  Well, that's not our problem at the moment - we just need to figure out how to replicate what the bad guys are doing.  So, that's the task for this day of the C# Advent - implement AES in C# that will talk nicely to a Python command and control server over HTTP.  

Fortunately, I have most of it written already.  The communication bits in C# are pretty straightforward, but the encryption piece is not at all.  When I started, I wanted to use System.Security.Cryptography.Aes, but guess what?  It ain't that easy.  The Python client encryption method looks like this:

    def encrypt(self, raw):
        raw = self._pad(AESCipher.str_to_bytes(raw))
        iv =
        cipher =, AES.MODE_CBC, iv)
        return base64.b64encode(iv + cipher.encrypt(raw)).decode('utf-8')
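The `self._pad` call in that snippet is PKCS#7-style padding, which brings the plaintext up to a multiple of the 16-byte AES block size. Here's a minimal pure-Python sketch of the idea (my own illustration, not the TrevorC2 source):

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # Append N bytes, each with value N, so the result is an exact
    # multiple of the block size (a full extra block if already aligned).
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    # The value of the last byte tells us how much padding to strip.
    return data[:-data[-1]]
```

The C# side has to match this padding (or rely on the framework's PKCS7 default) or the Python server will choke on the decrypted traffic.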


My initial take was something like this for an encrypt method:

        static string Encrypt(string target)
        {
            byte[] result = null;
            try
            {
                if (target == null || target.Length <= 0)
                    throw new ArgumentNullException("plainText");

                using (Aes aesAlg = Aes.Create())
                {
                    aesAlg.Key = Cipher;
                    ICryptoTransform encryptor = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV);
                    using (MemoryStream msEncrypt = new MemoryStream())
                    {
                        using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
                        {
                            using (StreamWriter swEncrypt = new StreamWriter(csEncrypt))
                            {
                                swEncrypt.Write(target);
                            }
                            result = msEncrypt.ToArray();
                        }
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
            return Encoding.ASCII.GetString(result);
        }

That didn't work.  So, I did what any good senior developer would do.  I buckled down, focused, and looked for a library written by someone smarter than me. Fortunately, Adam Caudill is out there with libsodium-net, a C# implementation of the well known NaCl library. There's the solution I needed. libsodium-net is a living, breathing example of how complicated encryption can be.  NaCl is a very well known implementation of the main encryption protocols ... in C.  Yeah, C.  You can use it in other languages, but C is what NaCl is for.

The chem majors among us will recognize that NaCl is table salt.  Thus, Sodium.  Libsodium is a higher level library for C++ and ilk, and then Adam's implementation uses the .NET native libraries to perform the same tasks.  It's a reasonable legacy, and you should consider it if you have to write encryption code, as, well, I do right now.

Anyway, the interface is SUPER easy to use.  The SecretBox holds everything you need, including the ability to generate the nonce required for the encryption, and to do the actual work.

So, my NEW method, after adding libsodium-net, looks like this:

        static string Encrypt(string target)
        {
            try
            {
                var nonce = SecretBox.GenerateNonce();
                var result = SecretBox.Create(target, nonce, Cipher);
                return Encoding.ASCII.GetString(result);
            }
            catch (Exception e) { Console.WriteLine(e); return null; }
        }

Tune back in next week, after I get my Trevor server up and running, and we'll get everything configured and working. (Remember, play with malware on a network that is not your employer's network!) 



Reconnaissance means something different for pentesters than it does for vulnerability analysts.  It is, truthfully, the first obvious break between the two forms of testing.  For vulnerability analysts, reconnaissance means doing all of the research required to understand what an app does and how it works.  For pentesters, reconnaissance means doing the vulnerability analysis.  Remember, their job is to exploit the vulnerabilities.  Finding the vulnerabilities is part of the reconnaissance! For vulnerability analysts, though, we have to go a little deeper into researching the application itself, in order to find every possible vulnerability.

I have quite a list - reconnaissance usually takes me a whole day of a week-long test.  I recommend not just running a scan - the goal is to get an idea of what the app does, and how well it does it.  If I can, I schedule a chat with the dev lead to talk about what the app should do, and then I write the application summary of the report after I have indexed the app.  But I am getting ahead of myself.

Reconnaissance begins before you ever touch the application itself.  Usually when I start, I just have a URL and credentials, just like an attacker would (we'll assume they stole the credentials from a user, cause that's pretty easy).  The absolute first thing I want to do is look at the network and the server. Fortunately, there are two awesome free tools that will help you do this.

However, in order to use these, you need an environment to run them in.  If you read my earlier posts you will know that I like Windows 10 for testing - putting me at odds with most of the industry.  I do, however, acknowledge the weaknesses of the platform for some kinds of testing, and Perl and Python scripts are included in that.  Therefore, I usually use a Kali Linux VM for scripts, because nearly everything I use is preinstalled.  And there is a new tool on the horizon, PentestBox, which does everything in a modified command prompt right in Windows.  It's pretty slick.

Start with the network (for one thing, this will give you the IP of the server). Really, we will probably just "network scan" one machine, depending on your scope, but that's OK. The tool you want to use is nmap. It's a fantastic open source network scanner, and its scripting engine can check for vulnerabilities on the hosts surrounding the web server in question.  I got a favorite set of parameters from the awesome Jon Welborn, and I recommend it:

nmap -sS -Pn -v --script=default,vuln,safe -oA nameOfOutputFile IPofTarget

For the web server itself, there are a couple of options, and I still like Nikto.  I like Nikto because I have a custom ruleset for it - and no, you can't have it, because it has client information in it.  That said, there are a LOT of other tools, and there is a very interesting and more updated tool for Windows (see my earlier posts for my thoughts on that) that might suit you better.  Still, Nikto is easy to run, and it does catch a lot of stuff, especially on older installs (and even not-that-old installs).  It's still pretty slick.

nikto -host IPofTarget

Another part of recon is getting details about the encryption certificate being used to protect communication with the browser.  If the site is public facing, I use Qualys's free SSL Test.  If the site is on an internal network, SSL Test can't see it, so you have to use another tool.  My usual go-to is sslscan, which is also super easy in either Kali or PentestBox.

sslscan IPofTarget

OK, enough of running scripts.  Next I turn to my proxy.  There are two I use, Burp Suite and ZAP.  Burp Suite is a paid, closed source application with a company backing it, complete with researchers and devs and everything.  It is well supported and has lots and lots of features. ZAP is a free, open source application with a company backing it, complete with researchers and devs and everything.  It is well supported and has lots and lots of features. It's your call.  However, if you are working for an organization with a standard vulnerability program, the results are often expected to be turned in using Burp.  Otherwise, ZAP is a fantastic tool.

What we are going to do with the proxy is index the site.  This works differently in Burp and ZAP, so I am just going to talk generalities here.  First, with the proxy running, exercise the application completely.  Some folks like to make a separate file from the proxy for each role, but I just use the labeling feature to label important interactions with their role.  This is the admin login.  This is the Admin adding a user.  This is a user adding a user.  WHOOPS.  Shouldn't be able to do that.  You get the idea. 

Next we want to write the Application Summary.  This is the part of the report that tells what the application does.  Why do we need to tell the developer what the application does? To make sure we are on the same page.  You would not believe the number of times I have had the developer say "Didn't you test the Admin screens?"  WHAT admin screens? Those weren't in the scope! Well, they were supposed to be.  What you do business-wise from there is up to you, but it level-sets what the scope really was.

Then it is time to brute force.  I use FuzzDB to get a list of known web directories and files, and then use the brute force tools to try them.  In ZAP, it's just called Forced Browse, but for Burp you need to use Intruder and get fancy.  I load up the first GET (for /), put the § payload markers right after, and run a directory scan, then a file scan.  Then I run a file scan on any interesting-looking directories. You will not BELIEVE what you will find, sometimes. There are 30,000 words in that medium directory list. 

Finally, spidering.  This is just like the old days, or like the Google spider - the proxy will look for any URLs and attempt to follow them.  The nice thing about attack proxies is that they will look in comments, JavaScript, CSS, text files - anything they can - for URLs.  Then they add them to the site map.

Once we have a solid look at the application, we scan.  I don't suggest just right clicking on the host and selecting Active Scan.  It takes forever, makes a crapload of extra stuff in the Burp file, and won't earn you much.  Instead, look for interesting POSTs, or GETs with neat stuff in the URL.  Stuff that gets edited. That's where the magic is.

While the scanning is happening, we do the "insight" portion of the recon.  First, get the comments. In Burp, that's under Engagement Tools; ZAP uses an addon. Read them. Look for developer names, open source packages, interesting stuff. Then hit Google.  No, I'm not kidding.  Look for existing, known vulnerabilities.  Do they have a vulnerable version of jQuery? Look up the devs.  Their LinkedIn, Facebook.  Then get their StackOverflow profile.  Asked any security questions recently?  Any code in there? Are you smelling what I'm cooking here?

This is also the time when you look for weird stuff.  Is there file upload?  Notate it.  Method name in a URL?  Point that out.  Redirects, like a URL in a querystring? Make a note. Encoded strings? Weird hidden HTML INPUT fields? Cookies you've never seen?  Those are all Things That Are Weird.  You need to add them to the file, and check on them in the analysis.

Last thing - you need to take apart anything binary.  Flash? Java applets?  ActiveX (please no)? Take them apart.  There are decompilation tools out there, and at the very least run strings (it's in Kali) to see what's inside.  You'd be amazed.
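And if you don't have the Unix strings binary handy, the same trick is a few lines of Python (a rough, ASCII-only sketch of my own):

```python
import re

def extract_strings(blob: bytes, min_len: int = 4) -> list:
    # Find runs of printable ASCII of at least min_len characters,
    # just like the classic Unix strings tool does by default.
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]
```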

That's just about it for recon.  I store everything in Evidence, and then step away from the app, even if just overnight.  Fresh eyes and all that.  The analysis part will likely take a bit, so we will break that over several posts.

Funny artifacts of security testing

Being a vulnerability analyst has a few humorous artifacts. This takes a few different forms, but in general it's like having 100 projects - with all the related notifications - but not being responsible for them anymore.

As an ad hoc member of the team for just a few weeks, you tend to get full access to everything, and then sometimes never lose it. For instance, I get TestFlight notifications for iOS applications I tested two years ago. I unsubscribe when I see them, but I have tested so many iOS applications, and some of them update less frequently, so I get a notification and think "Oh yeah, that app!".

Also, I regularly get put on code repositories.  Now, those get edited more often, so I usually can go and remove myself, but sometimes they are internal, so I am getting alerts but can't change the settings.  I'll email the dev lead, get the "oh yeah, I'll fix that," and then get a task assigned to me randomly two weeks later.  It's a lot of fun.

The best is Jira.  I'll get added to Jira as a source for bugs, and then questions will get directed back to me.  As many readers know, the average for remediation of security flaws hovers around 360 days. Often, I'll get a ticket that has me as the source assigned to a junior developer, and get an email from an (often internal) Jira system saying "What the heck does any of this mean?!??!"  Those are always a good time too.

My very most favorite is when developers get a vulnerability on a card sometime down the road, and look me up months or even years later for clarification.  This is exactly the kind of thing I like to see.  Honestly, often the environment has changed around the vulnerability, but it restarts the conversation.  This kind of followup is something that is going to help save appsec, and we need more folks like that.

Information Organization in Vulnerability Analysis

All of this fancy organization, all these lists, are just tools for the goal - making a list of everything that is wrong with an application.  When I start a test, I get a URL, a description of how the application works, credentials for two user accounts for each role, and a pat on the back.  This effectively emulates an attacker on “the inside” - they have already found a way in, as they inevitably will.  Now they have to take the next step.  That’s what I do: take that next step before the attacker finds the app, and list all of the problems in the app that will give the attacker a leg up.

Truth is, though, I have 40 hours and they have all the time they need.  So, information organization is of the utmost importance to me.  I take copious notes, use full test plans, and track statistics to keep the apps I test ahead of the attackers.  So let’s talk about how I organize information to run a test.

The information about the test has to be protected.  When possible, I use a VPN-protected repo to control the information, evidence, and reports related to an application.  I check out my details at the beginning of the day, and check them in at the end of the day.  Sounds like a dev?  Yup.  Worse, I use Subversion.  Yeah, go ahead and hate.  Remember, I don’t need branching and merging, so the SVN commands are a lot simpler.  Some of the folks I work with still use CVS, so it could be worse.

Quick note: I have a pretty solid network in my home office.  I use a business class router, a grid wireless network, and I get help from people who know networking to set things up.  Do this.

All of the information for a test is stored with this structure:

    Client
        Year
            Application
                Information
                Evidence
                Reports

I argue with myself about the order of year and application all the time, and even year and client.  Information architecture is a pain in the ass.  Did you know that the study of information architecture is called “ontology” and the study of treating cancer is called “oncology”?  There is a reason.  Anyway, let’s talk about what goes where.

  1. The Information folder is stuff that the client gives me about the test.  Sometimes it is empty.  Sometimes it has an email, WSDL files, documentation, the APK or IPA binary, source code, or OSINT I have performed.
  2. The Evidence folder is everything that I have that supports my findings.  The Burp state, screen shots, the notes and test plan (more on that in a sec), anything I stole, Nikto and SSLtest and other scans, databases, like that.
  3. The reports folder has twinkies.  No, that’s where I put the reports.  Mostly, it’s just THE report, but sometimes I save Burp or Nessus reports, summaries, code reviews - basically anything that is to be turned over to the client.

There are three files that are very important, and they make up the core of the information architecture we start an assessment with.

  1. The Test Plan.  This is a big Excel spreadsheet with tests to run.  It is based on the OWASP Testing Guide and The Web Application Hacker’s Handbook, 2nd Edition. I’ll probably release it later in the series. It goes in the Evidence folder.
  2. The Notes.  This is a text file with three sections.  At the top is the URL and credentials, plus any other salient information. Then there is a Notes section, where I put anything that I find interesting as I am doing recon.  “This URL has a filename as a parameter.”  or “Here is a file upload.”  Finally there is a Findings section.  This should match up with the test plan failures and contain the textual evidence. This is usually a request and response pair, but we will cover it later in the series.
  3. The Report template.  I do have a report template, but unlike most, I don’t have a lot of text in there about my testing process.  No one has complained yet.  If you are a client, and hate the fact that I don’t double the size of the report with process text, let me know.  But, I do use a template, and I like it. I’ll cover what’s in it in the reporting chapter.

To make sure I have a good, clean set of evidence, I reset everything I use to default values before I start.  I use Firefox only for testing, not browsing or anything else, so clearing cookies is no big deal.  I clear the browser history, the cookies, the webdb - everything - after each test.

Application Security is a Solved Problem

The vulnerabilities you hear about aren’t really the problem much of the time.

I don’t want to dismiss the OWASP Top 10, because that’s what started the focus on Application Security.  And that’s important - it really is.  That said, we are kinda past that now, and while there are still applications that harbor the flaws described in the Top 10, and many of the vulnerabilities in it still matter, the way that list is derived is not relevant to the way the world works anymore. If you dig into the data that makes up the list, you’ll find that 98% of it is unexploited static analysis findings.  Anyone who has spent time with static analysis will assure you that the only way static analysis is workable in a development environment is with triage.  And I’m sure you have guessed by now how much application-owner triage goes into these results: 0%.  These findings might be legit, but they are probably not exploitable.

What’s number one on the top ten?  Injection.  I agree, injection sucks.  Anytime you can make an application run some code that the developer didn’t intend to run, you’re gonna have a bad day.

In reality? It’s a unicorn these days.  SQL injection, command injection, browser injection (which is explicitly a separate entry in the Top 10) - doesn’t matter; vulnerability analysts just don’t find exploitable versions that much.  99.4% of the injection vulnerabilities cited in the data that makes up the Top 10 are from static analysis.  Did they break in? No, it was just possible.  Was there a mitigating factor - the fabled security onion? We’ll never know.

“OK, Bill, what ARE the Top 10 then?”

I don’t know. I’m an N=1 case, so I can just speak for me, and what I read.  That said, seems like a lot of people are using social engineering to get access to things.

“But Bill, that’s not an application security problem!”

Actually, it is. About a third of common application security vulnerabilities can be exploited with a social engineering attack, and yet that’s the third we most frequently dismiss because it’s hard to demo during a BlackHat demonstration.  And guess what?  Those are the vulnerabilities I see most frequently.

But it isn’t vulnerabilities that are the problem.  It is the bugs.  Let’s look at the bugs.  We are always talking about the outcomes, not the causes, but here I want to talk about causes.  We can talk about the outcomes later.  Let’s look at a breakdown:

  1. Out of date components: By far the biggest one I see is jQuery, which is arguably a big deal.  Mostly that is a DOM XSS problem, or info disclosure, and often the application isn’t using the feature that is exposed.  That said, there are a lot of holes here.  It is VERY hard to test all of the DOM XSS possibilities.  It’s far easier to just get rid of the sources and sinks.  
  2. Information Disclosure: Not really on the Top 10 at all, but used by attackers to build the phishing messages that make the sysadmins answer emails. A6 Sensitive Data Exposure doesn’t cover it - this is an account number in the URL, or returning a password on a change screen.  I am talking about commenting out a block of JavaScript because it is “causing a problem on the backend” or leaving a developer name in the HTML comments.  
  3. Cross Site Request Forgery: CSRF is a very complicated vulnerability with a very complicated exploit that has the simplest fix ever, and I have no idea why the fix isn’t part of every framework on the planet.  We already have a session cookie, and the browser sending it automatically is the problem. If we also have a session-specific value in the form post, the request can’t be forged (unless there is XSS).  Bang.  Done.  Why don’t we all do this?  Well, multihoming makes it hard, for starters, but I’ll have more on this later.
  4. Cross Site Scripting: Yeah, OK, it’s in the Top 10, and it is still a problem.  But you know where I see it? In the DOM! These never make it back to the server at all - no logging, no encoding, no nothing - what a pain in the butt.  And they still trash your CSRF protection.  XSS does not make me happy, which is why I put it on the report even if I can’t write a POC (which I rarely have time for).  Yes, I know that is grouchy and makes you dig through your JavaScript.  Sorry.
  5. Insufficient cookie protection: There is absolutely no reason to fail to add SECURE and HttpOnly to your session cookie.  It’s like one line of config code. Oh, I’m sorry, you have a fancy JavaScript session management scheme?  Too bad, rewrite it.  It’s likely broken anyway (from the security perspective).  Let your servers manage session, stop doing the JavaScript thing when it comes to your sessions.
  6. Vertical privilege escalation: The problem with VPE is that it requires some existing knowledge of the application.  In a 100% custom written application, that isn’t likely, aside from an insider attack.  The thing is, there aren’t that many 100% custom written applications.  Most projects start SOMEWHERE that is known. The authorization system is understood (along with weaknesses) or the framework has known page URLs (like WordPress) or something of the sort.  If the attacker knows where they are going and the authorization isn’t perfect, people can get to the administrator pages.
  7. Unpatched servers: So I probably don’t need to say Equifax but … Equifax.  Seriously, if the Struts flaw doesn’t convince you that actively exploited flaws in your framework aren’t a risk, then nothing will.  When your vendor - open source or otherwise - tells you that you need to patch right now, you need to patch right now.  Not after your test cycle.  Not when management gives the OK.  Right now.  I’m a dev, I know it doesn’t work like that, but it has to, and soon.  This is getting bad.
  8. Horizontal privilege escalation: When a developer keeps important account information somewhere that an attacker can edit it, and that information is used to decide what the user is looking at, they might be able to look at things they shouldn’t see.  Appropriate authorization solves this problem, but it is very hard to do right.  It’s a lot better to not give the user a chance to change this value at all.  Insecure Direct Object Reference is the flaw in question, and it’s still out there.
  9. Lack of Input Validation: A positive security model is a requirement for every application that faces the internet (and really internal ones as well, but that’s another topic). Every time you can, you should be checking the input against all of the possible inputs.  Can it not be negative?  Is it?  Reject it.  Is there a list?  Is the input on it? No?  Reject it.  Is it a free text field?  Fine.  Use the HTMLEncoder by OWASP.  Everyone (myself included) needed to do a better job looking for flaws in the validation of inputs, and removing them.
  10. Weird stuff: There is so much weird stuff.  I got an application to give me account details because of a malformed USER AGENT.  Found the user’s role tacked onto the session ID in Base64.  Discovered an application that parsed a Word document - including a call to the template at a random IP on the internet - in order to allow editing.  From there to here, from here to there, funny things are everywhere.  If you find yourself thinking “Hey, that’s a weird neat compelling way to solve that problem,” look for something simpler.  Complexity is the enemy of security.
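For what it's worth, the CSRF fix from number 3 above really is tiny; here's a framework-free sketch of the idea in Python (the names are mine, and any real framework already has this built in):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Mint one random token per session and stash it server-side;
    # the app echoes this value into a hidden field on every form.
    return session.setdefault("csrf_token", secrets.token_urlsafe(32))

def check_csrf_token(session: dict, submitted: str) -> bool:
    # On every state-changing POST, compare the form value against
    # the session copy in constant time; reject if absent or different.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

A forged cross-site request can send the cookie (the browser does that automatically) but can't know the token, so it fails the check.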

I should probably talk about mobile.  The biggest thing that developers need to understand about mobile is that the compiled app is not invulnerable to being analyzed.  Don’t put anything in there that you wouldn’t want the user to have.  API keys, private encryption keys, and paths to functions the user shouldn’t have are three of the most common.  Just the other night someone stole the keys to send alerts from an app and sent random messages to all 100,000 users.  At 3AM.  I was not amused.

Anyway, this is just my take.  Again, I am not ripping on the Top 10, I love the Top 10.  It’s just useful to get the perspective from other-than-static-analysis companies.  Manual dynamic analysis has its place.  So in that spirit, I’d like to take you on a tour of how I go about performing vulnerability analysis - looking for the bugs that I have listed here. I’m no expert BUT I do it every day, so you might be able to glean something interesting out of my stories.

On Application Vulnerability Analysis

We live in a world where applications run the technology that we all use.  There was, once, a time where hardware was custom developed to solve certain problems, but these days we have general use hardware and applications designed to solve our problems.  Everything from apps on our phones to websites to alarm systems to the management screens for our internet gateways are applications, coded in common languages, using common protocols.

For every 100 lines of code, there are five security vulnerabilities.

The average application is 15,000 lines of code.
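Put those two numbers together and the scale of the problem falls out as simple arithmetic (back-of-the-envelope, using the figures above):

```python
vulns_per_100_loc = 5       # five security vulnerabilities per 100 lines
average_app_loc = 15_000    # lines of code in the average application

# 150 hundred-line chunks, five flaws each
expected_vulns = average_app_loc // 100 * vulns_per_100_loc
print(expected_vulns)  # 750
```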

Let’s take a minute to talk about pentesting.  Pentesting, or “penetration testing” to expand the vernacular, is the art and science of finding a path through the security of a system in order to achieve a goal.  It is different from red teaming because, rather than being a continuous series of tests like contemporary attackers would run, penetration testing is a scheduled event that is designed to move from point A to point B.

Vulnerability analysis is different.  The goal is to take an application and find every single thing that could be used to circumvent the security of that application, then report on those items and provide a solution.  It is similar to penetration testing because it is a scheduled event. It is easier than penetration testing because you are far less likely to go to jail for the night.  It is harder than pentesting because you don’t have to find one flaw - you have to find all of the flaws.

Quality assurance and vulnerability assessment have a lot in common. Both practices have the goal of making sure the application in question is as good as it can be.  Quality assurance is focused on the end user experience.  To that end, they both use a test plan. The plan has a starting state, an end state, and steps to get from one to the other.  With QA, it is best if those tests succeed.  With vulnerability analysis, it is best if those tests fail.

That test plan is key.  In quality assurance, the business owners give a detailed description of how the application should respond under every circumstance.  If the user submits a valid application, it should be sent to the processing center.  If it has an invalid date, then this error will be presented.  If it fails in processing, then this message will be sent.

In vulnerability analysis, the test plan is determined by the attackers.  Whatever the flavor of the week is, it’s added to twenty years of attacks on the HTTP protocol, language specifics, side channel attacks, and other weirdness.  The analyst will, rather than checking how the application responds to valid business requests, check how the application responds to this huge collection of known threats.  Not just the ones that work.  All of them.  The goal is to find all of the vulnerabilities.

It is true, not all of the vulnerabilities can be found. There are a lot of tests, and a lot of fields, and a lot of POSTs, and a lot of URLs.  They can’t all be checked and fixed with machines (at least not yet). There isn’t the time or money to check them all by hand. Things will be missed, and that’s how we end up with 81% of breaches in the last 10 years having an aspect of application security involved. This is why we still have SQL injection, even in the age of ORMs.  This is why we still see CSRF even though it is a well understood vulnerability.  There are far too many legacy applications and far too few talented developers.

So why am I writing about this?  I performed my first vulnerability analysis in 2002.  That seems like a recent date to me - I wrote my first paid application in 1986.  But in the arena of application security this is a recent date.  The folks that wrote the Internet didn’t design it for security. That wasn’t the goal - sharing was the goal. I am writing this because I participated in the process of the web becoming the hub for commerce and communication - where security mattered.

And I, along with a boatload of others, failed miserably.

I got paged at 11PM on a Saturday in 1997 by a trigger I’d set up on a web server because the hard drive was full.  Long story short: the drive was full of German porn because of a SQL injection flaw I’d written into an application running on that server. Someone had broken in and set up a convenient FTP server.

At the time I didn’t know SQL Injection was even possible.

I’ll leave the path from then to now to the reader, but suffice it to say, I was the security guy on every project from then on.  I still strive to teach developers how to write more secure code - I’m the Security Track advisor to three conferences, and I speak to developers monthly about security awareness.  I train a thousand folks a year on secure coding standards.

That’s not why I am writing today, though.  There are way too few people checking applications for vulnerabilities, and OWASP isn’t making things obvious enough.  They have a greater reach than I, and that’s awesome, but I wanted to put together this book to lay out how I see vulnerability analysis in plain language, in hopes that it would help a few other folks get into the field.

What’s in this guidance certainly isn’t the only way.  It probably isn’t the best way.  It might be a bad way. I’m not sure, but it has worked for me, and I’m including things that you won’t hear in some breakdowns, like client management and report writing. Feel free to ignore everything, or take just the pieces that you like.  And send me feedback!  I’m more public than I should be on Twitter (@sempf) and Linkedin.  My Skype is if you want to tell me how awful it was privately. I’ll take your perspective in any form.

ABC interviewed me about being on the good guys team

Bryant Maddrik at ABC6 interviewed me and Todd Whittaker at Franklin about the plight of the good guys in the information security wars. Here's the link to the post:

We met at the Idea Foundry, where I am a member, and it went well, I thought.  The Smart Columbus kickoff was happening in the main room though, and they had a live band! Can't hear it at all in the recording though.

Love to hear your thoughts about who is winning the battles.

Bill Sempf

Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.


