Funny artifacts of security testing

Being a vulnerability analyst comes with a few humorous artifacts. They take a few different forms, but in general it's like having 100 projects - with all the related notifications - without being responsible for any of them anymore.

As an ad hoc member of the team for just a few weeks, you tend to get full access to everything, and then sometimes never lose it. For instance, I still get TestFlight notifications for iOS applications I tested two years ago. I unsubscribe when I see them, but I have tested so many iOS applications, and some of them update infrequently, so a notification shows up and I think "Oh yeah, that app!"

Also, I regularly get added to code repositories.  Those get edited more often, so I can usually go and remove myself, but sometimes they are internal, so I am getting alerts but can't change the settings.  I'll email the dev lead, get the "oh yeah, I'll fix that," and then get a task randomly assigned to me two weeks later.  It's a lot of fun.

The best is Jira.  I'll get added to Jira as the source for bugs, and then questions get directed back to me.  As many readers know, the average time to remediate a security flaw hovers around 360 days. Often, a ticket that lists me as the source gets assigned to a junior developer, and I get an email from an (often internal) Jira system saying "What the heck does any of this mean?!??!"  Those are always a good time too.

My very favorite is when developers get a vulnerability on a card sometime down the road and look me up months or even years later for clarification.  This is exactly the kind of thing I like to see.  Honestly, the environment has often changed around the vulnerability, but it restarts the conversation.  This kind of follow-up is going to help save appsec, and we need more folks like that.

Information Organization in Vulnerability Analysis

All of this fancy organization and all these lists are just tools for the goal: making a list of everything that is wrong with an application.  When I start a test, I get a URL, a description of how the application works, credentials for two user accounts for each role, and a pat on the back.  This effectively emulates an attacker on "the inside" - one who has already found a way in, as they inevitably will.  Now they have to take the next step.  That's what I do: take that next step before the attacker finds the app, and list all of the problems that will give the attacker a leg up.

Truth is, though, I have 40 hours and they have all the time they need.  So information organization is of the utmost importance to me.  I take copious notes, use full test plans, and track statistics to keep the apps I test ahead of the attackers.  So let's talk about how I organize information to run a test.

The information about the test has to be protected.  When possible, I use a VPN-protected repo to control the information, evidence, and reports related to an application.  I check out my details at the beginning of the day and check them in at the end of the day.  Sounds like a dev?  Yup.  Worse, I use Subversion.  Yeah, go ahead and hate.  Remember, I don't need branching and merging, so the SVN commands are a lot simpler.  Some of the folks I work with still use CVS, so it could be worse.
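For the curious, the daily ceremony is just stock SVN commands - one checkout at the start of the engagement, then update, add, and commit every day. Something like this, with the repo URL and file names invented for illustration:

svn checkout https://vpn.example.com/svn/client-x/trunk client-x
svn update
svn add evidence/login-bypass.png
svn commit -m "end of day: test plan progress and new evidence"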

Quick note: I have a pretty solid network in my home office.  I use a business-class router and a grid wireless network, and I get help from people who know networking to set things up.  Do this.

All of the information for a test is stored with this structure:

/client
  /year
    /application
      /information
      /evidence
      /reports

I argue with myself about the order of year and application all the time, and even year and client.  Information architecture is a pain in the ass.  Did you know that the study of information architecture is called “ontology” and the study of treating cancer is called “oncology”?  There is a reason.  Anyway, let’s talk about what goes where.

  1. The Information folder is stuff the client gives me about the test.  Sometimes it is empty.  Sometimes it has an email, WSDL files, documentation, the APK or IPA binary, source code, or OSINT I have performed.
  2. The Evidence folder is everything I have that supports my findings: the Burp state, screenshots, the notes and test plan (more on those in a sec), anything I stole, Nikto and SSL test output and other scans, databases, like that.
  3. The Reports folder has twinkies.  No, that's where I put the reports.  Mostly it's just THE report, but sometimes I save Burp or Nessus reports, summaries, code reviews - basically anything that is to be turned over to the client.  (I scaffold this whole layout with a little script, sketched below.)
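Since I stand this tree up for every engagement, here is a minimal sketch of scaffolding it in Python - the client and application names are placeholders, and a shell one-liner would do just as well:

import os
from datetime import date

def scaffold(client, application, year=None):
    # Build client/year/application with the three working folders.
    year = year or str(date.today().year)
    for folder in ("information", "evidence", "reports"):
        os.makedirs(os.path.join(client, year, application, folder), exist_ok=True)

scaffold("client-x", "webapp-y")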

There are three files that are very important; they make up the core of the information architecture for an assessment.

  1. The Test Plan.  This is a big Excel spreadsheet with tests to run.  It is based on the OWASP Testing Guide and The Web Application Hacker's Handbook, Second Edition. I'll probably release it later in the series. It goes in the Evidence folder.
  2. The Notes.  This is a text file with three sections.  At the top are the URL and credentials, plus any other salient information. Then there is a Notes section, where I put anything interesting I find while doing recon: "This URL has a filename as a parameter." or "Here is a file upload."  Finally, there is a Findings section.  It should match the test plan failures and contain the textual evidence - usually a request and response pair, but we will cover that later in the series. (A skeleton of the file is sketched after this list.)
  3. The Report Template.  I do have a report template, but unlike most, I don't put a lot of text in there about my testing process.  No one has complained yet.  If you are a client and hate the fact that I don't double the size of the report with process text, let me know.  But I do use a template, and I like it. I'll cover what's in it in the reporting chapter.
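For reference, the notes file skeleton looks something like this - all of the contents here are invented for illustration:

https://app.example.com/login
user1 / applicant / ********
user2 / administrator / ********

NOTES
- /report.aspx takes a filename as a parameter
- file upload on the profile page

FINDINGS
- Reflected XSS in the search field
  (request and response pair goes here)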

To make sure I have a good, clean set of evidence, I reset everything I use to default values before I start.  I use Firefox only for testing, not browsing or anything else, so clearing cookies is no big deal.  I clear the browser history, the cookies, the web databases, everything after each test.

Application Security is a Solved Problem

The vulnerabilities you hear about aren’t really the problem much of the time.

I don't want to dismiss the OWASP Top 10, because that's what started the focus on application security.  And that's important - it really is.  That said, we are kinda past it now. While there are applications that harbor the flaws described in the Top 10, and many of the vulnerabilities described there still matter, the way the list is derived is no longer relevant to the way the world works. If you dig into the data that makes up the list, you'll find that 98% of it is unexploited static analysis findings.  Anyone who has spent time with static analysis will assure you that the only way static analysis is workable in a development environment is with triage.  And I'm sure you have guessed by now how much application-owner triage goes into these results: 0%.  These findings might be legit, but they are probably not exploitable.

What’s number one on the top ten?  Injection.  I agree, injection sucks.  Anytime you can make an application run some code that the developer didn’t intend to run, you gonna have a bad day.

In reality? It's a unicorn these days.  SQL injection, command injection, browser injection (which is explicitly a separate entry in the Top 10) - doesn't matter, vulnerability analysts just don't find exploitable versions that often.  99.4% of the injection vulnerabilities cited in the data that makes up the Top 10 are from static analysis.  Did anyone break in? No, it was just possible.  Was there a mitigating factor - the fabled Security Onion? We'll never know.

“OK, Bill, what ARE the Top 10 then?”

I don't know. I'm an N=1 case, so I can only speak for me and what I read.  That said, it seems like a lot of people are using social engineering to get access to things.

“But Bill, that’s not an application security problem!”

Actually, it is. About a third of common application security vulnerabilities can be exploited with a social engineering attack, and yet that's the third we most frequently dismiss, because it's hard to show off on stage at BlackHat.  And guess what?  Those are the vulnerabilities I see most frequently.

But it isn't the vulnerabilities that are the problem.  It's the bugs.  We are always talking about the outcomes, not the causes; here I want to talk about causes, and we can get to the outcomes later.  Let's look at a breakdown:

  1. Out of date components: By far the biggest offender I see is jQuery, which is arguably a big deal.  Mostly that is a DOM XSS problem or info disclosure, and often the application isn't even using the feature that is exposed.  That said, there are a lot of holes here.  It is VERY hard to test all of the DOM XSS possibilities.  It's far easier to just get rid of the sources and sinks.
  2. Information Disclosure: Not really on the Top 10 at all, but used by attackers to build the phishing messages that make the sysadmins answer emails. A6 Sensitive Data Exposure doesn't cover it - that's an account number in the URL, or returning a password on a change screen.  I am talking about commenting out a block of JavaScript because it is "causing a problem on the backend," or leaving a developer name in the HTML comments.
  3. Cross Site Request Forgery: CSRF is a very complicated vulnerability with a very complicated exploit that has the simplest fix ever, and I have no idea why the fix isn't part of every framework on the planet.  We have a session cookie already - that's the problem. If we also require a session-bound token in the form post, the request can't be forged (unless there is XSS).  Bang.  Done.  Why don't we all do this?  Well, multihoming makes it hard, for starters, but I'll have more on this later. (There's a sketch of the fix right after this list.)
  4. Cross Site Scripting: Yeah, OK, it's in the Top 10 and it is still a problem.  But you know where I see it? In the DOM! Those payloads never make it back to the server at all - no logging, no encoding, no nothing.  What a pain in the butt.  And they still trash your CSRF protection.  XSS does not make me happy, which is why I put it on the report even if I can't write a POC (which I rarely have time for).  Yes, I know that is grouchy and makes you dig through your JavaScript.  Sorry.
  5. Insufficient cookie protection: There is absolutely no reason to fail to add Secure and HttpOnly to your session cookie.  It's like one line of config code. Oh, I'm sorry, you have a fancy JavaScript session management scheme?  Too bad, rewrite it.  It's likely broken anyway (from the security perspective).  Let your servers manage session; stop doing the JavaScript thing when it comes to your sessions.
  6. Vertical privilege escalation: The problem with VPE is that it requires some existing knowledge of the application.  In a 100% custom-written application, that isn't likely, aside from an insider attack.  The thing is, there aren't that many 100% custom-written applications.  Most projects start SOMEWHERE that is known: the authorization system is understood (along with its weaknesses), or the framework has known page URLs (like WordPress), or something of the sort.  If the attacker knows where they are going and the authorization isn't perfect, people can get to the administrator pages.
  7. Unpatched servers: So I probably don't need to say Equifax but ... Equifax.  Seriously, if the Struts flaw doesn't convince you that actively exploited flaws in your framework are a risk, then nothing will.  When your vendor - open source or otherwise - tells you that you need to patch right now, you need to patch right now.  Not after your test cycle.  Not when management gives the OK.  Right now.  I'm a dev, I know it doesn't work like that, but it has to, and soon.  This is getting bad.
  8. Horizontal privilege escalation: When a developer keeps important account information somewhere an attacker can edit it, and that information is used to decide what the attacker gets to look at, they might be able to see things they shouldn't.  Appropriate authorization solves this problem, but it is very hard to do right.  It's a lot better to never give the user a chance to change that value at all.  Insecure Direct Object Reference is the flaw in question, and it's still out there.
  9. Lack of Input Validation: A positive security model is a requirement for every application that faces the internet (and really internal ones as well, but that's another topic). Every time you can, you should be checking the input against the set of possible inputs.  Can it not be negative?  Is it?  Reject it.  Is there a list?  Is the input on it? No?  Reject it.  Is it a free text field?  Fine - encode the output with OWASP's encoder.  Everyone (myself included) needs to do a better job looking for flaws in input validation, and removing them. (A sketch of the positive model follows the CSRF one below.)
  10. Weird stuff: There is so much weird stuff.  I got an application to give me account details because of a malformed User-Agent.  Found the user's role tacked onto the session ID in Base64.  Discovered an application that parsed a Word document - including a call to the template at a random IP on the internet - in order to allow editing.  From there to here, from here to there, funny things are everywhere.  If you find yourself thinking "Hey, that's a weird, neat, compelling way to solve that problem," look for something simpler.  Complexity is the enemy of security.
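Since I griped about CSRF above, here is a minimal sketch of the synchronizer token fix, in Python with Flask. The route and field names are invented for illustration, and real frameworks ship hardened versions of exactly this:

import secrets
from flask import Flask, abort, render_template_string, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)

def csrf_token():
    # One random token per session, reused for every form in that session.
    if "_csrf" not in session:
        session["_csrf"] = secrets.token_urlsafe(32)
    return session["_csrf"]

@app.route("/transfer", methods=["GET", "POST"])
def transfer():
    if request.method == "POST":
        # A forged cross-site request carries the session cookie
        # automatically, but it cannot read or guess this value.
        token = session.get("_csrf")
        sent = request.form.get("_csrf", "")
        if not token or not secrets.compare_digest(sent, token):
            abort(403)
        return "transferred"
    return render_template_string(
        '<form method="post">'
        '<input type="hidden" name="_csrf" value="{{ token }}">'
        '<button>Transfer</button></form>',
        token=csrf_token(),
    )

The same-origin policy keeps the attacker's page from reading the token out of the form, which is also why XSS (which runs inside the origin) still trashes the protection.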
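And since I preached the positive security model, a sketch of allow-list validation. The field names and the state list are made up, and note that free text is handled by output encoding, not rejection:

ALLOWED_STATES = {"OH", "PA", "NY"}  # a real app would load the full list

def validate_transfer(form):
    # Return a list of validation errors; an empty list means it passed.
    errors = []
    # Numeric field: must parse as a whole number, and can't be negative.
    try:
        if int(form["amount"]) < 0:
            errors.append("amount cannot be negative")
    except (KeyError, ValueError):
        errors.append("amount must be a whole number")
    # List field: reject anything that is not on the list.
    if form.get("state") not in ALLOWED_STATES:
        errors.append("state is not on the list")
    # Free text fields pass through here; encode them at render time instead.
    return errors

print(validate_transfer({"amount": "-3", "state": "ZZ"}))
# ['amount cannot be negative', 'state is not on the list']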

I should probably talk about mobile.  The biggest thing developers need to understand about mobile is that the compiled app is not invulnerable to analysis.  Don't put anything in there that you wouldn't want the user to have.  API keys, private encryption keys, and paths to functions the user shouldn't reach are the three most common.  Just the other night, someone stole the keys an app used to send alerts and blasted random messages to all 100,000 users.  At 3 AM.  I was not amused.
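If you doubt how easy that stuff is to find, here is a hypothetical sketch - an APK is just a zip file, and anything printable in the compiled classes is one strings-dump away (the filename is a placeholder):

import re
import zipfile

# An APK is a zip archive; pull the compiled Dalvik bytecode out of it.
with zipfile.ZipFile("app.apk") as apk:
    dex = apk.read("classes.dex")

# Dump printable runs of 20+ characters and flag anything key-like.
for run in re.findall(rb"[ -~]{20,}", dex):
    if b"key" in run.lower():
        print(run.decode("ascii", errors="replace"))

No decompiler, no jailbreak, no particular skill required.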

Anyway, this is just my take.  Again, I am not ripping on the Top 10 - I love the Top 10.  It's just useful to get a perspective from somewhere other than the static analysis companies.  Manual dynamic analysis has its place.  So in that spirit, I'd like to take you on a tour of how I go about performing vulnerability analysis - looking for the bugs I have listed here. I'm no expert, BUT I do it every day, so you might be able to glean something interesting from my stories.

On Application Vulnerability Analysis

We live in a world where applications run the technology we all use.  There was once a time when hardware was custom-built to solve particular problems, but these days we have general-purpose hardware and applications designed to solve our problems.  Everything from the apps on our phones to websites to alarm systems to the management screens for our internet gateways is an application, coded in common languages, using common protocols.

By some estimates, every 100 lines of code contain five security vulnerabilities.

The average application is 15,000 lines of code.  Do the math: that's roughly 750 potential vulnerabilities in a single average application.

Let's take a minute to talk about pentesting.  Pentesting - "penetration testing," to expand the vernacular - is the art and science of finding a path through the security of a system in order to achieve a goal.  It is different from red teaming: rather than mounting a continuous series of tests the way contemporary attackers would, penetration testing is a scheduled event designed to move from point A to point B.

Vulnerability analysis is different.  The goal is to take an application, find every single thing that could be used to circumvent that application's security, then report on those items and provide a solution.  It is similar to penetration testing in that it is a scheduled event. It is easier than penetration testing because you are far less likely to go to jail for the night.  It is harder than pentesting because you don't have to find just one flaw - you have to find all of the flaws.

Quality assurance and vulnerability assessment have a lot in common. Both practices have the goal of making sure the application in question is as good as it can be; quality assurance just focuses on the end user experience.  To that end, they both use a test plan. The plan has a starting state, an end state, and steps to get from one to the other.  With QA, it is best if those tests succeed.  With vulnerability analysis, it is best if those tests fail.

That test plan is key.  In quality assurance, the business owners give a detailed description of how the application should respond under every circumstance.  If the user submits a valid application, it should be sent to the processing center.  If it has an invalid date, then this error will be presented.  If it fails in processing, then this message will be sent.

In vulnerability analysis, the test plan is determined by the attackers.  Whatever the flavor of the week is, it gets added to twenty years of attacks on the HTTP protocol, language specifics, side channel attacks, and other weirdness.  Rather than checking how the application responds to valid business requests, the analyst checks how the application responds to this huge collection of known threats.  Not just the ones that work - all of them.  The goal is to find all of the vulnerabilities.

It is true that not all of the vulnerabilities can be found. There are a lot of tests, and a lot of fields, and a lot of POSTs, and a lot of URLs.  They can't all be checked and fixed by machines (at least not yet), and there isn't the time or money to check them all by hand. Things will be missed, and that's how we end up with 81% of breaches in the last 10 years having an application security aspect. This is why we still have SQL injection, even in the age of ORMs.  This is why we still see CSRF, even though it is a well understood vulnerability.  There are far too many legacy applications and far too few talented developers.

So why am I writing about this?  I performed my first vulnerability analysis in 2002.  That seems like a recent date to me - I wrote my first paid application in 1986 - but in the arena of application security, it makes me an old-timer. The folks that wrote the Internet didn't design it for security. That wasn't the goal - sharing was the goal. I am writing this because I participated in the process of the web becoming the hub for commerce and communication - where security mattered.

And I, along with a boatload of others, failed miserably.

I got paged at 11PM on a Saturday in 1997 by a trigger I’d set up on a web server because the hard drive was full.  Long story short: the drive was full of German porn because of a SQL injection flaw I’d written into an application running on that server. Someone had broken in and set up a convenient FTP server.

At the time I didn’t know SQL Injection was even possible.
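For anyone who hasn't seen the mistake, a reconstructed illustration in Python - this is not the actual 1997 code, just the shape of the bug:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Attacker-controlled input, straight off the query string.
name = "x' OR '1'='1"

# Vulnerable: the input becomes part of the SQL text itself.
query = "SELECT * FROM users WHERE name = '" + name + "'"  # don't run this

# Safe: parameterized, so the input can only ever be a value.
conn.execute("SELECT * FROM users WHERE name = ?", (name,))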

I’ll leave the path from then to now to the reader, but suffice it to say, I was the security guy on every project from then on.  I still strive to teach developers how to write more secure code - I’m the Security Track advisor to three conferences, and I speak to developers monthly about security awareness.  I train a thousand folks a year on secure coding standards.

That's not why I am writing today, though.  There are way too few people checking applications for vulnerabilities, and OWASP isn't making things obvious enough.  They have a greater reach than I do, and that's awesome, but I wanted to put together this book to lay out how I see vulnerability analysis in plain language, in hopes that it will help a few other folks get into the field.

What's in this guidance certainly isn't the only way.  It probably isn't the best way.  It might be a bad way. I'm not sure, but it has worked for me, and I'm including things that you won't hear in some breakdowns, like client management and report writing. Feel free to ignore everything, or take just the pieces that you like.  And send me feedback!  I'm more public than I should be on Twitter (@sempf) and LinkedIn.  My Skype is pointweb.net if you want to tell me how awful it was privately. I'll take your perspective in any form.

ABC interviewed me about being on the good guys team

Bryant Maddrik at ABC6 interviewed me and Todd Whittaker at Franklin about the plight of the good guys in the information security wars. Here's the link to the post:

http://myfox28columbus.com/news/local/fighting-back-against-cyber-attacks

We met at the Idea Foundry, where I am a member, and I thought it went well.  The Smart Columbus kickoff was happening in the main room, and they had a live band! You can't hear it at all in the recording, though.

Love to hear your thoughts about who is winning the battles.

The fork debacle

Monday this week, I was at lunch with the family and Adam (my son) kept bumping Gabrielle (my wife) with his elbow while cutting up his breakfast burrito.

"You are on the left corner of the table! Why are you manspreading to the right?"

"I'm cutting!"

"With your right hand?"

"... yeaaaaahhhh?"

Then she looked at me.  I also, as a left-hander, fork with my left and cut with my right. Apparently, after 25 years together, we never noticed that we did this differently.  Gabrielle, as a right-hander, uses the OLD European style of using the knife in the right hand, then moving the fork to the right hand to eat.  She's a switcher.

So I did what I do, and asked Twitter.

It's only been 24 hours, and I already have 70-ish replies - more than enough to do some analysis, so here goes.

Not a surprise: 80% of left-handers do not switch.  Some folks cut left and some cut right, but most of us do not change hands while eating.  You know what they say, left-handers are in their right minds.

What WAS a surprise is that 70% of RIGHT-handed respondents also do not switch. That's survey bias from my followers, though. My community is, well, made up of geeks.  Fully half of the folks that replied admitted that they taught themselves not to switch because it is more efficient. The other half?  Still biased - they are European. Which leads to a whole new question: why?

As it turns out, there have been a few articles written about this. Originally, in Europe, switching was cool, and then it wasn't.  But like with measuring, Americans didn't get the memo. I understand that. It used to be hip to put two spaces after a period. Things change.

What I don't understand is why, my results notwithstanding, left-handed people keep the fork in one hand by default while right-handed people switch by default (at least in the U.S.). If you have any feedback on that, I'd appreciate insight in the comments.

 

The Pen Tester's Framework: Skipfish

Skipfish is another web-mapping vulnerability scanner, along the lines of my preferred Nikto.  Skipfish brings three specific things to the table: performance on very large sites, super easy use, and a super well designed set of rules for edge-case vulnerabilities.  Huh, I am kinda convincing myself - I should be using this more.

Written in C by Michal Zalewski, Niels Heinen, and Sebastian Roschke, Skipfish is one of the best-architected tools I have seen. There are some weird things, though: most of the tests live in a database, except this set, which is compiled into the source:

 struct lfi_test lfi_tests[] = {
  {{"/../../../../../../../../../etc/hosts",
    "file:///etc/hosts", 0
   }, "127.0.0.1", "File /etc/hosts was disclosed." },

  {{"/../../../../../../../../../etc/passwd",
    "file:///etc/passwd", 0
   }, "root:x:0:0:root", "File /etc/passwd was disclosed."},

  {{"..\\..\\..\\..\\..\\..\\..\\..\\boot.ini",
    "file:///boot.ini", 0
   }, "[boot loader]", "File boot.ini was disclosed."},
};

Like Nikto, though, most of the tests are in an editable database, so you can customize things and keep the tests up to date. It is highly configurable, too, without busting up the rulesets.  For instance, in the config you have control over the wordlists.

######################################
## Dictionary management
######################################

# The read-only wordlist that is used for bruteforcing
wordlist = dictionaries/medium.wl

# The read-write wordlist and where learned keywords will be written
# for future scans.
#rw-wordlist = my-wordlist.wl

# Disable extension fuzzing
no-extension-brute = false

# Disable keyword learning
no-keyword-learning = false

Anyway, running skipfish is (as they wanted) stupid easy.

skipfish -o ~/sempfnet -S medium.wl https://www.sempf.net

Then you get one of the best dashboards in all of open source software, in my opinion.  Seriously, if you need to look good AND get good results, skipfish is your tool.


What's more, skipfish puts together a nice report, and I'm a big fan. I don't give clients these reports - I hand-write mine - but if you are doing quick tests, this might be good enough!  Just double-check for false positives.


All in all, a solid tool and a competitor to Nikto. Good addition to the PTF.

The Pen Tester's Framework: crackmapexec

As I mentioned in my intro post, I have started with the vulnerability-analysis modules and am just going in alphabetical order.

So we start with crackmapexec.

Crackmapexec is a recon tool for when you are on a Windows network. It is called the Swiss Army knife of pentesting tools, and I agree. As the readme.md file says, there is a selection of core functions that can be called using arguments, but you can read a man page, right?

  • Credential Gathering
  • Mapping
  • Enumeration
  • Spidering
  • Command Execution
  • Shellcode injection
  • File System Interaction

So this is not just a scanning tool. I am going to run it on my POINT network, and not just because the FALE VPN is down. AGAIN. (Dammit, Matt, you had. ONE. JOB.)  How much damage can it cause, right?

Oh, awesome. I ran it and nothing happened.

It turns out that the alias that PTF puts in for crackmapexec doesn't work. I am not sure why; I consistently got a 'too few arguments' error when I tried to use it.  Instead, I needed to run /pentest/vulnerability-analysis/crackmapexec/crackmapexec.py directly (as su) to make it work.

Once I got that far, everything was smooth. The basic commands worked well and scanned my network for shares here at POINT.

sudo python crackmapexec.py -t 10 192.168.240.0/24 --shares

Interestingly, it had some problems with my NAS:

[+] 192.168.240.106:445 is running  (name:BAGOFHOLDING) (domain:BAGOFHOLDING)
[-] 192.168.240.106:445 SMB SessionError: STATUS_USER_SESSION_DELETED(The remote user session has been deleted.)

If I go dig into the underlying Impacket, which drives crackmapexec, I find that there probably is an SMB issue:

    def smb2Close(self, connId, smbServer, recvPacket):
        connData = smbServer.getConnectionData(connId)
        # We're closing the connection trying to flush the client's
        # cache.
        if connData['MS15011']['StopConnection'] is True:
            return [smb2.SMB2Error()], None, STATUS_USER_SESSION_DELETED
        return self.origsmb2Close(connId, smbServer, recvPacket)

Fascinating stuff, but did it get the shares? No - apparently I need to give it credentials. That's interesting.  There aren't very many things you can do without creds, as it turns out, so you at least need to know one user's login.  That's kinda a bummer, but I guess it depends on what you are looking for. As it is, I would file this under post-exploitation rather than vulnerability-analysis, since it is for use after you are already in the network.  Maybe that's just me.
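If I remember the flags right, the credentialed version of the same scan looks something like this - the username and password are invented, obviously:

sudo python crackmapexec.py -t 10 -u someuser -p 'SomePassword1' 192.168.240.0/24 --shares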

And it really is just for Windows PCs.  It saw my three Windows 10 workstations, a client's Windows 7 box, and, strangely, my NAS, which is a Synology DS412+. I'm sure the SMB shares are what kicked that off, and the non-Windows underlying environment is what caused the user session failure.  Interestingly, crackmapexec didn't see my Windows 10 Raspberry Pi 2, or the Windows Phones that were on my network. Wonder why.

So, in the final analysis, crackmapexec looks like a really slick tool - it could almost be a management tool for Windows environments, given the ability to run a command across a number of machines. If the underlying code were a little more modular, some really great stuff could be done with it at the scripting level.  And it turned me on to Impacket, with which I was not familiar.

The Pen Tester's Framework: ftpmap

It's true, FTP isn't something you think of first when conducting an assessment. We look at web servers. FTP is something people used in the 90s, right?

Well, no. As it turns out a lot of organizations use FTP to move files from app to app, from partner to partner, from location to location. Legacy apps exist, and they are a large percentage of the vulnerable applications out there. FTP is a reality that everyone checking security needs to consider.

ftpmap "scans remote FTP servers to identify what software and what versions they are running" according to the man file.  It's certainly something that should be in your 'hey lets look at this server' test group.  So let's give it a try.

Interestingly, it wasn't installed! The configuration file was there, but there is no ftpmap directory in the vulnerability-analysis directory for the compiled code. So, well, I'll just do it manually!

sudo git clone https://github.com/Hypsurus/ftpmap.git
cd ftpmap
sudo ./configure
sudo make
sudo make install

Waaaiiit a minute. Error city. This shouldn't be happening! Guess that's why the PTF didn't put it on my system - something weird is missing:

CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /pentest/vulnerability-analysis/ftpmap/missing aclocal-1.15 
/pentest/vulnerability-analysis/ftpmap/missing: line 81: aclocal-1.15: command not found
WARNING: 'aclocal-1.15' is missing on your system.
         You should only need it if you modified 'acinclude.m4' or
         'configure.ac' or m4 files included by 'configure.ac'.
         The 'aclocal' program is part of the GNU Automake package:
         <http://www.gnu.org/software/automake>
         It also requires GNU Autoconf, GNU m4 and Perl in order to run:
         <http://www.gnu.org/software/autoconf>
         <http://www.gnu.org/software/m4/>
         <http://www.perl.org/>
make: *** [aclocal.m4] Error 127

Well, crap. A quick run of apt-get shows that I have the latest version of automake, so there must be something subtly wrong. Aah well, I don't see THAT many FTP sites out there ...
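For what it's worth, this smells like the usual automake version pin: the repo's build files were generated with automake 1.15, and my box has something else. If I ever need ftpmap badly, regenerating the build files should get past it - something like this, untested here:

sudo autoreconf -f -i
sudo make
sudo make install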


Wikistrat predictions for 2016


Some of you know that I am the curator of the Information Security desk at Wikistrat, a virtual strategy consulting company. We have fun over there, and a recent project was collating some predictions for 2016.

It's a little buzzword-rich, as these things are wont to be, but there is some cool stuff in there if you are into geopolitics. I also submitted my take on the direction of cybercrime and malware, with an assist from Brent Huston.

Take a read, and let me know what you think.


