We’re going to step away from web application security and identity access management for a little while and talk about pickles. I make my own pickles, and enough people have asked me about it that I thought I should write it down.
In short form, I use pickling cucumbers, press them in a pickle press for 36 hours, then can them in wide-mouth quart Ball jars with dill, salt, vinegar, garlic, and peppercorns.
The pickle press
The pickle press I use is this little one from Amazon. It only holds about 12 cucumbers at once.
I cut the pickles in half first to help them lose water.
Then put three tablespoons of salt on them and shake them around. Put the press on, then stir the cucumbers and tighten the press every 6 hours or so.
When they are done, take the top off.
Get your stuff ready. You’ll need 2 jars with lids, 8 garlic cloves, 3 tablespoons of pepper, 6 sprigs of dill, and 4 tablespoons of salt.
Get the liquid ready
Boil some water. I use a hot pot and boil 7 cups – that’s usually enough. Then arrange the dill, garlic and pepper in the jars. By arrange, I mean dump them in there. Then start stuffing the cukes from the press in the jar.
Once you have them all in, put the jars in the sink and fill them all the way to the top with the boiling water. Pour some on the lids too to sterilize them.
Once the water is in, pour half of it out. I know, I know. Add the salt, divided evenly between the jars. Replace the lost liquid with vinegar of your choice. I use basic white vinegar, but rice or apple cider vinegar works well too. Then put the lids on and shake.
Put them immediately in the fridge.
This is not a real canning procedure. They can’t go in the pantry. If you leave them out, they will probably spoil. Fridge them, and eat them. They are better fairly fresh. Give them maybe 5 days to soak up spices and whatnot. And then enjoy!!
I'm helping OWASP with the awesome but recently neglected .NET project. There is a lot of great .NET security stuff out there <cough> troyhunt </cough> and I am helping them organize and broaden it.
There is a roadmap started and I would like the community's feedback. There is a lot of work to do and we are going to need a lot of help doing it.
Feel free to email me, use the contact form, contact OWASP, sign up for the .NET Project email list, tweet me, or do whatever makes the most sense. We need your input.
As both a software architect and a vulnerability assessor, I am often asked why bother to test applications that are inside the firewall.
It's a pretty valid question, and one that I asked a lot when working in the enterprise space. To the casual observer, network access seems an insurmountable hurdle to reaching an application. For years, I argued against even using a login on internal sites, to improve usability. That perspective changed once I started learning about security in the 90s, but I still didn't give due rigor to applications that I knew would sit inside the firewall until I started testing around 2002.
This all comes down to the basic security concept of Security In Depth. Yes, I know it is a buzzword (buzzphrase?) but the concept is sound - layers of security will help cover you when a mistake is made. Fact is, there are a fair number of reasons to make sure internal apps meet the same rigor as external apps. I have listed a few below. If you can think of any more, list them in the comments below.
The network is not a barrier
Protecting the network is hard. Just like application vulnerabilities are hard to root out, network vulnerabilities are hard to keep up with. Unlike application vulnerabilities, though, handling network vulnerabilities is less about ticket management and more about vendor management.
A lot of attacks on companies are through the network. Aside from flaws in devices and software, we have social attacks too.
Fact is, the network layer isn't a guarantee against access. It is very good, but not perfect. If there is a breach, then the attackers will take advantage of whatever they find. Now think about that: once I have an IP address, I am going to look for a server to take over. Just like if I am on the Internet: finding a server to own is the goal. Once I am inside your network, the goal stays the same.
People who shouldn't have access often do
You probably heard about the Target breach. If not, read up. The whole thing was caused by a vendor with existing VPN access getting breached, and then that VPN access being used to own the Point Of Sale systems. Here's a question for you:
How did an HVAC vendor have access to the POS systems?
It's possible to give very specific access to users. It's just hard. Not technically hard, just demanding. Every time something in the network changes, you have to change the model. Because there are a limited number of hours in the day, we let things go. After we have let a certain number of things go, the authentication system becomes a little more like a free for all.
Most vendors have a simple authentication model - you are in or you are out. Once you have passed the requirements for being 'in' you have VPN access and you are inside the firewall. After that, if you want to see what your ex-girlfriend's boyfriend is up to, then it is up to you. The network isn't going to stop you.
You can't trust people inside the network
In the same vein, even employees can't totally be trusted. This gets into the social and psychological sides of this business where I have no business playing, but there is no question that the people that work for you have a vested interest in the data that is stored. Be it HR data or product information, there are a number of factors that could persuade your established users to develop, let us say, a 'gathering interest.' I know it is hard to hear - it is hard for me to write. Fact is, the people that work for you need to be treated with some caution. Not like the enemy, mind you, but certainly with reasonable caution.
Applications are often moved into the DMZ
From the developer's perspective, frankly this is the biggest issue. Applications, particularly web applications, are often exposed after time. A partner needs it, the customers need it, some vendor needs it, we have been bought, we bought someone, whatever. Setting up federated identity usually doesn't move at the speed of business, and middle managers will just say 'put it in the DMZ.'
This happens a LOT with web services. Around 2004 everyone rewrote their middle tier to be SOAP in order to handle the requests of the front end devs, who were trying to keep up with the times. By around 2011, when the services were old and worn and everyone was used to them servicing the web server under the covers, the iPhone had taken over.
Then you needed The App. You know that meeting: after the CIO had played with her niece's iPhone at Memorial Day, and prodded the CEO, and he decided The App must be done. But the logic for the app was in the services, and the CIO said 'that's why we made services! Just make them available to the app!'
But. Were they tested? Really? Same rigor as your public web? I bet not. Take a second look.
Just test everything
Moral of the story is: just test everything. Any application is a new attack surface, with risk associated. If you are a dev, or in QA, or certainly in security, just assume that every application needs to be tested. It's the best overall strategy.
Working primarily in ASP.NET for the last 13 years, I didn't think much about Cross Site Scripting. The AntiXSS tools in ASP.NET are best of breed. Even without any input encoding, it's really, really tough to get XSS vectors into an ASP.NET site - especially Web Forms.
ASP.NET isn't the only platform out there, as it turns out. In fact, so far as the open web goes, it isn't even close to the most popular. Ruby on Rails, PHP, and JSP still show up everywhere, and they are not, by default, protected from XSS. What's more, misconfigured ASP.NET sites are more, not less, common.
With the power of today's browsers, XSS is more of a threat than ever. It used to be virtual spray paint; a method for defacing a site. Now it can be used to steal credentials, alter the functionality of a site, or even take over parts of the client computer. It's a big deal.
You can make it all go away by simply encoding your inputs and outputs. There are some simple rules to help make this happen.
First, never put untrusted data in a script, inside an HTML comment, in an attribute name, in a tag name or in styles. There is no effective way to protect those parts of a page, so don't even start.
OK, now that we have that covered, sometimes you DO need to put untrusted data in an HTML document. If you are putting data into an HTML element, such as inside a div, use the HTML encoding that is built into your platform. In ASP.NET it is Server.HtmlEncode. Just do it. Build it into your web controls, whatever. Assume that the data coming in and going out is bad, and encode it.
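The rule above can be sketched in a few lines. This is a minimal Python illustration using the standard library's html.escape, which plays the same role here that Server.HtmlEncode plays in ASP.NET; the render_comment helper and its markup are made up for the example.

```python
import html

def render_comment(untrusted: str) -> str:
    # HTML-encode untrusted data before placing it inside element content,
    # so markup in the input is rendered as inert text.
    return '<div class="comment">' + html.escape(untrusted) + "</div>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
```

The angle brackets in the payload come out as entities, so the browser displays the attack string instead of executing it.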
That's for HTML content. How about the attributes? Width or color or whatnot? Attribute encoding. There is a good reference implementation in ESAPI.
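The same idea for attribute values can be sketched in Python rather than ESAPI's Java: escape with quote characters included, so a value cannot close the surrounding quoted attribute and inject new ones. The encode_attr helper is illustrative, not a library API, and this only protects attributes that are actually quoted in your markup.

```python
import html

def encode_attr(value: str) -> str:
    # quote=True also encodes " and ', so the value cannot break out of
    # a quoted attribute and add handlers like onmouseover.
    return html.escape(value, quote=True)

evil = '" onmouseover="alert(1)'
print(f'<div title="{encode_attr(evil)}">hover me</div>')
```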
In general, the idea is this: different parts of HTML pages require a different encoding style. The canonical reference for this is the OWASP XSS Prevention Cheat Sheet. When you are looking at your user control library for that next project, or a current one, or an old one, whatever: take a look. Does it encode inputs? Does it encode them correctly?
I had a number of requests for my slides at CodeMash for the Applied Application Security class.
Those slides are from the deck that I use to give my paid training, so I don't toss them around lightly. They have more detail in them than we had time to get into at the con.
Instead, I thought I would edit up the whitepaper that I provide, to ensure it covers the materials that we went over. You can find it attached to this post.
Codemash 2014 Applied Application Security Whitepaper.pdf (657.56 kb)
The exercises were all completed using the OWASP Web Goat on the SamuraiWTF virtual machine.
If you have any questions, feel free to contact me using the contact form above.
On a recent vulnerability assessment, I discovered an unsecured file upload and was challenged by the client.
"Was the file uploaded to the webroot, where it was executable?"
"Well, no," I said. "I don't know where it was uploaded to, but I can assure you the necessary protections weren't in place."
"What can happen if it isn't executable by a web user, then?"
Oh, let me count the ways.
What we are talking about
Allowing file upload from a web page is a common activity. Everything from web email clients to interoffice management systems allows for file upload. Profile pictures are uploaded from client machines. Data is uploaded to support websites. Anytime you click a Browse button and look at your local hard drive from a web browser, you are uploading a file.
Keep in mind, file upload by itself isn’t a security risk. The web developer is taking advantage of built-in browser functionality, and the site itself doesn’t have any insight into your PC’s internals. File upload is governed by RFC 1867 and has been around for a long time. The communication is one way.
The code for a file upload just has to invoke an enctype of multipart/form-data, and that gives us an input type of ‘file’ as seen in this example snippet:
<form action="/upload" method="post" enctype="multipart/form-data">
  Please select a file:<br>
  <input type="file" name="datafile" size="40">
  <input type="submit" value="Send">
</form>
If the upload isn’t intrinsically bad, then what is? Well, what you upload can be bad – aiming to exploit something later, when the file is accessed.
The classic case – upload to the web root
The case that my clients were interested in is the classic problem. Back in the day, we used to let users upload files into the web directory. So, for instance, you might be able to save an image directly to the /img/ directory of the web server.
How is that a problem? If I upload a backdoor shell, I can call it with http://yourserver/img/mywebshell.php and you are pwned. Anytime I can write and execute my own code on your server, it is a bad thing. I think we can all agree on that. As a community of people that didn’t want our servers hacked, we stopped allowing upload of files to the web root, and the problem (largely) went away.
But the problem of unrestricted file upload didn’t go away. There are still a number of things that pentesters and bad guys both can do to get your server under their control. Here we’ll look at a handful of them.
Uploading something evil for a user to find later
The first, most common, and probably most dangerous attack is uploading malware. Often, business and social sites alike allow users to upload files for other users’ use. If there is insufficient protection on the file upload, an attacker just uploads evilmalware.docx.exe and waits until the target opens the file. When the file is downloaded, the .exe extension isn’t visible to the user (if on Windows) and bang, we got them.
This attack isn’t much different from phishing by sending email messages. Since the site is trusted by the user, however, the malware executable might have a little more chance of getting executed.
Mitigation is fairly straightforward. First, in a Windows environment, check extensions. If you are expecting a docx file, rename the file with the extension. Whitelist the extensions you expect. Second, run a pattern analyzing virus scanner on your server. There are a number of products that are designed to be run on web servers for just this case.
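The whitelist-and-rename step can be sketched in a few lines of Python; the allowed extension set and helper name here are hypothetical, not from any particular framework.

```python
import os

ALLOWED_EXTENSIONS = {".docx", ".pdf", ".png"}  # hypothetical whitelist

def checked_filename(original: str) -> str:
    # splitext looks only at the final extension, so a double-extension
    # trick like evilmalware.docx.exe is judged by ".exe" and rejected.
    base, ext = os.path.splitext(original)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"extension {ext!r} is not on the whitelist")
    return base + ext.lower()

print(checked_filename("report.DOCX"))  # -> report.docx
```

Pair this with a server-side virus scan of the stored file; the extension check alone only stops the lazy cases.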
Taking advantage of parsers
A far more subtle attack is to directly affect the server by attacking file handlers on the host machine itself. Adobe PDF, Word and images are all common targets of these kinds of attacks.
What happens is that some flaw in the underlying system – say, a PDF parser that reads a PDF form and uses the information to correctly forward the document – is exploited by a malware author. Then documents can be created that exploit this flaw and uploaded somewhere the attacker knows they will be opened.
Let’s talk about just one of these, as an example. MS13-096 is a Microsoft security advisory that warns us that TIFF images could cause a buffer overflow in Windows. This overflow, then, could be used to execute arbitrary code in the user’s context. Remember what we said before: anytime I can write and execute my code on your machine, bad things will happen.
All of that said, doesn’t it take a lot of technical expertise to write an exploit for that? I mean, you would need to make an image that exploits the flaw, safely put it into a Word document, then write more code to be injected into the overflow, and then have something that takes advantage of whatever we ran.
Well, the answer to that is yes, it is hard to write. So enter Metasploit. This penetration testing tool has a framework for the enablement, encoding, delivery and combination of exploits. Specifically, this one vulnerability can be baked in with already existing tools to deliver, run and get a remote shell using this exploit.
In fact, one already has.
Stealing hashes with UNC pathed templates
Not everything that Metasploit does revolves around a flaw in a software system. Sometimes you can use Metasploit to take advantage of the way software is SUPPOSED to work. For example, did you know it is possible to build a Word document that uses a template that is on a UNC path? Neither did I!
Of course, in order to use that feature, Word will have to send its NTLM session hashes to the share. That’s only a problem if we have a listener collecting them on the other end of the wire. That’s exactly what Metasploit will do for us.
For this particular example, I use the word_unc_injector Metasploit module. This module allows me to create a Word document that calls to a template at an address of my choosing; I usually use an AWS instance for the target. This example is done in my local lab.
Once the document is made, I try and upload it into the site, as shown in Figure 1. Since it is expecting a Word document, and it got a Word document, we are in good shape.
Now we wait. When the user opens the file, Word will call out to the template as seen in Figure 2.
If it doesn’t find it, it just gives up after a while. Meanwhile, though, the NTLM session hashes are being stored by my ‘template’ server, like Figure 3.
So what do we do about this and other document parser problems? Honestly, there isn’t much you can do. General security principles are best here:
- Whitelist so you only are allowing the document types you want uploaded.
- Don’t process files on the server.
- Run virus scanners on all workstations, and the web server.
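On top of the extension whitelist, a cheap content check helps catch a renamed executable. This toy sketch compares leading "magic" bytes; real scanners do far more, and the two signatures shown are the only facts it relies on (.docx is a ZIP archive starting with "PK", Windows executables start with "MZ").

```python
# Known leading bytes ("magic numbers") for two formats. Checking content
# as well as extension catches a file whose name lies about its type.
MAGIC = {"docx": b"PK", "exe": b"MZ"}

def content_matches(expected: str, data: bytes) -> bool:
    return data.startswith(MAGIC[expected])

print(content_matches("docx", b"PK\x03\x04rest-of-zip"))   # True
print(content_matches("docx", b"MZ\x90\x00rest-of-exe"))   # False
```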
DoS with massive files
Sometimes installing or stealing things isn’t the best path at all. Sometimes, I just want to make your server freak out and crash, so I can see the error messages. I keep a Digital Download of Disney’s Brave on my testing box just for that purpose – 3.4 gigabyte file to upload a few times, just to see how your box handles it. Usually it isn’t good.
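The defense here is boring: cap the upload size before reading the body. A sketch with a hypothetical limit follows; note the declared Content-Length can lie, so a real server must also cap the bytes it actually reads from the stream.

```python
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # hypothetical 10 MB cap

def reject_oversized(headers: dict) -> None:
    # Refuse the request up front based on the declared Content-Length,
    # before any of the body is buffered to disk or memory.
    declared = int(headers.get("Content-Length", "0"))
    if declared > MAX_UPLOAD_BYTES:
        raise ValueError(f"declared size {declared} exceeds the upload cap")

reject_oversized({"Content-Length": "524288"})          # a normal upload passes
try:
    reject_oversized({"Content-Length": "3400000000"})  # a movie-sized upload
except ValueError as e:
    print("rejected:", e)
```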
Only the beginning
These core categories of attacks are the most common, but they are 1) not everything and 2) much broader than presented here. There are several dozen parser exploits in Metasploit, and those are just the ones that are deemed worth the effort. It is much safer to follow a slightly stricter-than-usual security policy when it comes to file upload and save yourself the trouble later.
In January 2014, at CodeMash, I'll be presenting my new four-hour Applied Application Security seminar as a precompiler. It is Tuesday afternoon, on the first day of the conference. A Tuesday precompiler ticket is required for attendance, but there is no additional charge.
We will be covering both testing for the vulnerabilities that I feel developers need to know the most about, and defensive methods that work in today's market. It is a language neutral class - samples will be in Java, C#, PHP, Ruby and occasionally Perl. The topic breakdown is:
- Information disclosure (spilling to Google, exception management, server ops)
- Injection (SQL, OS, Browser, LDAP, AD)
- Authentication and session management
- Data protection
This is a participatory session. To be prepared, please have a virtual machine manager loaded with Samurai WTF. This is a Linux training VM that has both the training sites and the tools for testing preinstalled.
If you are planning on attending and have any questions, please don't hesitate to email me at firstname.lastname@example.org or call me at 614-402-7207. I'll be glad to fill you in.
Hope to see you there.
I had to count SLOC and files for a POC of a static analysis tool.
The SLOC were counted using:
ls * -recurse -include *.aspx, *.ascx, *.cs, *.ps1 | Get-Content | Measure-Object -Line
The files were counted using:
(Get-ChildItem -force -recurse).Count
Goddam PowerShell. Rocks. So. Hard.
I am probably going to pass 30,000 tweets with the message that this post will trigger.
When I started using the Internet, NetNews was the way people publicly communicated. Used to be UUCP, thankfully replaced by NNTP, the newsgroups are almost dead now thanks to the proliferation of the web. You used to be able to read my posts from '92 on Google, harking back to when I was a Clarinet major (and Unix sysadmin) at OSU, but I was posting even before then, back in '89. Probably those are around somewhere.
In the mid nineties, I made use of IRC a lot for public conversation. Thankfully, there are no (public) logs of these conversations.
When TB-L first envisioned HTTP, he saw it as a collaborative medium at the outset, but it took nearly twenty years before we truly achieved collaborative web. Twitter is a public conversation, a collaborative web, and I enjoy being part of it. Are there problems? Sure. Does it sometimes fall short of human decency? You bet. Mostly, however, it is a nice chat amongst friends, with the occasional injection of local, national, or international tidbits of interest, and I can't complain about that at all.
I am working on an app for Facebook right now, and I came across this gem:
Note that because this request uses your app secret, it must never be made in client-side code or in an app binary that could be decompiled. It is important that your app secret is never shared with anyone. Therefore, this API call should only be made using server-side code.
There is another method to make calls to the Graph API that doesn't require using a generated app token. You can just pass your app id and app secret as the access_token parameter when you make a call:
That advice is less good. Never, ever put a password, multi-use token, or secret of any kind in the URL. Even under SSL, the full URL ends up in the HTTP server's access logs, in proxy logs and caches, and in browser history and Referer headers. Just don't do it.
What do you do instead? Put your secret in the POST data. Don't use, or allow, GET. POST bodies are encrypted under SSL and are not written to standard access logs.
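The difference can be sketched with Python's standard library. The host, app id, and secret below are placeholders for illustration, not real Graph API values.

```python
from urllib.parse import urlencode
from urllib.request import Request

APP_ID = "1234567890"            # hypothetical credentials
APP_SECRET = "not-a-real-secret"

# Bad: the secret rides in the URL, where logs and history will keep it.
leaky = f"https://graph.example.com/me?access_token={APP_ID}|{APP_SECRET}"

# Better: send the token in the POST body; under SSL the body is
# encrypted and is not written to standard access logs.
body = urlencode({"access_token": f"{APP_ID}|{APP_SECRET}"}).encode()
req = Request("https://graph.example.com/me", data=body, method="POST")

print(req.get_method())            # POST
print(APP_SECRET in req.full_url)  # False: the URL carries no secret
```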