Why Geography Matters When Using Amazon Web Services

by Bill Sempf 29. September 2014 14:47

When setting up an EC2 instance or configuring a profile, you have the choice to set the Region and Availability Zone. If you were wondering how that mattered, you aren’t alone.

What’s the difference between Region and Availability Zone?

Regions are actual physical locations of Amazon computers. While they would like us to think of the cloud as some magical server in the sky, in reality there are big buildings all over the world full of servers. The Regions, shown in Table 1, are the actual physical locations of these servers.

Table 1: Regions and Availability Zones

Code            Location             Region          Availability Zones
us-west-1       Northern California  US West         6 zones shared with us-west-2
us-west-2       Oregon               US West         6 zones shared with us-west-1
us-east-1       Virginia             US East         5 zones
ap-northeast-1  Tokyo                Asia Pacific    2 zones
ap-southeast-1  Singapore            Asia Pacific    2 zones
ap-southeast-2  Sydney               Australia       2 zones
eu-west-1       Ireland              EU              3 zones
sa-east-1       Sao Paulo            South America   2 zones

Availability zones are isolated areas inside regions that are designed to protect against failures in other availability zones. They have separate power, cooling and physical space.
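The naming convention makes the relationship visible: an Availability Zone name is just its region code plus a letter suffix. A tiny sketch (the helper name is mine, not an AWS API):

```python
# An Availability Zone name is its region code plus a letter suffix,
# e.g. "us-east-1a" is one zone inside the us-east-1 region.
def region_of(zone: str) -> str:
    """Strip the trailing zone letter to recover the region code."""
    return zone.rstrip("abcdefghijklmnopqrstuvwxyz")

print(region_of("us-east-1a"))       # us-east-1
print(region_of("ap-southeast-2b"))  # ap-southeast-2
```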

Why should I pay attention?

Amazon is designed for global access. Their web site is global, and their servers are global. If you are using AWS you have the option to support a truly global architecture as well.

There are requirements that may cause you to carefully consider the location of your servers. These requirements are why you have the ability to choose your region.

Legal considerations

There are a number of privacy laws on the books – especially in Europe and the U.S. – that restrict the passing of government data outside the bounds of a region. AWS supports this, implicitly in the regional settings, and explicitly with GovCloud.

GovCloud is a physically restricted cloud service that is designed to explicitly prevent data from leaving the borders of the US. When building a governmental web application in the U.S. that’s probably your best path.

Regions implicitly segregate data, too. While regions are connected via the open internet, if you select the EU region for your S3 bucket, that's where your data will be stored.

Guarding against failure

AWS does go down. It isn't very common, but it happens. Regions and Availability Zones were created to protect against exactly that, but you must architect your application to take advantage of them.

Data is not replicated between regions by default. To create a truly fault-tolerant application, you must set up something like Cross Region Read Replicas in Amazon RDS.
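A cross-region read replica references its source database by full ARN. A sketch of assembling that ARN (the format is as I understand it at the time of writing; the account ID and database names are made up):

```python
# Build the ARN a cross-region read replica uses to name its source DB.
def rds_arn(region: str, account: str, db_id: str) -> str:
    return f"arn:aws:rds:{region}:{account}:db:{db_id}"

source = rds_arn("us-east-1", "123456789012", "orders-primary")
print(source)  # arn:aws:rds:us-east-1:123456789012:db:orders-primary

# Where it would be used (requires boto3 and AWS credentials):
# import boto3
# boto3.client("rds", region_name="eu-west-1").create_db_instance_read_replica(
#     DBInstanceIdentifier="orders-replica-eu",
#     SourceDBInstanceIdentifier=source)
```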

Plain old bits on the wire

Of course there is a much more straightforward reason for the correct management of regions: the actual physical distance between your application and your customers. Those bits still have to travel the wire, so make sure your application is close to the folks that use it.
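A back-of-the-envelope way to reason about this is great-circle distance. A minimal sketch, using my own rough city coordinates for a few regions from Table 1 (not an AWS-published list):

```python
from math import radians, sin, cos, asin, sqrt

# Approximate (lat, lon) for a few regions -- illustrative numbers only.
REGIONS = {
    "us-east-1": (38.9, -77.0),        # Virginia
    "eu-west-1": (53.3, -6.3),         # Ireland
    "ap-southeast-2": (-33.9, 151.2),  # Sydney
}

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_region(user):
    """Pick the region closest to a user's coordinates."""
    return min(REGIONS, key=lambda r: distance_km(user, REGIONS[r]))

print(nearest_region((48.9, 2.4)))  # a user in Paris -> eu-west-1
```

Distance is only a proxy for latency, of course, but it is a sane first cut when you have users clustered on one continent.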

Geographic restrictions

Under some circumstances, your content might be restricted to users only living in one geographic area. For instance, some content can’t – by law – be exported outside of the European Union.

I for one was surprised that the Region can't be used for this restriction. CloudFront offers geographic restriction, but it works on a country-by-country list, and you set it up separately from the region.
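The restriction is a per-distribution whitelist or blacklist of ISO country codes. A sketch of that request fragment (the shape matches the CloudFront GeoRestriction API; the country choices are illustrative):

```python
# Build the CloudFront geographic-restriction fragment.
def geo_restriction(kind: str, countries: list) -> dict:
    """kind is 'whitelist' or 'blacklist'; countries are ISO 3166 codes."""
    return {"GeoRestriction": {
        "RestrictionType": kind,
        "Quantity": len(countries),
        "Items": countries,
    }}

# Limit an EU-only distribution to a few member states:
print(geo_restriction("whitelist", ["DE", "FR", "IE"]))
```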

How to make a plan

Long before you set up that EC2 instance, take careful stock of your situation. Consider your users' locations, your needs for fault tolerance, and the legal landscape of your application. Then you can map out your regions to best make use of the AWS servers.


On Testing

by Bill Sempf 24. September 2014 14:53
Yesterday, I refurbished an old joke for a problem I had just faced. This is 'edge case' testing: posting values to a system that really don't belong there. It came to mind because of a problem I had encountered in a system I was working on earlier. There is a stored procedure that accepts a FirstName and a LastName and generates an Active Directory UserName from them. The original code read:
 
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(50),
@LastName as  VARCHAR(50),
@output VARCHAR(8) OUTPUT
AS
BEGIN
 
And the new code read:
 
CREATE PROCEDURE [dbo].[GetNextADUserName]
-- Add the parameters for the stored procedure here
@FirstName as VARCHAR(7),
@LastName as  VARCHAR(7),
@output VARCHAR(8) OUTPUT
AS
BEGIN
 
The engineer who altered the code was just trying to make it better. It was written very loosely for a stored procedure, and he simply tightened it up. That's not a problem in itself, but the front-end designers didn't know, and, most importantly, all of our test names were under seven characters. We would never have found this in testing.
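To see how a boundary test catches this, here is a toy Python version of the username rule (first initial plus last name, capped at eight characters — my guess at the rule, the post doesn't spell it out) with the new seven-character parameter limit baked in:

```python
# Toy model of GetNextADUserName after the VARCHAR(7) change.
def username(first: str, last: str) -> str:
    if len(first) > 7 or len(last) > 7:
        raise ValueError("name too long for GetNextADUserName")
    return (first[0] + last)[:8].lower()

assert username("Bill", "Sempf") == "bsempf"

# Boundary cases: right at and just past the seven-character limit.
username("Roberta", "Vasquez")   # exactly 7 and 7: fine
try:
    username("Bartholomew", "Featherstonehaugh")
except ValueError as e:
    print("caught:", e)
```

Test names like "Bill Sempf" sail straight through; only names pushed past the edge expose the truncated parameters.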
 
As it turns out, there are a lot of people who are all about this. The replies to my tweet over the last 24 hours covered a lot of ground, but by far the most common were from those who wanted to push edge testing to the max, and I love it.
It was surprising how many people like a little violence with their testing. Some QA engineers piped in with thoughts on structured testing, and since a lot of security folks follow me, plenty of application pentesting replies showed up as well. Some of the business people had not-so-nice things to say about the QA process, most of which I agree with. But what really happens in production? That's the crux of it. Making your testing as real-world as possible is an important part of QA. Don't let anyone tell you otherwise. Be it unit testing, integration, QA, or pentesting, assuring that all tests push the edges of what happens in the real world will make your software better. And your Twitter timeline!


How the AWS CloudHSM Eases the Pain of Security Audits

by Bill Sempf 15. September 2014 19:02

Amazon offers a large selection of security products that help with compliance, privacy and data protection. IAM, intra-VM encryption and a swath of other products help make your users and your auditors breathe easier. There is still the problem of key storage. CloudHSM brings a reliable solution to that problem.

What exactly is CloudHSM?

CloudHSM is a dedicated hardware security appliance in the Amazon cloud that provides secure key storage and cryptographic operations to a single customer.

A hardware appliance

Most of Amazon Web Services is based on virtualization. Virtualization allows for a software-only instance of something – like a server, router, or switch – to be created within a larger computing infrastructure. Cloud HSM is not virtualized – it is a standalone piece of hardware that only you have access to.

Specifically, CloudHSM is a Luna SA HSM appliance from SafeNet. The Luna SA is compliant with Federal Information Processing Standard (FIPS) 140-2 and Common Criteria EAL4+.

Storage in the Amazon cloud for your encryption keys

CloudHSM provides a cryptographic partition for the storage of keys related to your AWS infrastructure. For instance, if a particular application requires a key to access a database stored in S3, it can retrieve that key from the hardware appliance.

How does it help with compliance?

Various regulatory agencies have very strict requirements when it comes to encryption.

Separation of concerns

With most AWS systems, Amazon has credentials to the underlying server that could allow an administrator access to the data. Not so with the CloudHSM. Amazon has administrative credentials that would allow them to repurpose the device, but those credentials cannot be used to retrieve the keys on the device. That privilege is only for the client user.
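That trust boundary is the whole point of an HSM: the appliance will perform operations for the partition owner, but nobody, not even an administrator, can read the keys back out. A toy model of the idea (illustration only — a mock class of my own, with HMAC standing in for real HSM cryptography):

```python
import hmac, hashlib, os

class ToyHSM:
    """Mock of the HSM trust boundary: keys never leave the device."""
    def __init__(self, partition_password: str):
        self._partition_password = partition_password
        self._key = os.urandom(32)  # lives only inside the appliance

    def sign(self, password: str, message: bytes) -> bytes:
        """Use the key on behalf of the partition owner."""
        if password != self._partition_password:
            raise PermissionError("not the partition owner")
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def admin_reset(self):
        """An admin can repurpose the device, but never export the key."""
        self._key = os.urandom(32)

hsm = ToyHSM("s3cret")
tag = hsm.sign("s3cret", b"hello")  # works for the partition owner
```

Note there is deliberately no method that returns `_key`: the owner gets operations, the admin gets a reset button, and nobody gets the key material.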

PCI

Simply put, PCI has remarkably strict key management standards. CloudHSM is on the list of AWS services validated in the 2013 PCI DSS compliance package. Specifically, just using CloudHSM in your key storage program will meet the requirements of PCI DSS sections 3.5 and 3.6.

HIPAA

In order to meet the HIPAA requirements for storage of personal medical data, data at rest must be encrypted. This previously required a local storage component for personally identifiable information, significantly slowing any cloud initiative. Adding CloudHSM to the mix allows for data at rest within the Amazon cloud to be safely encrypted and still meet the key storage requirements of HIPAA.

What do I need to know?

There are always a few caveats to any new technology and CloudHSM is no different.

You need to have a VPC

CloudHSM doesn’t work on the open cloud. You’ll need to be using a Virtual Private Cloud to make it all come together. Fortunately, a VPC is very easy to set up, and you might already be using one. It is part of the package for a number of AWS suite systems.

Can I use CloudHSM with my custom applications?

You bet! Many of the AWS applications have capabilities to use keys from CloudHSM. EBS volume encryption and S3 object encryption are two that have the most obvious benefit for custom applications.

CloudHSM helps with security compliance

A reliable hardware appliance, well implemented, will help with your security compliance. Getting CloudHSM configured and integrated requires some effort, but the end result is as secure as your own data center.


Husband. Father. Pentester. Secure software composer. Brewer. Lockpicker. Ninja. Insurrectionist. Lumberjack. All words that have been used to describe me recently. I help people write more secure software.
