Air gaps and secure networks

I am a person with many interests. In one conversation, I’ll introduce myself as a new filmmaker. In another, I’m a seasoned theatre actor. Sometimes I give talks on Microsoft’s data platform products, SQL Server and Azure SQL Database. There’s another strong field of interest I have, which I don’t speak much about, and that’s information security (often shortened to infosec).

By no means am I a security expert. I use 1Password to manage my passwords and to share secure data with members of my company, and I use Cloak VPN when I connect to public Wi-Fi networks. My MacBook Pro’s hard drive is encrypted using FileVault. We have a guest Wi-Fi network at home to prevent non-residents from gaining access to our smart lights and my NAS. I have passwords on both of my computers’ screensavers.

I know I don’t do enough to keep everything as secure as possible, but I try.

Recently I’ve been watching the Pluralsight course Ethical Hacking (CEH Prep) (login required).

The very first thing in the course is setting up a secure lab environment, so that any tools used in the course are contained in that secure environment. This is the right way to do it, and I am using Hyper-V on my venerable Asus laptop to host these socially unacceptable virtual machines, away from my home network and away from the Internet.

This is called an “air gap”. Theoretically speaking, the virtual machines (guests) have no network access, so there is literally air between them and any machine that is online. Practically speaking, that isn’t quite true, because my laptop, which hosts the guests, is online; Hyper-V simply segments the guests onto their own private network.
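
If you want to script that part of the setup, Hyper-V’s PowerShell module can create a private virtual switch and connect a guest’s network adapter to it. The switch and virtual machine names below are placeholders, not anything from the course:

PS C:\> New-VMSwitch -Name "AirGapLab" -SwitchType Private
PS C:\> Connect-VMNetworkAdapter -VMName "LabGuest01" -SwitchName "AirGapLab"

A private switch allows guest-to-guest traffic only; an internal switch would also expose the host, which is exactly what we want to avoid here.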

So this raises a question: How do I install applications on these guests if I’m not giving them access to any network? I don’t want to connect the guests to the Internet to download anything, firstly because of automatic updates (the lab environment must be predictable), and secondly because the guests may be compromised already, and it would be improper for me to expose compromised machines to the Internet (and my home network).

The answer, as with any air gap, is to do it the way we did twenty years ago: burn files to a CD-ROM or copy them to a USB drive, and then access that device from the guest.

Using Hyper-V (and other modern virtualization technology), it is trivial to connect a CD-ROM, USB drive, or disk image to a virtual machine. My challenge, in this “clean-room” laboratory that I’ve set up, is that I have no software on my host operating system. All I want to do is download the files to the host and then make them available to the guest virtual machines.

In the Pluralsight course, the way the presenter did this was to make a network share available from the host to the virtual machines. I decided against this, because it does not keep the lab environment completely separate.

The only other machine I can use is my MacBook Pro, which means having to copy files over the network to my host, which I want to avoid.

I decided to create a virtual hard drive (VHD), which is natively supported on all modern versions of Windows, as well as Hyper-V.

On the host, I created a VHD using the DISKPART command, which is built into Windows.

From the command line, type diskpart to open it. At the DISKPART> prompt, type:

DISKPART> create vdisk file="D:\Temp\airgap.vhd" maximum=1024

This creates a 1GB virtual hard drive on my host, which I will use to store files that I want to install on the guest. However, before I can use this container file, I have to partition and format it.

Attach the new virtual disk we have just created:

DISKPART> attach vdisk

Create a primary partition on the virtual disk:

DISKPART> create partition primary

Select the partition:

DISKPART> select partition 1

Assign it a drive letter (I used Z: but you can use any available drive letter):

DISKPART> assign letter=z

At this point, Windows will pop up a dialogue box informing us that an unformatted drive has been detected. We can use this box to format the drive, using the default values.
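
If you prefer not to rely on the dialogue box, the partition can also be formatted from within DISKPART itself; the volume label below is just an example:

DISKPART> format fs=ntfs label="AIRGAP" quick

If you go this route, simply dismiss the dialogue box when it appears.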

Once the drive is formatted, we can transfer downloaded files to this new VHD.

The next step is using the VHD on the guest. To do that, once the files have been copied, we can detach the disk from the host using DISKPART again.

DISKPART> detach vdisk

This closes all open handles to the VHD, so that we can access it elsewhere.
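
As an aside, if the Hyper-V and Storage PowerShell modules are installed on the host, the whole cycle of creating, partitioning, formatting and detaching the VHD can be scripted instead of using DISKPART. This is only a sketch of that alternative, reusing the same example path:

PS C:\> New-VHD -Path "D:\Temp\airgap.vhd" -SizeBytes 1GB -Fixed
PS C:\> Mount-VHD -Path "D:\Temp\airgap.vhd" -Passthru | Initialize-Disk -PartitionStyle MBR -Passthru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "AIRGAP"
PS C:\> Dismount-VHD -Path "D:\Temp\airgap.vhd"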

In Hyper-V, or whichever virtualization host you’re using, the first thing to note is that VHDs cannot be added to virtual machines while they are running, so I have to make sure the guest is shut down.

We then add the VHD as a second hard drive to the guest’s configuration and start up the virtual machine. The VHD is already formatted, and it will receive a drive letter automatically when the operating system has started, so it will be accessible immediately. We can just run the applications or installation from that new drive as if we had downloaded the files directly to the guest.
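
If you would rather script this than click through Hyper-V Manager, attaching the VHD to a guest can also be done with PowerShell; the virtual machine name below is a placeholder:

PS C:\> Add-VMHardDiskDrive -VMName "LabGuest01" -Path "D:\Temp\airgap.vhd"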

To make changes to the VHD, we have to shut down the guest and then use DISKPART on the host to attach the VHD again. It is good practice to take all necessary precautions when attaching the VHD to your host, because although the guest is turned off, there’s no guarantee that the VHD hasn’t been infected with something.
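
Re-attaching an existing VHD in DISKPART means selecting it by file name first:

DISKPART> select vdisk file="D:\Temp\airgap.vhd"
DISKPART> attach vdisk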

If you need to add new files, it would be better to create a new VHD instead. Treat the VHD with the same level of trust as a USB drive you find in a parking lot.

If you’d like to read additional technical posts, check out my blog on Born SQL.

On the Internet of Things

I hate the term “Internet of Things”. I say “Internet of Things” and “IoT” because everyone else does, but I don’t like it.

Not everything needs to be online. There’s a Twitter account called @InternetOfShit which posts regularly on this topic.

The first and most important reason why this is a bad idea is security. The Internet is planet-wide, where everyone with a connection is a potential threat. You really don’t want personal and private devices viewable, and possibly controlled, by just anybody.

The second problem is ownership, which also touches on privacy to an extent. Buying devices that rely on third-party providers (including the “trustworthy” ones like Google) means your private stuff is viewable by employees of those companies.

It also means that you could end up with useless equipment that cost a lot of money if the company shuts down or gets sold, and now a marketing firm who buys the assets has access to pictures of the inside of your house, which they will sell indiscriminately to defray expenses. If that doesn’t terrify you, please stop reading, turn off your computer or mobile device, and never visit my site again.

I won’t even get into how white-label Internet-connected cameras (manufactured inter alia by Sony, famous for being hacked many times) need to be rounded up and set alight, because their passwords are 888888 and cannot be changed, and the devices can easily be accessed remotely. The prevailing advice from security experts, in fact, is to toss these devices out.

A lot of crime is opportunistic. A database containing credit card numbers, personal information, nude photos, gamer tags, digital currency, whatever it may be, is attractive to certain people who have the technical skills to exfiltrate that information and sell it on to someone else. Rarely will you find that the people stealing this data are the ones using it. Firstly, that’s liable to get you caught, and secondly, the exfiltrator (or “hacker”, an unfortunate use of a perfectly cromulent word) just needs the money, or to prove his or her skills. The big paydays come from large data dumps. Sony, LinkedIn, Tumblr, Adobe, Yahoo, Yahoo (yes, twice), the list is endless.

So what’s the solution? With the number of devices growing exponentially every year, there’s no getting away from the convenience of having lights that turn on when you get home or a video camera to see if anyone is stealing your FedEx packages.

Ironically, the perception of extra safety and security provided by the devices is thwarted by their inherent lack of security. Manufacturers are racing to get into the market, forgoing testing. “Let the customer be the beta tester” is Google and Facebook’s mantra. Many companies are to blame; I’m not just singling these two out. Legal fees are clearly cheaper than research and development.

Which brings me to the point of this piece: building your own “Thing”. Do you want a camera watching your home while you’re away? Build one. Companies like Raspberry Pi, Adafruit, and Arduino (to name just a few) have brought extremely cost-effective computing to the mass market, along with relatively easy-to-follow programming guides, even if you’ve never written a line of code in your life. Some of these boards even run Microsoft Windows!

Of course, building your own camera means that you can’t rely on Google or Sony to record and store your video, so you need what we call a DVR, or Digital Video Recorder. It didn’t help that cable and Internet-based television companies produced devices they also called DVRs, so there is some confusion when you search online. Fortunately the television kind is now called a PVR.

A DVR in this case is a hardware device with a number of connectors for security cameras to plug into. In the last ten years, this industry has seen impressive advances, where cameras are now wireless, connecting via the network, and DVRs can just be software.

In other words, you can buy a relatively inexpensive network-attached computer for your home (traditionally called a NAS, or network attached storage), run DVR software on it, and connect your Raspberry Pi Zero to that. Then, using a (free!) VPN (virtual private network) connection from your phone over the Internet to the NAS, you can view the images or video from the camera, and it’s all sitting in your home, away from prying eyes.

What you’ve done, then, is made your information harder to find. Security by obscurity. The chance of being hacked is still there, but the payday is much lower for a hacker if you use a good VPN server and follow best practices when it comes to securing your network.

One example is the principle of least privilege. What that means is, your camera that you’ve built will have its own security settings, away from, say, your desktop computer. If someone did happen to break into your VPN, and then break into your camera, they wouldn’t be able to hop onto yet another device on the network without doing a bit more work, because the security settings won’t work on any other device on the network (think of it like having a separate password for each device). Breaking this security is time-consuming, and working on stealing billions of passwords from Yahoo is much more lucrative.

Another example is layering. You don’t want to keep your Things on the same network as your computers. Even if they’re physically connected to the same Ethernet network, or everything is on Wi-Fi, there are inexpensive ways to partition your devices onto separate wireless networks, using the same router (which of course you’ve purchased separately, and sits between your cable company’s router and your network).

Yes, it takes a bit of effort to set up and secure your network. Yes, it costs more money. Yes, it’s inconvenient to trade ease of use for security, but the benefits are numerous, and I would suggest that it’s the minimum price of being part of the modern connected world. The manufacturers are responsible for creating ways to secure their devices safely, but we need to make sure we’re using those tools to keep ourselves safe from 7 billion (and counting) potential hackers.

It’s why in South Africa, to avoid being burgled, we just had to make sure we made our wall higher than the neighbours’ walls. If you’re going to be the family with the pretty little fence, welcome mat, and unlocked front door, don’t be surprised when you find a stranger going through your sex toys.

Defensive SSL security in Windows and IIS

In my previous post, I wrote about how SSLMate has made my life easier.

I also mentioned how SSL-based attacks like POODLE and Heartbleed have forced us into using TLS.

Which is all very well, except that backward compatibility is a core premise of Microsoft’s entire product line.

This means that a lot of older security protocols are enabled by default in Internet Information Services (IIS), even on Windows Server 2012 R2. As the recent vulnerabilities in the SSL protocol demonstrate, this is not a good thing.

The recommended solution is to manually disable each of the older protocols using the registry editor.
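
To give you an idea of what that involves, here is roughly what disabling just SSL 3.0 on the server side looks like, whether you do it in regedit or script it with reg add (a restart is required for the change to take effect). Multiply this by every protocol, cipher and hash you want to turn off:

PS C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
PS C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v DisabledByDefault /t REG_DWORD /d 1 /f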

IIS Crypto

Instead of this risky method, I discovered a free tool called IIS Crypto, by Nartac Software.

And so too, apparently, did @SwiftOnSecurity.

IIS Crypto is a free tool that allows configuring TLS protocols, ciphers, hashes and key exchange algos on WinServer https://www.nartac.com/Products/IISCrypto

This is how it looks:

(Screenshot: the IIS Crypto main window.)

My recommended settings

I installed the .NET 4.0 GUI version. You can install the command-line version instead, but given that you’ll only run this application once or twice in the lifetime of the server, and you need to deselect some items, the GUI is easier to navigate.

Once you’ve installed IIS Crypto on your web server, run it and choose the Best Practices option (located under the Templates section).

You will then need to uncheck the Diffie-Hellman Key Exchange, on the top right, like so:

(Screenshot: the Diffie-Hellman Key Exchange checkbox unchecked in IIS Crypto.)

Now you can click the Apply button, which will prompt you to restart your server.

In my own experimentation, I just issued an iisreset command to restart IIS, but it’s probably a good idea to restart the server anyway, as this tool makes changes to the Windows Registry.

Warning

According to the Qualys SSL Labs Test (which you can launch from IIS Crypto using the URL field at the bottom of the screen), the best score you will get with these settings is an A-minus.

To achieve an A or higher, follow the instructions from the test result.

Incidentally, because my company serves more than one website from the same IP address (common with virtual hosts), I achieved an A score by enabling SNI (Server Name Indication) on my website’s SSL bindings.

Keep in mind that this breaks compatibility with older browsers that do not support SNI; they will be served a default SSL/TLS certificate instead.
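
For reference, on IIS 8 and later an SNI-enabled HTTPS binding can be added with the WebAdministration PowerShell module as well as through IIS Manager; the site and host names below are placeholders:

PS C:\> Import-Module WebAdministration
PS C:\> New-WebBinding -Name "MyWebsite" -Protocol https -Port 443 -HostHeader "www.example.com" -SslFlags 1

You still need to associate a certificate with the binding afterwards, either in IIS Manager or with netsh.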

Summary

I hope that this tool will make your life easier by keeping only the most secure protocols and ciphers active on IIS.

This is just one aspect of security in depth. You should also look at the rest of the top 10 vulnerabilities, as collated by OWASP, to see how else you can protect your web applications.

SSLMate and IIS – a love story

I am a part-owner in a company based in South Africa. Our headline act, if you will, is a website that customers log into to manage certain aspects of their business.

This website needs to be secure, for obvious reasons. The most basic requirement for a secure website is an SSL (Secure Sockets Layer) certificate, or more accurately, a TLS (Transport Layer Security) certificate. This is the padlock in the address bar of your browser, next to the https: the s stands for secure.

If you feel like exploding your brain, check the Wikipedia article about TLS and SSL.

For a number of reasons, which Troy Hunt is vastly more qualified to explain to you, we have to ensure that only the most recent browsers are supported by our website and its SSL/TLS certificate.

Older software was not designed with security in mind. The early Internet was about sharing information as easily as possible. Only with Microsoft’s security drive in the early 2000s did we start to see software becoming secure by default. Most recently, news about POODLE and Heartbleed means that even SSL isn’t secure anymore. That is why we have to focus on TLS instead.

It is therefore imperative that we at my company inconvenience users of older software in the best interest of keeping our website as secure as we can. Our SLA (Service Level Agreement) states a minimum version for operating system and web browser.

To this end, I will talk about my new favourite SSL/TLS certificate provider, SSLMate. They allow you to order and renew SSL/TLS certificates from the command line. Even better, unlike most other providers, they tell you when an SSL/TLS certificate is about to expire and renew it for you. I cannot even begin to tell you how convenient this is.

Last year I was travelling out of the country when one of my websites’ certificates expired. The issuer did not warn me (their position is that it’s not their responsibility, and that I have to take the blame). But, as evidenced by Apple, and Microsoft, and Google, we ALL make this mistake.

SSLMate takes the hassle out of remembering. I have of course created a new workflow to remind me a month before each of my certificates expires, but now that they are all managed by SSLMate, I know they have my back as well.

This all sounds great. I open up a command line prompt and type:

computer~$ sslmate buy example.com

That’s it. After an email to the appropriate approved address and a click on the confirmation link, I can download four files:

  • example.com.chained.crt — Domain and Intermediate Certificate
  • example.com.chain.crt — Intermediate Certificate
  • example.com.crt — Domain Certificate
  • example.com.key — Private Key

Now comes the tricky part. Internet Information Services, or IIS, needs to import a PFX file. PFX stands for Personal Information Exchange Format, also known as PKCS #12.

None of these files from SSLMate is in the right format. In fact, if you try importing one of the *.crt files on its own, it will vanish from inside IIS, because the certificate needs to be bundled with its Private Key.

Confused yet?

On my Mac (or on Windows), I need to use OpenSSL to combine the certificate and the private key into a PFX file that I can import into IIS.

computer~$ openssl pkcs12 -export -out iis_cert.pfx -inkey example.com.key -in example.com.crt -certfile example.com.chain.crt

The output will be iis_cert.pfx, which I can then import into IIS and bind to the website I want to secure. There are two certificate files in the input because SSLMate includes the intermediate certificates in the chain.
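
Once the PFX file is on the web server, it can be imported through IIS Manager or, on Windows Server 2012 and later, with PowerShell. This is just a sketch; prompting for the password is one of several ways to supply it:

PS C:\> $password = Read-Host -AsSecureString -Prompt "PFX password"
PS C:\> Import-PfxCertificate -FilePath .\iis_cert.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $password

The certificate then shows up under Server Certificates in IIS Manager, ready to be bound to the website.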

Next time, I will tell you about an easy way to make sure IIS is the most secure it can be.

Two-Factor Authentication for WordPress

There’s been a major attack on all WordPress (and Joomla) websites worldwide. When I last read about it, over 150 000 IP addresses were involved in the botnet.

My (fantastic) service provider, Site5, with whom I host all of my customers’ sites as well as my own, mitigated the attack over the course of a few days.

However, this is yet another reminder that standard username and password security is a miserable failure.

ADN user @zero posted a link to the Google Authenticator for WordPress yesterday, and after using it for a short time, I have to recommend it for all WordPress users.

There are some caveats though:

  • Don’t put spaces in your Description field when generating the QR code; some implementations of the Google Authenticator app don’t handle them.
  • Generate an application password if you use WordPress on a mobile device, and write it down before pressing “save”, because it is hashed once saved.
  • If your server and your mobile device are out of sync by a few minutes, there’s a setting called “relaxed mode” which allows for a drift of up to 4 minutes either way.

That’s pretty much it! As for 1Password (which I use), I have to recommend disabling auto-submit, for obvious reasons. That said, I’ve heard a rumour that they’re building in two-factor authentication at a later stage.

Even so, with 1Password and the Google Authenticator mobile application, this is a fairly simple solution to a massive security risk, and I recommend it.