Starting with Terraform, Windows and Azure Part 1

This is a series of blog posts covering the setup of Terraform and building your first Azure deployment. First, we are going to cover the local installation. Terraform runs on a variety of operating systems:

  • MacOS
  • FreeBSD
  • Linux
  • OpenBSD
  • Solaris
  • Windows

Since this blog is mostly about Microsoft, Windows and Azure related stuff, I’m going to cover the Windows version. First of all, download the Terraform executable for your Windows installation (32 or 64 bit) right here.

Extract the zip package to a location on your computer, for instance:

c:\Terraform

I would also recommend adding this location to the ‘PATH’ environment variable in Windows, so you can run Terraform from any location and don’t have to type extensive paths every time you are doing deployments.

To make it easy I’ve devised a PowerShell script:
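A minimal sketch of such a script, assuming the c:\Terraform location from above (adjust the path if you extracted Terraform elsewhere):

  # Append the Terraform folder to the user PATH (assumes c:\Terraform from above)
  $terraformPath = 'C:\Terraform'
  $userPath = [Environment]::GetEnvironmentVariable('Path', 'User')

  if ($userPath -notlike "*$terraformPath*") {
      [Environment]::SetEnvironmentVariable('Path', "$userPath;$terraformPath", 'User')
  }

  # Also update the current session so the change works right away
  $env:Path += ";$terraformPath"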

This should allow you to run Terraform from any path on your machine. You can try this by opening a new PowerShell session and running Terraform.

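For example, the following should print the installed version from any directory, which is a quick way to confirm the PATH change took effect:

  • terraform version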

So the local Terraform installation is all set up. That wasn’t too hard. In the next post I will cover the tools I use to write Terraform templates (and a lot of other scripts/things).

DirTeam bloggers at TechEd Europe 2013

From Monday June 24, 2013 to Friday June 28, 2013, Microsoft organizes TechEd Europe at the Feria Internacional de Madrid (IFEMA) in Madrid, Spain. With a much warmer climate than Amsterdam (TechEd Europe 2012) and Berlin (TechEd Europe 2009 and TechEd Europe 2010), and Microsoft’s convenient repositioning of this event in June, this event should be packed with IT Pros and Developers from across Europe.

To represent the DirTeam.com / ActiveDir.org Weblogs at TechEd Europe, I will be present with fellow blogger Sander Berkouwer and OGD colleague Maarten de Vreeze.

We will be staying in the AC Hotel Madrid Feria by Marriott on Via de los Poblados, just a few blocks from the convention center.

Our flight in from Amsterdam Schiphol Airport (AMS) leaves late Saturday afternoon June 22, and we will be making a short stop in Paris (CDG) on our way to Madrid Barajas Airport (MAD). On our way back we will again be making a short stop in Paris (CDG) on Saturday evening, arriving at Amsterdam Schiphol Airport (AMS) late, in what we hope will be a temperature similar to Madrid’s…

We’re looking forward to TechEd and to seeing you there!

Backing up a Threat Management Gateway using Backup Exec

You can run into some trouble using Backup Exec to back up a Threat Management Gateway 2010. TMG uses a different range of dynamic ports than standard Windows Server installations.

Since Windows Vista the default start port is 49152 and the default end port is 65535. Earlier versions of Windows used 1025 through 5000. The new range gives you 16384 ports. You can check this with the netsh command:

  • netsh int ipv4 show dynamicport tcp
  • netsh int ipv4 show dynamicport udp
  • netsh int ipv6 show dynamicport tcp
  • netsh int ipv6 show dynamicport udp

Now when you execute the command on a machine running TMG 2010 you’ll probably find that the start port is 10000. This can cause problems with Backup Exec.

Backup Exec’s Remote Agent uses the Network Data Management Protocol (NDMP) to create the backup data stream. NDMP utilizes port 10000. Normally this is not an issue, but on a TMG the dynamic range is changed and wininit.exe will seize the first of the dynamic ports. There are two solutions to this problem.
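Before picking one, you can confirm that wininit.exe is indeed the process sitting on port 10000. Both commands below ship with Windows; compare the PID that netstat reports with the tasklist output:

  • netstat -ano | find ":10000"
  • tasklist /fi "pid eq <PID from netstat>"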

You can change the port the backup agent uses

Open Notepad as administrator and open c:\windows\system32\drivers\etc\services.

Add the following line to the services file:

  • ndmp 9000/tcp #Network Data Management Protocol

This will change the port to 9000. Don’t forget that you’ll have to do this on the media server as well, and on every server you want to back up. Sounds like fun when you have 100+ servers; a small script, like the sketch below, can take some of the pain out of that.
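A rough PowerShell sketch (the server names are placeholders, and it assumes administrative access to the C$ share on each machine):

  # Placeholder server names - replace with your own list
  $servers = 'SERVER01', 'SERVER02', 'SERVER03'
  $ndmpLine = 'ndmp    9000/tcp    #Network Data Management Protocol'

  foreach ($server in $servers) {
      $services = "\\$server\c`$\Windows\System32\drivers\etc\services"
      # Only append the line if there is no ndmp entry yet
      if (-not (Select-String -Path $services -Pattern '^ndmp\s' -Quiet)) {
          Add-Content -Path $services -Value $ndmpLine
      }
  }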

You can change the Dynamic Port Range on your Threat Management Gateway

On your TMG open an elevated command prompt and run the following command:

  • netsh int ipv4 set dynamicportrange tcp startport=10010 numberofports=30000

Now reboot the TMG server.

This will free up the first ten ports of the dynamic range (10000 through 10009) so that NDMP can make use of port 10000. Make a test run after the reboot. Beats reconfiguring 100+ servers.

You can verify that everything went well after the reboot by executing the following command:

  • netstat -ao | find /i "listening"

This will give you a listing of the listening ports and the corresponding process IDs. You should find 0.0.0.0:10000 being listened on by a process ID that matches the beremote.exe process, which you can look up in Windows Task Manager.
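If you prefer the command line over Task Manager, something along these lines will show the PID of beremote.exe so you can compare the two:

  • tasklist | find /i "beremote"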

TMG Compression broke my site

Microsoft Threat Management Gateway (TMG) should make publishing websites easy, and generally it does. We had a configuration as shown below:

This should work like a charm. Unfortunately, it did not in Internet Explorer 9. Upon testing the published site we noticed that some of the SharePoint functionality was not working as intended; menu functions were not correctly created in the published page. If you visited through an InPrivate session the problem disappeared. Other browsers, such as Chrome and Firefox, did not seem to suffer. The situation was a little more complicated still:

When we connected to the published site from the webserver itself, there was no problem. When we modified the hosts file to bypass the TMG, there was no problem either. So it seemed that the TMG was altering something. And it did.

Since the bulk of the users were connected through a satellite connection with narrow bandwidth, we used some compression methods on the webserver.

After extensive testing we determined that default.css remained empty. This appeared to be a caching problem resulting from the TMG configuration.

Eventually we narrowed it down to the web access policy and the Web Compression Filter on the TMG. Turning those off made the problem disappear on the clients.

Since we wanted the Compression Filter to keep working for some of the websites, we had to come up with a solution other than simply disabling the filter. After some searching we came across an MSDN article describing the SendAcceptEncodingHeader property. The VBScript below can be run on the TMG. It sets the SendAcceptEncodingHeader property to true for a specific publishing rule, which allows compressed content from the webserver to reach the clients correctly.
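A minimal sketch of such a script, using the standard FPC COM objects (the rule name below is just a placeholder; substitute the name of your own publishing rule):

  ' Enable SendAcceptEncodingHeader on a single web publishing rule
  Option Explicit

  Dim root, tmgArray, rule

  Set root = CreateObject("FPC.Root")
  Set tmgArray = root.GetContainingArray()

  ' Placeholder rule name - replace with your own publishing rule
  Set rule = tmgArray.ArrayPolicy.PolicyRules.Item("My SharePoint Publishing Rule")

  rule.WebPublishingProperties.SendAcceptEncodingHeader = True
  rule.Save

  WScript.Echo "SendAcceptEncodingHeader set for rule: " & rule.Name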



By default, a web publishing rule instructs the TMG to delete all Accept-Encoding headers sent to the webserver. The webserver, however, still answers with compressed responses, and the TMG in turn will not forward those compressed responses correctly. That’s when, for instance, the piece of JavaScript that makes up your SharePoint menu items breaks.

Conclusion

Let me point out that this will not be an issue when you are not using compression on the webserver. If you are, however, and do not want to turn off all of the compression on the TMG, then you might find the script helpful.

I’d like to see this property of a web publishing rule become an option in the GUI. Especially considering that a lot of clients, including mobile devices, benefit from compression, this option should be more accessible; maybe as a checkbox in the publishing rule wizard or the rule properties.

How not to create redundancy in your Exchange

When I was at a client the other day I encountered the following:

As you can see, the Exchange environment in itself already contains a single point of failure, namely the Exchange-01 server, which solely functions as a Client Access and Hub Transport server. The two database servers, however, are both made highly available through the failover cluster feature introduced in Windows Server 2008. This in itself is a good idea. Besides creating redundancy within your database hosts, this also allows you to use multiple redundant databases on both servers in a database availability group. You can even reboot one in the middle of production, for instance to update some compromised certificates. The reboot should prompt clients to restart their Outlook, but hey, your Exchange is safe and up to date again.

It is, however, a bad idea to install this failover cluster on top of a VMware failover cluster. The problem arises when an actual failover needs to take place. In a perfect world (where you wouldn’t even need failover, since your servers would never break) failover happens automatically when one server stops functioning for whatever reason. In the case of my client, something very interesting happened instead.

First, the database is transferred to the other Exchange server. All is well. At the same time, VMware steps in and fails over the Exchange server to another host, or whatever it is that VMware does to keep guests alive, and restores the system to its previous state. So the server that went down comes back with its database connection, while the Windows failover cluster has already transferred database access to the other Exchange database server. With both servers wanting to access the database, neither will be able to, and that’s when your Exchange database failover cluster fails. This usually results in a lot of people calling the helpdesk to ask why they can’t access their mail.

This is not because either VMware or failover clustering is a poor feature; it is because someone implemented a solution without proper testing.

So if you want to make Exchange redundant, use only one method and not two or more stacked methods, or it will come around to bite you like an attack dog.