
vSphere 5 Network Config

As part of moving our production server environment to a colo facility and the coinciding upgrade from ESX 4.1 (fat) to ESXi 5, I basically get to rebuild my entire vSphere environment from the ground up. It’s a great opportunity, as I’ve definitely learned a lot over the past 3 years or so of using VMware on a regular basis, and I’ve been itching to change some things that I’ll hopefully go into in later posts during this process.

My task today is nailing down my network configurations. I’ve got 8 NICs total at my disposal in each of my Dell R710 servers – four embedded Broadcom 5709 ports (two separate dual-port controllers by design) and another four on an add-in Intel I340-T4. I want the iSCSI to be as fast as possible and the rest of the networking to be as redundant as possible. I’ve not bonded ports in my vSphere config before, but I’m thinking that’s where I want to go, at least on the production network side.
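For the iSCSI side, I’m leaning toward dedicating two of the NICs to the storage network and using ESXi 5’s new iSCSI port binding. Just to make the idea concrete, here’s a rough sketch of what that might look like from the ESXi 5 command line – the vmnic numbers, vmhba name, and addresses below are placeholders, not a final design:

# Dedicated iSCSI vSwitch with two uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5

# One VMkernel port per uplink, each pinned to a single active NIC
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-2
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=10.10.10.11 --netmask=255.255.255.0
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=10.10.10.12 --netmask=255.255.255.0
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic5

# Bind both VMkernel ports to the software iSCSI adapter for multipathing
esxcli iscsi software set --enabled=true
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

Pinning each VMkernel port to a single active uplink is what lets the software iSCSI adapter bind to both and multipath across them; the remaining six NICs would then get paired up as redundant uplinks for management, vMotion, and the production VM network.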

I have some ideas already, but I’m curious – what would YOU do?

Merge Free Space on Dell PowerVault MD3000i

I recently ran into an issue while trying to manipulate some virtual disks on our MD3000i SAN at GCC. I know lots of other CITRT folks have one of these “inexpensive” SANs, so I thought I’d document this here. Apologies to all my friends and family who have been hassling me to post something NON-TECHIE, but this is NOT that post.

A quick bit of background: we’re using our MD3000i strictly as a media file archive, and we really want the SAN to be one large LUN/partition so the space stays as flexible as possible. When we first implemented the MD3000i, we were limited to 2TB per LUN/partition and were splitting things up by year. 2007 was about 1.8TB of data, and with one month remaining in 2008, the 2008 LUN was already full. You can see how the 2TB limit was already becoming a problem.

Dell recently released new firmware for the MD3000i that supports LUN sizes larger than 2TB. I applied it before leaving the office on Friday and quickly started deleting the virtual disks that didn’t have any real data on them yet so I could create a new, larger partition and begin moving data around. Much to my surprise, I saw this in the Dell Modular Disk Storage Manager:

The controllers left the Free Space from the virtual disks I removed in their respective physical locations on the disk. This was stupid, so I tweeted about it. I exchanged a few tweets back and forth with Derek Mangrum and he hooked me up! He had run into the same issue before and sent over a rather handy list of commands he’d used on his own array, as well as the SMcli reference guide from SANtricity, who apparently is the actual manufacturer of the Dell-rebranded MD3000i. I had to tweak the command a little because we have dual controllers, but this is the final recipe:
C:\Program Files\Dell\MD Storage Manager\client>SMcli controller_0_IP controller_1_IP -p yourarraypassword -c "start diskGroup [1] defragment;"

If you only have a single controller, you can eliminate the second IP. Be sure to replace “yourarraypassword” with, uh, your array password. Also, if you have more than one diskGroup, replace the 1 after diskGroup with the number of the diskGroup you wish to “defragment.” And yes, the brackets around the diskGroup number MUST STAY or you’ll get syntax errors.
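For example, a single-controller array defragmenting diskGroup 2 would look like this (same placeholder IP and password as above):

C:\Program Files\Dell\MD Storage Manager\client>SMcli controller_0_IP -p yourarraypassword -c "start diskGroup [2] defragment;"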

For what it’s worth, I despise the SMcli tool. I’d much rather have a REAL command-line interface directly on the array controllers. My opinions aside, SMcli is insanely powerful and you can do a lot more through it than you can through the Dell MDSM GUI tool. In a few days, I’ll post again about how to use SMcli to expand a LUN. Stay tuned!

Setup a DNS Relay using BIND

I’ve had a few inquiries regarding how I set up BIND as a DNS relay for my remote offices. It’s really not as complicated as it sounds. I’ve standardized all my Linux stuff on Ubuntu LTS, so these instructions may need to be tweaked somewhat if you’re on a different platform. The BIND9 configuration should be the same, but the location of the configuration files may (and probably will) differ.

I started with a clean install of Ubuntu Server 8.04 LTS inside a VMware virtual machine. During the installation, I selected the “DNS Server” option and proceeded. Once the install was finished and the virtual instance had rebooted, I ran “apt-get update”, installed all updates, and rebooted again. If you already have a working Ubuntu system and want to add BIND, it should be as simple as typing “sudo apt-get install bind9” in your terminal.
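On an existing system, the whole dance boils down to:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bind9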

Now, here’s the good stuff. Open /etc/bind/named.conf.options in your favorite editor and make some adjustments. Here’s what my basic configuration looked like:

options {
    directory "/var/cache/bind";
    allow-query { any; };
    allow-recursion { any; };
    query-source address * port 53;

    forwarders {
        172.17.0.5;
    };

    auth-nxdomain no;
    listen-on-v6 { any; };
};
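Make these changes to the config and restart BIND, then test that lookups are being properly forwarded to the upstream nameserver. On Ubuntu, that looks something like this (dig lives in the dnsutils package if you don’t already have it):

sudo /etc/init.d/bind9 restart
dig @127.0.0.1 www.example.com

If the query comes back with an ANSWER section and the SERVER line shows 127.0.0.1, the relay is doing its job.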

Once you verify forwarding is working, you can make additional changes, such as implementing a BIND access control list (ACL). Add something similar to this to your /etc/bind/named.conf.options file:

acl my-subnets {
    172.17.0.0/16; //headquarters
    172.18.1.0/24; //office01
    172.18.2.0/24; //office02
};

Once you’ve added the definition for the ACL, change your allow-query and allow-recursion to the name of the ACL:

allow-query { my-subnets; };
allow-recursion { my-subnets; };
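One gotcha worth pointing out: the acl block is a top-level statement, so it goes outside (and before) the options block in named.conf.options – BIND requires an ACL to be defined before it’s referenced. Put together, the relevant parts of the file end up looking roughly like this:

acl my-subnets {
    172.17.0.0/16; //headquarters
    172.18.1.0/24; //office01
    172.18.2.0/24; //office02
};

options {
    directory "/var/cache/bind";
    allow-query { my-subnets; };
    allow-recursion { my-subnets; };

    forwarders {
        172.17.0.5;
    };

    auth-nxdomain no;
    listen-on-v6 { any; };
};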

As usual, restart the BIND service and you’re all done!

Moving On – Joining GCC

I am absolutely, 100% pumped to finally be able to share some incredible news with the blogosphere. I’m joining the IT Team at Granger Community Church! I couldn’t be more excited to work with Jason Powell, Ed Buford, and Matt Metzger. I’ll be the “IT Specialist” – primarily responsible for help desk, but also assisting Ed with network-related things, handling phone system maintenance, and the scariest thing on the job description: “other duties as assigned by the IT Director.”

So yes, for those who can’t read between the lines, we’re leaving the comforts of Western North Carolina winters and heading north sometime around August 4th. There are a ton of emotions involved in leaving behind our family, friends, and everything else we know and love about North Carolina, but God has truly had his hand in this from the very beginning. He has confirmed in at least 100 ways that this is definitely our next step in serving Him. Bonnie and I both should be blogging about some of those things soon.

The whole GCC team has been incredible through this entire process. We had an amazing interview weekend in Granger a month or so ago and can’t thank the team enough for what you’ve already done to make us feel welcomed and a part of the family. Jason and his wife Kim have truly gone above and beyond the call of duty and opened their home to us on multiple occasions as we’ve visited the area. Not only do we feel like we’re joining the GCC team, we even feel like we’re an extension of the Powell family! Jason and Kim – we certainly owe you a big, public thank you (and probably some free child care at some point). Thanks a million times over for being so welcoming and supportive during this entire process – I’m not sure we could’ve done it without you!

As I said earlier, we’ll both be writing more about this in the coming weeks leading up to our move, but in the meantime, we could sure use your prayers. Thanks!

SonicWALL Hardware VPN – Just Do It

If you’ve been following all my posts about SonicWALL, you know that I’ve purchased a ton of gear to construct a nice, widespread VPN. This network consists of a SonicWALL NSA 3500 at my corporate office and a TZ150 at 20 of our remote offices.

Today, I began production deployment to the remotes and I have to say, I am absolutely, 100% satisfied with the SonicWALL VPN solution. It really is a thing of beauty and it “just works” like you would hope and expect. I’ve had a TZ150 at my house for about a week, and I can move my laptop from the office to my coffee table at home and, aside from the latency browsing network shares and such, still feel like I’m physically connected to the Corporate LAN.

I’ve struggled for the past 12-18 months with deploying our Aastra 9112i VoIP telephones to branch offices for a number of reasons, but primarily because of all the NAT problems associated with SIP packets. The other nagging issue was provisioning and maintaining updates to all these phones once in the field. All the config files and firmware for the Aastras reside on the Asterisk server at Corporate HQ and are accessed via TFTP, and I wasn’t really keen on opening up that port to the entire world. The VPN solves all of these problems! I did the initial provisioning by plugging the phone into my VoIP LAN at Corporate. The phone pulled down its config (which now contains ONLY internal addresses) as well as the latest firmware update. Once that was complete, the phone rebooted and I made a successful test call. I took the phone home with me last night, plugged it into my home network, which already has a VPN tunnel up to Corporate, and the phone linked up to Asterisk right away with no additional finagling. Color me impressed!

So now I only have 19 more devices to deploy over the next several weeks, and our IT infrastructure will certainly be exponentially more secure than it was a week ago. The VPN is something that has been needed for a while, but funding it was always an issue. Considering the nature of our business, all the personal information we deal with, and the rising rate of identity theft, we finally realized the time was right and the risk too great to continue operating the way we were any longer. I’ll continue to post updates as the deployment progresses.

Swimming in SonicWALL

Since posting last week about my troubles treading into SonicWALL waters, I think my issues have all been resolved and things are really humming right along. Truth be told, the problem was really a combination of user error and user ignorance. If you follow me at all on Twitter, you already know that I had major, show-stopping issues over a couple of days last week. WAN->LAN traffic would flow perfectly fine for a while, and then at some completely arbitrary point, it would stop passing traffic to certain internal hosts. Like I said, show-stopping.

You might like some background on our implementation. When ESI received our first IP assignment from BellSouth many years ago on our fractional T-1, it was a short and simple /29 network, which yielded six usable addresses – minus one that had to be assigned to the ISP-provided Cisco router, leaving us with five addresses we could use. A couple years ago (three, to be exact), we outgrew that, and in an attempt to keep a 1:1 ratio on NAT rules, we acquired an additional subnet – but this time a significantly larger /28 was assigned. For whatever reason, we never bothered moving all the services to the new subnet. With Watchguard, it was never an issue and was easy enough to run them both simultaneously. The first subnet was configured on the WAN interface and the additional addresses were added as “virtual addresses” on that interface, which then made them available for creating rules/policies.

Enough with the history lesson – let’s move on to the present. Personally, I’ll take 99% of the blame for the user-error part. Upon receiving the NSA 3500, I was anxious to get started, so I unboxed it and started exploring the web admin interface. It was confusing and quite different for someone like me who, prior to this, had only had experience with Watchguard gear. After questioning many SonicWALL users whom I respect, I started firing off the Public Server Wizard and creating all my NAT Policies and Firewall Rules. This was apparently mistake number one. While it does indeed work, I later found out that Mark doesn’t suggest that method. Creating them manually gives you a bit more granular control, leaves a little less cruft in the auto-generated NAT Policies, and – the real kicker – you end up gaining a much better understanding of how the Firewall Rules and NAT Policies work together.

Moreno and I made plans to work on this last Thursday morning. I went into the office early and decided I’d put the SonicWALL NSA back in production while we finished up. Around 7:30 AM, I made the swap and rebooted the Cisco from AT&T to flush out any routing-related issues. A few services came online with the NSA, and Mark continued cleaning up the rest of the NAT rules and chasing rabbits. By 8:15 or so, we had everything working except one Linux server, which was non-mission-critical, so we decided to hold off on that one for a while and see how things went. Things went extremely well throughout the day, but as soon as we finished watching the LOST season finale on Thursday night around 11:15 PM, my Treo started buzzing with down alerts. I started checking things, and finally it dawned on me that all the services that were failing were on the newer /28 subnet. Surely that must bear some significance. I rebooted the NSA to no avail. After about an hour, traffic magically started flowing to all hosts again, so I called it a night, texted Mark to let him know about the issue, and set the alarm for 6:00 AM.

I crawled out of bed around 6:30 Friday morning and got to the office around 7:15. At this point, I had resolved that it was do or die. I was not going to revert back to Watchguard a third time. If we couldn’t fix the SonicWALL by noon, I was going to wash my hands of it and let Moreno have his gear back. I chatted with Mark around 8:00; he was bumfuzzled and en route to meet with another client, but promised to look at my logs if I’d send them over and also to open a ticket with SonicWALL to get the issue resolved ASAP.

I kept hammering away. Sitting idly and waiting on Mark and SonicWALL was not part of my playbook, and for some reason, neither was calling support directly. I’m just not that kind of guy most of the time. For some reason, I’d rather spend several hours resolving an issue on my own than just call and ask someone. Maybe it’s pride? Regardless, around 9:00 I turned to Google and the SonicWALL website, and within 15 minutes I discovered SonicWALL KBID 3726, titled “SonicOS: Configuring Multiple Subnets Using Static ARP with SonicOS Enhanced,” which outlines these simple instructions:

Follow these instructions to create a second subnet on an interface:

  • Create a static ARP assignment. Enable the “publish entry” check box.
    1. Login to the SonicWALL’s Management page.
    2. Select Network > ARP.
    3. Click the ADD button under Static ARP Entries.
    4. IP Address – Specify the IP address to which the SonicWALL should be assigned on the additional subnet.
    5. Interface – Specify the interface (LAN / WAN / OPT / WLAN) where the additional subnet resides.
    6. Publish Entry – Enabling this option causes the SonicWALL to respond to ARP queries for the specified IP address with the SonicWALL’s MAC address. This box must be checked when creating additional subnets.
    7. Click OK.
  • Select Network > Routing.
  • Select Add. Create the following new route policy:
    • Source: ANY
    • Destination: Create new address object
      • Name the object for your secondary subnet
      • Zone Assignment of your secondary subnet
      • Type: Network
      • Network: Enter the Network address of the secondary subnet
      • Netmask: Enter the Subnet mask of the secondary subnet
      • Click OK
    • Service: ANY
    • Gateway: 0.0.0.0
    • Interface: Select the interface the secondary subnet resides on
    • Metric: 20
    • Comment: Label policy so it can be identified at a later date
    • Click OK
  • A secondary subnet on the LAN interface will use the default NAT Policy & Access Rules. Access rules & NAT policies may be added.

As soon as I created the ARP assignment, traffic started flowing but I went ahead and created the static route as well for good measure.

I’ve got a couple more SonicWALL posts to come: one about how AWESOME the hardware-based VPN is and another about exactly how to configure the Firewall Rules and NAT Policies without using the Wizard. Tomorrow (Tuesday) morning, I’m heading to Charlotte for the SonicWALL Roadshow, and then in the afternoon, I’m deploying my first TZ 150 endpoint to a remote office, along with another special piece of hardware. I’ll try to get some pictures of all that and the VPN post up by the weekend.

Overall, I’m quite happy with the NSA 3500 and my SonicWALL VPN solution. For now, I’ll withhold my formal review and recommendation on Moreno until I see his final invoice – if he cuts me some slack, I’ll cut him some on the blog.

Treading into SonicWALL Waters

As previously mentioned, we’re switching from a one-year-old Watchguard Core x750e to a SonicWALL NSA 3500 at my place of employment in order to deploy a nice, widespread (geographically speaking), and expensive VPN.

I received the first half of my gear from Mark Moreno two weeks ago and immediately unboxed the NSA. It’s quite a purdy device! It’s sleek, silver, and has a very bright blue LED on the front. I powered it up, and upon logging in to the web management interface, I was equally impressed by how shiny and Web 2.0 the UI was. Sadly, that’s where my enthusiasm for SonicWALL ends right now. I started digging around and was just overwhelmed by the options and the difference in terminology between the NSA and the Watchguard. After talking it up in the CITRT IRC channel, I was informed that the “public server wizard” was the way to go for configuring NAT policies, since SonicWALL actually needs THREE rules to create one NAT rule. Not only do the NAT policies have to be defined, but then there is the firewall policy. Best I can tell, NATing one port to one service would require the following steps without the wizard:

  1. Create “Address Objects”
  2. Create “Service” or “Service Group” if not predefined
  3. Create Firewall rule
  4. Create the three NAT policies

While four steps sounds simple, it’s a lot of clicking and a lot of digging around, and so far, I’m not a fan. The wizard did a good enough job for some of my rules, but others don’t work right (they’ll work for a few hours and then stop) and some don’t work at all. At this point, the firewall is doing WAY too good a job of blocking services from the outside world!

I’m sure it’s a PEBKAC or maybe even an ID ten T error, because so many people just love their SonicWALL stuff. A few minutes ago, I said this in the IRC channel, and I think it’s fairly accurate at a certain level:

<wantmoore> i’d almost go out on a limb and say “windows is to linux as watchguard is to sonicwall”
<DavidSzp>    wantmoore: That’s an interesting analogy
<wantmoore>    watchguard: much easier to do stuff and make it work. sonicwall: a lot more flexibility, but not nearly as straightforward
<stephensflc>    I would totally agree with that statement at this point
<wantmoore>    the analogy doesnt stick where cost is concerned though ;)
<wantmoore>    in that regard, watchguard is a WHOLE lot cheaper. sonicwall will nickel and dime you to death

And I’ll stand by those statements for now. I’m sure that Moreno will help me get my issues resolved and I’ll join the Happy SonicWALL Club soon enough. Until then, I really miss my Watchguard, and I’ll be hanging out in the corner with my friend Ed, talking about our plans to start up an anti-SonicWALL user group.

Installing GoDaddy SSL Root Certificate on Windows Mobile 5

A few months ago, we migrated to Kerio MailServer at work, and I’ve been absolutely in love with the fact that it natively supports Microsoft’s ActiveSync. This means I can sync my mail, contacts, calendar, and to-do lists directly to my WinMo5-based Palm Treo 700w over the air. The only complaint I’ve had is that I’ve been doing it all via HTTP – yes, sans SSL.

So, a few weeks ago, I set out to remedy the problem. I hopped around a few sites, did a little research, and eventually decided to buy a two-year certificate from Go Daddy for $53 (I think). Getting it installed in Kerio was easy, so then I tried changing ActiveSync on my Treo to use SSL. It failed. Miserably. Turns out some of the reviews weren’t as accurate as I’d hoped, and the new Go Daddy root certificate is not installed in Windows Mobile 5 as a trusted authority by default.

I searched and read and read some more to figure out how to do it. I found this slightly outdated knowledgebase article and started following the instructions. It didn’t work. In the process, I discovered that you can just copy the .cer file to the mobile device (I used an SD card), open the .cer file from Explorer, and you’re prompted to import it. Armed with this knowledge, I tried both the old “Valicert Root – DER Format” and the new “Go Daddy Class 2 Certification Authority Root Certificate – DER Format” with mixed results: one loaded and the other did not. However, I still couldn’t sync via SSL. A little more Google-fu and I found “Go Daddy certs on certain phones” by The SBS Diva. At the very bottom of her post is a jewel: valicert_class2_root.zip. It’s the binary versions of the Go Daddy root certificates. You can export these yourself from IE by following the instructions there if you don’t trust them. Otherwise, just download the zip file, extract the two files from the archive, get them copied over to your WinMo5 device somehow, and execute them.

I can sleep a little easier tonight knowing my data is fully encrypted from my device back to the Kerio virtual machine.