Allan Hurst

What SHOULD you do with your Domain Controllers?

2/8/2017

Just because you CAN do something on a given type of Windows server, that doesn't mean you SHOULD.

In this case, I'm talking about Active Directory Domain Controllers.

For many years, Microsoft has sold a product called Small Business Server, which rolls the functions of Domain Controller, File Server, Print Server, and Exchange Server into a single physical box. And for 4 or 5 users, Microsoft SBS works perfectly well.

When you scale up from 4 or 5 users to, say, 10 or 15 users, the story changes a bit. 

Active Directory is the logical backbone of every Microsoft-based network. It contains accounts, passwords, certificates, software keys, and related sensitive information. Performance issues with non-dedicated DCs aside, one of the basic tenets of network security is to separate out sensitive data to make it easier to secure. This is why our firm's best practice is to use only dedicated domain controllers.

A dedicated domain controller should NOT run:
  • File Services
  • Print Services
  • Database Services
  • Applications (e.g., SharePoint)
  • Web Services (e.g., IIS or Apache)

A list of common services that CAN run on dedicated domain controllers includes (but isn't limited to): 
  • Domain Services
  • DNS
  • DHCP
  • NTP Source (DC holding the PDC FSMO role only)
  • Certificate Services
  • KMS
  • Active Directory Federation Services
  • Azure Active Directory Connect (replaces the old Azure Active Directory Sync product)
  • Third-Party SSO integration modules (such as Barracuda's AD Agent for its web filter product)
  • Backup agent

That's pretty much it. DCs should run only products and agents that are directory-function-specific (or backup-enabling). 

Once you've configured your domain controllers, there are two more things you need to remember to do with them:
  1. Secure them with an antivirus product of your choice. Sounds obvious, but I've lost count of how many times I've walked into a new site and found no antivirus on some or all of the domain controllers.
  2. Back them up using a product that backs up the System State (which includes the Active Directory database). While restoring Active Directory from a System State backup after a disaster is ugly, it's a lot less ugly than not having a backup to restore from. (A minimal command-line example follows this list.)
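
For example, on a DC where the Windows Server Backup feature is installed, a System State backup can be run (or scheduled) from an elevated command prompt. This is just a minimal sketch; the E: target is a placeholder for whatever backup volume you actually use:

  rem Minimal System State backup sketch (requires the Windows Server Backup feature).
  rem The E: backup target below is a placeholder for your own backup volume.
  wbadmin start systemstatebackup -backupTarget:E: -quiet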

In addition to the above, I like to use Microsoft's BGINFO utility to automatically put up wallpaper on the desktop of each DC, giving, at a minimum:
  • System Name 
  • Type of System and Functions Supported (e.g., "Physical DC (DNS, DHCP, All FSMO Roles, NTP Server, KMS)")
  • IP address and Network Information (Gateway, DNS, DHCP server, etc.) 
  • OS Version
  • Last Boot Date/Time

I usually create a batch file to run BGINFO, and place a shortcut to the batch file under the Startup directory so it'll run whenever I log on.
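
For reference, the batch file itself is tiny. A minimal sketch, assuming BGInfo lives in C:\Tools\BGInfo with a saved .bgi template (both paths are placeholders for your own layout):

  @echo off
  rem Apply the saved BGInfo template to the desktop wallpaper and exit immediately.
  rem C:\Tools\BGInfo and dc-info.bgi are placeholder names -- adjust for your environment.
  "C:\Tools\BGInfo\Bginfo.exe" "C:\Tools\BGInfo\dc-info.bgi" /timer:0 /silent /nolicprompt

A shortcut to this batch file in the Startup folder (reachable by typing shell:startup into the Run box) is what refreshes the wallpaper at each logon.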

Using BGINFO wallpaper saves me a lot of time when I'm bouncing between multiple DCs during, for example, an Active Directory health check. 


A Great Reason to Migrate your Domain Controllers to Windows Server 2012 R2.

1/15/2016

A number of my clients are still running Windows Server 2008 R2 based domain controllers ("DCs"). Several years ago, when Windows Server 2012 was still new, waiting to see how the new OS shook out wasn't an imprudent idea. At this point, the 2012 R2 OS has proven to be stable, and it's certainly safe to use.

Keep in mind that Windows Server 2008 R2 mainstream support ended on 1/15/2015. If you don't believe me, check out Microsoft's Server 2008 R2 lifecycle page for yourself:
  • https://support.microsoft.com/en-us/lifecycle?p1=14134

All is not lost, however. Extended support--for 2008 R2 customers with valid service agreements--runs through 1/14/2020.

Still, as of this writing, that's only about four years away. And in case you're wondering, Server 2012 R2 mainstream support ends in 2018 (superseded by Server 2016), and extended support ends in 2023.

Others in my client base don't want to migrate to Server 2012 R2 because they dislike the GUI. I can certainly sympathize; it's just different enough from 2008 that the learning curve is steeper than I'd have liked. However, that's not a reason to avoid a more up-to-date server operating system.

The "killer app" for getting 2012 R2 installed in my client sites, however, has proven to be DHCP High Availability.

With Server 2008 R2, there was no such thing as high availability for DHCP. You could set up two servers to hand out a split DHCP range, or set up servers in a failover cluster, but that was about it.

In Server 2012 R2, it's quite easy to set up two servers for DHCP high availability. You simply set up DHCP on the primary server, then activate DHCP fault tolerance on the secondary server.

2012 R2 provides two modes of DHCP fault tolerance: Hot Standby and Load Balance. The two modes are detailed in this Microsoft tech note:
  • https://technet.microsoft.com/en-us/library/dn338976.aspx

Microsoft also has provided a great step-by-step guide to configuring DHCP failover:
  • https://technet.microsoft.com/en-us/library/hh831385.aspx
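
If you'd rather script the relationship than click through the wizard, the DHCP Server PowerShell module that ships with 2012 R2 can create the failover partnership in one line. A minimal sketch, run from the primary DHCP server (the server names, scope, and shared secret are placeholders):

  rem DHCP1/DHCP2 and scope 10.0.1.0 are hypothetical; substitute your own servers and scope.
  rem This creates a 50/50 Load Balance relationship; Hot Standby mode uses -ServerRole and -ReservePercent instead of -LoadBalancePercent.
  powershell -Command "Add-DhcpServerv4Failover -ComputerName DHCP1 -PartnerServer DHCP2 -Name 'DHCP1-DHCP2' -ScopeId 10.0.1.0 -LoadBalancePercent 50 -SharedSecret 'ChangeMe!'"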

In my network designs, since I usually have one physical DC (to provide network time sync and DNS in the event of a virtualization cluster malfunction) and one or more virtual DCs, it's maybe a 10-15 minute job to set up DHCP failover. Even in small business networks, I generally create a pair of 2012 R2 DCs (again, one physical and one virtual) and set them up for either Hot Standby or Load Balancing mode. 

Ransomware (Part 2 of 2)

12/15/2015

In part 1 of this post, I discussed how one of our clients recovered from a ransomware attack using Microsoft Volume Shadow Copy Service (VSS) to revert a file server's volumes to an earlier snapshot.

How do you detect and prevent ransomware?

Unfortunately, unlike standard computer viruses from 10 or 20 years ago, contemporary malware doesn't always provide a conveniently scannable file signature. Some variants modify Windows system files. These variants aren't necessarily detectable by a standard antivirus program.

There are several things you can do proactively.

First, use a malware scanning program such as Malwarebytes to scan every Windows based server and workstation on your network. The free version requires manual execution on each machine. There's a paid version that has a centralized console allowing you to push out the program to all systems. Run this periodically in addition to your regular antivirus software.

Second, look at firewalls that incorporate packet inspection of some kind. For example, both SonicWALL and Fortinet have security features that will scan internet traffic for viruses and malware in real time, preventing users from downloading malware.

If you have such a firewall but haven't activated those services yet--and a lot of people don't activate them out of ignorance or uncertainty--activate those services one at a time, waiting a couple of days between each new service activation to ensure that your production network isn't accidentally disrupted.

If your firewall doesn't happen to have anti-malware packet inspection features, consider third party systems from vendors such as Barracuda or FireEye. These are generally network-level appliances that scan all internet traffic in real time, blocking malware and other questionable content. Some vendors even offer cloud based versions of such services. 

Finally, if you have any reason to believe that your systems may have been compromised, I strongly recommend that you perform a deep-level offline scan of your systems. By "offline", I mean creating a bootable Linux-based CD or USB containing a product such as BitDefender or ClamAV, and booting each Windows system from the CD/USB. This will enable you to scan for malware that hides in the Windows operating system.

Even if you don't think you've been compromised, if you intend to put stricter antimalware controls in place such as I've described above, then it's still a good idea to perform a deep-level offline scan of all systems to make sure that you know you're starting with a clean network.

The final piece of this puzzle is end user education. Train your users to not blindly click on links in email, and to carefully read potential phishing emails. Granted, you should already have an anti-spam system of some kind in place, but no system can catch everything. Assume that a few emails with links to malware sites might still make it through your anti-spam system, and educate your users accordingly.

Ransomware (Part 1 of 2)

11/16/2015

One of my clients was recently struck by ransomware. In this case, it was a variant of Cryptolocker called "Le Chiffre" (after the major villain in the James Bond story, "Casino Royale"). Every single useful file (Microsoft Office files, image files, sound files, etcetera) was encrypted. A set of three files (a marker file, a text file containing the public key, and an HTML file with decryption ransom instructions) was inserted into every directory...on a server with 20,000+ files.

Ransomware is nasty stuff. It encrypts every file it can find (or an entire hard disk), and displays contact information where you can send money to decrypt your files. Generally, a long enough encryption key is utilized that it's impractical to try cracking the code. 

The challenge with ransomware is that it "mutates" quickly. There's not just one version of Cryptolocker; there are dozens, perhaps hundreds. "Script kiddies" can download kits that enable them to create customized malware.

Frustratingly, since there are so many customized signatures, most antivirus programs aren't terribly effective at catching ransomware. One popular antivirus vendor's support tech bluntly told me that their software won't detect it, but that it could be set to detect and block the type of activity that an encryption virus perpetrates. I wasn't impressed; this was like telling me "Oh, well, yes, half your barn burned down, but we figured out there was a fire and soaked down the unburned half so it wouldn't catch on fire."

In my client's case, my security team never figured out exactly how Le Chiffre entered the network, but based on file time stamps, they managed to narrow it down to one of two workstations whose users either clicked on a link to an infected website or opened an infected downloaded file.

Unless you really want to throw money at cyber criminals who may or may not actually decrypt your files--and who knows if the decrypted files will be intact or contain some other piece of malware?--the only option to recover from a malware attack is to restore from backup...after making sure that you've eradicated all traces of the ransomware. (I'll talk more about that in Part 2 of this post.)

In my client's case, we happened to have not one, but two avenues for file restoration.

For primary file backup, we had installed a Barracuda Backup unit, which did a terrific job of backing up files as they were created or changed. The backed-up files were stored locally, then streamed up to the Barracuda Cloud for offsite safekeeping.

When we set up a Windows 2008 based file server for this particular client, I was already familiar with their end users' behaviors, which included multiple "oops!" file deletions each week. 

This knowledge led to our creating a secondary safeguard, which was to enable Microsoft Volume Shadow Copy Service ("VSS") on the file server's data volume. 

If you're not familiar with VSS, it allows a system administrator to set aside space to create periodic "snapshots" of a volume's file system. I set up VSS at our client to create snapshots at 0700 and 1700 each day, keeping several days' worth of snapshots on file. 
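
The equivalent setup can also be done from the command line. This is only a sketch; the D: data volume, the storage cap, and the snapshot times are placeholders for whatever fits your server:

  rem Reserve shadow copy storage on the data volume (placeholder volume and size).
  vssadmin add shadowstorage /for=D: /on=D: /maxsize=20GB

  rem Take a snapshot right now (the "create shadow" verb is available on server editions of Windows).
  vssadmin create shadow /for=D:

  rem Schedule the 0700 and 1700 snapshots described above.
  schtasks /create /tn "VSS 0700" /tr "vssadmin create shadow /for=D:" /sc daily /st 07:00 /ru SYSTEM
  schtasks /create /tn "VSS 1700" /tr "vssadmin create shadow /for=D:" /sc daily /st 17:00 /ru SYSTEM

If the reserved space later proves too small, vssadmin resize shadowstorage can enlarge the allocation.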

When Le Chiffre struck, after making sure we'd removed all traces of the virus from all systems, rather than running what would probably be at least an overnight restoration of files to the server, all we had to do was use VSS to go back in time to the last snapshot before the files were encrypted.

One caution about using VSS: even after we deleted all of the encrypted files (all of which had been renamed with a .lechiffre extension, making the process of searching out and killing encrypted files quite convenient) and then emptied the recycle bin on each volume, we found that we needed to add extra space to the volumes to enable us to revert to (think "restore from") our VSS backup. Unhelpfully, Windows 2008 didn't tell us we needed extra space until it ran out during the reverting process.

Once we'd reverted to the older VSS snapshot--which took several hours, because there were many tens of thousands of files to be processed--users were able to log in and work as they normally would.

The two lessons we learned from this:
  1. Don't trust a normal antivirus program to pick up rapidly evolving malware such as ransomware.
  2. In addition to daily backups of all systems, enable VSS to save time in case a mass restoration is needed. 

In the next post, I'll talk about detecting and preventing ransomware. 

For reference, several antivirus vendors have prepared more detailed explanations of how ransomware works, including:
  • http://www.trendmicro.com/vinfo/us/security/definition/Ransomware
  • http://us.norton.com/yoursecurityresource/detail.jsp?aid=rise_in_ransomware
  • http://www.mcafee.com/us/security-awareness/articles/how-ransomware-infects-computers.aspx
  • https://blog.kaspersky.com/cryptolocker-is-bad-news/3122/


Do you trust your CPE vendor?

10/12/2015

Many of my clients utilize managed network services. That is, they purchase the service of a point-to-point WAN link which is managed by a vendor such as AT&T or Verizon. The vendor is responsible for the end-to-end operation of the link, including providing and maintaining the various Customer Premises Equipment (CPE) components such as routers, network modems, or switches.

Here's the big question: What do you do about purchasing either hardware maintenance or on-site spares for that Customer Premises Equipment at both ends of the link?

It's not an easy answer (or I wouldn't have needed to write this article).

MAINTENANCE, SPARES, OR BOTH?
  • Should you purchase maintenance on the equipment provided by the vendor?
  • Should you purchase spare versions of the equipment?
  • Should you do both?

FACTORS TO CONSIDER:
A number of factors must be taken into account when debating whether or not to purchase CPE maintenance.

Terms - Are the terms of the CPE maintenance reasonable, or are there multiple exclusions that would give the vendor an excuse to fail to deliver in accordance with the perceived SLA?

Equipment Criticality - Is the CPE serving a critical network link? Is there a backup link in case of failure? 

Location - If the CPE will be located in a geographic area convenient to the network vendor's local parts depot, a maintenance contract is very possibly warranted. If the CPE will be located in a remote area, a set of "warm spares" (duplicate CPE which has been preprogrammed to match the production CPE) kept at the CPE site is a better idea.

Cost of Replacement - How expensive is the CPE? The cost of CPE equipment has dropped steadily in recent years. 

Trust of Response - How trustworthy is the vendor in terms of service delivery? Have you had a generally good or bad set of support experiences in the past? 

Cost of Downtime - What function does the link in question serve? If this is a critical link, a set of warm spares is generally a good idea to ensure that the link can be kept up at all times. (However, it should be noted, we suggest that especially critical network links should ideally have a backup link with automated link balancing/link failover.)


WHEN A SET OF SPARES IS BETTER:
In general, clients are better off purchasing CPE spares in situations where a network location is remote or difficult to access, and/or where a network link is so critical that a 2-, 4-, or 8-hour SLA to restore connectivity is insufficient to business needs. 

Occasionally, we've worked with some clients who simply didn't have a high level of trust in their network vendor to provide adequate response, and purchased CPE spares for their own comfort.

Just purchasing the spares isn't enough, by the way. They need to be configured with a copy of the running configuration on the active equipment, so that the spare can be cold-swapped into place during a network outage. Each time the running configuration is updated, the spare needs to be updated. 

A copy of the updated configuration should also be kept in electronic form somewhere on-site in case it needs to be reloaded into the spare (or original) equipment.


WHEN MAINTENANCE IS BETTER:
Clients are better off purchasing CPE maintenance when there are insufficient internal technical resources to diagnose and/or swap out failed network components. 

We have some clients who have purchased CPE maintenance simply to offload/outsource the exception handling of failed network equipment, whether or not the client maintained CPE spares. 


WHEN YOU NEED BOTH SPARES AND MAINTENANCE:
For especially critical links, we have observed some clients utilize a "boots and suspenders" strategy, in which a CPE maintenance contract is paired with a set of onsite spares; the network vendor's onsite engineer can then make use of the spare more quickly than diverting to a parts depot on the way in, or leaving the client site to retrieve a spare from the local depot.

If a contract requires that CPE be purchased from the network vendor, we suggest that purchasing CPE maintenance is a good idea to eliminate the possibility of vendor "finger pointing" during link outages.

In this case, as outlined above, if a link is especially critical or there is doubt that the vendor may be able to replace CPE quickly in an emergency, we recommend purchasing spare CPE from the vendor, so there can be no question of the CPE's suitability or provenance. 


HOW IMPORTANT IS THIS LINK?
Everything in this discussion boils down to: How critical is the link supported by the CPE? 

The more critical the link, the higher the probability that a set of CPE spares is required in addition to maintenance.

There are cases where a given link may be critical but DOES have an automated balancing/failover mechanism in place. In these cases, a CPE maintenance contract may still be useful, specifically to provide vendor-executed replacement of bad equipment after the link has failed over, especially if there's no IT staff normally available to handle the situation. 

If a link's function is critical enough to justify automated link balancing/failover, we suggest that the balancing/failover mechanism be designed and installed using a paired equipment configuration running in high-availability mode. 

Finally, even when you have a high-availability configuration in place, keep in mind that a link balancing/failover mechanism can fail...and that's when having both maintenance and spares for equipment at each site can get your network link back up and running as quickly as possible.



Choosing A Desktop Management System For Mixed Environments

9/7/2015

Most of our mid-to-large-sized clients utilize a desktop management system (DTMS) of some sort. What frustrates me is when they use two or more systems from different vendors because they have both Windows and Macintosh workstations.

What they use depends upon how the network was built. If the network builder was a Mac person, the DTMS in use will probably be a Mac-oriented system with token support for Windows. If the builder was a Windows person, the opposite will be true.

I haven't yet found a DTMS which provides equal support for both Windows and Macintosh workstations. There are a couple of vendors who provide pretty good shared functionality across both platforms, but due to basic differences in Windows and Mac architectures, there are no systems with 100% matching functionality.

So what do you do?

ANALYZE YOUR TICKETS:
If you don't already know what your top 5 trouble ticket topics are, now's the time to find out. Does your team spend a lot of time re-imaging PCs? Do most of your tickets require assistance with some feature of Microsoft Office? Are you having problems determining who has the most current Windows or OS X security patches?

Reviewing the last year's worth of trouble tickets isn't a fun task, but you need to know what support functions are most critical to your end user community. Different DTMS vendors have different strengths. One vendor may be great at imaging but not so great at software inventory, for example, while their competitor may only offer so-so imaging but incredibly detailed inventory capabilities.


CHOOSE THE FEATURES:

Once you know what your top 5 trouble ticket issues are, you'll know what tools you'll need to solve them. This will allow you to decide what DTMS features are really important to you. The basic features I generally review with my clients are:

Workstation Discovery: Can it find workstations across the entire network? This is a biggie. While I don't expect any DTMS to be able to discover 100% of the PC and Mac workstations on a network, I shouldn't have to fiddle with more than, say, 15% to 20% of a given set of workstations.

Inventory of discovered workstations: Again, this is one of the big reasons for having a DTMS. What software and hardware are on each workstation?

Software inventory and licensing reports: Do you know how many copies of Office or Acrobat Pro are licensed vs in use, or are you simply making an educated guess? 

Software usage reports: This is a bit subtler than just software licensing. How much is each licensed application being used? A lot of organizations buy extended versions of Microsoft Office, but only a fraction of the users make use of Publisher or Access. Being able to run a report to see which users never touch anything in Office other than Word, Excel, and Outlook could save your company a lot of money. Why license software that's not needed?

Remote Control: Most IT help desks don't have a large enough staff to physically visit each user's desk. Being able to remote into an end user's workstation is incredibly helpful, and saves everyone a good deal of time and trouble.

Keep in mind that remote control is a politically sensitive component. Depending upon your company's corporate culture, you will probably want to configure remote control to (a) ask the end user if it's OK to take control of their machine, and (b) provide some sort of visual and/or audible indicator that the machine is being remote-controlled. 

Software Distribution: While it's possible in a Windows environment to push applications out via Group Policy, doing so for Macintosh workstations is a different story. Does your contemplated DTMS allow for pushing out applications to both PC and Mac users?

Patch Management: How do your users apply OS patches and updates? Is there a corporate policy? If so, how is it enforced? Some DTMS vendors allow for a unified approach to patching each platform (PC and Mac) so that you can run a report to find out who may not have applied the latest security hot patch.

Imaging: How heavily does your IT help desk rely on imaging/re-imaging workstations of both flavors? Does the DTMS handle rolling out a whole new workstation image? Is there a way to provision a "universal image" that can roll out Windows across a range of different workstation hardware platforms? Can it roll out and/or provision images for OS X machines?

Monitoring and Alerts: Some DTMS can monitor workstations for issues such as rapidly filling-up hard drives or virus/malware attacks, alerting the help desk automatically. 

Device Control/Security: Do you work in a high security environment? Is there a need to disable USB ports, or disable specific types of USB devices (such as storage devices)?

Keep in mind...not every company will need all of the above features.

DETERMINE FEATURE PARITY:
Once you know what features are important to you, evaluate whether a potential DTMS provides full parity of those critical features across both PC and Mac platforms. 

If a DTMS doesn't provide full parity, you need to decide whether or not this is something you can live with. For example, your shop may not need to reimage Mac workstations very often, so a DTMS that only provides PC imaging may work out just fine for you.

ASK HARD QUESTIONS:
OK, now that you know what features are important to you, and across which platforms, make up a questionnaire asking potential DTMS vendors which features and platforms they support. Score the resulting responses and pick the top 2 or 3 scoring vendors.

HAVE A "BAKE-OFF":
Design a live evaluation for each vendor in which they have to install an evaluation copy of the software and prove to you that they can handle the tasks set out in the questionnaire. Some typical challenges:
  • Discover 80% of the workstations on our production network.
  • Demonstrate remote installation of your management agent on both a Windows 7 PC and a Mac OS X workstation.
  • Roll out [a sample application] to both a PC and a Mac.
  • Provide software inventory for both a PC and Mac.
The exact challenges set forth will depend upon what tasks you've identified as being especially important to your end user community.

Typically, discovery-type tasks will be run against the production network after hours, while demonstrations of remote agent installation, application rollout, and detailed software inventory are generally limited to either a set of workstations in a lab environment or a tightly defined set of live workstations that are not production critical.

Once you've seen the products in action, you'll be able to make an informed decision, rather than an uninformed guess, about which DTMS will suit your environment the best.



Email Is Not Secure.

8/15/2015

My business partner who handles our security practice constantly reminds our staff and our customers: Email is Not Secure.  

This means that confidential business information, especially including passwords and/or account login information, should never be transmitted via regular email. Unfortunately, I frequently observe clients sending passwords and similar security-related information via email. I've lost track of how many times a customer has emailed me a password for some critical system, only to be surprised by a phone call from me telling them to change that password right now and call me back to let me know what it is. 

It's not just criminals you need to be aware of. Since the Edward Snowden affair, a number of U.S. government-run surveillance programs have come to light. Today, the New York Times ran an article alleging that the National Security Agency worked closely with AT&T to spy on internet traffic:

http://www.nytimes.com/2015/08/16/us/politics/att-helped-nsa-spy-on-an-array-of-internet-traffic.html

Am I surprised? Not particularly. In terms of national security--real or perceived--whatever the NSA really wants access to, the NSA is going to get, and it's foolish to think otherwise. What I found more interesting in the article is the suggestion that AT&T provided internet data from its peering connections...meaning anything that crossed an AT&T internet junction to get to another destination, even if it was transmitted via a non-AT&T ISP line, was potentially inspected by the NSA.

For personal email, I happen to use Gmail extensively, for its cost (free), uptime (excellent), and connectivity to other applications and websites (superb). However, I don't utilize Gmail for any type of confidential or secure transaction, and I'm pretty good at ignoring ads based on whatever I happen to be emailing someone about.

If the NSA (or Google) really wants to know the details of who I'm meeting for dinner, or what my spouse has asked me to pick up at the grocery store on the way home from work, they are quite welcome to the boring details of my life. This is somewhat akin to the "transparent life" movement; don't do anything online that you wouldn't want a total stranger to read about on Facebook.

Some of my acquaintances, however, feel very strongly that they want to maintain their privacy, even though their emails are no less innocent than mine. They've turned to more secure email systems such as SwissMail or ProtonMail (both based in Switzerland) or HushMail (based in Canada). 

There's an intriguing Forbes article about ProtonMail here:

http://www.forbes.com/sites/hollieslade/2014/05/19/the-only-email-system-the-nsa-cant-access/

A number of other secure email alternatives can be found here, many of which I've never heard of: 

http://www.techspot.com/article/896-secure-email-and-cloud-storage-services/

There are a number of systems to improve corporate email security. Most of them shunt marked-as-sensitive outgoing messages into a separate encrypted mail system which requires the recipients to create an account and log in to receive their email. The shunted outbound message is replaced with a plain text message indicating that a secure email is waiting for the recipient on thus-and-such a URL. 

Sometimes this capability is built into existing systems. One of the lesser-known features of Barracuda Networks' anti-spam appliances is the ability to encrypt inbound and outbound emails using a system similar to what's described above. Encryption can be done automatically based on a number of message attributes, including sending or receiving domain or email address, or manually by inserting a specific keyword in the subject line (e.g., "[encrypted]"). 

Sometimes it's not the contents of the email that are sensitive, it's the attachments. If you have critical/proprietary/secret business documents, don't send them via regular email. Use a third party system such as Box.com. For a more complete list of similar systems, use your favorite search engine to look for "secure file transfer". 

The other benefit of using a secure file transfer system is that it will reduce the size of your company's email database, and make your email administrators very happy. In nearly every client company, I've watched users routinely swap huge files via email not just with outsiders, but also internally. If you've ever wondered why your email system keeps taking up more and more disk space, this is one of the primary reasons.

There are also file sharing mechanisms that aren't necessarily marketed as "secure", but which are sold based on convenience. Most of these are probably secure "enough" for routine business files. These are typically sold as file synchronization products, which keep your files in the cloud and synchronize them to all of your various devices. However, they also allow sharing of your files with an outside person via an easy-to-copy-and-paste link. Box.com is one such system, as are Novell Filr, Barracuda CudaFile, SugarSync, Dropbox, and Code 42's CrashPlan. (Note: This is not intended to be a comprehensive list or an implicit endorsement; these are simply the products with which I'm most familiar.)

The point of all of this? Be aware that email is not secure, and treat it accordingly. If you have something sensitive to transmit, transmit it via a secure system...which will generally NOT be your personal or corporate email.


[Disclosure: My employer resells and/or utilizes Barracuda, Code 42, and Novell products and services. Neither I nor my employer have received compensation of any kind from any of the companies mentioned in this article, nor have any of those companies requested inclusion in this article.]



Repairing Active Directory

3/16/2015

This is perhaps a slightly "techier" entry than you're used to seeing from me. It's OK. People around me often forget that as an IT consultant, I still have plenty of hands-on work. Today I'm going to talk about a topic which touches nearly every aspect of my work these days: Active Directory Health.

Microsoft Active Directory is a wonderful thing...except when it's either abused or neglected, or tinkered with too much. Then it turns into a wounded entity which can drag down or halt your organization's productivity.

I spend a lot of time cleaning up Active Directory for new clients. The methodology is surprisingly simple, so much so that many clients simply don't believe me. There's a limited number of attributes to consider and/or check.


Number and placement of Domain Controllers

This is a topic about which much has been written. Let's stick to the basics. At a minimum, each forest/domain should have two domain controllers (DCs) for fault tolerance. It is my personal preference that one of these be physical. The other(s) can be virtual.

There are two reasons for my wanting a physical DC, both of which reflect my old-school tendencies regarding disaster recovery and business continuity: DNS Availability and Time Sync.

In terms of DNS Availability, if my hypervisor cluster goes down or is unavailable for any reason, having working DNS available (on a physical DC) makes troubleshooting much easier than if I have to stop and remember (or look up...if I can) IP addresses for my hypervisor cluster and other machines.


Time Synchronization

Active Directory utilizes Kerberos as an authentication mechanism. Kerberos is notoriously picky about time sync: if two devices drift apart by more than the allowed clock skew (five minutes by default), Kerberos will frequently throw authentication errors.

It has been my experience that virtual machines have a tendency to not be good at tracking time consistently. While it's possible to point a virtual DC responsible for time to an external NTP time source (which, to be fair, has worked for some of our clients), a physical DC provides a hardware-based reference clock for Active Directory.  

The physical DC should receive NTP time sync from a highly reliable source. I don't recommend picking a naval observatory or government time server off of a list. Those systems don't provide a guaranteed NTP signal to outside systems. Instead, use the public time server pool at pool.ntp.org, or a hardware based system such as a GPS time sync appliance.  

The time-syncing DC should be the holder of the FSMO (Flexible Single Master Operations) "PDC Emulator" role, which is responsible for providing network time to Active Directory.
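
On the DC holding that role, pointing the Windows Time service at an external pool takes only a couple of commands. A minimal sketch using the public NTP pool (substitute your preferred source or GPS appliance):

  rem Point the PDC Emulator at external NTP servers and advertise it as a reliable time source.
  w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

  rem Restart the time service and confirm that synchronization is working.
  net stop w32time
  net start w32time
  w32tm /query /status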


Network Communication

The #1 barrier to a healthy Active Directory forest/domain is bad or intermittent network communication to and from domain controllers. If member servers can't communicate with a primary and secondary DC, AD will not be able to maintain a synchronized directory database. (This goes double for DCs; all DCs must be able to communicate with all other DCs.)


DNS Server Configuration

It's difficult to believe, but pointing your servers and workstations at the wrong DCs for DNS can cause network issues. I see this come into play every time we need to replace domain controllers for a client. Nobody ever remembers to go through every device and change out the DNS servers being used. This includes not just servers, but workstations (via DHCP, GPO, or static configuration), switches, routers, appliances, and firewalls.

Domain Controllers should NOT refer to themselves for DNS. A DC with 127.0.0.1 at the top of its DNS server list may appear to lock up for 5-10 minutes during reboots as various dependent services try to communicate with a DNS service that hasn't started up yet. Each DC should have another DC as the first server in its DNS list. The loopback entry, if you really want to put it in, should be at the very bottom of the list. 
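
A quick way to audit and correct this on each DC (the "Ethernet" interface name and the 10.0.0.11 partner DC address below are placeholders for your own environment):

  rem Review the "DNS Servers" entries for each adapter.
  ipconfig /all

  rem Point this DC at a partner DC first; add the loopback entry last, if you really want it.
  netsh interface ipv4 set dnsservers name="Ethernet" static 10.0.0.11 primary
  netsh interface ipv4 add dnsservers name="Ethernet" 127.0.0.1 index=2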


Replication

In situations where DCs have been added and removed over a period of time, the default "automatic" replication connections may not be sufficient. Ensure that every satellite site has a working replication connection to your primary site.
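
repadmin is the quickest way I know to verify this. Two read-only checks worth running from any DC (the output file name is arbitrary):

  rem Summarize replication health -- largest deltas and failure counts -- across all DCs.
  repadmin /replsummary

  rem List every replication partner and the result of its last sync attempt, saved for review.
  repadmin /showrepl * /csv > replication.csv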


Running DCDIAG

I've lost track of how many times I've walked into a client with Active Directory issues, only to find they've never tried running DCDIAG.  The utility is there for a reason, folks. Learn to love DCDIAG; in its basic form, with no switches, it's read-only and won't hurt anything, I promise.
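
A few of the invocations I reach for most often, all read-only (the log file name is just a placeholder):

  rem Basic health check of the local DC.
  dcdiag

  rem Verbose, comprehensive run against every DC in the enterprise, saved to a log for later review.
  dcdiag /v /c /e /f:dcdiag.log

  rem DNS-specific tests, which catch a surprising number of AD problems.
  dcdiag /test:dns /v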

Periodically run DCDIAG and check that all tests pass. I'm not terribly concerned about the tests that check log files failing; those tests only indicate that one or more warnings or errors were found in a given log file. You should still check the log file errors listed as examples, which brings me to...


Browsing Event Logs

There's no substitute for checking event logs on every server (and/or workstation) involved in a persistent problem situation. This is pretty basic research.

There are three log files that I always check on each DC and on each involved server to get a feel for what's going on with a given network. In order, they are: System, Security, and Application. 

The System log tells me how the server itself is feeling. If a server is having health issues which might affect AD, those issues will show up here first.

The Security log tells me if authentication is working or not working.

The Application log tells me which apps are complaining and why. I look at this VERY closely on DCs to check if someone is trying to use a DC as an applications server. Just because a DC *can* run applications, that doesn't mean it *should*. Let a DC be Just A DC. Don't try to make it a file or print server, don't run databases on it, don't make it a vCenter server, just...don't. DCs should run Active Directory and related authentication services such as LDAP or RADIUS, DNS, and DHCP. The fewer services on a set of DCs, the higher the probability of a stable Active Directory network.
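
When I'm bouncing between several servers, pulling the most recent entries from the command line is faster than opening Event Viewer on each box. A small sketch using the built-in wevtutil (the entry count is arbitrary):

  rem Dump the 25 most recent System, Security, and Application events as plain text.
  wevtutil qe System /c:25 /rd:true /f:text
  wevtutil qe Security /c:25 /rd:true /f:text
  wevtutil qe Application /c:25 /rd:true /f:text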

With any of the event logs, I usually don't have to browse back more than a couple of months or so to get a feel for the server's pattern of recurring errors and/or warnings. In networks with Active Directory issues, there will always be a pattern.

For example, a lot of Kerberos errors usually indicate a problem with timesync, communication, or both. In most cases, plugging the resulting error codes into your favorite search engine will reveal a number of potential avenues for further investigation.


None of the above tools is what I'd consider "rocket science." This is all pretty basic, simple stuff. When you use these tools consistently and together, Active Directory cleanup can be much easier than most people think.


Increasing Customer Satisfaction through Best Practices.

2/15/2015

As the information technology industry migrates from on-premises to cloud and hybrid applications, the number of potential variables affecting a successful product installation is increasing rapidly. The challenge facing tech companies is how to increase customer satisfaction in order to hold on to existing customers as well as earn new accounts.

Customer satisfaction is something that can’t be bought, but must be earned. It’s easy to get a new customer to try your new product or service, but if you can’t keep them happy, they’ll soon leave you to try whatever competitor is shouting the loudest.

If you were a tech company, and you knew of a best practice that would prevent 40%+ of your support hotline calls from being made, would you follow it?  Of course you would. Would you tell your customers to follow it?

Ah, that’s the issue. Not IF you would tell your customers, but HOW to tell them. And therein lies the concept of Best Practices Consistency.

Some years ago, I was appointed to an end user task force by a major technology company who wanted to improve end user satisfaction. As part of this task force, the manufacturer analyzed their technical support call center data to find out what were the most common problems and solutions.

The answer was simple, and it blew everyone’s mind: over 40% of ALL tech support calls were resolved by asking the customer to apply all outstanding patches and fixes, reboot, and attempt to recreate the original problem. 

Obviously, this was a best practice that needed to be communicated. But how?

Customer satisfaction with technical products and services depends upon a “three-legged stool” of Training, Professional Services, and Support. It could be argued that Development adds a fourth leg--as we’ll see later--but I’m concentrating on the customer-facing departments of a typical product/service manufacturer.

Each of these functional areas feeds into and is fed by the other two areas. Failure or absence of any of these functions leads to increased stress and reduced customer satisfaction in the other two areas.

In this case, the manufacturer set the best practice of “Patch, Reboot, Reproduce” at the top of their technical support engineers’ call checklists, right after establishing customer contact information and product type/version.

Next, the manufacturer had its instructors teach this best practice in end user technical training classes. “After installing the product, make sure you’ve patched and rebooted. If you run into any problems after installation, always check and apply new patches before calling Technical Support.”

Finally, the manufacturer’s professional service consultants were also instructed to discuss and demonstrate this best practice with every client during each consulting engagement.

Unsurprisingly, while the number of technical support calls didn’t decline immediately, the average length of time for such calls dropped steadily. Eventually, as customers internalized the best practice, the tech support call volumes started to drop.

During this period, customer satisfaction with the manufacturer’s products started to rise.

I tell this story not to teach the best practice itself, but to illustrate that creating and teaching best practices leads to wonderful things for all parties concerned.

  • Customers who are taught product installation best practices feel more empowered and less frustrated than those who are just handed a manual and installation media. 
  • Professional services consultants use best practices to not just empower clients, but to establish a rapport and provide added value. 
  • Technical support engineers who speak with customers who are following best practices will have an easier time troubleshooting issues, leading to reduced frustration on everyone’s part and shorter call times.

Establishing Best Practice Consistency begins with defining a list of best practices for your technical product or service. Some best practices might actually be considered product prerequisites, but should still be included in the list to be taught to customers.

The rule I follow: if a specific practice will prevent product malfunction, customer frustration, or a technical support call, it belongs on the Best Practice List.

Once you’ve established a list of best practices, circulate it internally between your company’s Development, Support, Training, and Professional Services groups. All of these groups must have input. Keep in mind that a product engineer may make assumptions about a customer’s technical environment which are different from those of a trainer, support engineer, or consultant.

With an agreed-upon list in hand, it’s time to insert that content in a number of places:

  • Every professional services consultant should have that list ready to show and hand to clients.
  • Every training course should include a copy of that list as part of the curriculum. (Extra points if the training class guides students through each of the best practices!)
  • The best practices checklist should be part of technical support’s basic call checklist.
  • Product engineers should have copies of the checklist to refer to as they make changes to code, so that if their base assumptions about the product environment change, they can notify the other teams.

How does this impact the customer? Let’s think about that for a moment using a sample best practice.

Jane has been tasked with implementing a new complex application from XYZ Company. As part of the implementation, she’s sent off to a training class, after which she’ll meet with one of XYZ’s consultants to begin installing the application.

During training class, she’s warned “Make sure that you have at least one physical server time-synced to a reliable external time source.” The class curriculum guides her through checking time synchronization, further making her aware of the need for the best practice.

The next week, when XYZ’s consultant comes in to help Jane install the product, Jane is asked by the consultant if she’s checked her network time sync. Jane remembers this was discussed in class. Since consistency of information is generally perceived as familiar (and therefore reassuring), Jane probably feels an increased level of comfort. Not only was what she was taught a common practice, but her consultant knows about it, too. This gives them a common concept and/or vocabulary to assist in building an effective professional rapport during the installation.

Several months after the product’s been successfully installed, Jane has a problem and calls tech support. They ask her if her network time sync is in order. Again, the consistency of information is reassuring to Jane, putting her more at ease. Even if she’s impatient at that point (“Yes, yes, time is fully synced!”), she knows that the tech support engineer is working through the same list that she was taught, the same list that was reinforced by the company’s consultant. The support engineer’s job is easier, because Jane already has a time-synchronized network, so they can move on to more advanced potential causes of problems.

This entire situation applies to not just best practices, but also to technical vocabulary. Establishing a set of common concepts (best practices) using a consistent vocabulary will also help facilitate better communications between customer and manufacturer.

However, when the dissemination of best practices is applied inconsistently (or not at all), customer satisfaction will decrease as support difficulties (or perceptions of same) increase. For example, if a critical best practice isn’t taught during training classes, then the customer will be (probably unpleasantly) surprised during the first interaction with a consultant or technical support. “But nobody ever told me that!” is a terrible way to begin a support interaction.



Defusing Verbal Confrontation

1/30/2015

A couple of days ago, I learned that a favorite author of mine, Suzette Haden Elgin, died. Suzette was a linguist who branched out into science-fiction fantasy writing, and into non-fiction writing. She wrote one of my very favorite business books, The Gentle Art of Verbal Self Defense. 

The link in the previous paragraph leads to a fan page explaining the concepts of verbal self defense, but I strongly recommend you search online or at your local bookstore for a used copy of the book, which has been out of print for a few years.

The basics of "GAVSD", as it's often abbreviated by fans, is best summed up as follows:
  1. Understand that you're being verbally attacked,
  2. Identify the type of attack.
  3. Make the defense fit the attack.
  4. Be ready to follow through.

I can't summarize the entire book in a single blog posting, but I can certainly give an example or two which may help convince you to rush out in a buying frenzy to grab a used copy of the book to read for yourself.

There are structures identified in the book as "Verbal Attack Patterns".  Each VAP has a specific set of defenses.

The point of GAVSD is not to escalate. The point is not to win. The point is to turn an argument (or an attack leading to an argument) into a productive and useful discussion.

Verbal attacks commonly contain bait and presuppositions. You must learn to ignore the bait, and instead respond directly to the presupposition. One example I commonly use when counseling my engineers on how to handle upset clients:

This is a common verbal attack I've experienced from clients who were highly upset during tense technical situations: "If you REALLY cared about fixing this mess, it would be DONE by now!"

The first part of the attack--the presupposition--is "You don't care about fixing this".
The bait is, "You're not working fast enough."


How on earth can a person possibly respond to this without getting into a debate? 

It's not useful to say "I'm working as quickly as I can," because that will just lead to a response of "No, you're not!"  The same applies to an attempted answer of "Well, of course I care about fixing this".  Both of these attempted answers take the bait and ignore the presupposition.

Here's a more useful response: "When did you first begin to feel I didn't care about fixing this?"

I promise you, this answer will stop the verbal attacker in their tracks, because it rips away their presupposition and ignores the bait. This answer indirectly acknowledges the situation, and simply asks for clarification.

And making someone stop and think--even for just a few seconds--about how to respond to a calmly worded answer such as the one above is all you need to turn a potential argument into a thoughtful discussion.

Even if the attacker says something along the lines of "The minute you walked through the door!", it's easy to defuse the emotional content by asking some basic questions such as "Why did you feel that way?", or "What did I say to make you feel that way?" Avoid yes-no questions during this phase of the exchange, because the goal is to keep the attacker discussing what's really going on inside of their head so you can both logically and calmly address those feelings and start working on the situation together.

Obviously, I can't (and won't) try to rehash the whole book here. Suffice it to say that I make a point of re-reading The Gentle Art of Verbal Self Defense at least once a year, or whenever I run into a verbal exchange that escalates, to remind myself that there is always a better way to handle negative interactions than escalation and Mutually Assured Destruction.

During her life, Suzette wrote several specialized versions of GAVSD: for the workplace, for education, for the military. The primary difference between the various versions is the set of situations used for examples. Despite this, a copy of the original title is easily applicable to any personal or professional situation, and should be easy to locate.

Then there's how I learned to differentiate between opinion and fact...but that's a topic for a different post. 