
Updating Active Directory from a CSV

Scenario:

You've been asked to populate everyone's Active Directory job title. The payroll system is correct, and payroll can export a list of usernames and their correct job titles. All you need to do is get that data into AD.

Solution:

You could do this manually of course, but that's no fun and a waste of time. This is one of those scenarios where you'll hopefully think 'PowerShell can do this!' and wonder how. That's what I did anyway, so I set out to make it work.

Here’s a fake example of the data I was working with, in a file called fake.csv:

EMPLOYEENAME,JOBTITLE
AFOWLER,IT OPERATIONS MANAGER
RSOLE,JANITOR

Tip: If you open a csv file in Excel it is a bit easier to read.

From this data, we want to match the EMPLOYEENAME to the correct AD account, then update the Job Title field from the JOBTITLE entry of the csv file.

A script that will do this is:

Import-Module ActiveDirectory
$data = Import-Csv -Path C:\fake.csv
foreach ($user in $data) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.EMPLOYEENAME)'" | Set-ADUser -Replace @{title = "$($user.JOBTITLE)"}
}

So, what's happening here? It can take a bit to get your head around, especially if you're not used to programming (like me), so I'll try to explain it:

Import-Module ActiveDirectory
This imports the ActiveDirectory module so the Get-ADUser and Set-ADUser commands work. If you can't load the module, install RSAT (Remote Server Administration Tools), which includes the AD module.

$data = Import-Csv -Path C:\fake.csv
This reads the entire contents of the fake.csv file into the $data variable, as one object per row.
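
Before touching AD, you can sanity-check what was imported by dumping the variable to the console (purely an optional check, not part of the script):

$data | Format-Table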

foreach ($user in $data) {
This says 'for each line of information from the $data variable (which is the csv file), map it to $user and do the following'.

Get-ADUser -Filter "SamAccountName -eq '$($user.EMPLOYEENAME)'" | Set-ADUser -Replace @{title = "$($user.JOBTITLE)"}
This gets any AD user whose SamAccountName matches the EMPLOYEENAME column of the $user variable (which is the current line of the csv at the time of processing). Then, with the pipe |, it uses the result to set that AD user's title field (where the job title goes) to the JOBTITLE part of our $user variable. With our example data, this command will run twice, because there are two lines for 'foreach' to process.

}
This closes the script block that 'foreach' runs for each line.
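
If you'd like a dry run before writing anything to AD, Set-ADUser supports the standard -WhatIf switch, so the same loop can report what it would change without actually changing it. A minimal sketch:

foreach ($user in $data) {
    Get-ADUser -Filter "SamAccountName -eq '$($user.EMPLOYEENAME)'" | Set-ADUser -Replace @{title = "$($user.JOBTITLE)"} -WhatIf
}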

 

I hope that explains it enough so you’re able to manipulate the script to your own requirements.

 

KMS and MAK Licensing

Microsoft licensing is one of the things that puts fear into most people who have dipped their toes into it. Understanding how to stay compliant with Microsoft, and how to make sure the company's money is being spent properly, can be daunting. From an admin's side this is often not a concern, as it's not part of their job, but at least understanding the implications of installing 10 different SQL servers in the environment is a necessity. One of the more fundamental models Microsoft now uses with Windows Server, Windows Client (e.g. Windows 7) and Office is activating the products with a key. So, how do you do it if you're on a Volume Licensing Agreement?

What are KMS and MAKs?

With a Volume License Agreement with Microsoft, you are normally given two types of keys: the Multiple Activation Key (MAK) and the Key Management Services (KMS) key. The MAK will normally have an activation count, while the KMS key does not. Simply put, a MAK is a key that registers directly back to Microsoft, with a certain number of allowed activations. KMS, on the other hand, lets all your clients use a generic key to talk to a KMS server on premises, and that centralised server talks back to Microsoft. The MAK side of things is pretty straightforward: you put a key in on each client, it phones home and then either activates or fails. This works, but isn't the way you should do things in a large environment. KMS gives you that automation.
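
For illustration, manual MAK activation on a client looks roughly like this from an elevated command prompt (the key below is just a placeholder pattern, substitute your real MAK):

slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr /ato

The first command installs the key, and the second forces the client to attempt online activation.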

KMS sounds great, how do I set that up?

I've written about this before in How To Enable Office 2013 KMS Host and How to add your KMS keys for Windows 8 and Server 2012, so I'll just clarify how the process works. There are two types of KMS keys: client and server. The client key is normally the default key installed when using a volume license version of software, and these keys are publicly available; here's a list of keys on Technet. The KMS server needs to be configured as per my "How to" articles. A client with a KMS client key registered goes through a different activation process: instead of phoning home to Microsoft, it checks DNS records to find your KMS server. The KMS server on premises gets the request, approves it and activates the client. Technet as per usual has great documentation covering all of this, available here.

I’ve set it up and it’s not working – help!

OK, don't get too worried here. First, you need to have enough clients trying to register against your KMS server before it will activate any of them. For Windows client operating systems, you'll need 25. Yes, that's a lot. For Windows Server and Microsoft Office, you'll only need 5.

As an example, let's say you've installed Microsoft Visio 2013 and it's not activated (if you're unsure whether it's activated, in Visio go to File > Account; on the right hand side it will tell you if the product is activated or not). Start by checking that you have the correct KMS key entered; you can re-enter it in the product.
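
You can also check the licence state from the command line instead of the Visio UI. A quick check, assuming the default 32-bit Office 2013 install path:

cd "C:\Program Files (x86)\Microsoft Office\Office15"
cscript OSPP.VBS /dstatus

This dumps the installed licences and their status, including whether the product is using a KMS client key.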

You can force the activation of Office 2013 by going to the folder where it’s installed (by default it’s C:\Program Files (x86)\Microsoft Office\Office15) and running the command:

cscript OSPP.VBS /act

This will either tell you that you’ve successfully activated your product, or give an error. One of the most common errors is:

ERROR CODE: 0xC004F038
ERROR DESCRIPTION: The Software Licensing Service reported that the product could not be activated. The count reported by your Key Management Service (KMS) is insufficient. Please contact your system administrator.

Nice description. So, next up you'll need to check the count reported on your KMS server. Forgotten which server your KMS server is? You might be able to find out with the command:

slmgr /dlv

which will give you an informative window telling you the KMS machine name from DNS. Or you can check DNS for the entry from the DNS console, under DNS Server > Forward Lookup Zones > your internal domain name > _tcp > _VLMCS record.
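
You can also query DNS for that service record directly with a one-liner, with mydomain.local standing in for your internal domain name:

nslookup -type=srv _vlmcs._tcp.mydomain.local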

On your KMS server, you can display the client count so far, to see if it’s hit the magic 5 with this command:

cscript slmgr.vbs /dlv 2E28138A-847F-42BC-9752-61B03FFF33CD

The string on the end is the Office 2013 Activation ID. For Office 2010 it’s “bfe7a195-4f8f-4f0b-a622-cf13c7d16864”. You’ll see a lot of information, but the important part is this:

Key Management Service is enabled on this machine
Current count: 5
Listening on Port: 1688
DNS publishing enabled
KMS priority: Normal

In this example, I've just hit 5, so clients will now activate. If the count were less than 5, I'd still be getting the previous error on the clients.

If you want a general view of what licenses you have on the KMS server, you can run the Volume Activation Management Tool (VAMT), which is available as part of the Windows Assessment and Deployment Kit (ADK). Install instructions from Technet are available here. This tool won't go into great detail, but it will visually show you what products you have licensed, and it's good to have as an overview.

There are other scenarios and issues that can come up with KMS activation. On the KMS server, you can check the KMS event logs in Event Viewer under Applications and Services Logs > Key Management Service. On the client, there is a huge number of switches you can use with the slmgr.vbs script; run it without any switches to see them all.

Samsung Gear 2 Neo Review

The Samsung Galaxy S5 and Samsung Gear 2 (plus the Neo and Fit versions) were released in Q2 this year, with high expectations. The Galaxy series of phones is one of the best selling in the world, and a product update to the Gear smartwatch had many consumers eagerly awaiting the release. The Samsung Galaxy S5 is a decent update to the Galaxy line (I'll echo the phrase "evolution not revolution"), but I believe the Gear 2 still has a long way to go.

I've been playing with the combination of the Samsung Galaxy S5 mobile phone and Samsung Gear 2 Neo for the last few weeks. I really wanted to like these complementary devices; I had been waiting for weeks for them to arrive. Disappointingly, I'm not convinced about the usefulness of smartwatches, and I'll explain why.

First, to clarify for those wondering: there are three versions of the Samsung Gear 2. The Neo version is similar to the vanilla Gear 2, but is missing the camera. To avoid confusion, I'd suggest they call the Neo the Samsung Gear 2, and the Gear 2 the Gear 2 Cam… but I'm not in marketing, so maybe that didn't test well with focus groups. The third version is the Samsung Gear 2 Fit, which is a longer and skinnier version, missing the camera and IR sensor.

Feature      Samsung Gear 2      Samsung Gear 2 Neo   Samsung Gear 2 Fit
Camera       Yes                 No                   No
Screen       1.63-inch square    1.63-inch square     1.84-inch narrow
IR Sensor    Yes                 Yes                  No

To start with, the Neo has a strange clipping mechanism on the strap. It just pushes in, and actually works quite well, but it took me a moment to work out because it requires enough force to make me worried I was about to break something.

Once on my wrist, I found it to be very comfortable and sleek. It feels reasonably natural to wear, and the wrist strap doesn’t dig in. I was feeling good about this watch… until I turned it on.

There was a bit more of a process to get the watch up and running than I expected. On the Samsung Galaxy S5, I had to go into the Samsung Store (not the Google Play Store) to find the Gear Manager app. With all the apps Samsung already pre-installs, it was a bit annoying not to find it there too. I think this is because Samsung wants to get you into their own app store to buy all the extra applications and watch faces, or I could just be a bit cynical.

Being a watch, I started by wanting to find the nicest watch face possible. The default was a rather brightly coloured face, so I changed it to a much more sensible inbuilt time/date/applications display, with a very unexciting black background.


 

I thought this looked rather smart. I played around with the apps for a bit, checked the weather and came to the realisation that battery life was still a big issue, which meant this smartwatch was a backwards step in telling time.

There's the obvious annoyance of having to charge your watch every few days, rather than changing the batteries every few years (or never, if you've got a fancy kinetic watch). Putting that aside, the usability of a smartwatch that's trying really hard to preserve battery life is frustrating in itself.

The watch display on the Gear 2 Neo is off by default. Completely black. When you make a motion with your arm to look at the time, it usually detects the movement and turns on the display for you. That's great, but it takes half a second. If you have a normal watch, you're used to a half-second glance and you're on your way. With this watch, you're waiting for a half second that feels a lot longer while the display lights up. Sometimes it doesn't even work, and you'll have to press the button below the display with your other hand. You might as well have pulled your phone out of your pocket at this stage.

Your standard watch doesn't have a pedometer. This smartwatch does, but you have to turn it on and off; it doesn't just keep track continually. On the flip side, at night it will continually buzz or beep when anything happens on your phone, such as an email or notification. This can be turned off by enabling sleep mode, but again this seemed to be a manual function. I had a brief look and couldn't find a way to automate it (such as setting sleep mode to run from 10pm to 6am), which was another frustration.

To me, this is a watch that is the start of a good idea. Battery life needs to be improved vastly, and so does the flexibility around how you choose to use the watch. I ended up concluding that this didn't do as good a job of being a watch as my analog watch, and until that's fixed, the Samsung Gear 2's smart functions are icing on a stale cake.

Coping with Infinite Email

Automatic Deletion of Deleted Items with Retention Policies

Exchange 2010 and 2013 have an option called "Retention Policies". I'll base the below on what I see in Exchange 2010, but most if not all of it should apply to 2013 as well.

Retention Policies are useful if you need to keep your users' mailboxes clean, as well as to avoid a Deleted Items folder holding every single email an employee has received in their time with the company. You can work out what the company agrees can and can't be auto-deleted, and save a lot of money on storage for both live data and backups.

Retention Policies are made up of "Retention Policy Tags", and these tags "control the lifespan of messages in the mailbox", as one of the wizards you configure this with puts it. The Retention Policy is then targeted at the mailboxes you want these settings applied to.


It’s worth noting that a mailbox can only have one Retention Policy linked to it, so you need to plan overlapping settings accordingly.

So, what can a Retention Policy Tag do? You give it a "Tag Type", which is either a built-in folder in someone's mailbox (e.g. Deleted Items) or every other folder that isn't a built-in folder. Based on which folder the tag targets, you can either set an age limit for all items in that folder, or set the items to never age.


The age limit is a number of days. This number actually means something different depending on which Tag Type was targeted. For an email in the Deleted Items folder, it's based on the date the item was deleted, which is stamped on the item at the time of deletion. There are some caveats around that, so refer to this chart on TechNet, which lays out how the retention age is calculated.

There's also a Default Archive and Retention Policy (called the MRM Policy in Exchange 2013) that is applied, if archiving is enabled, to all mailboxes that have no other policy applied (remember, a mailbox can only have one). So if you have simple requirements, use this policy. For more complex requirements, you'll need multiple policies, and either manual management of mailboxes to apply the right policy, or a script that's run at regular intervals.
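
To show how tags, policies and mailboxes hang together, here's a minimal sketch from the Exchange Management Shell; the tag name, policy name and mailbox are invented for the example:

# Tag: delete items in Deleted Items after 60 days, but allow recovery
New-RetentionPolicyTag "Deleted Items - 60 Days" -Type DeletedItems -AgeLimitForRetention 60 -RetentionAction DeleteAndAllowRecovery

# Policy: bundle the tag(s) into a policy
New-RetentionPolicy "Standard Cleanup" -RetentionPolicyTagLinks "Deleted Items - 60 Days"

# Target the policy at a mailbox (remember: one policy per mailbox)
Set-Mailbox -Identity "guyinaccounts" -RetentionPolicy "Standard Cleanup"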

Once you're set up, the policies are enforced by the Managed Folder Assistant. This runs on an Exchange server and is controlled by the Microsoft Exchange Mailbox Assistants service. It used to be schedule based (Exchange 2010 pre-SP1), but from SP1 onward, and in Exchange 2013, it's an always-running, throttled process. It'll do its work when it's the 'right time', based on several criteria and checks. If you want to know the specifics, read this from TechNet.

To check that the policy has applied, you can go to the properties of the folder in question in the mailbox (for me it's Deleted Items) and you'll see the policy listed.


You can also look at individual emails to see both the retention policy applied and when the email will expire; this is visible from Outlook 2010 in my case.


If you want to process a particular mailbox right now because you’ve just configured something, you can use the PowerShell command:

Start-ManagedFolderAssistant -Identity "guyinaccounts"

If you want to do more than a single mailbox, you'll need to pipe mailboxes into the command. Again, more details here on TechNet. The Event Viewer on your Exchange server should tell you how it went, but from some of the information I've read, a Retention Policy that's only just been targeted at a mailbox can take up to 48 hours to be recognised and start processing. For me it took more than a few hours before I could see the policies on my emails.
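
For example, this one-liner would kick off the assistant for every mailbox (which can be heavy-handed in a large environment, so use with care):

Get-Mailbox -ResultSize Unlimited | Start-ManagedFolderAssistant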

One last point: Exchange only starts tagging emails when you first create and apply a policy. For my example, I set the Deleted Items tag to 60 days, Delete and Allow Recovery. This caused all existing deleted items, going back a few years, to be marked for deletion 60 days from when I applied the policy. It won't go back and instantly delete your older items.

Who Will Be There For The Long Run?

You may have noticed that the theme on my blog has changed. The theme I was using was a light version of a pro product, which I didn’t buy. I was looking at changing some small settings and discovered that the creator of the theme had stopped supporting it a few months ago.

Knowing that I’d probably have issues in the future, I decided to find a different theme. It had to work with the content I already had and look pleasing enough to me. I also didn’t want a v1.0 theme, because that gives me no assurance that the creator has any interest in updating it when future WordPress versions are released.

I realised that this same methodology is how I approach most pieces of software. Ideally it needs to have been around for a little while, to prove the creators can deliver and keep their product updated. It needs to have good support, either from the community or the creators. It needs to integrate well with existing systems, but also not lock you into the product itself.

After working in I.T. for a while, I've found this is instinctively how I think. A big factor is learning from when things go wrong, whether implementations, upgrades or changeovers, and considering what decisions should have been made early on to prevent it.

This in itself causes issues, because how can a software solution get customers if everyone wants something that's already proven? Companies will often take risks if option B is substantially cheaper than option A, or if the vendor of the software has proven themselves with other solutions… but generally it's safer to go with the proven solution.

Maybe this methodology is changing with the rapid release cycle we're now seeing globally. It'll probably cause more issues due to less testing time and more updates, which instinctively is the opposite of what we've all learnt to do in IT. This applies to the cloud too: you're putting your faith in a third party, but you have no visibility or control over changes. Without that visibility, how do you know everything of yours will work after the fact? Or will you be left trying to find another cloud vendor that works with your existing setup?

So yes, I have a new theme. It works, and it's free. It's newer than v1.0, so at least there's some evidence that it will be maintained, but the creators may stop at any time. I'm not giving them any money so I can't complain, but this is still the fundamental basis of my decision process. Luckily, it's quite easy to change themes because of the well designed, plug-and-play style of WordPress themes. This is what I expect from any software vendor (but rarely get), and anything that falls short of it increases the risk of pain: it may not be now, but chances are it will come.