Author: Adam Fowler

Integration fundamentals – What to Avoid

Hi,
An opinion piece here, so please poke holes and post criticisms below.
Lately I have been going through a lot of system changes at work. That is to say, more than normal, and most at the early stages. We’ve been stuck in a state of limbo, mainly because the several systems we want to upgrade or change all talk to each other in one way or another. I’ll first briefly outline one house of cards, and then move on to what should have been done better, generally speaking (or typing, as the case may be).

We are on Exchange 2007 and want to go to Exchange 2010. That’s not too difficult, you may think: build your whole new Exchange environment, move a few mailboxes over for testing, then do a mass mailbox migration over the weekend and everything’s great.

This would be true, if several other systems weren’t relying on Exchange 2007. Firstly, voicemail. Our phone system passes unanswered calls through to the Unified Messaging Exchange 2007 server, which means we need the same functionality in Exchange 2010. How do we even test this? We need to contact our PBX support, and pay for changes back and forth out of hours. It’s not something we can easily do without business impact. Then, the PBX has no official support for Exchange 2010, so if something doesn’t work or goes wrong we’re fairly stuck.

Then, we’ve got the same problem with faxing. It goes from our PBX via Unified Messaging. Both of these services are considered business critical.

At the same time, we want to change our PBX system. So we’ve got the above problems in reverse, but on top of that we use OCS 2007 R1, which also needs to be upgraded. So now we need to deploy a new PBX system and integrate it with a new Exchange environment, which in turn is integrated with Lync to replace OCS, and that talks to the phone system for both making/receiving calls and presence.
Now, because we want to change our PBX system we may also need to change our switch infrastructure: if we kept what we have and went with a provider such as Cisco, they would say they won’t support voice-quality issues if the switches aren’t theirs. Our switch infrastructure is up for renewal anyway.

I could go on about this with several other systems that are tied in, but hopefully the above is starting to paint a picture.

When integrating systems, think about how the OSI 7 layer model works. Refresher: each layer talks only to the layers directly above and below it, regardless of what implements them. This means that anything that gets changed in your network environment should keep working, as long as it meets the standards. You can swap a network card over, and everything above it will work exactly the same way as before (drivers pending). You can swap a centralised switch, and it will continue to pass packets of data around like the old switch did. Your application can talk to anything else on the network when anything below it gets swapped out. Hopefully that shows what I’m trying to say…
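The same principle applies to the software you integrate: code against the interface, not the implementation. A minimal Python sketch (all the names here are hypothetical, purely to illustrate the layering idea):

```python
# Toy illustration of layered swapping: the "application" only knows the
# transport interface, so either implementation below can be swapped in
# without the application changing - just like replacing a NIC or switch.

class Transport:
    def send(self, data: bytes) -> bytes:
        raise NotImplementedError

class OldSwitch(Transport):
    def send(self, data: bytes) -> bytes:
        return data  # passes the packet along unchanged

class NewSwitch(Transport):
    def send(self, data: bytes) -> bytes:
        return data  # different hardware, same contract

def application(transport: Transport) -> bytes:
    # The application layer never cares what sits underneath it.
    return transport.send(b"hello")
```

Swap `OldSwitch` for `NewSwitch` and `application` behaves identically, because both honour the same contract.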

Where possible, use standard protocols or single-supplier solutions. If you’ve got something that needs to send alerts out, go for simple SMTP emails. Everything supports it, and little to no work should be required when you have to change something. If a vendor won’t support standards – say, a SQL database on the latest version or the one before it – you should hear alarm bells ringing.
If you need two separately supplied systems to talk to each other, get each company to show proof they support the other, and will in the future. There’s no use, three years later, pointing out that company X once said it would work.
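To make the “simple SMTP alert” idea concrete, here’s a minimal sketch using Python’s standard library. The host and addresses are placeholders – the point is that only plain SMTP is assumed, so the mail server behind it can be swapped without touching the code:

```python
import smtplib
from email.message import EmailMessage

def build_alert(subject: str, body: str,
                sender: str = "alerts@example.com",      # placeholder address
                recipient: str = "helpdesk@example.com"  # placeholder address
                ) -> EmailMessage:
    # Build a standards-compliant message; any SMTP server can carry it.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

def send_alert(msg: EmailMessage, host: str = "mail.example.com") -> None:
    # Relies only on plain SMTP - swap the relay and nothing here changes.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(msg)
```

Replace Exchange with anything else that speaks SMTP and the alerting system never notices.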

This should be the case for any system implemented – think about the future and what would happen, and what might go wrong if you have to swap out any part of it.

A prediction: Personal Cloud Desktops in the next 5 years.

Hello,

Lying in bed last night, I had a revelation about where I can see us heading in the next few years. This mostly relies on better broadband (hello NBN!), but regardless I believe this is where we’re heading.

Firstly, if you use more than one PC or device, then you’ll know the frustration of having to either do multiple installs of the applications you use, or re-do settings. A good example of this is your browser’s favourites/bookmarks list. Sure, you can type in the websites, but it’s nice to have a full list to just choose from. To fill this void, services like Delicious (http://www.delicious.com/) popped up – your bookmarks in the cloud! Now it doesn’t matter where you are, you can access that same list.

Email went the same way – Outlook is nice to use, but it doesn’t help when you’re at work and want to check your personal emails. Again, the solution was to have your emails in the cloud and sync all your devices/PCs to that single point, or even just use a web interface and forget about using any other client.

Twitter is my third example. Personally, at home I use TweetDeck, at work I just use the webpage www.twitter.com, and on my iPhone it’s the official Twitter app. They all have different options and strengths/weaknesses, but there’s no consistency in how I read Twitter between all these things. For those of you who attended TechEd Australia and didn’t fall asleep in the keynote, this was one of the points Microsoft made about their future vision – consistency across all platforms.

At the moment, I’m at the stage where, whether I’m at work or even in the lounge at home, I can remote desktop to my main desktop with everything set up how I like it. I’d rather do that than install a bunch of apps yet again – I’m looking for that consistent user experience.

Anyway, I don’t see this consistency stopping with just the apps – I see it as being your whole environment/desktop. I believe this will happen one of two ways:

1. All your settings/apps/etc will be synced to the cloud on a single account. You sign in somewhere, and all these settings get pulled down. It’s almost like a roaming profile, but with a much wider reach. This may be the first step before #2, because there are many limitations with this.

2. Your own virtual desktop in the cloud. Instead of just syncing bits and pieces, your whole desktop can either be hosted in the cloud, or synced there. Differential synchronisation would carry any changes back and forth. Think of it the same way you do email, but on a larger scale. You sit in front of a new PC, and either download or plug in your desktop – and everything’s there just how you left it. This is already in place in some corporate environments, but as far as I know it’s only doing a remote session to a server. This is the next step, where you have the option of still using your local powerful PC because you’ve got a copy of the desktop on it. If that isn’t needed, then you can still just remotely control your little space in the cloud (again, I’m hoping you see the parallels with email here).
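To illustrate what I mean by differential synchronisation, here’s a toy Python sketch using the standard library’s difflib: rather than uploading the whole desktop, only the changed runs are computed and replayed against the cloud copy. (Purely illustrative – real sync engines work at the block or file level, not on lists of settings lines.)

```python
import difflib

# Toy differential sync: compute only the changed runs between the old
# and new state, then replay them on the remote copy.

def make_delta(old: list[str], new: list[str]):
    sm = difflib.SequenceMatcher(a=old, b=new)
    # Keep only the operations that actually change something.
    return [(tag, i1, i2, new[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

def apply_delta(old: list[str], delta) -> list[str]:
    result, cursor = [], 0
    for tag, i1, i2, lines in delta:
        result.extend(old[cursor:i1])  # copy the unchanged run
        result.extend(lines)           # apply the changed lines
        cursor = i2
    result.extend(old[cursor:])
    return result
```

Only the delta crosses the wire, which is why the model works even when the full desktop would be far too big to re-upload every time.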

So there you have it. My prediction. I want my personal desktop in the cloud, but also don’t want to be limited by latency or bandwidth issues.

TL;DR: a personalised, virtualised desktop that downloads locally on demand, with differential synchronisation for changes.

SMS Dead in 5 years, Email Dead in 10?!

27/09/11 Update: Tommy Tudehope has written an article on his thoughts here: http://www.abc.net.au/unleashed/2913064.html

Today I was listening to the current affairs show ‘Hack’, which airs daily on Triple J at 5:30. One of the topics today came from Social Media Consultant Tommy Tudehope (on Twitter at @TommyTudehope), who was predicting that SMS will be dead in 5 years, and email dead in 10.

For the audio of the broadcast: http://mpegmedia.abc.net.au/triplej/hack/daily/hack_wed_2011_09_21.mp3

Webpage of Triple J’s Hack: http://www.abc.net.au/triplej/hack/podcast/

Tommy’s claims:

“… People think SMS is one to one, of course it is, but is it really private, who has access to it, and are you always relying on your service provider Telstra or Optus to connect you through.”

“…A lot of businesses have trouble working/collaborating with other businesses so sending mass emails to different people who you’re working with. Now with Google Plus, you can have a single group of people in once circle and share that information. You might have an announcement that you got a new CEO.”

“This is why I think it will (die) because instead of sending a corporate email to 5000 employees, why don’t you just have a group of people on your Google Plus account with in that circle just say all your employees and make the announcement ‘Hey we’ve got a new CEO’ saves the cost of the email (which is obviously minimal) but it’s also a more direct and informal communication.”

This is my response to Tommy’s predictions.

I could start and end by citing the example of the fax machine. Invented in 1846, and in the mid-1970s the first fax connected to a phone line via modem appeared. They aren’t as common these days, but many businesses and homes still have them. Why hasn’t email killed off the fax machine yet?

I do agree that internet-based free services will continue to grow and evolve, but just like the whole PC vs tablet argument – they’re a companion, not a replacement.

Now, I agree with the notion that SMS isn’t private, but it’s no more private than ANY other service that goes through a third party. The security behind SMS is much higher than anything app/web based.

Tommy’s main claim is that SMS will be killed off by Facebook, Twitter, Skype and other apps. I don’t believe this will happen until another lowest-common-denominator method of text messaging is around, heavily integrated into ALL phones. Why replace SMS? The telcos have no incentive to do so – it makes them the most money. Mobile phone OS makers have no reason to innovate in this area either, unless they can value-add. The growth of apps lets consumers choose, but with choice comes diversification and separation.

The crux of the argument, it seems, all comes back to smartphones. The technical bar is too high for many people to set these services up and use them. SMS is easy: if you can call someone on their mobile, you can SMS them. The same can’t be said for any of these other services.

Now, for email’s 10-year life expectancy, it’s almost the same argument. It’s too heavily integrated to be replaced. Systems have been built around email for the last 40 years, and nothing’s going to kill it. Email needs to be overhauled into an Email v2 with a lot more security and guarantee of service (instead of being able to fake any address, sending plain text, and hoping the other end received it), but that’s overhauling email, not replacing it.

I’m a bit baffled at Tommy’s understanding of how this all works. His comments about the ‘cost of sending an email’ are quite strange – I don’t know where he’s coming from on this. It raises many questions for me: How is Google Plus any cheaper? How is it easier to set up a group of all your employees in a circle, and then require them to check it themselves for updates? How is email not direct? Why class email as formal, when surely the content and style of the email should demonstrate its formality? Why would the example of announcing a new CEO not be formal? What is difficult about sending emails to another business (if it’s collaboration on a project, surely each end could just create an email group for the members!)?

At no stage was any reason given for why email and SMS would die off – just examples of other services that can do other things (such as one-to-many communication, Twitter being a prime example).

Could you see your business solely communicating on any current method of social media? Is there anything out there that can even be managed, audited, recorded and controlled by a company the way Email currently is?

I hope Tommy reads this and clarifies his position. He didn’t receive much air time, but I get the feeling it’s a ‘blinkered’ viewpoint: his business is social media, and anyone I see in that business puts more importance on it than it deserves… and of course, that’s in their self-interest.

Do you think SMS will die in 5 years, and Email in 10? Please respond in the comments below!

Do You Trust The Cloud Yet?

The Cloud – Monkey (from Monkey Magic) had one, should you use it too?

Has your CIO/CEO/IT Manager done this?

Do you trust the cloud?

I would be surprised if you whole-heartedly said ‘yes’. Firstly because you’re talking back to a blog post, which is quite strange behaviour, but secondly because there’s a lot of media attention going on in this space.

Just to rehash the last week, there were two major events, one from Google and the other Microsoft.

Google:

Wednesday 8th September (ish – it’s hard to gather what timezone they’re all talking about) saw a Google Docs outage. The outage lasted 52 minutes: 23 minutes from being alerted to kicking off a rollback process, which then took 24 minutes to complete, plus an extra 5 minutes until “the additional capacity restored normal function”.

The cause was a change they had implemented to improve real-time collaboration, but the heavy load of the real world exposed a memory management bug.

Microsoft:

Wednesday 8th September again (although later in the day in America, so the 9th for Aussies), it was Microsoft’s turn. Office 365, Dynamics CRM and some other non-enterprise-level services (Hotmail, SkyDrive, Live stuff) were down for a few hours. This one was not as clear cut – the outage itself was in the North American data centres, which generally meant Australians were fine, as we use one based in Singapore. The fix was an update to DNS servers, which meant we all had to wait for replication of the new records around the world before everyone everywhere was without issue.

Microsoft’s explanation is a bit less detailed than Google’s, with ‘DNS issues’ claimed as the cause.

That’s scary, how do I cope with these outages?
So, would your business complain about these outages? OK, yes – you probably have someone who complains about their keyboard clicking too loudly when they type, so of course someone will complain about this.

If you want to jump into the cloud, I’d suggest looking at a hybrid solution. This isn’t news to many readers, I’m sure – multiple paths of redundancy for “everything”, which includes your servers and services. For email, you can split between your local Exchange (or even hosted) and Google Apps. Postini replays everything that goes through it to Google Apps, so your users can jump onto Google Apps in the event of an Exchange outage and continue working. Then you’re not missing out on the feature-rich options of Exchange, but the business-critical emails have full redundancy.

My Conclusion:
The real summary here is: go ahead, use the cloud. BUT – do what you should already be doing (i.e. redundancy – are you paying attention here? Good.). A single provider in the cloud is not reliable enough at this stage to be trusted for its own inbuilt redundancy. Trust two clouds, or one cloud with the other half on-premises.

If you’re a small business with a portaloo full of staff, then it becomes much harder to justify. Also, is there a manual process that can be used in the event of a service failure? Business Continuity should dictate what’s required for redundancy. Maybe writing things down for a day is a completely valid way of coping with the outage, with little or no loss to the business? There’s no reason to spend money on redundancy in that sort of situation.

Google Apps and Office 365 both guarantee 99.9% uptime – sounds great, but that’s roughly 43 minutes a month they’re allowing for. How much would your business lose if nobody could do their computer-based work for 43 minutes during the day, every month on average? Over a year, that’s slightly over one full working day. If the cost of that outweighs the cost of getting a second cloud service or some other means of redundancy, then the redundancy has already paid for itself. Getting a service refund after the event isn’t really what people care about – they just want it to work.
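For anyone who wants to plug in their own SLA, the back-of-envelope maths behind those figures:

```python
# Allowed downtime under an uptime SLA, in minutes.
def allowed_downtime_minutes(sla: float, total_minutes: float) -> float:
    return (1 - sla) * total_minutes

# 99.9% over a 30-day month and over a year:
per_month = allowed_downtime_minutes(0.999, 30 * 24 * 60)   # ~43.2 minutes
per_year = allowed_downtime_minutes(0.999, 365 * 24 * 60)   # ~525.6 minutes
```

Around 526 minutes a year is about 8.8 hours – slightly over one 8-hour working day.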
Sources:

Google Docs Blog – http://googledocs.blogspot.com/2011/09/what-happened-wednesday.html
Google Postini: http://www.google.com/postini/continuity.html
Windows Blog – http://windowsteamblog.com/windows_live/b/windowslive/archive/2011/09/08/current-hotmail-and-skydrive-issues.aspx
Office 365 Tweet – http://twitter.com/#!/Office365/status/112008132443648000
ZDNet Microsoft Outage – http://www.zdnet.com/blog/microsoft/outage-hits-microsoft-crm-online-office-365-customers/10359

Group Policy Preferences – What Takes Preference?

How do you know if one Group Policy Preference occurs before another?

Hi again everyone,

Today I’m sharing something that I have just found out about, thanks to the very helpful Alan Burchill (Twitter) who is a MVP in Group Policy. Thanks Alan!

So, I’ve talked about Group Policy Preferences before – wonderful, and not widely used enough yet – they’ll do pretty much anything you could do with a login script, with the added benefits of high granularity, a GUI, and targeting based on almost any criteria you can think of, rather than writing complex scripts and digging through error reports in Event Viewer.

I came across a scenario where I needed to delete all of the files in a directory, then copy several files back into that same directory. As I created this, I wondered how I’d make sure the delete occurred before the copy. If it happened the other way around, the end result would be an empty directory!

If you’re doing multiple settings of the same type, they get an Order number as per the screenshot below. You can move the order around to make sure things occur in their proper turn. Something like environment variables may need this: your second variable may use the first variable as part of its path, and in the wrong order the first environment variable wouldn’t be set before it was called:

gppenv

Simple enough. But what if you need an environment variable in place before you map a drive using that variable? The next screenshot is from the same policy, using one of the variables above to map a drive. But if you look at the order number, it’s also a 1. The order is only relevant for items in that same area (in this example, Drive Maps).

gppdrive

How do you know what will happen first? This is what Alan Burchill found out for me. There is a set order in which each client-side extension runs, and to view it you need to delve into the registry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions

gppreg

The order is based on the GUID. I’ve highlighted the second one, which is Group Policy Environment – the environment variables. They’re all self-explanatory once you click on them, or you can search for what you’re looking for and then ensure things run in the order you need. There’s one exception to this, which is {35378EAC-683F-11D2-A89A-00C04FBBCFA2}, but that’s a blank entry… possibly so you can force something to be first if needed? We’re not quite sure on that bit!

I hope that helps you if you ever run into this sort of thing. There are other workarounds, such as creating two separate GPOs and putting those GPOs in the correct order, but best practice is to have as few GPOs as possible. Anyway, for my example above, environment variables come before drive mappings ({5794DAFD-BE60-433f-88A2-1A31939AC01F}), so we’re safe.
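The ordering rule can be sketched in a few lines of Python. The Drive Maps GUID is the one from the post; the environment-variables GUID is quoted from memory, so verify it against your own registry, and the dict is just a stand-in for reading the real GPExtensions key:

```python
# Client-side extensions run in ascending GUID order under GPExtensions,
# so sorting the GUID strings gives the processing order.
extensions = {
    "{0E28E245-9368-4853-AD84-6DA3BA35BB75}": "Group Policy Environment",
    "{5794DAFD-BE60-433F-88A2-1A31939AC01F}": "Group Policy Drive Maps",
}

run_order = [extensions[guid] for guid in sorted(extensions)]
# Environment variables sort ahead of drive maps, so the variable
# exists before the drive mapping tries to use it.
```

The same sort tells you where any other extension falls relative to the ones you care about.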