Exchange 2010 Gotcha – #1

As I work through migrations to Exchange Server 2010 with my various clients, I’m developing a list of Exchange 2010 “gotchas”. Not necessarily things that are earth-shattering, but do have the potential to be surprising to administrators.

Gotcha #1 revolves around the fact that an Offline Address Book is no longer automatically specified for a new mailbox database. Surprise!

So, just like most other things in Exchange that require a specific process, you should develop a list of items to be performed whenever you create a new mailbox database, whether you use the Exchange Management Console or the Exchange Management Shell. Consider this a starting list:

  1. Create the database, specifying an individual folder for the database (e.g., E:\DB-Zippy\Zippy.edb) and an individual folder for the log files (e.g., E:\Logs-Zippy). If you are using the EMC and you do not want to take the default paths, you will have to type the entire path into the path fields (there is no browse button). As a suggestion, use Windows Explorer to browse to the location where you want to put the files, copy the path to the clipboard, and then paste it directly into the fields in the EMC (or into the EMS, where you always have to enter the entire path).
  2. Set the Offline Address Book (OAB). If you are using the EMS, you can specify the OAB (e.g., “\Default Offline Address Book”) when you create the database (see the EMS sketch below). When using the EMC, you’ll have to open the property sheet for the mailbox database, click the Client Settings tab, and select the address book via the Browse… button. You should also take this opportunity to verify that the Public Folder database is a valid public folder database (if you still have public folders in your environment).
  3. Set the Journal Recipient. You cannot specify the Journal Recipient during creation when using either EMC or EMS. With EMC, you’ll have to open the property sheet for the mailbox database, click the Maintenance tab, check the box to select the journal recipient, and browse for the particular user.
  4. Set external permissions. If you are using Cisco Unity or RIM’s BlackBerry Server (Enterprise, Professional, or Enterprise Express), then you’ll need to set additional permissions to allow those software packages to access mailboxes contained in this mailbox database. There is no mechanism for performing this operation using the EMC. For BlackBerry, this is the relevant command, executed from the EMS (assuming that your BlackBerry administrative user is named BESAdmin):

Get-MailboxDatabase | Add-ADPermission -User BESAdmin -AccessRights ExtendedRight -ExtendedRights Receive-As, Send-As, ms-Exch-Store-Admin

Obviously, there are additional parameters that can be set. However, these four meet the needs of all my clients and probably will work for you too!
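
Putting steps 1 through 3 together, here is a minimal EMS sketch. The database name "Zippy", the server name EX01, and the journal mailbox "Journal Mailbox" are placeholders; substitute your own values:

New-MailboxDatabase -Name "Zippy" -Server EX01 `
	-EdbFilePath "E:\DB-Zippy\Zippy.edb" -LogFolderPath "E:\Logs-Zippy" `
	-OfflineAddressBook "\Default Offline Address Book"
Mount-Database -Identity "Zippy"
Set-MailboxDatabase -Identity "Zippy" -JournalRecipient "Journal Mailbox"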

Watch this blog for more “Exchange 2010 gotchas”.

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

A Brief History of Time…(ok ok, let’s go with “An Introduction to the Windows Time Service”)

Note: This article is written for “modern” versions of the Windows operating system – that is, Windows Server 2008, Windows Server 2008 R2, Windows Vista, and Windows 7. For older versions of the Windows operating system, the concepts still apply, but some of the command line parameters for w32tm have changed.

Windows, especially in an Active Directory environment, requires “good” time. For this discussion, having “good” time means that all members of a domain are capable of synchronizing their clocks to a domain controller. Domain controllers synchronize their clocks with the domain controller that holds the PDCe (Primary Domain Controller emulator) role in their Active Directory domain. The PDCe in each child domain synchronizes its clock with the PDCe of the root domain of the Active Directory forest.

When Windows does not have good time, log file entries have incorrect timestamps, event logs have incorrect timestamps, database transaction logs have incorrect timestamps, etc. etc. When the time on a computer becomes too far off from that of a domain controller (more than five minutes above or below), the computer is no longer capable of acquiring Kerberos tickets – this means that a computer and/or a user will not be able to log in to the Active Directory domain, nor will they be able to access any resources on the Active Directory network.

This can happen to user workstations and to servers. Obviously, a server may affect more users than a single workstation; but that doesn’t mean you should pay any less attention to your user workstations.

The “Windows Time” service is responsible for keeping a computer’s clock synchronized. This service can be controlled and configured on each computer by a command line tool named w32tm. Modifying any parameters for the Windows Time service requires local administrator permissions (and if UAC is enabled on the computer, it also requires an elevated command prompt or PowerShell session).

Determining whether a computer can synchronize its clock is easy to test (this is irrespective of whether the correct time source is configured – just that the computer can synchronize to the configured time source). Open an elevated command prompt or PowerShell session, and then enter:

w32tm /resync

Does it work? If so, then this computer can synchronize its clock with its configured time source. If the clock on the computer is off (“skewed” is the typical term used for this situation), then further analysis is required. If the time on the clock is off by an exact number of hours, you should probably be looking at the time zone configured for the user or computer, not at the time synchronization sources.
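
On Windows 7 and Windows Server 2008 R2, a quick way to display the configured time zone from that same elevated prompt is:

tzutil /g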

If there are other computers whose time is skewed, then enter the same command on the other computers. The command should work there too.

If the resync commands work, but the computers are getting the wrong time, you need to begin analyzing the configuration for the Windows Time service. From your shell or PowerShell session, enter:

w32tm /query /source

This allows you to determine where the particular computer is getting its time. There are a number of possible responses. These include:

Local CMOS Clock

In this case, the computer is using the hardware clock on the computer as its time source. If you are using VMware, this means that the virtual machine is synchronizing to the VMware host.

Free Running System Clock

In this case, the computer is not using any external source, but is instead depending on the time tick generated by the System Idle Process running on the computer. This configuration will cause the clock to skew more quickly than any other.

a hostname of a domain controller in the Active Directory forest

In this case, the computer is using a domain controller as either an NTP server or as the time source via Active Directory. To determine which, see “/query /configuration”, discussed later.

a hostname of a computer running an NTP server

In this case, the computer is using a non-Active Directory server running an NTP server as its time source.

VM IC Time Synchronization Provider

In this case, the computer is using Hyper-V virtualization services as its time source.

Best practices from Microsoft recommend that you never use virtualization services (regardless of your hypervisor provider) as a time source for domain-joined computers; instead, you should depend on the normal Active Directory synchronization methods.

VMware recommends that, for domain-joined computers, you install an NTP server on the VMware host and you have the computers synchronize to that NTP server.

*** Edit on June 25, 2010 – A VMware employee contacted me at the end of May suggesting that the above line was not accurate. Indeed, VMware updated their documentation (the linked VMware KB 1318 article below) in March of 2010. Now, for most intents and purposes, the VMware recommendations match the Microsoft recommendations.

In my mind, you are better off starting with the Microsoft recommendations and then going from there.

Here are references to the above comments and best practices:

Virtual Domain Controllers and Time Synchronisation
Considerations when hosting Active Directory domain controller in virtual hosting environments
Deployment Considerations for Virtualized Domain Controllers
VMware KB: Timekeeping best practices for Windows

Now, if the initial resync command doesn’t work, the reason for that failure is what you need to figure out. The first thing I always check is the firewall configuration. By default, time synchronization requires that a computer be capable of sending a UDP request to port 123 on the NTP server (and receiving the response). The Windows Advanced Firewall in the modern Windows operating systems will automatically have an entry opened for time synchronization on UDP port 123 to your domain controllers. However, if you are configuring your PDC emulator server, you need to ensure that the external firewall also allows that request. If you have non-domain-joined computers, then you may need to globally allow port 123 requests in your firewall.
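
If you do need to open that port manually (for example, on a stand-alone NTP server, or for non-domain-joined computers), an inbound rule can be added from an elevated prompt. This is just a sketch, and the rule name is arbitrary:

netsh advfirewall firewall add rule name="NTP (UDP-In)" dir=in action=allow protocol=udp localport=123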

The command below will tell you the time source for a particular computer:

w32tm /query /configuration

You are initially most interested in the value of the Type variable which is displayed. There are a number of possible responses. These include:

NTP – the external time source is the NTP server(s) specified by the NtpServer variable
NT5DS – the external time source is the domain hierarchy (that is, time synchronization originates from a domain controller)
NoSync – there is no external time source
AllSync – the computer should use both the domain hierarchy and the manually specified NTP server(s) as external time sources

There may be multiple external NTP servers listed in the NtpServer variable.

To properly set up a time source synchronization hierarchy for your domain, you need to begin by locating the domain controller which holds the PDC emulator FSMO role (obviously, if you have a single domain controller, such as is normally the case in SBS 2008, this process can be shortcut). To determine the holders of the FSMO roles, at that earlier-opened command prompt or PowerShell session, enter:

netdom query fsmo

Next, on the domain controller which is revealed to hold the PDC emulator role, you should do something like this:

w32tm /config /manualpeerlist:pool.ntp.org /syncfromflags:manual
w32tm /config /update
net stop w32time
net start w32time
w32tm /resync /rediscover

This ensures that this particular domain controller will attempt to synchronize with an external source providing known good time. pool.ntp.org is a common source. Windows computers come configured by default to use time.windows.com, which sometimes works and sometimes doesn’t.
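
You can also list multiple peers (space-separated, inside quotes) and mark the PDCe as a reliable time source. Here is a variation of the commands above; treat it as a sketch and substitute your preferred time servers:

w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org" /syncfromflags:manual /reliable:yes /update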

For all other domain-joined computers, the appropriate configuration is:

w32tm /config /syncfromflags:domhier
w32tm /config /update
net stop w32time
net start w32time
w32tm /resync /rediscover

That really should take care of it. /syncfromflags:domhier is the default for domain-joined workstations and should be for all DCs except for the one in the root domain holding the PDCe role.

When a computer is properly synchronizing from an external source (after the Windows Time service restarts, or after it recovers from an interval during which it could not synchronize), the following entry is made to the System Event Log:

Log Name: System
Source: Microsoft-Windows-Time-Service
Date: 1/24/2010 1:01:27 AM
Event ID: 35
Task Category: None
Level: Information
Keywords:
User: LOCAL SERVICE
Computer: W2008R2-DC
Description:
The time service is now synchronizing the system time with the time source pool.ntp.org (ntp.m|0x0|0.0.0.0:123->69.26.112.120:123).

If the time source is a DC, the DC will be named and its IP address listed, just as if it were an external source.

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

Where oh where, did my AD site go…[Alternate title: It’s the DNS, stupid.]

I recently had a very confusing issue arise at one of my Exchange 2007 clients and I decided to share it with you. At this particular company, an Active Directory site is reserved for Exchange, and there are two domain controllers (both global catalog servers) in that AD site. The front end consists of two Client Access Servers (CAS1 and CAS2), load-balanced by an ISA enterprise array, with a CCR cluster on the back end.

The week before, we had replaced all of the domain controllers in the forest with Windows Server 2008 R2 domain controllers, and bumped both the domain functional level and the forest functional level to Server 2008 R2 (we are going to enable the AD Recycle Bin). The new DCs replaced the old DCs and kept the original IP addresses.

That’s the setup.

An onsite technician was applying patches late one night (good for him!). Unfortunately, he patched and rebooted both of the Exchange AD site DCs at the same time (bad for him!). As you may already know – that makes Exchange very unhappy. System Center Operations Manager is also running in the environment and it immediately started to generate alerts about the missing domain controllers.

Sidebar: In Exchange 2003 and above, Exchange executes an Active Directory Topology discovery every 15 minutes. The specifics vary between versions of Exchange, but suffice it to say that, within 15 minutes, Exchange will find another DC/GC set (if they exist). In that case, your best bet is just to wait out that 15 minutes.

The technician reacted to the alerts from OpsMgr by rebooting the Client Access Servers. They both found out-of-site DCs and began working.

Then, the fun began. When the in-site DCs came back online (just a few minutes later), CAS1 reassociated with the in-site DCs and reset its secure channel to one of the in-site DCs. CAS2 did not.

The symptom of this is that all users connected to CAS1 through ISA were fine. However, the users that ISA connected to CAS2 were redirected through the same URL that they had already used – and since CAS-to-CAS proxying did not work, they couldn’t access any Exchange services – OWA or anything else. The quick workaround: remove CAS2 from the web farm and RPC publishing in ISA so that everything was routed through CAS1. However, redundancy was now lost.

Why this problem happened – I don’t know. The NetLogon service is responsible for maintaining the AD site a computer identifies itself with and maintaining the secure channel to a proper DC. However, for CAS2, NetLogon refused to reassociate to an in-site DC.

NetLogon bases site affinity on DNS. Both servers, CAS1 and CAS2, were configured identically for DNS. NetLogon uses a Windows API call named DsGetSiteName. In Windows Server 2008 and Windows Server 2008 R2 (and in Windows 7), you can use the nltest.exe utility to check this value. To wit:

PS C:\> nltest.exe /dsgetsite
Default-First-Site-Name
The command completed successfully
PS C:\>

Sidebar: nltest is available for Windows Server 2003 and Windows Server 2003 R2 as well; you just have to download and install the Windows Support Tools.

NetLogon does its check-and-reset once an hour, and upon startup. Once you know that, it should be easy to just restart the NetLogon service, right? Well, that didn’t make any difference.
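
For reference, restarting NetLogon and re-checking the site looks like this (run from an elevated prompt):

net stop netlogon
net start netlogon
nltest /dsgetsite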

So, we have the capability of forcing a particular secure channel for a server, and this also will set its AD site. To wit:

PS C:\> netdom.exe reset cas2 /server:DsEx2
The secure channel from CAS2 to DOMAIN was reset.
The command completed successfully.
PS C:\>

Note: nltest.exe has this functionality too, but netdom.exe has been around longer and I was more familiar with its parameters. See the /sc_reset parameter to nltest.

The AD site is updated, the secure channel is updated, and everything looks great. I declare success, put the server back into ISA, and move on.

An hour later, the client calls and says it is broken again.

Well, he’s right. The AD site has flipped back again and CAS2 is thus not operating properly. Obviously this has happened because NetLogon did its cycle.

C-r-a-p.

OK. Now it’s time to buckle down. AD sites are based on DNS. We know that. So, I ran dcdiag on all the servers and replmon on all the servers. Everything came back clean.

But then I visually examined the DC locator records in DNS – and… found an extra one.
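
Incidentally, if you want to enumerate the DC locator records without opening the DNS console, nslookup will do it. This is a sketch, assuming the AD DNS domain is example.local; substitute your own domain and site names:

nslookup -type=SRV _ldap._tcp.dc._msdcs.example.local
nslookup -type=SRV _ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.example.local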

During the process of standing up all the new DCs, and configuring the new DCs with old permanent IP addresses, the OLD DCs ended up with the temporary IP addresses of the new DCs. Then, the old DCs were demoted.

All of the DCs but one cleaned up after themselves. The extra locator record came from that one DC, which, shockingly, now had a stale DNS entry.

The fix? Remove the stale DC locator record. Reset the secure channel again, just to ensure it gets to the right place.

And Voila! It’s fixed.

If you’ve ever been to one of my installation seminars or read many of my articles, I talk about the importance of DNS in both Active Directory and Exchange Server. Here, yet again, is another example of that. Sometimes, you just have to take a look in the right place to find the problem.

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

Speeding Reboot When Exchange is on a DC/GC

As I’ve noted in several previous blog entries (such as here and here), installing Exchange on a domain controller and/or a global catalog server is not a best practice. However, if you are running SBS (Small Business Server) or EBS (Essential Business Server), or if you only have a single server in your environment, you may not have much choice.

Given that you or your company may have no choice in the decision, it still may come as a disappointment (disgust?) that it takes so long to reboot your Exchange server.

This typically happens for two primary reasons:

  • When Exchange is installed on a DC/GC, that Exchange server will not refer to any other DC/GC in the Active Directory, and
  • When a shutdown or reboot request is received, it isn’t possible for Exchange to terminate prior to Active Directory shutting down.

Now, you may think “poor poor Exchange, do what _I_ want anyway!” Well, in Exchange’s defense, that may be harder than you think. Consider a common scenario that may occur:

  • A VSS backup is running against your server and it’s just entered the Freeze stage against all writers
  • Exchange is running
  • RPC/HTTP is up
  • OWA is up
  • SQL is running
  • …a shutdown request comes in

What is the right order to shut things down in that ensures everything gets shut down before AD starts shutting down?

The answer is – can’t be done!

Exchange and Active Directory have no mechanism for terminating the right things in the right order. So, it is up to a human brain to help them out.

I suggest you create a directory on your combination Exchange / Active Directory server named c:\scripts. Within that directory, create a file named shutdown.cmd. In that file, place the following commands:

echo %DATE% %TIME% Shutting Down Services >>c:\scripts\shutdown.txt
net stop msexchangeadtopology /y
echo %DATE% %TIME% Shut Down MSExchangeADTopology >>c:\scripts\shutdown.txt
net stop msftesql-exchange /y
echo %DATE% %TIME% Shut Down MSFteSQL-Exchange >>c:\scripts\shutdown.txt
net stop msexchangeis /y
echo %DATE% %TIME% Shut Down MSExchangeIS >>c:\scripts\shutdown.txt
net stop msexchangesa /y
echo %DATE% %TIME% Shut down MSExchangeSA >>c:\scripts\shutdown.txt
net stop iisadmin /y
echo %DATE% %TIME% Shut down IISAdmin >>c:\scripts\shutdown.txt
echo %DATE% %TIME% Shut down services script complete >>c:\scripts\shutdown.txt

Note that the echo statements are completely optional. They are simply present to allow you to record the sequence of events that does occur during a shutdown.

Once you have created this file, open Administrative Tools -> Group Policy Management.

Expand the domains node, then expand the node for your domain, and then expand the Group Policy Objects node.

Under the GPO node, right click on the Default Domain Controllers Policy and select Edit…

Expand Computer Configuration -> Policies -> Windows Settings and then click on Scripts.

In the right pane, double click on Shutdown, then click on Add in the dialog that opens. Browse to the shutdown.cmd that you created earlier and click OK.

Now, click OK until you are back to the group policy main window and close it and then close the Group Policy Management window.

If you have a single DC, you are done. Otherwise, wait for 15-20 minutes to allow your modified group policy to replicate to other DCs in your Active Directory.
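
If you would rather not wait, you can force Active Directory replication from an elevated prompt on a domain controller (a shortcut, not a requirement):

repadmin /syncall /AdeP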

Now, each time the DC that has Exchange Server installed on it reboots (or shuts down), it will execute the above script. This will reduce the required reboot time by 50% – 75%.

Enjoy!

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

The Experts Conference – TEC’2010

For the third time, I’ll be speaking at the upcoming The Experts Conference, sponsored by Quest.

I’ll be discussing using Exchange Web Services (EWS) from PowerShell — and for some reason, my abstract isn’t yet posted on the conference website! I need to get that corrected.

TEC’2010 is being held in Los Angeles, CA this year, from April 25 – 28. The Exchange track will have its strongest expert content ever, with MVPs and Microsoft personnel from all over the United States and Europe. Of course, as always, TEC’2010 will have a huge Directory Services and Identity Management track and is introducing a SharePoint track.

See the official TEC’2010 website for more information.

I hope you’ll come join me and many other people at TEC’2010. It’s a great time with lots of talk-time with some of the best folks in the business.

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

Disabling WSUS Logging (or any website on Windows Server 2008)

SBS 2008 has IIS logging enabled by default. For most websites on an SBS server, this probably isn’t an issue.

However, the WSUS Administration website can generate very high traffic. On my client’s servers, I’ve seen 5 GB generated in just a couple of months. One person reported as much as 7.5 GB generated within a month.

Unless you need this logging for some debugging purpose, you can easily disable the logging. Sure, there are command line ways to do it, but in this case, using the GUI is pretty easy.
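
For completeness, one command-line equivalent uses appcmd from an elevated prompt. This is a sketch, assuming the site is named "WSUS Administration" as it is on SBS 2008:

%windir%\system32\inetsrv\appcmd.exe set config "WSUS Administration" /section:system.webServer/httpLogging /dontLog:True /commit:apphost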

Open the IIS Manager and expand both the server and the Sites nodes in the Connections pane. See the figure below.

Next, click on the WSUS Administration website, then locate the IIS feature named Logging in the main pane. Double-click on it (or single click and select “Open Feature” from the Action pane).

Finally, click Disable, red-circled in the figure below. That’s all it takes!

If you should ever need to re-enable logging, you can return to this same window. Once disabled, the “Disable” action changes to “Enable”.

Disabling WSUS Logging

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

Getting the Contents of an Active Directory Integrated DNS Zone, Version 2

In June of 2009, I published the first version of Getting the Contents of an Active Directory Integrated DNS Zone. Shortly after that, Chris Dent (chris at highorbit dot co dot uk) published a blog post clarifying the format of the dnsRecord attribute. Most of the time, the difference between the “correct” format and what I had deduced had no effect. But, I recently had to go back to this project and I needed to decode more DNS resource records. Now, using the proper format became important.

Thus, version 2 of the utility was born. I now process A, AAAA, MX, PTR, CNAME, SOA, NS, TXT, SRV, and HINFO resource records.

I hope you’ll enjoy this update. Now, if I can figure out how to determine lame delegations in PowerShell, I can write a PowerShell version of dnswalk!

##
## dns-dump.ps1
##
## Michael B. Smith
## michael at smithcons dot com
## http://TheEssentialExchange.com/blogs/michael
## May/June, 2009
## Updated December, 2009 adding many add'l record types.
##
## Use as you wish, no warranties expressed, implied or explicit.
## Works for me, may not for you.
## If you use it, I would appreciate an attribution.
##
## Thanks to Chris Dent, chris at highorbit dot co dot uk
## for some clarification on the precise format of the
## dnsRecord attribute. See his blog post on the topic at
## http://www.highorbit.co.uk/?p=1097
##

Param(
	[string]$zone,
	[string]$dc,
	[switch]$csv,
	[switch]$help
)

function dumpByteArray([System.Byte[]]$array, [int]$width = 9)
{
	## this is only used if we run into a record type
	## we don't understand.

	$hex = ""
	$chr = ""
	$int = ""

	$i = $array.Count
	"Array contains {0} elements" -f $i
	$index = 0
	$count = 0
	while ($i-- -gt 0)
	{
		$val = $array[$index++]

		$hex += ("{0} " -f $val.ToString("x2"))

		if ([char]::IsLetterOrDigit($val) -or 
		    [char]::IsPunctuation($val)   -or 
		   ([char]$val -eq " "))
		{
			$chr += [char]$val
		}
		else
		{
			$chr += "."
		}

		$int += "{0,4:N0}" -f $val

		$count++
		if ($count -ge $width)
		{
			"$hex $chr $int"
			$hex = ""
			$chr = ""
			$int = ""
			$count = 0
		}		
	}

	if ($count -gt 0)
	{
		if ($count -lt $width)
		{
			$hex += (" " * (3 * ($width - $count)))
			$chr += (" " * (1 * ($width - $count)))
			$int += (" " * (4 * ($width - $count)))
		}

		"$hex $chr $int"
	}
}

function dwordLE([System.Byte[]]$arr, [int]$startIndex)
{
	## convert four consecutive bytes in $arr into a
	## 32-bit integer value... if I had bit-manipulation
	## primitives in PowerShell, I'd use them instead
	## of the multiply operator.
	##
	## this routine is for little-endian values.

	$res = $arr[$startIndex + 3]
	$res = ($res * 256) + $arr[$startIndex + 2]
	$res = ($res * 256) + $arr[$startIndex + 1]
	$res = ($res * 256) + $arr[$startIndex + 0]

	return $res
}

function dwordBE([System.Byte[]]$arr, [int]$startIndex)
{
	## convert four consecutive bytes in $arr into a
	## 32-bit integer value... if I had bit-manipulation
	## primitives in PowerShell, I'd use them instead
	## of the multiply operator.
	##
	## this routine is for big-endian values.

	$res = $arr[$startIndex]
	$res = ($res * 256) + $arr[$startIndex + 1]
	$res = ($res * 256) + $arr[$startIndex + 2]
	$res = ($res * 256) + $arr[$startIndex + 3]

	return $res
}

function wordLE([System.Byte[]]$arr, [int]$startIndex)
{
	## convert two consecutive bytes in $arr into a
	## 16-bit integer value... if I had bit-manipulation
	## primitives in PowerShell, I'd use them instead
	## of the multiply operator.
	##
	## this routine is for little-endian values.

	$res = $arr[$startIndex + 1]
	$res = ($res * 256) + $arr[$startIndex]

	return $res
}

function wordBE([System.Byte[]]$arr, [int]$startIndex)
{
	## convert two consecutive bytes in $arr into a
	## 16-bit integer value... if I had bit-manipulation
	## primitives in PowerShell, I'd use them instead
	## of the multiply operator.
	##
	## this routine is for big-endian values.

	$res = $arr[$startIndex]
	$res = ($res * 256) + $arr[$startIndex + 1]

	return $res
}

function decodeName([System.Byte[]]$arr, [int]$startIndex)
{
	## names in DNS are stored in two formats. one
	## format contains a single name and is what we
	## called "simple string" in the old days. the
	## first byte of a byte array contains the length
	## of the string, and the rest of the bytes in 
	## the array are the data in the string.
	##
	## a "complex string" is built from simple strings.
	## the complex string is prefixed by the total
	## length of the complex string in byte 0, and the
	## total number of segments in the complex string
	## in byte 1, and the first simple string starts 
	## (with its length byte) in byte 2 of the complex
	## string.

	[int]$totlen   = $arr[$startIndex]
	[int]$segments = $arr[$startIndex + 1]
	[int]$index    = $startIndex + 2

	[string]$name  = ""

	while ($segments-- -gt 0)
	{
		[int]$segmentLength = $arr[$index++]
		while ($segmentLength-- -gt 0)
		{
			$name += [char]$arr[$index++]
		}
		$name += "."
	}

	return $name
}

function analyzeArray([System.Byte[]]$arr, [System.Object]$var)
{
	$nameArray = $var.distinguishedname.ToString().Split(",")
	$name = $nameArray[0].SubString(3)

	## RData Length is the length of the payload in bytes (that is, the variable part of the record)
	## Truth be told, we don't use it. The payload starts at $arr[24]. If you are ever concerned
	## about corrupt data and running off the end of $arr, then you need to verify against the RData
	## Length value.
	$rdataLen = wordLE $arr 0

	## RData Type is the type of the record
	$rdatatype = wordLE $arr 2

	## the serial in the SOA where this item was last updated
	$updatedAtSerial = dwordLE $arr 8

	## TimeToLive
	$ttl = dwordBE $arr 12

	## $unknown = dword $arr 16

	## timestamp of when the record expires, or 0 means "static"
	$age = dwordLE $arr 20
	if ($age -ne 0)
	{
		## hours since January 1, 1601 (start of Windows epoch)
		## there is a long-and-dreary way to do this manually,
		## but get-date makes it trivial to do the conversion.
		$timestamp = ((get-date -year 1601 -month 1 -day 1 -hour 0 -minute 0 -second 0).AddHours($age)).ToString()
	}
	else
	{
		$timestamp = "[static]"
	}

	if ($rdatatype -eq 1)
	{
		# "A" record
		$ip = "{0}.{1}.{2}.{3}" -f $arr[24], $arr[25], $arr[26], $arr[27]

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}"
		}

		$formatstring -f $name, $timestamp, $ttl, "A", $ip
	}
	elseif ($rdatatype -eq 2)
	{
		# "NS" record
		$nsname = decodeName $arr 24

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}"
		}

		$formatstring -f $name, $timestamp, $ttl, "NS", $nsname
	}
	elseif ($rdatatype -eq 5)
	{
		# CNAME record
		# canonical name or alias

		$alias = decodeName $arr 24

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}"
		}

		$formatstring -f $name, $timestamp, $ttl, "CNAME", $alias
	}
	elseif ($rdatatype -eq 6)
	{
		# "SOA" record
		# "Start-Of-Authority"

		$nslen = $arr[44]
		$priserver = decodeName $arr 44
		$index = 46 + $nslen

		# "Primary server: $priserver"

		##$index += 1
		$resparty = decodeName $arr $index

		# "Responsible party: $resparty"

		# "TTL: $ttl"
		# "Age: $age"

		$serial = dwordBE $arr 24
		# "Serial: $serial"

		$refresh = dwordBE $arr 28
		# "Refresh: $refresh"

		$retry = dwordBE $arr 32
		# "Retry: $retry"

		$expires = dwordBE $arr 36
		# "Expires: $expires"

		$minttl = dwordBE $arr 40
		# "Minimum TTL: $minttl"

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10}"

			$formatstring -f $name, $timestamp, $ttl, `
				"SOA", $priserver, $resparty, `
				$serial, $refresh, $retry, `
				$expires, $minttl
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}"

			$formatstring -f $name, $timestamp, $ttl, "SOA"
			(" " * 32) + "Primary server: $priserver"
			(" " * 32) + "Responsible party: $resparty"
			(" " * 32) + "Serial: $serial"
			(" " * 32) + "TTL: $ttl"
			(" " * 32) + "Refresh: $refresh"
			(" " * 32) + "Retry: $retry"
			(" " * 32) + "Expires: $expires"
			(" " * 32) + "Minimum TTL (default): $minttl"
		}
	}
	elseif ($rdatatype -eq 12)
	{
		# "PTR" record

		$ptr = decodeName $arr 24

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}"
		}

		$formatstring -f $name, $timestamp, $ttl, "PTR", $ptr
	}
	elseif ($rdatatype -eq 13)
	{
		# "HINFO" record

		[string]$cputype = ""
		[string]$ostype  = ""

		[int]$segmentLength = $arr[24]
		$index = 25

		while ($segmentLength-- -gt 0)
		{
			$cputype += [char]$arr[$index++]
		}

		$index = 24 + $arr[24] + 1
		[int]$segmentLength = $arr[$index++]

		while ($segmentLength-- -gt 0)
		{
			$ostype += [char]$arr[$index++]
		}

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4},{5}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4},{5}"
		}

		$formatstring -f $name, $timestamp, $ttl, "HINFO", $cputype, $ostype
	}
	elseif ($rdatatype -eq 15)
	{
		# "MX" record

		$priority = wordBE $arr 24
		$mxhost   = decodeName $arr 26

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4},{5}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}  {5}"
		}

		$formatstring -f $name, $timestamp, $ttl, "MX", $priority, $mxhost
	}
	elseif ($rdatatype -eq 16)
	{
		# "TXT" record

		[string]$txt  = ""

		[int]$segmentLength = $arr[24]
		$index = 25

		while ($segmentLength-- -gt 0)
		{
			$txt += [char]$arr[$index++]
		}

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}"
		}

		$formatstring -f $name, $timestamp, $ttl, "TXT", $txt

	}
	elseif ($rdatatype -eq 28)
	{
		# "AAAA" record

		### yeah, this doesn't do all the fancy formatting that can be done for IPv6

		$str = ""
		for ($i = 24; $i -lt 40; $i+=2)
		{
			$seg = wordBE $arr $i
			$str += ($seg).ToString('x4')
			if ($i -ne 38) { $str += ':' }
		}

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3}`t{4}"
		}

		$formatstring -f $name, $timestamp, $ttl, "AAAA", $str
	}
	elseif ($rdatatype -eq 33)
	{
		# "SRV" record

		$port   = wordBE $arr 28
		$weight = wordBE $arr 26
		$pri    = wordBE $arr 24

		$nsname = decodeName $arr 30

		if ($csv)
		{
			$formatstring = "{0},{1},{2},{3},{4},{5}"
		}
		else
		{
			$formatstring = "{0,-30}`t{1,-24}`t{2}`t{3} {4} {5}"
		}

		$formatstring -f `
			$name, $timestamp, `
			$ttl, "SRV", `
			("[" + $pri.ToString() + "][" + $weight.ToString() + "][" + $port.ToString() + "]"), `
			$nsname
	}
	else
	{
		$name
		"RDataType $rdatatype"
		$var.distinguishedname.ToString()
		dumpByteArray $arr
	}

}

function processAttribute([string]$attrName, [System.Object]$var)
{
	$array = $var.$attrName.Value
####	"{0} contains {1} rows of type {2} from {3}" -f $attrName, $array.Count, $array.GetType(), $var.distinguishedName.ToString()

	if ($array -is [System.Byte[]])
	{
####		dumpByteArray $array
		" "
		analyzeArray $array $var
		" "
	}
	else
	{
		for ($i = 0; $i -lt $array.Count; $i++)
		{
####			dumpByteArray $array[$i]
			" "
			analyzeArray $array[$i] $var
			" "
		}
	}
}

function usage
{
"
.\dns-dump -zone <zonename> [-dc <dcname>] [-csv] |
	   -help

dns-dump will dump, from Active Directory, a particular named zone. 
The zone named must be Active Directory integrated.

Zone contents can vary depending on domain controller (in regards
to replication and the serial number of the SOA record). By using
the -dc parameter, you can specify the desired DC to use. Otherwise,
dns-dump uses the default DC.

Usually, output is formatted for display on a workstation. If you
want CSV (comma-separated-value) output, specify the -csv parameter.
Use out-file in the pipeline to save the output to a file.

Finally, to produce this helpful output, you can specify the -help
parameter.

This command is basically equivalent to (but better than) the:

	dnscmd /zoneprint <zonename>
or
	dnscmd /enumrecords <zonename> '@'

commands.

Example 1:

	.\dns-dump -zone essential.local -dc win2008-dc-3

Example 2:

	.\dns-dump -help

Example 3:

	.\dns-dump -zone essential.local -csv |
            out-file essential.txt -encoding ascii

	Note: the '-encoding ascii' is important if you want to
	work with the file within the old cmd.exe shell. Otherwise,
	you can usually leave that off.
"
}

	##
	## Main
	##

	if ($help)
	{
		usage
		return
	}

	if ($args.Length -gt 0)
	{
		write-error "Invalid parameter specified"
		usage
		return
	}

	if (!$zone)
	{
		throw "must specify zone name"
		return
	}

	$root = [ADSI]"LDAP://RootDSE"
	$defaultNC = $root.defaultNamingContext

	$dn = "LDAP://"
	if ($dc) { $dn += $dc + "/" }
	$dn += "DC=" + $zone + ",CN=MicrosoftDNS,CN=System," + $defaultNC

	$obj = [ADSI]$dn
	if ($obj.name)
	{
		if ($csv)
		{
			"Name,Timestamp,TTL,RecordType,Param1,Param2"
		}

		#### dNSProperty has a different format than dNSRecord
		#### processAttribute "dNSProperty" $obj

		foreach ($record in $obj.psbase.Children)
		{
			####	if ($record.dNSProperty) { processAttribute "dNSProperty" $record }
			if ($record.dnsRecord)   { processAttribute "dNSRecord"   $record }
		}
	}
	else
	{
		write-error "Can't open $dn"
	}

	$obj = $null

Until next time…

If there are things you would like to see written about, please let me know.


Follow me on twitter: @EssentialExch

Exchange Server 2010 RTM and Organizational Health miscalculation

Exchange Server 2010 is starting to get some traction, with companies beginning to install it and put it into their test labs.

An issue – that is obviously a flat out bug – is that the Exchange Management Console (EMC) for Exchange Server 2010 misreports the number of Enterprise Client Access Licenses (CALs) that are required. It does this due to miscalculating Exchange ActiveSync policies.

To calculate the number of Enterprise CALs required, open the EMC and click on the Microsoft Exchange On-Premises node in the left-most pane of the console. After it completes expansion, in the right-hand Action pane, click “Collect Organizational Health Data” and then follow the prompts in the wizard. When the wizard is done, you’ll see an image similar to the one below. Note in the image that the License Summary reports needing 2,003 Standard CALs as well as Enterprise CALs.

Organizational health for a single server with 2003 mailboxes

If you look at the Default Exchange ActiveSync policy, you can determine why. See the following two pictures:

Default Exchange ActiveSync policy property sheet, Device tab
Default Exchange ActiveSync policy property sheet, Device Applications tab

Note the text in both that indicates that MODIFYING the policies on the tab requires an Enterprise CAL. That is correct. The default policies, included in the Standard CAL (which are illustrated in the above two images), are available on this page. The specific licensing page for Exchange Server 2010 CALs is here.

However, Organizational Health currently reports that an Enterprise CAL is required if any of the above policies are set. That is incorrect.

In summary, there has been no change to the licensing of Exchange ActiveSync and the policies that were associated with Standard CALs in Exchange Server 2007 SP1 continue to be the same for Exchange Server 2010. Hopefully, the Organizational Health tool will be quickly repaired.

Until next time…

If there are things you would like to see written about, please let me know!


Follow me on twitter: @EssentialExch

Exporting Mailboxes Larger Than 2 GB On An Exchange Server

At one point, the MAPI used by Exchange was the same as the MAPI used by Outlook. But many years ago (literally – pre-Exchange 5.5) the MAPI used by Exchange server began to diverge from the MAPI used by Outlook. This isn’t particularly surprising, as the needs of a MAPI server are the inverse of the needs of a MAPI client. By the time of Outlook 2003/Exchange 2003, one significant difference was that client-MAPI (the MAPI used by Outlook) supports Unicode PSTs, while server-MAPI (the MAPI used by Exchange) only supports ANSI PSTs.

While there are MANY under-the-hood differences between the two types of PSTs, the key issue for most people is that ANSI PSTs are limited to 2 GB in size (the actual limit is about 1.8 GB of data, but this leads to a file size of just about 2 GB). Unicode PSTs do not have that limitation and can be of any “reasonable” size. (They are limited by default to 20 GB, but can grow beyond that by adding a registry key for Outlook’s MAPI.)

This leads to a challenge on Exchange 2003 or Exchange 2007 servers when using ExMerge (yes, yes, ExMerge isn’t officially supported against Exchange 2007 but it works just fine). ExMerge can only use server MAPI. However, mailboxes may be larger than 2 GB. So what do you do?

Glen Scales, an Exchange MVP with a developer bent, developed a script in early 2007 to address this problem. Glen’s original script is here.

I’ve recently been working on a project for a large company and we needed to do this export against thousands of mailboxes. I started with Glen’s script and ran into a few issues, so I’ve more-or-less rewritten it; but the basic concept is the same – scan a mailbox on an Exchange server and break it into chunks. Each chunk will not be larger than 1.8 GB and each chunk will not contain any folder that contains more than 16,300 items (16K items per folder was another limit of ANSI PSTs).

I give great thanks to Glen for his original script; without it, this project would’ve been much harder.

If you actually want to know how the script works – I refer you to Glen’s original blog on the topic. The mechanism has not changed.

Without further ado…

''
'' ExMBspanPst.vbs
''
'' Based on a script from Glen Scales
'' http://gsexdev.blogspot.com/2007/01/exporting-mailbox-larger-then-2-gb-and.html
''
'' Requires Outlook Redemption, but not Outlook
'' http://www.dimastr.com/redemption
''
'' Fixes a few bugs:
''	orig. script didn't split at 16K messages in a folder
''	orig. script didn't report progress in 2, 3, ... n PSTs
''	orig. script could create two copies of a message in output PST
''	orig. script didn't send all status output to output file
''	orig. script didn't check for the presence of existing PST
'' Adds a feature or two:
''	accepts input mailbox as parameter
''	a number of stability improvements (error checks)
''	added "option explicit" and updated code for support of same
''	copies HiddenItems (Associated Items) and DeletedItems as well as normal items
'' Almost a full source reformat (so I could understand the code better)
'' Removed a fair bit of unused code (although I may have added more of my own)
'' Release resources whenever possible
'' Use RDO for all things, don't fall back to CDO
''
'' Update published with permission of Glen.
''
'' Michael B. Smith
'' The Essential Exchange
'' michael@TheEssentialExchange.com
''
Option Explicit

Dim mbMailbox      '' name of the mailbox (Exchange alias/mailNickname works best)
Dim servername     '' name of the Exchange server hosting the mailbox
Dim bfbaseFilename '' prefix used to name the new PST
Dim pfFilePath     '' directory in which to store PSTs

mbMailbox = WScript.Arguments(0)
''
'' these should be the only values you need to change
''
servername = "exchserver"
bfBaseFilename = "set1-" & mbMailbox
pfFilePath = "c:\temp\"
''
'' end change area
''

Dim fnFileName '' name of the output PST (set by CreatenewPst; uses pfFilePath, bfBasefileName and mbMailbox)
Dim fNumber    '' index of the output PST (will be updated to start at 1 by CreateNewPst)

fnFileName = ""
fNumber = 0

Dim doDictionaryObject '' scripting.dictionary, contains list of entry-ids present in current PST
Dim fso                '' scripting.filesystemobject
Dim RDOSession         '' redemption.rdosession

Set doDictionaryObject = CreateObject("Scripting.Dictionary")
Set fso                = CreateObject("Scripting.FileSystemObject")
set RDOSession         = CreateObject("Redemption.RDOSession")

Dim tsize       '' the next time I report the size of the new PST (that is, it's calculated size)
Dim tnThreshold '' maximum size (in MB) of a PST, before I switch to a new one

tsize = 10
tnThreshold = 1800

Dim PST
Dim IPMRoot
Dim pfPstFile            '' object for the new PST
Dim PstRootFolder        '' object pointing to the root of the current PST

PST           = Empty    '' PST is the Redemption pointer to the PST
IPMRoot       = Empty    '' IPMRoot is the root of the IPM subtree in the mailbox
pfPstFile     = Empty    '' fso.GetFile(fnFileName) returns the object for this file

PstRootFolder = Empty    '' This variable never actually gets set, but removing it would've
                         '' called for refactoring too much code - when the code is fixed
                         '' to set this value properly, other stuff breaks. That's why the
                         '' return values are commented out in ProcessFolder[Root | Sub].

Dim wfile                '' file we write to for informational messages
Dim dfDeletedItemsFolder '' the deleted items folder in the current input mailbox
Dim miLoop               '' used for looping through IPMRoot.Folders
Dim fld                  '' used for looping through IPMRoot.Folders
Dim iMessageCount        '' total number of messages processed

iMessageCount = 0

	''
	'' MAIN code
	''

	On Error Resume Next
	Set wfile = fso.opentextfile(pfFilePath & bfBaseFilename & ".txt", 2, true)
	If Err Then
		WScript.Echo "Main: Error: Could not open " & pfFilePath & bfBaseFilename & ".txt"
		WScript.Quit 1
	End If
	On Error Goto 0

	msg "Main: debug output text file is " & pfFilePath & bfBaseFilename & ".txt"
	msg "Main: will attempt login to mailbox " & mbMailbox & " on server " & servername

	RDOSession.LogonExchangeMailbox mbMailbox, servername
	Set dfDeletedItemsFolder = RDOSession.GetDefaultFolder(3)
	Call CreateNewPst

	msg "Main: Enumerating Mailbox " & wscript.arguments(0)

	For miLoop = 1 to IPMRoot.Folders.Count
		Set fld = IPMRoot.Folders(miLoop)
		Call ProcessItems(fld)
		If fld.Folders.count > 0 then
			msg "Main: Calling Enumfolders for " & fld.Name
			Call Enumfolders(fld, PstRootFolder, 2)
		End if
		Set fld = Nothing
	Next

	msg "Main: A total of " & iMessageCount & " messages were processed."
	msg "Main: Done"

	'' clean up and release resources
	Set dfDeletedItemsFolder = Nothing
	RDOSession.Logoff
	wfile.Close
	Set wfile      = Nothing
	Set RDOSession = Nothing
	Set fso        = Nothing

Sub msg(ByVal str)
	WScript.Echo str
	wfile.WriteLine(str)
End Sub

Function Enumfolders(FLDS, RootFolder, ltype)
	''
	'' The current folder in the source mailbox is FLDS
	'' RootFolder should be the parent folder of the current folder
	''
	'' If ltype == 2, then process the non-folder items in the current folder (i.e., messages)
	'' If ltype == 1, then process the sub-folders in the current folder
	''
	Dim fl  '' used for looping through FLDS.Folders
	Dim fld '' used for looping through FLDS.Folders

	For fl = 1 to FLDS.Folders.count
		Set fld = FLDS.Folders(fl)
		If ltype = 1 then
			Call ProcessFolderSub(fld, RootFolder)
		Else
			Call ProcessItems(fld)
		End If

		msg "Enumfolders: " & fld.Name

		If fld.Folders.Count <> 0 then
			Call Enumfolders(fld, fld.EntryID, ltype)
		End if
		Set fld = Nothing
	Next
End function

Function CreateNewPst
	''
	'' conceivably, we should check ERR.number for almost every statement in this routine
	'' realistically, that would make the code almost unreadable and incomprehensible
	''
	Dim pstfld '' used for looping through PstRoot.Folders
	Dim fiLoop '' used for looping through IPMRoot.Folders
	Dim fld    '' used for looping through IPMRoot.Folders

	doDictionaryObject.RemoveAll
	fNumber = fNumber + 1
	fnFileName = pfFilePath & bfBaseFilename & "-" & fNumber & ".pst"

	msg "CreateNewPst: About to create new PST named " & fnFileName

	If fso.FileExists(fnFileName) Then
		msg "CreateNewPst: Error: PST already exists: " & fnFileName
		WScript.Quit 1
	End If

	If Not IsEmpty(PST) Then
		Set PST = Nothing
	End If
	Set PST = RDOSession.Stores.AddPSTStore(fnFileName, 1,  "Exported MailBox-" & now())

	If fnumber = 1 Then
		Dim pstroot

		Set pstroot = RDOSession.GetFolderFromID(PST.IPMRootFolder.EntryID, PST.EntryID)
		For Each pstfld In PstRoot.folders
			If pstfld.Name = "Deleted Items" Then
				doDictionaryObject.add dfDeletedItemsFolder.EntryID, pstfld.EntryID
				msg "CreateNewPst: Added Deleted Items Folder to dictionary"
				Exit For
			End If
		Next
		Set pstroot = Nothing
	End If

	If Not IsEmpty(IPMRoot) Then
		Set IPMRoot = Nothing
	End If
	Set IPMRoot = RDOSession.Stores.DefaultStore.IPMRootFolder

	msg "CreateNewPST: processing each new default folder in new PST"
	For fiLoop = 1 to IPMRoot.Folders.count
		Set fld = IPMRoot.Folders(fiLoop)
		If fld.Name <> "Deleted Items" Then
			PstRootFolder = ProcessFolderRoot(fld, PST.IPMRootFolder.EntryID)
		End If
		If fld.Folders.count > 0 Then
			Call Enumfolders(fld, fld.EntryID, 1)
		End If
		Set fld = Nothing
	Next

	If Not IsEmpty(pfPstFile) Then
		Set pfPstFile = Nothing
	End If
	Set pfPstFile = fso.GetFile(fnFileName)

	tsize = 10 '' back at the beginning now

	msg "CreateNewPst: Created new PST named: " & fnFileName
End Function

Function ProcessFolderRoot(Fld, parentfld)
	Dim newFolder '' next folder to be examined
	Dim CDOPstFld '' a particular folder parent in the PST based on the entryid of the PST

	msg "ProcessFolderRoot: " & fld.Name

	Set CDOPstfld = RDOSession.GetFolderFromID(parentfld, PST.EntryID)
	Set newFolder = CDOPstfld.Folders.ADD(Fld.Name)	
	'''ProcessFolderRoot = newFolder.EntryID
	newfolder.fields(&H3613001E) = Fld.fields(&H3613001E)

	doDictionaryObject.add Fld.EntryID, newfolder.EntryID

	Set newFolder = Nothing
	Set CDOPstfld = Nothing
End Function

Function ProcessFolderSub(Fld, parentfld)
	Dim newFolder '' next folder to be examined
	Dim CDOPstFld '' a particular folder parent in the PST based on the entryid of the PST

	msg "ProcessFolderSub: " & fld.Name

	Set CDOPstfld = RDOSession.GetFolderFromID(doDictionaryObject.item(parentfld), PST.EntryID)
	Set newFolder = CDOPstfld.Folders.ADD(Fld.Name)	
	'''ProcessFolderSub = newFolder.EntryID
	newfolder.fields(&H3613001E) = Fld.fields(&H3613001E)

	doDictionaryObject.add Fld.EntryID, newfolder.EntryID

	Set newFolder = Nothing
	Set CDOPstfld = Nothing
End Function

Sub ReportError(prefix, Fld, item, txt)
	msg prefix & " " & "Error Processing Item #" & item & " in " & Fld.Name & " " & txt
	msg prefix & " " & "EntryID of Item: " & Fld.items(item).EntryID
	msg prefix & " " & "Subject of Item: " & Fld.items(item).Subject
End Sub

Function CalcNewSize(pstFile, item)
	''
	'' calculate what the new physical size of the pstFile will be after adding the next item
	'' to it. do so safely, avoiding all possible faults, and return the value in megabytes,
	'' rounded up.
	''
	Dim pstSize, itemSize, totalSize

	On Error Resume Next
	pstSize = pstFile.Size
	If Err.Number Then
		pstSize = 1048576 '' assume 1 MB for the heck of it
	End If

	Err.Clear
	itemSize = item.Size
	If Err.Number Then
		itemSize = 1048576 '' assume 1 MB for the heck of it
	End If

	Err.Clear
	totalSize = Int ((pstsize + itemSize) / 1048576) + 1
	If Err.Number Then
		totalSize = 3
	End If
	On Error Goto 0

	CalcNewSize = totalSize
End Function	

Sub ProcessItems(Fld)
	Dim strType             '' the IPM type of the input folder
	Dim fiItemLoop          '' used to loop through the input folder
	Dim fiCDOcount          '' how many messages CDO told us to expect
	Dim pfPredictednewSize  '' predicted size of the output PST after the next message is written
	Dim dfDestinationFolder '' output folder in the current output PST
	Dim objMessages         '' collection of messages contained by the source folder
	Dim objMessage	        '' current message of interest from the source folder
	Dim srcFld              '' the source folder
	Dim strName             '' name of the source folder
	Dim i                   '' used as a dummy
	Dim iCount              '' how many messages have been stored in the output folder
	Dim totalMessagesRead
	Dim totalMessagesWritten

	iCount = 0
	totalMessagesRead = 0
	totalMessagesWritten = 0

	Const iCountmax = 16300 '' must be less than 16383, which is the number of messages that CAN be stored
                                '' per output folder in an ANSI PST

	strtype = Fld.fields(&H3613001E)

	'''' frankly, I don't understand the distinction below, it was in the
	'''' original code, but the two should be equivalent.
	If strType = "IPF.Contact" Then
		Set srcFld = Fld
	Else
		Set srcFld = RDOSession.GetFolderFromID(Fld.EntryID)
	End If
	strName = srcFld.Name

	For i = 1 to 3
		''' there are 3 collections in every folder that we might be interested in
		Select Case i
			Case 1
				Set objMessages = srcFld.Items
				msg "ProcessItems: " & strType & ": Processing Folder: " & strName & _
					" (contains " & objMessages.Count & " normal items)"
			Case 2
				Set objMessages = srcFld.HiddenItems
				msg "ProcessItems: " & strType & ": Processing Folder: " & strName & _
					" (contains " & objMessages.Count & " hidden/associated items)"
			Case 3
				Set objMessages = srcFld.DeletedItems
				msg "ProcessItems: " & strType & ": Processing Folder: " & strName & _
					" (contains " & objMessages.Count & " deleted items)"
		End Select

		fiCDOcount = objMessages.Count

		Set dfDestinationFolder = RDOSession.GetFolderFromID(doDictionaryObject.item(Fld.EntryID), PST.EntryID)

		For fiItemloop = 1 to fiCDOcount
			iCount            = iCount + 1
			totalMessagesRead = totalMessagesRead + 1

			If 0 = (fiItemLoop Mod 100) Then
				wscript.echo "... processing message " & fiItemLoop & " of " & fiCDOcount
			End If

			'' I SO wish VBScript had a Continue statement
			On Error Resume Next
			Err.Clear
			Set objMessage = objMessages(fiItemLoop)
			If Err.Number <> 0 Then
				msg "ProcessItems: corrupt message in folder, item number " & fiItemLoop & _
					" of " & fiCDOcount & ", 0x" & _
					Hex(Err.Number) & " (" & Err.Description & ")"
			Else
				On Error Goto 0

				pfPredictednewSize = CalcnewSize(pfPstFile, objMessage)
				If pfPredictednewSize >= tsize Then
					Wscript.echo "... additional 10 MB Exported, total size is now " & tsize & " MB" & _
						" (processing item #" & fiItemLoop & " of " & fiCDOcount & ")"
					tsize = tsize + 10
				End if

				If (pfPredictednewSize >= tnThreshold) or (iCount > iCountmax) Then
					msg "ProcessItems: " & strType & ": New PST about to be created - Destination - Number of Items : " & _
						dfDestinationFolder.Items.Count & _
						" (processing item #" & fiItemLoop & " of " & fiCDOcount & ")"

					Call CreateNewPst
					Set dfDestinationFolder = Nothing
					Set dfDestinationFolder = RDOSession.GetFolderFromID(doDictionaryObject.item(Fld.EntryID), PST.EntryID)

					iCount = 0
				End If

				On Error Resume Next
				Err.Clear
				objMessage.CopyTo(dfDestinationFolder)
				If Err.Number <> 0 Then
					Dim rdosrc

					Call ReportError ("ProcessItems: " & strType & ":", Fld, fiItemloop, "(copyto - likely fatal)")
					msg "ProcessItems: 0x" & Hex(Err.Number) & ": " & Err.Description
					Err.Clear

					''' Try to copy a slightly different way before giving up
					Set rdosrc = RDOSession.GetMessageFromID(objMessage.EntryId)
					rdosrc.CopyTo(dfDestinationFolder)
					If Err.Number <> 0 Then
						msg "ProcessItems: " & strType & ": (copyto): Also Failed RDO Copy"
						msg "ProcessItems: 0x" & Hex(Err.Number) & ": " & Err.Description
					Else
						msg "ProcessItems: " & strType & ": (copyto): Copied with RDO Okay"
						totalMessagesWritten = totalMessagesWritten + 1
					End If
					Set rdosrc = Nothing
				Else
					totalMessagesWritten = totalMessagesWritten + 1
				End If
			End If
			On Error Goto 0

			Set objMessage = Nothing
		Next
	Next

	msg "ProcessItems: " & strType & ": Source - Number of Items : " & totalMessagesRead & _
	    " Destination - Number of Items : " & totalMessagesWritten

	iMessageCount     = iMessageCount + totalMessagesRead

	Set dfDestinationFolder = Nothing
	Set objMessages         = Nothing
	Set srcFld              = Nothing
End Sub
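
A typical invocation, assuming a mailbox whose alias is jsmith (and after editing the servername, bfBaseFilename, and pfFilePath values near the top of the script), looks like this:

cscript //nologo ExMBspanPst.vbs jsmith

Progress is written both to the console and to a text file named from bfBaseFilename in the pfFilePath directory.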

Until next time…

If there are things you would like to see written about, please let me know!


Follow me on twitter: @EssentialExch

Exchange Server 2010 – Administrative Access to All Mailboxes

In Exchange 2010, the storage group has disappeared. Instead, the properties of a database and of a storage group have merged – the result being referred to as a database. Effectively, a database has been promoted to be as important as a storage group used to be.

You might have predicted this from the changes which happened in Exchange 2007, as a number of features required that a storage group contain only a single database.

Regardless, a mailbox database name is unique within an Exchange 2010 organization. That means you can no longer have a mailbox database named “Mailbox Database” or “Mailbox (servername)” on each and every server within your Exchange organization. Instead, each and every mailbox database name is unique. This is guaranteed by a many-digit number suffixed to the end of a mailbox database’s name.

This does simplify some aspects of administration – instead of having to specify server\storage-group\database in order to name a specific database, you can now specify simply the database name. However, the name of that database may be something like “Mailbox Database 1015374940” (which is the name of the mailbox database hosting my production domain). That is somewhat more challenging to remember. Just somewhat. HAH.
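
For example, retrieving a database from the EMS no longer requires the server name; this quick sketch uses the database name above (substitute your own):

Get-MailboxDatabase -Identity "Mailbox Database 1015374940" | Format-List Name, Server, EdbFilePath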

One of the changes involved in moving databases to be organizational objects instead of server objects makes it practical to (again – after skipping Exchange 2007) allow a single user or group administrative access to all Exchange 2010 mailboxes.

Of course, this can be done from the GUI – however, the GUI you must use is LDP.exe or ADSIEdit.msc – not the Exchange Management Console (EMC).

However, this is probably easier to do from the Exchange Management Shell (EMS), given that you know a couple of key facts: the distinguishedname of your Active Directory domain and any of three formats for a user/group you want to allow this access.

Note that allowing administrative access to all mailboxes can be tracked by logging – but only if that logging is enabled – and that logging is not enabled by default. Also note that there may be legal issues associated with allowing specific users or groups access to all mailboxes in your organization – I recommend that every organization have an information access and security policy that covers corporate access to and use of electronic mail. Finally, this information is provided for instructional purposes and I accept no liability for providing this information or for any use to which it may be put.

Now that I’ve covered my rear….

If, for example, your forest is named example.com, then the distinguished name of that forest is DC=example,DC=com. If your forest is named SBS.Example.Local, then the distinguished name of the forest is DC=SBS,DC=Example,DC=Local. Now, remember that. 🙂
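
As an aside, if you would rather not type the distinguished name by hand, the EMS (or any PowerShell session on a domain-joined machine) can retrieve it for you:

([ADSI]"LDAP://RootDSE").defaultNamingContext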

In terms of specifying a user or group name that you are going to provide access, you have three possible formats:

NetBIOS-domain-name\principal-name

Active-Directory-forest-name/container-or-organizational-unit/principal-name

CN=principal-name,OU=organizational-unit,DC=example,DC=local

For example, if your Active Directory forest name is example.local and the NetBIOS domain name is EXAMPLE, and the security principal is named TEST and that principal is located in the Users container, you would have these examples:

EXAMPLE\TEST

example.local/Users/TEST

CN=TEST,CN=Users,DC=example,DC=local

Finally, using the above example, you would have this PowerShell command:

Add-AdPermission -Identity "CN=Databases,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=First Organization,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=local" -User EXAMPLE\TEST -InheritedObjectType msExchPrivateMDB -ExtendedRights Receive-As -InheritanceType Descendents

Or, if we were to expand this out a little bit:

$principal = "EXAMPLE\Test"
$domain = "DC=example,DC=local"
$identity = "CN=Databases," +
	"CN=Exchange Administrative Group (FYDIBOHF23SPDLT)," +
	"CN=Administrative Groups,CN=First Organization," +
	"CN=Microsoft Exchange,CN=Services,CN=Configuration," +
	$domain
Add-AdPermission -Identity $identity -User $principal `
	-InheritedObjectType msExchPrivateMDB `
	-ExtendedRights Receive-As `
	-InheritanceType Descendents
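
To confirm afterwards that the permission is in place, a quick check (a sketch reusing the $identity and $principal variables from above) looks like this:

Get-ADPermission -Identity $identity -User $principal |
	Where-Object { $_.ExtendedRights -like "*Receive-As*" } |
	Format-Table User, ExtendedRights, IsInherited -AutoSize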

Until next time…

If there are things you would like to see written about, please let me know!


Follow me on twitter: @EssentialExch