Category Archives: Software
How to enable the Disk Cleanup tool on Windows Server 2008 R2
source: https://support.appliedi.net/kb/a110/how-to-enable-the-disk-cleanup-tool-on-windows-server-2008-r2.aspx
How to enable the Disk Cleanup tool:
1) Go to Programs & Features and, in the Features section, install “Desktop Experience”. The downside is that you will need to reboot the server after installing it, and it installs other components you do not need on a server.
2) [RECOMMENDED] – All you really need to do is copy some files that are already located on your server into specific system folders, as described at http://technet.microsoft.com/en-us/library/ff630161(WS.10).aspx
The location of the files you need to copy depends on your version of Windows:
Operating System | Architecture | File Location |
Windows Server 2008 R2 | 64-bit | C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe |
Windows Server 2008 R2 | 64-bit | C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui |
Windows Server 2008 | 64-bit | C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.0.6001.18000_en-us_b9f50b71510436f2\cleanmgr.exe.mui |
Windows Server 2008 | 64-bit | C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.0.6001.18000_none_c962d1e515e94269\cleanmgr.exe |
Windows Server 2008 | 32-bit | C:\Windows\winsxs\x86_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.0.6001.18000_en-us_5dd66fed98a6c5bc\cleanmgr.exe.mui |
Windows Server 2008 | 32-bit | C:\Windows\winsxs\x86_microsoft-windows-cleanmgr_31bf3856ad364e35_6.0.6001.18000_none_6d4436615d8bd133\cleanmgr.exe |
Windows Server 2012:
C:\Windows\WinSxS\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.2.9200.16384_none_c60dddc5e750072a\cleanmgr.exe
C:\Windows\WinSxS\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.2.9200.16384_en-us_b6a01752226afbb3\cleanmgr.exe.mui
Windows Server 2012 R2: you must install the Desktop Experience feature. Use the PowerShell command:
Install-WindowsFeature Desktop-Experience
Once you’ve located the files, copy them to the following locations (Server 2012 non-R2 and earlier):
- Copy Cleanmgr.exe to %systemroot%\System32.
- Copy Cleanmgr.exe.mui to %systemroot%\System32\en-US.
You can now launch the Disk Cleanup tool by running Cleanmgr.exe from the command prompt.
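On Windows Server 2008 R2 x64, the copy can also be scripted. Here is a minimal PowerShell sketch using the RTM (6.1.7600.16385) WinSxS folder names from the table above; those folder names may differ on a patched server, so adjust the paths to match what is actually on disk:

# Minimal sketch for Windows Server 2008 R2 x64 -- folder names taken from the table above
$exe = 'C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr_31bf3856ad364e35_6.1.7600.16385_none_c9392808773cd7da\cleanmgr.exe'
$mui = 'C:\Windows\winsxs\amd64_microsoft-windows-cleanmgr.resources_31bf3856ad364e35_6.1.7600.16385_en-us_b9cb6194b257cc63\cleanmgr.exe.mui'
Copy-Item $exe (Join-Path $env:SystemRoot 'System32')
Copy-Item $mui (Join-Path $env:SystemRoot 'System32\en-US')
# Verify that it launches:
& (Join-Path $env:SystemRoot 'System32\cleanmgr.exe')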
If an old version of the Disk Cleanup tool is used, Windows Update files will not be cleaned up. For this you need Microsoft hotfix KB2852386.
Create a Dynamics NAV NST Instance with Powershell
How to create an NST instance with a PowerShell script:
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy Unrestricted

Import-Module "C:\Program Files\Microsoft Dynamics NAV\100\Service\NavAdminTool.ps1" -DisableNameChecking

# Set variables for NST
$NAVServiceInstance = 'instance-name'
$DatabaseName = 'database-name'
$DatabaseServer = 'database-server'
$NAVServiceUser = 'service-account'
$NAVServiceUserPW = 'service-account-password'
$DefaultTimeZone = 'Server Time Zone'
$MaxUploadSize = 2047
$EnableTaskScheduler = 'False'
$UseNTLM = $TRUE
$SOAPMaxMsgSize = '5120'
$ChangeTimeout = $FALSE
$IdleClientTimeout = '01:30:00'
$IsNAS = $FALSE
$NASArgument = 'JOBQUEUE'
$NASCodeunit = '450'
$NASMethod = ''
$DefaultCompany = ''
$IsNOR = $FALSE
$LanguageID = '1044'
$Language = 'no-NO'

# NAV Service Account
$secpasswd = ConvertTo-SecureString $NAVServiceUserPW -AsPlainText -Force
$mycreds = New-Object System.Management.Automation.PSCredential ($NAVServiceUser, $secpasswd)

## Creating NST
New-NAVServerInstance $NAVServiceInstance -DatabaseName $DatabaseName `
    -DatabaseServer $DatabaseServer `
    -ManagementServicesPort 7045 `
    -ClientServicesPort 7046 `
    -ODataServicesPort 7048 `
    -SOAPServicesPort 7047 `
    -ServiceAccount user `
    -ServiceAccountCredential $mycreds `
    -Verbose

Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
    -KeyName ServicesDefaultTimeZone `
    -KeyValue $DefaultTimeZone `
    -WarningAction SilentlyContinue

Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
    -KeyName ClientServicesMaxUploadSize `
    -KeyValue $MaxUploadSize `
    -WarningAction SilentlyContinue

Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
    -KeyName EnableTaskScheduler `
    -KeyValue $EnableTaskScheduler `
    -WarningAction SilentlyContinue

Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
    -KeyName ServicesUseNTLMAuthentication `
    -KeyValue $UseNTLM `
    -WarningAction SilentlyContinue

Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
    -KeyName SOAPServicesMaxMsgSize `
    -KeyValue $SOAPMaxMsgSize `
    -WarningAction SilentlyContinue

## Creating NAS
IF ($IsNAS) {
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName ClientServicesEnabled `
        -KeyValue FALSE `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName SOAPServicesEnabled `
        -KeyValue FALSE `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName ODataServicesEnabled `
        -KeyValue FALSE `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName ManagementServicesEnabled `
        -KeyValue FALSE `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName NASServicesStartupArgument `
        -KeyValue $NASArgument `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName NASServicesStartupCodeunit `
        -KeyValue $NASCodeunit `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName NASServicesStartupMethod `
        -KeyValue $NASMethod `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName ServicesDefaultCompany `
        -KeyValue $DefaultCompany `
        -WarningAction SilentlyContinue
}

## Set Idle Client Timeout
IF ($ChangeTimeout) {
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName ClientServicesIdleClientTimeout `
        -KeyValue $IdleClientTimeout `
        -WarningAction SilentlyContinue
}

## Set Services Language
IF ($IsNOR) {
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName DefaultLanguageId `
        -KeyValue $LanguageID `
        -WarningAction SilentlyContinue
    Set-NAVServerConfiguration -ServerInstance $NAVServiceInstance `
        -KeyName ServicesLanguage `
        -KeyValue $Language `
        -WarningAction SilentlyContinue
}

# Add NAVService to port sharing and start the service.
#Import-Module $PSScriptRoot\NAVServerInstancePortSharing.ps1
#Enable-NAVServerInstancePortSharing $NAVServiceInstance
Update an HA NetScaler environment
source: http://support.citrix.com/article/CTX127455
To update an HA NetScaler pair, perform the following steps.
Upgrade the secondary NetScaler appliance
- Save the config: save config
- Switch to shell: shell
- Change to the installation directory: cd /var/nsinstall
- Create a temporary directory: mkdir x.xnsinstall
- Change to the created directory: cd x.xnsinstall
- Upload the files to the temporary directory (e.g. pscp build-11.0-66.11_nc.tgz nsroot@192.168.1.1:/var/nsinstall/11.0nsinstall/build-11.0-66.11_nc.tgz)
- Extract the uploaded files: tar -zxvf build-x.x-xx.x_nc.tgz (for the example above: tar -zxvf build-11.0-66.11_nc.tgz)
- Install the software: # ./installns
- Press y to restart the appliance
- Check the state of the appliance: > show ha node
This should state that it is the secondary node and that synchronization is disabled. To disable synchronization manually, run the command: > set node -hasync disable
- Check the configuration. The current version can be found with the command: > show version
- Fail over the appliance: > force failover
Upgrade the primary netscaler appliance
- Follow steps 1 to 9 from the previous section
- Check if the appliance is UP and it is the primary node: > show ha node
If the appliance isn’t the primary node, a failover can be initiated: > force failover
Enable Synchronization
- Log on to the secondary node and check that it is the secondary node: > show ha node
- Enable synchronization: > set node -hasync enable
- Check synchronization status: > show ns runningconfig
The update is now complete. Additional backups can be removed, as well as the downloaded files in the temporary directory created in step 4.
Implementing Content Freshness protection in DFSR
https://blogs.technet.microsoft.com/askds/2009/11/18/implementing-content-freshness-protection-in-dfsr/
Background
Content Freshness is an admin-defined setting that you can set on a per-computer basis when using DFSR on Win2008 or Win2008 R2 – it does not exist on Windows Server 2003 R2. The DFSR database has a record for each Replicated Folder (RF) called CONTENT_SET_RECORD. This record contains a timestamp called “LastConnected”. We store this record on a per-Replicated-Folder basis because it’s possible for a replicated folder to be current when it’s connected to other members in that replication group. At the same time, another replicated folder can be stale because it is not connected with other members in its replication group. Every day, DFSR updates this timestamp to record that an opportunity for replication occurred. When attempting replication for an RF between computers, the DFSR service checks if the last time replication was allowed is older than the freshness date. If the last-allowed-replicated date is newer, it replicates. If it’s not, we block replication.
By now, you’re asking yourself “why would I want to block replication?” Good question. DFSR has a JET database just like Active Directory, and it uses multi-master replication just like AD. This means it must implement tombstones in order for deletions to replicate. When a file is deleted in DFSR, the local database records the deletion as a tombstone in the database – a logical deletion. After 60 days DFSR garbage-collects the record from the database and it is truly gone – a physical deletion. Online defragmentation of the database can then reclaim that whitespace. The 60 days allow all the replication partners to learn about the deletion and act on it.
And herein lies the problem. If a DFSR server cannot replicate an RF for more than 60 days, but then replication is allowed later, it can replicate out old deletions for files that are actually live or replicate out stale data and overwrite existing files. If you’ve ever worked on an Active Directory “lingering object” issue, you have seen what can happen when a DC that was offline for months is brought back up. This is why Strict Replication Consistency was invented for AD – Content Freshness protection is the same thing.
Being “unable to replicate” can mean any one of these scenarios:
- Disabling the replication connections.
- Deleting the replication connections (either one-way or in both directions).
- Stopping the DFSR service.
- Closing the schedule (i.e. setting “no replication”)
- Keeping the server shut off.
This whole content freshness idea is novel enough that we went to the trouble of applying for a patent on it.
Implementing Content Freshness Protection
Content Freshness protection is not enabled by default. To turn it on you simply modify the DfsrMachineConfig setting for MaxOfflineTimeInDays on each DFSR server with:
wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=<some value>
The recommendation is to set the value to 60:
wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=60
Remember, this has to be done on all DFSR servers, as this change only affects the computer itself. This value is not stored in a central AD location, but instead in the DfsrMachineConfig.XML file that resides in the hidden operating system folder “%systemdrive%\system volume information\dfsr\config”.
You can also view your existing MaxOfflineTimeInDays with:
wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig get MaxOfflineTimeInDays
Remember, by default this protection is OFF and will be assumed to be zero if there are no entries in the DfsrMachineConfig.xml.
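Because the setting is per-computer, it is worth auditing across all your DFSR members. Here is a minimal PowerShell sketch that reads the same WMI class remotely; the server names are placeholders you would replace with your own:

$servers = 'DFSR-01','DFSR-02'   # hypothetical names -- replace with your DFSR servers
foreach ($s in $servers) {
    $cfg = Get-WmiObject -ComputerName $s -Namespace 'root\microsoftdfs' -Class DfsrMachineConfig
    # 0 means protection is off (the default when DfsrMachineConfig.xml has no entry)
    '{0}: MaxOfflineTimeInDays = {1}' -f $s, $cfg.MaxOfflineTimeInDays
}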
Note: Sharp-eyed admins may notice that we actually have an AD attribute stamped on every Replication Group called ms-DFSR-TombstoneExpiryInMin that appears to control tombstone lifetime. It even has the value – in minutes – for 60 days. Sorry to disappoint you, but this attribute is never read by DFSR and changing it has no effect – tombstone lifetime garbage collection is always hard-coded to 60 days in the service and cannot be changed.
Protection in Action
Let’s see how all this works. My repro environment:
- A pair of Windows Server 2008 R2 computers named 2008r2-fresh-01 and 2008r2-fresh-02
- Replicating in a Replication Group named “RG1”
- Using a Replicated Folder named “RF1”
- Keeping a few user files in sync.
- MaxOfflineTimeInDays set to 60 on 2008r2-fresh-02
Important note: I am going to simulate the offline time by rolling clocks forward. Never ever do this in production – this is for testing and demonstration purposes only. Also, I only set MaxOfflineTimeInDays on one server – you would do this on all servers.
So here’s my data.
Now I stop DFSR on 2008r2-fresh-02 and roll time forward to January 1st, 2010 on both servers – about 75 days from this writing. I then make a few changes on 2008r2-fresh-02.
And then I start the DFSR service back up on 2008r2-fresh-02.
- My changed files do not replicate out
- New files do not replicate in
I now have this event:
Log Name: DFS Replication
Source: DFSR
Date: 1/1/2010 3:37:14 PM
Event ID: 4012
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: 2008r2-fresh-02.blueyonderairlines.com
Description:
The DFS Replication service stopped replication on the replicated folder at local path c:\rf1. It has been disconnected from other partners for 76 days, which is longer than the MaxOfflineTimeInDays parameter. Because of this, DFS Replication considers this data to be stale, and will replace it with data from other members of the replication group during the next replication. DFS Replication will move the stale files to the local Conflict folder. No user action is required.
Additional Information:
Error: 9061 (The replicated folder has been offline for too long.)
Replicated Folder Name: rf1
Replicated Folder ID: 5856C18F-CA72-4D2D-9D89-4CC1D8042D86
Replication Group Name: rg1
Replication Group ID: BC5976EF-997E-4149-819D-57193F21EC76
Member ID: FAEC4B17-E81F-4036-AAD9-78AA46814606
Note: this event has incorrect wording. The first two sentences in the description are good, but the following sentences are wrong. DFSR does not self-correct this situation, it does not move files into the ConflictAndDeleted folder, and you, the user, have actions you need to take. More on this later.
The DFSR Debug logs will show (edited for brevity):
20100101 15:37:14.410 1008 CSMG 5504 [WARN] ContentSetManager::CheckContentSetState This replicated folder has not connected to other partners for a long time. lastOnlineTime: [*** Logger Runtime Error:-114757888 ***]
20100101 15:37:14.410 1008 CSMG 7492 [ERROR] ContentSetManager::Initialize Failed to initialize ContentSetManager csId:{5856C18F-CA72-4D2D-9D89-4CC1D8042D86} csName:rf1 Error:
+ [Error:9061(0x2365) ContentSetManager::CheckContentSetState contentsetmanager.cpp:5596 1008 C The replicated folder has been offline for too long.]
20100101 15:37:14.410 1008 CSMG 7972 ContentSetManager::Run csId:{5856C18F-CA72-4D2D-9D89-4CC1D8042D86} csName:rf1 state:InitialBuilding
20100101 15:37:14.504 1948 SRTR 957 [WARN] SERVER_EstablishSession Failed to establish a replicated folder session. connId:{5E05AE2A-6117-4206-B745-7785DB316F74} csId:{5856C18F-CA72-4D2D-9D89-4CC1D8042D86} Error:
+ [Error:9028(0x2344) UpstreamTransport::EstablishSession upstreamtransport.cpp:808 1948 C The content set was not found]
The state of the replicated folder will be “In Error” – i.e. set to 5:
wmic.exe /namespace:\\root\microsoftdfs path DfsrReplicatedFolderInfo get ReplicationGroupName,ReplicatedFolderName,State
ReplicatedFolderName ReplicationGroupName State
rf1 rg1 5
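The same information is available from PowerShell through the identical WMI class, which is convenient for checking several replicated folders at once. A minimal sketch (state 5 means “In Error”, state 4 means “Normal”):

Get-WmiObject -Namespace 'root\microsoftdfs' -Class DfsrReplicatedFolderInfo |
    Select-Object ReplicationGroupName, ReplicatedFolderName, State
# State 5 = In Error, 4 = Normal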
The above is Content Freshness protection in action. It is protecting your DFSR environment from sending divergent data out to the rest of your working servers.
Recovering DFSR from Content Protection
Important note: Before repairing the blocked replication, get a backup of the data on the affected server and its partners. Failure to do so will tempt Murphy’s Law to disastrous new heights. Understand that by following these steps below, any DFSR data that was on this server and never replicated will be moved to PreExisting and/or ConflictAndDeleted – this server goes through non-authoritative sync again and loses all conflicts with other DFSR servers. You have been warned!!!
Also, whatever is being done to stop replication from working needs to be ironed out – whether it is leaving the service off for months on end or not having any connections. Otherwise this is just going to happen again.
To get things back in order, do the following:
1. Start DFSMGMT.MSC on the affected server.
2. On any affected replication groups this server is a member of, select the computer on the Membership tab and “Disable” it.
3. Accept the warning prompt.
4. If the reason for replication never occurring was the schedule being set to “no replication” on the RG or RF, or no bi-directional connections being in place between servers, fix that situation now.
5. Force AD Replication and verify it has converged.
6. On the affected server, run:
DFSRDIAG.EXE POLLAD
7. Wait for the 4008 and 4114 events to be written to the DFSR event log, confirming that the replicated folder(s) are no longer being replicated.
8. In DFSMGMT.MSC, “Enable” the replication again on the affected replicated folders for that server.
9. Force AD replication and POLLAD again.
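For the command-line parts of steps 5, 6, and 9, a minimal sketch from an elevated prompt on the affected server, assuming repadmin (from the AD DS management tools) is available; run it once after disabling the membership and again after re-enabling it:

# Force AD replication to all DCs and let it converge (steps 5 and 9)
repadmin /syncall /AdeP
# Make the local DFSR service poll AD immediately (steps 6 and 9)
dfsrdiag pollad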
The server goes through non-authoritative initial sync, as if it was setup the first time. All matching data is unchanged and does not replicate. Any files on the server that do not exist on its authoritative partner are moved to the PreExisting folder. Any files on the server that have been changed locally are moved to the ConflictAndDeleted folder and the authoritative server’s copy is replicated inbound.
The Sum Up
Content Freshness protection is a good thing and putting it in place may someday save you some real pain. Trust me – we work cases here where Content Freshness being enabled would have stopped huge problems. All it takes is Windows Server 2008 or later, and a few moments of your time.
– Ned “Kool and the Gang” Pyle
Consolidated list of VirusScan Enterprise exclusion articles
The following list contains the most frequently used articles on configuring File and Folder exclusions for VSE 8.x. The list does not cover specific issues you might experience when setting exclusions; for those, search the KnowledgeBase using the error message received or a description of the issue.
https://kc.mcafee.com/corporate/index?page=content&id=KB66909
ACO5426E The SQL log on does not have the Sysadmin role
Problem (Abstract)
A Data Protection for Microsoft SQL backup completes successfully when run manually, but when the backup is run with the client scheduler it fails with the ACO5426E error.
Symptom
The following error is logged in the Data Protection for Microsoft SQL log file:
ACO5426E The SQL log on does not have the Sysadmin role: CSqlApi::LogonServer:6638:1904 Server:IsSysadmin:false
The QUERY EVENT Tivoli Storage Manager administrative command shows the schedule as Failed with return code 1904. For example: Q EV * * F=D
Policy Domain Name: SQLDOMAIN
Schedule Name: SQL_FULL
Node Name: MY_SQL
Scheduled Start: MM/DD/YYYY HH:MM:SS
Actual Start: MM/DD/YYYY HH:MM:SS
Completed: MM/DD/YYYY HH:MM:SS
Status: Failed
Result: 1,904
Reason: Failed
Cause
The Windows account running the scheduler service does not have the sysadmin role on the SQL Server.
Diagnosing the problem
Review the “C:\Program files\Tivoli\TSM\tdpsql\tdpsql.cfg” file and verify what value is specified for the SQLAUTHentication option. In this case, the option was not configured and the following default value was used:
SQLAUTHentication INTegrated
Review the Tivoli Storage Manager client scheduler in Windows and verify which account is used to run the service. In this case, the scheduler service was configured to run with the “Local System Account”.
Resolving the problem
When using the INTegrated option, the user ID performing the backup (or the account running the Data Protection for Microsoft SQL scheduler service) must have the sysadmin role on the SQL Server. When the scheduler service runs with the “Local System Account”, you need to add “NT AUTHORITY\SYSTEM” to the sysadmin role on the SQL Server; otherwise, run the scheduler service with a different Windows account that has the sysadmin role on the SQL Server.
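As a sketch, granting the sysadmin role to the Local System account can be done with sqlcmd from PowerShell; the server name below is a placeholder, and on SQL Server versions older than 2012 you would use sp_addsrvrolemember instead of ALTER SERVER ROLE:

# Hypothetical instance name -- replace 'database-server' with your SQL Server
sqlcmd -S 'database-server' -Q "ALTER SERVER ROLE sysadmin ADD MEMBER [NT AUTHORITY\SYSTEM];"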
source: http://www-01.ibm.com/support/docview.wss?uid=swg21691523
Downloading patches with VMware vCenter Update Manager fails to one of the selected sources (2009000)
source: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009000
Downloading patches from one of the software sites configured in VMware vCenter Update Manager fails.
The VMware vCenter Update Manager server logs contain entries similar to:
[2011-11-01 15:24:57:425 'httpDownload' 4440 ERROR] [httpDownload, 732] Error 12175 from WinHttpSendRequest for url https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Error 12175 indicates a failed certificate validation. This issue can occur if one of the CA certificates used to sign the patch site's certificate is not trusted by the computer running the VMware vCenter Update Manager server software.
Solution 1
- Click Start > Run, type regedit, and click OK. The Registry Editor window opens.
- Navigate to the HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware Update Manager key.
- Right-click the SslVerifyDownloadCertificate value and click Modify.
- Change the Value data field value to 0.
- Click OK.
- Click Start > Run, type services.msc, and click OK.
- Right-click VMware vSphere Update Manager Service and click Restart.
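Solution 1 can also be scripted. A minimal PowerShell sketch; note the service display name may differ slightly between Update Manager versions:

# Turn off certificate verification for patch downloads
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\VMware, Inc.\VMware Update Manager' `
    -Name SslVerifyDownloadCertificate -Value 0
# Restart the service so the change takes effect
Restart-Service -DisplayName 'VMware vSphere Update Manager Service'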
Solution 2
Alternatively, leave certificate verification enabled and import the CA certificate that was used to sign the patch site's certificate into the trusted certificate store of the computer running the VMware vCenter Update Manager server, so that certificate validation succeeds.
OneNote 2010 does not logoff Onedrive account
Convert Disk to VHD
http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx
download: http://download.sysinternals.com/files/Disk2vhd.zip
Introduction
Disk2vhd is a utility that creates VHD (Virtual Hard Disk – Microsoft’s Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). The difference between Disk2vhd and other physical-to-virtual tools is that you can run Disk2vhd on a system that’s online. Disk2vhd uses Windows’ Volume Snapshot capability, introduced in Windows XP, to create consistent point-in-time snapshots of the volumes you want to include in a conversion. You can even have Disk2vhd create the VHDs on local volumes, even ones being converted (though performance is better when the VHD is on a disk different than ones being converted).
It will create one VHD for each disk on which selected volumes reside. It preserves the partitioning information of the disk, but only copies the data contents for volumes on the disk that are selected. This enables you to capture just system volumes and exclude data volumes, for example.
Note: Virtual PC supports a maximum virtual disk size of 127GB. If you create a VHD from a larger disk it will not be accessible from a Virtual PC VM.
To use VHDs produced by Disk2vhd, create a VM with the desired characteristics and add the VHDs to the VM’s configuration as IDE disks. On first boot, a VM booting a captured copy of Windows will detect the VM’s hardware and automatically install drivers, if present in the image. If the required drivers are not present, install them via the Virtual PC or Hyper-V integration components. You can also attach to VHDs using the Windows 7 or Windows Server 2008 R2 Disk Management or Diskpart utilities.
Command Line Usage
Disk2vhd includes command-line options that enable you to script the creation of VHDs. Specify the volumes you want included in a snapshot by drive letter (e.g. c:) or use “*” to include all volumes.
Usage: disk2vhd <[drive: [drive:]…]|[*]> <vhdfile>
Example: disk2vhd * c:\vhd\snapshot.vhd
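As mentioned above, the resulting VHD can also be attached directly with Diskpart on Windows 7 or Windows Server 2008 R2, without creating a VM. A minimal sketch driving diskpart from PowerShell, reusing the snapshot path from the example:

# Attach the captured VHD without a VM (requires an elevated prompt)
$script = @"
select vdisk file="c:\vhd\snapshot.vhd"
attach vdisk
"@
Set-Content -Path "$env:TEMP\attach-vhd.txt" -Value $script
diskpart /s "$env:TEMP\attach-vhd.txt"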