
Microsoft Failover Cluster Manager (MSFCM) on Windows 2008 R2


Microsoft Windows 2008 R2 offers clustering capabilities through its Microsoft Failover Cluster Manager (MSFCM).

This section documents the setup for Microsoft Failover Cluster Manager. Sites using Veritas Cluster Server (VCS) on Windows or Novell Cluster Services (NCS) on Novell OES Linux should jump to their respective sections.

This section assumes that you have an already installed and working clustered server environment.

Mode 1 is the simplest configuration and is suitable for most organizations. It implements clustering in the front line, that is, the printer and print monitoring layer. The cluster print server is configured as a secondary print server reporting back to a primary PaperCut NG/MF server hosted on another system outside the cluster.

Mode 2 implements clustering at all levels of the application for maximum fault tolerance. In addition to the print queues, the PaperCut NG/MF Application Server is also hosted in the cluster. Mode 2 is more demanding to configure and should only be attempted by organizations with staff experienced in advanced cluster and database management.

Refer to the subsequent sections for an explanation on how to set up Mode 1 or Mode 2 in your environment.

Mode 1 - Clustering at the print layer

Step 1 - Application Server (primary server) setup

Install the Application Server component (Standard Install option) on your nominated system. This system is responsible for providing PaperCut NG/MF's web-based interface and storing data. In most cases this system will not host any printers and is dedicated to the role of hosting the PaperCut Application Server. It might be one of the nodes in the cluster; however, a separate system outside the cluster is generally recommended. An existing domain controller, member server, or file server will suffice.

Step 2 - Installing the Print Provider components on each node

The Print Provider component needs to be installed separately on each node involved in the print spooler cluster. This is done by selecting the Secondary Print Server option in the installer. Follow the secondary server setup notes as detailed in Configuring secondary print servers and locally attached printers. Take care to define the correct name or IP address of the nominated Application Server set up in step 1.

Step 3 - Decouple service management from nodes

By default the Print Provider component is installed under the management of the node. To hand over management to the cluster, the service start-up type needs to be set to manual. On each node navigate to Control Panel > Administrative Tools > Services, locate the PaperCut Print Provider service. Stop the service and set the start-up type to Manual. Repeat for each node in the cluster.
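The same change can be scripted from an elevated command prompt on each node. This is a sketch assuming the service's internal name is PCPrintProvider (verify the actual service name in the Services console before running it); note that sc.exe requires the space after `start=`:

```
rem Stop the PaperCut Print Provider and set its start-up type to Manual
sc stop PCPrintProvider
sc config PCPrintProvider start= demand
```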

Step 4 - Adding the Print Provider service as a resource under the print spooler’s cluster group
  1. Open the Failover Cluster Manager.

  2. Right-click the cluster group hosting the spooler service; then select Add a resource > 4 - Generic Service.

    Adding a new Generic Service Resource

  3. In the new resource wizard, select the PaperCut Print Provider service; then click Next.

  4. Click Next at Confirmation.

  5. Click Finish at Summary.

  6. Right-click the PaperCut Print Provider; then click Properties. On the PaperCut Print Provider Properties page ensure the Use Network Name for computer name option is selected.

    Cluster service parameters configuration

  7. Next, click the Dependencies tab and ensure the Print Spooler is added as a resource.

  8. Click OK.

  9. Right-click the PaperCut Print Provider; then click Bring this resource online.
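Steps 2-9 can also be scripted. The following sketch uses the Windows 2008 R2 FailoverClusters PowerShell module and cluster.exe; the group name PrintGroup and the internal service name PCPrintProvider are assumptions, so substitute the names used in your environment:

```powershell
Import-Module FailoverClusters

# Create a Generic Service resource in the group hosting the print spooler
Add-ClusterResource -Name "PaperCut Print Provider" -Group "PrintGroup" `
    -ResourceType "Generic Service"

# Point the resource at the PCPrintProvider service and use the network name
cluster res "PaperCut Print Provider" /priv ServiceName=PCPrintProvider
cluster res "PaperCut Print Provider" /priv UseNetworkName=1

# Make the resource depend on the Print Spooler, then bring it online
Add-ClusterResourceDependency -Resource "PaperCut Print Provider" `
    -Provider "Print Spooler"
Start-ClusterResource -Name "PaperCut Print Provider"
```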

Step 5 - Shared active job spool

To ensure the state of currently active jobs (e.g. jobs held in a hold/release queue) is not lost during a failover event, PaperCut NG/MF is able to save job state in a shared drive/directory. If a shared disk resource is available and can be added to the cluster resource, PaperCut can use this to host a shared spool directory to ensure no active job state is lost.

  1. Add a shared drive to the cluster resource (e.g. the Q: drive). It is advisable to use the same drive as used for the shared print spool directory.

  2. Create a directory in this drive called PaperCut\Spool.

  3. On each node, edit the file: [app-path]/providers/print/win/print-provider.conf and add a line pointing to the shared active job spool directory:

    SpoolDir=Q:\\PaperCut\\Spool
    

    Change the drive letter as appropriate.

  4. Restart the cluster resource to ensure the change is picked up.
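From PowerShell, the resource can be cycled as follows (the resource name is assumed to match the one created in Step 4):

```powershell
Stop-ClusterResource -Name "PaperCut Print Provider"
Start-ClusterResource -Name "PaperCut Print Provider"
```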

Step 6 - Test

Perform operations to verify that:

  1. Print jobs are logged as expected.

  2. There are no error messages in the Print Provider's text log located at C:\Program Files\PaperCut NG/MF\providers\print\win\print-provider.log on each node.

Active/Active clustering - multiple virtual servers

On large networks it is common to distribute load by hosting print spooler services under two or more virtual servers. For example, two virtual servers might each host half of the organization's printers and share the load. This is sometimes referred to as Active/Active clustering, albeit not an entirely correct term, as each print spooler instance still runs Active/Passive.

Virtual servers cannot share the same service on any given node. For this reason if the virtual servers share nodes, you need to manually install the PaperCut Print Provider service a second time under a different name. You can do this with the following command line:

cd \Program Files\PaperCut NG/MF\providers\print\win
pc-print.exe PCPrintProvider2 /install

The argument preceding /install is the unique name to assign to the service. The recommended procedure is to suffix the standard service name with a sequential number. Repeat this on each physical node. Use a unique service name for each “active” virtual server hosted in the cluster group.

Make sure that you have unique SpoolDir settings for each node of your cluster. Ensure that the SpoolDir setting in the print-provider.conf file includes a %service-name% expansion variable as explained above. This ensures that each service has its own spool directory.
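For example, each node's print-provider.conf might contain a line along these lines (the drive letter is illustrative; %service-name% expands to the unique service name assigned above, giving each "active" service its own spool directory):

```
SpoolDir=Q:\\PaperCut\\Spool\\%service-name%
```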

Mode 2 - Clustering at all application layers

Mode 2 implements failover clustering at all of PaperCut NG/MF’s Service Oriented Architecture software layers, including:

  • Clustering at the Print monitoring layer

  • Clustering at the Application Server layer

  • Optional clustering at the database layer

Mode 2 builds upon Mode 1 by introducing failover (Active/Passive) clustering in the Application Server layer. This involves having an instance of the Application Server on each of the cluster nodes. When one node fails, the other automatically takes over the operation. Both instances use a shared data source in the form of an external database (see Deployment on an external database (RDBMS)). Large sites should consider using a clustered database such as Microsoft SQL Server.

This section assumes that you have an already installed and working Service and Application group hosting a clustered printing environment.

Step 1: Application Server installation

On one of the cluster’s nodes, install the PaperCut Application Server component by selecting the Standard Install option in the installer. Follow the setup wizard and complete the process of importing all users into the system.

Step 2: Convert the system over to an external database

The system needs to be configured to use an external database as this database is shared between both instances of the Application Server. Convert the system over to the required external database by following the procedure detailed in Deployment on an external database (RDBMS) . The database can be hosted on another system, or inside a cluster. As per the external database set up notes, reference the database server by IP address by entering the appropriate connection string in the server.properties file.
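As a sketch, the relevant server.properties entries for a Microsoft SQL Server database might look like the following. The server address, port, database name, and credentials are placeholders; check the external database setup notes for the exact driver and connection URL format for your database:

```
database.type=SQLServer
database.url=jdbc:jtds:sqlserver://192.168.1.50:1433/papercut
database.username=papercut
database.password=YourPasswordHere
```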

Step 3: Setup of 2nd node

Repeat steps 1 and 2 on the second and any subsequent cluster nodes.

Step 4: Distributing application license

Install your license on your 2nd node and any subsequent cluster nodes. To do this:

  1. Log in to the Admin web interface by browsing to http://[IP-Address-Of-Your-Node]:9191/admin.

  2. Click the About tab.

  3. In the Register area, click Browse.

  4. Locate the license file.

  5. Click Install license.

  6. Verify that the license information is correctly listed on the About page.

Step 5: Decouple service management from the nodes

By default the PaperCut Application Server component is installed under the management of the node. It needs to be managed inside the cluster, so set the service's start-up type to manual. On each node navigate to Control Panel > Administrative Tools > Services and locate the PaperCut Application Server service. Stop the service and set its start-up type to Manual. Repeat this on all nodes.

Step 6: Create a new Services and Applications group

Create a new Services and Applications group containing the two nodes. Make note of the IP Address that you assign as you will use it later. Add the Generic Service PaperCut Application Server. Give the Client Access Point an appropriate title such as PaperCutAppCluster.

This Services and Applications group is separate from the existing clustered printing environment. It is recommended to set up two Services and Applications groups, where you can later set the node affinity to better distribute the application load across nodes.
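The group can also be created from PowerShell. A minimal sketch, assuming the Application Server's internal service name is PCAppServer and using a placeholder access point name and IP address:

```powershell
Import-Module FailoverClusters

# Create a new group with a client access point and a Generic Service
# resource for the PaperCut Application Server
Add-ClusterGenericServiceRole -ServiceName "PCAppServer" `
    -Name "PaperCutAppCluster" -StaticAddress 192.168.1.100
```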

Step 7: Configure the PaperCut Application Server

  1. Right-click the PaperCut Application Server; then click Properties.

  2. On the PaperCut Application Server Properties page, select the Use Network Name for computer name.

     Application Server Properties configuration

  3. On the Dependencies tab make sure the Service and Application group is added as a Resource.

     dependencies configuration

  4. Click OK.

Step 8: Confirm PaperCut Application Server active

Right-click the Services and Applications group; then select Bring online. Wait until the Application Server has started, then verify that you can access the system by pointing a web browser to:

http://[Virtual Server Name]:9191/admin

Log in, and perform some tasks such as basic user management and User/Group Synchronization to verify the system works as expected.

Step 9: Set up the Print Provider layer

Set up the Print Provider as described in Mode 1 - Clustering at the print layer, with one exception: the IP address of the Application Server is the IP address assigned to the Virtual Server.

Step 10: Client configuration

The client and Release Station programs are located in the directories:

  • [app-path]/client/

  • [app-path]/release/

These directories contain configuration files that tell the client where to find the server. Update the IP address and the server name in the following files to the Virtual Server's details (name and IP address):

  • [app-path]/client/win/config.properties

  • [app-path]/client/linux/config.properties

  • [app-path]/client/mac/PCClient.app/Contents/Resources/config.properties

  • [app-path]/release/connection.properties

Edit the files using Notepad or equivalent and repeat this for each node. Also see Client/workstation configuration .
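For example, each config.properties might be updated along these lines (the name and address are placeholders for your Virtual Server's details; confirm the exact key names against the comments in your shipped configuration files):

```
server-name=PaperCutAppCluster
server-ip=192.168.1.100
```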

Step 11: Test

Mode 2 setup is about as complex as it gets! Take some time to verify all is working and that PaperCut NG/MF is tracking printing on all printers and all virtual servers.

Advanced: Load distribution and independent groups

Separating these resources into two groups running on different IP addresses allows you to set up different node affinities so that the two groups usually run on separate physical nodes during normal operation. This ensures the load is spread across multiple nodes.

To make this change after setting up the single group Mode 2 configuration:

  1. Set the Preferred owners of each Services and Applications group to different physical nodes.

  2. Restart or bring each group online, and independently test operation both during normal running and after failover.
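Step 1 can be sketched in PowerShell as follows (group and node names are placeholders for your own):

```powershell
Import-Module FailoverClusters

# Prefer a different physical node for each group
Set-ClusterOwnerNode -Group "PaperCutAppCluster" -Owners NODE1
Set-ClusterOwnerNode -Group "PrintGroup" -Owners NODE2
```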

Clustering tips

  1. Take some time to simulate node failure. Monitoring can stop for a few seconds while the passive server takes over the role. Simulating node failure is the best way to ensure both sides of the Active/Passive setup are configured correctly.

  2. It is important that the version of PaperCut running on each node is identical. Ensure that any version updates are applied to all nodes so versions are kept in sync.

  3. The PaperCut installation sets up a read-only share exposing client software to network users. If your organization is using the zero-install deployment method, the files in this share are accessed each time a user logs onto the network. Your network might benefit from exposing the contents of this share via a clustered file share resource.

  4. PaperCut regularly saves transient state information (such as print job account selections) to disk so that this state can be recovered on server restart. If failing over to a new cluster server, you should ensure this state information is saved to a location available to the new server.

    By default the state information is located in [app-path]/server/data/internal/state/systemstate. You can change this location if required by setting the property server.internal-state-path in your server.properties file.
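For example, to keep the state on the shared Q: drive used earlier, server.properties might contain the following (the path is illustrative; backslashes are doubled because .properties files treat a single backslash as an escape character):

```
server.internal-state-path=Q:\\PaperCut\\state
```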

Additional configuration to support Web Print

By default the Application Server looks in [app-path]\server\data\web-print-hot-folder for Web Print files. This location is generally available only on one node in the cluster. To support Web Print in a cluster, add a Shared Folder on the Shared Storage in your cluster. This can be the same disk where the spool files reside and to which the Print Provider points.

To change this location, use the Config editor and modify the web-print.hot-folder key.

  1. Add a Shared Folder on the Shared Storage, an example would be E:\web-print-hot-folder and share it as \\clustername\web-print-hot-folder\.

  2. In the Admin web interface, click the Options tab.

  3. In the Actions menu, select Config editor.

  4. Modify web-print.hot-folder to E:\web-print-hot-folder

  5. Map your selected network drive on the Web Print Sandbox machine to \\clustername\web-print-hot-folder\

  6. Add all relevant printer queues from \\clustername\web-print-hot-folder\ to the Web Print Sandbox server.

Additional configuration to support Print Archiving

If you have enabled Print Archiving (viewing and content capture) , the Applications Server stores archived print jobs in [app-path]\server\data\archive. This location is generally only available on one node in the cluster. To support Print Archiving in a cluster, add a Shared Folder on the Shared Storage in your cluster. This location must be accessible to all cluster nodes and also any print servers that are collecting print archives.

For instructions on moving the archive location, see Phase 1: Moving the central archive. This describes how to configure both the Application Server and your print servers to use the same shared storage location.
