Commvault

Performance Tuning Settings

Credits:

Many thanks to Mike Byrne for his hard work on the screen captures!

File System Backup

Consider the following key points when backing up the File System Agent:

  • For backups on Windows operating systems, ensure source disks are defragmented.
  • Ensure all global and local filters are properly configured. Consult Commvault online documentation for recommended filters.
  • If source data is on multiple physical drives, increase the number of data readers to multi-stream the protection job.
  • For large, high-speed disks, a maximum of two data readers per disk is recommended. Enable the 'Allow Multiple Data Readers within a Drive or Mount Point' option to allow multiple streams on a single disk.
  • If source data is on a RAID volume, create subclient(s) for the volume and increase the number of data readers to improve performance. Enable the 'Allow Multiple Data Readers within a Drive or Mount Point' option.
  • Consider using synthetic full backups, or better yet DASH Full backups, instead of traditional full backups.
  • Consider using the Commvault OnePass® agent to archive older 'stale' data.
  • For large volumes containing millions of objects, use the File System Block-Level Backup.
  • Consider using multiple subclients and stagger backup operations over a weekly or even monthly time period.
  • For supported hardware, consider using the Commvault IntelliSnap® feature to snap and backup volumes using a proxy server.
  • Increase the 'Application Read Size' from the default of 64KB to 512KB.

Data Readers

Disk I/O is the most costly, time-consuming portion of a data movement job. Using multiple data readers (also called data streams) can improve performance.

Conditions that can degrade performance for the File System Agent:

  • In some configurations, such as concurrent backups that use embedded agents on multiple virtual machines (VMs) in a hypervisor environment, using multiple data readers for each backup might overwhelm the disk I/O and degrade performance. In this situation, using only one data reader for each VM might achieve the best performance.
  • Internal algorithms determine the maximum number of data readers that can read concurrently from a single physical drive. Too many data readers on a single physical drive can degrade performance.
  • Subclient content is divided between data readers based on physical drives. Thus, the first data reader reads from the first physical drive, the second data reader reads from the second physical drive, and so on. By default, only one data reader is allowed per physical drive, regardless of how many data readers are configured. Often, one data reader completes before the others, which reduces the performance gain of using multiple data readers.

Allow Multiple Readers within a Drive or Mount Point

For the File System Agent, the Number of Data Readers value determines the number of parallel read operations from the data source.
The 'Allow multiple data readers within a drive or mount point' option helps you to use data readers more efficiently.

For example, if you have subclient content that spans 4 physical drives and you configure 8 data readers, each physical drive gets 2 data readers. When a data reader completes its drive, it assists another physical drive, and this continues until all data is read. This process maximizes the time that multiple data streams are moving data, which can improve performance.
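The following Python sketch is purely conceptual, not Commvault code, and the drive letters are hypothetical. It only illustrates how the one-reader-per-drive default and the 'Allow multiple data readers' option change how a pool of data readers is spread across physical drives; it does not model a finished reader moving on to assist another drive.

    from collections import defaultdict

    def assign_readers(drives, num_readers, allow_multiple_per_drive=True):
        """Distribute data readers across physical drives (conceptual model only)."""
        assignment = defaultdict(int)
        for i in range(num_readers):
            drive = drives[i % len(drives)]
            if not allow_multiple_per_drive and assignment[drive] >= 1:
                continue  # default behavior: at most one reader per physical drive
            assignment[drive] += 1
        return dict(assignment)

    drives = ["C:", "D:", "E:", "F:"]          # hypothetical drive letters
    print(assign_readers(drives, 8))           # {'C:': 2, 'D:': 2, 'E:': 2, 'F:': 2}
    print(assign_readers(drives, 8, False))    # {'C:': 1, 'D:': 1, 'E:': 1, 'F:': 1}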



To view subclient properties

1 - Expand the client | Right-click the subclient | Properties.

2 - Click Advanced to view the number of Data Readers.

3 - Define the number of readers.

4 - Select to add multiple readers per drive.


Application Read Size

The application read size is the size of the application data that is read from the clients during backup jobs.
Values for the application read size must be a power of 2; the minimum value is 64KB, and the maximum value is 4,096KB (4MB).

Recommended values for Application Read Size

  • NTFS volume: 512KB
  • ReFS volume: 2,048KB

When the size of the application data that is read during backup jobs matches the source application's internal buffer allocation, overhead is minimized and performance improves. To achieve the optimal rate of data transfer during backup jobs, configure the application read size based on the source application's internal buffer allocation. Increasing the application read size reduces the number of read operations issued against the application, which reduces the I/O load on it. As a result, overall backup performance might improve. However, memory usage during backups might also increase, which might inadvertently consume additional resources from the application.
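As a rough illustration of that trade-off (the 100 GB figure below is only an example data size, not a Commvault default), the number of read operations issued against the source falls in direct proportion to the application read size:

    KB_PER_GIB = 1024 * 1024

    def read_operations(data_gib, read_size_kb):
        """Approximate read operations needed to back up data_gib of source data."""
        return data_gib * KB_PER_GIB // read_size_kb

    for read_size_kb in (64, 512, 2048):
        print(f"{read_size_kb:>5} KB read size -> {read_operations(100, read_size_kb):,} reads")
    # 64 KB -> 1,638,400   512 KB -> 204,800   2,048 KB -> 51,200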

Commvault recommends that you set the application read size at either the default value or at the cluster size that is directed by the application.

Microsoft NTFS uses a default cluster size (allocation unit) of 4KB. The 4KB cluster size was established when 2GB disks were considered large. Today, Microsoft recommends using a cluster size of 16KB or higher for NTFS volumes on servers. Commvault recommends that you use 64KB clusters, which matches the Microsoft ReFS default cluster size. With source data on volumes that have a 64KB cluster size, Commvault recommends using an application read size of at least 2,048KB for NTFS and ReFS.

For information about cluster sizes, see the Microsoft support article "Default cluster size for NTFS, FAT, and exFAT".



To configure application read size

1 - Expand the client | Right-click the subclient | Properties.

2 - Click the Advanced button to view the Application Read Size.

3 - Set the application read size.




Virtual Server Agent Backup

General guidelines

  • To optimize virtual environment data protection and recovery performance, contact Commvault Professional Services for the latest guidance and assistance.
  • Use the Commvault Virtual Server Agent (VSA) to protect most VMs. Specific I/O intensive VMs may require more advanced protection methods.
  • Use backup set or subclient VM filters to filter VMs that don't require protection.
  • Use subclient VM rules to group priority VMs for protection. For example, use the power state rule to set infrequent schedules of VMs that are not powered on.
  • Maximize VM backup concurrency by increasing the 'Data Readers' option. Use caution: setting the option too high can degrade backup performance and place excessive load on the datastores or volumes hosting the VMs. As a general starting point, allow two VM backups per datastore or volume.
  • Prefer physical VSA MediaAgent proxies over virtual MediaAgent proxies.
  • Ensure there are enough proxies to handle data movement load.
  • Use Commvault® software client-side deduplication and DASH Full backups.
  • For larger VMs, consider using the Commvault OnePass® feature to archive older 'stale' data.
  • Consider using multiple subclients and staggering schedules for when incremental and full or synthetic (DASH) full backups run.
  • Ensure Change Block Tracking (CBT) is enabled for all virtual machines, when applicable.


VMware specific guidelines

  • Ensure VSA proxies can access storage using the preferred transport mode. SAN transport and HotAdd will fall back to NBD mode if they cannot access VMs from the SAN or DataStore.

When protecting applications in a virtual environment:

  • Using the VSA to protect applications without the Application Aware feature or agents installed within the VM may result in crash-consistent backups.
  • For low to medium I/O applications, use the Application Aware feature. Check the Commvault Online Documentation for a list of applications supported by the VSA Application Aware feature.
  • For I/O intensive applications, it is still preferred to use application agents installed in the VMs.

Commvault IntelliSnap® for VSA:

  • Use IntelliSnap for VSA to protect I/O intensive VMs.
  • Define subclients by datastore affinity. When a hardware snapshot is performed, the entire datastore is snapped regardless of which VMs in it are being backed up.
  • For smaller Exchange or MS-SQL databases (less than 500GB), application consistent snapshots can be performed using the IntelliSnap feature and VSA.




Database Agents

General Guidelines

  • For large databases that are being dumped by application administrators, consider using Commvault database agents to provide multi-streamed backups and restores.
  • When using Commvault database agents for instances with multiple databases, consider creating multiple subclients to manage databases.
  • For large databases, consider increasing the number of data streams used to back up the database. For multi-streamed subclient backups of SQL and Sybase databases, the streams should not be multiplexed. During auxiliary copy operations to tape, if the streams are combined onto a single tape, they must be pre-staged to a secondary disk target before they can be restored.
  • For MS-SQL databases using file/folder groups, separate subclients can be configured to manage databases and file/folder groups.

Database Agent Streams

Disk I/O is the most costly, time-consuming portion of a data movement operation. Using parallel data readers (also called data streams) can improve performance. For databases, the Number of Data Readers value determines the number of parallel read operations that are requested from the database application.

Before you modify the number of data readers, Commvault recommends recording baseline throughput performance using the default settings, which are the recommended settings. You can then modify the number of data readers until you achieve the fastest throughput performance.




To configure SQL subclient streams

1 - Expand the client | Right-click the subclient | Choose Properties.

2 - Define the number of writers for database backup jobs.

3 - Define the number of writers for transaction log backup jobs.



Microsoft Exchange Database Agent

Application Read Size

The performance of both regular backup operations and IntelliSnap backup operations of an Exchange Database can benefit greatly from an application read size of 4MB (4,096 KB). The default value is 64KB.

For most Database Availability Group (DAG) environments, backup operations are performed on the passive node, and memory usage for the application read size is not a concern. If production performance problems occur, then you can decrease the application read size.

Multi-streamed Exchange Database Backups

Multi-streamed backups of an Exchange database reduce backup time by allocating streams on a per-database level. The maximum number of streams used by a backup is determined by the number of databases in the Exchange environment. If a subclient's content contains four databases, then four streams could be used, with each stream protecting one database.

In a DAG environment, stream allocation is based on the number of nodes. When the job starts, the stream logic automatically assigns one stream to each node. Any additional streams are then allocated to the nodes with the most databases, in a prioritized round-robin fashion from the node with the most databases to the node with the fewest, until all streams are allocated.
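The sketch below illustrates that allocation order. It is a conceptual model, not Commvault code; the node names and database counts are made up, and it assumes a node never receives more streams than it has databases.

    def allocate_streams(db_count_per_node, total_streams):
        """One stream per DAG node first, then round-robin favoring the busiest nodes."""
        nodes = sorted(db_count_per_node, key=db_count_per_node.get, reverse=True)
        allocation = {node: 1 for node in nodes}
        remaining = total_streams - len(nodes)
        i = 0
        while remaining > 0 and any(allocation[n] < db_count_per_node[n] for n in nodes):
            node = nodes[i % len(nodes)]
            if allocation[node] < db_count_per_node[node]:  # assumed cap: streams <= databases
                allocation[node] += 1
                remaining -= 1
            i += 1
        return allocation

    # Hypothetical three-node DAG with 6, 4, and 2 databases, backed up with 8 streams.
    print(allocate_streams({"EX01": 6, "EX02": 4, "EX03": 2}, 8))
    # -> {'EX01': 3, 'EX02': 3, 'EX03': 2}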




To configure backup streams for an Exchange database

1 - Right-click the subclient | Properties.

2 - Configure the number of streams for a streaming backup job.

3 - Configure the number of streams when backing up from an IntelliSnap® snapshot copy.



Network Settings

Pipeline Buffers

By default, Commvault software establishes 30 Data Pipeline buffers for each data movement connection. You can often increase the data transfer throughput from the client by tuning the number of Data Pipeline buffers, usually by increasing it, although in some cases decreasing it helps. The optimal number of Data Pipeline buffers depends largely on the transport medium.

To set the number of pipeline buffers, use the 'nNumPipelineBuffers' additional setting.

Although the maximum value for 'nNumPipelineBuffers' is 1,024, if you use a value that is greater than 300, you should consult with Commvault Support. When you increase the number of Data Pipeline buffers, the client or MediaAgent consumes more shared memory. When available memory is low, this consumption of shared memory might degrade the server performance for other operations.
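The sizing sketch below only illustrates why raising 'nNumPipelineBuffers' raises shared-memory use; the per-buffer size is an assumed figure for illustration, not a documented Commvault value.

    ASSUMED_BUFFER_KB = 64  # assumption for illustration only

    def pipeline_shared_memory_mb(num_pipeline_buffers, buffer_kb=ASSUMED_BUFFER_KB):
        """Shared-memory use grows linearly with the number of Data Pipeline buffers."""
        return num_pipeline_buffers * buffer_kb / 1024

    for n in (30, 120, 300):
        print(f"nNumPipelineBuffers={n:<3} -> ~{pipeline_shared_memory_mb(n):.1f} MB per data pipe")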

Recommended values for nNumPipelineBuffers:

  • Internet - 30 buffers
  • 100Base-T network - 30 buffers
  • 1000Base-T (Gigabit) network - 120 buffers




To configure pipeline buffers

1 - Expand Client Computers | Right-click the client | Choose Properties.

2 - Click Advanced.

3 - Click Add to include additional settings.

4 - Look up the nNumPipelineBuffers additional setting.

5 - Define a value between 30 and 1,024.

6 - Click OK to add the setting.


Network Agents

Network agents are threads or processes that transfer data to and from the network transport layer. Each network agent spends half its time reading and half its time writing. For higher-speed networks, using multiple network agents can improve performance.

Default values and valid values for the number of network agents:

  • Windows default – 2. Valid options 1 – 4
  • Unix default – 1. Valid options 1 – 2




To configure network agents

1 - Expand the client | Right-click the subclient | Properties.

2 - Click the Advanced button to define the number of Network Agents.

3 - Set the number of Network Agents.



Disk Storage

Chunk Size

Chunk size defines the size of the data chunks that are written to media; each chunk also serves as a checkpoint in a job. The default size for disk is 4GB. The default size for tape is 8GB for indexed operations or 16GB for non-indexed database backups. The data path 'Chunk Size' setting can override the default settings. A higher chunk size results in a more efficient data movement process. In highly reliable networks, increasing the chunk size can improve performance. However, in unreliable networks, any failed chunk must be rewritten, so a larger chunk size can have a negative effect on performance.
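The sketch below illustrates that trade-off with made-up numbers (a 1 TB job; the figures are not Commvault defaults): larger chunks mean fewer checkpoints, but every failed chunk forces a larger rewrite.

    def chunk_tradeoff(job_gb, chunk_gb):
        """Return (checkpoints in the job, data rewritten if one chunk fails)."""
        return job_gb / chunk_gb, chunk_gb

    for chunk_gb in (0.5, 4, 8):
        checkpoints, rewrite_gb = chunk_tradeoff(1000, chunk_gb)
        print(f"{chunk_gb:>4} GB chunks: {checkpoints:>6.0f} checkpoints, "
              f"{rewrite_gb} GB rewritten per failed chunk")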


Chunk size recommendation for disk storage

Storage media                              Job type                    Default chunk size    Recommended chunk size
Disk                                       All data protection jobs    4 GB                  512 MB – 8 GB
Direct-attached NDMP                       All data protection jobs    8 GB                  N/A
Commvault HyperScale® scale out storage    All data protection jobs    8 GB                  N/A



To change the data path chunk size

1 - Expand the storage policy | Right-click the policy copy | Properties.

2 - Select the path to the drive and choose Properties.

3 - Define the data path chunk size.


Chunk size configuration for MediaAgents

Use the 'DMMBCHUNKSIZE' additional setting to control the chunk size of the data write jobs that go to the MediaAgent on which the additional setting is created.

The chunk size that you specify in the additional setting overrides the chunk size that is configured for the CommCell® in the Media Management configuration.




To configure the MediaAgent Chunk Size

1 - Expand MediaAgents | Right-click the desired MediaAgent | Properties.

2 - Click to add a new setting.

3 - Look up the DMMBCHUNKSIZE key.

4 - Set the desired chunk size.


Block Size

MediaAgents can write to media that is formatted with different block allocation sizes or file allocation sizes if the MediaAgent operating system supports those sizes. Using a larger block size for disk library volumes can reduce overhead and thus increase the speed of write operations to media.

Linux ext3 and Microsoft NTFS use a default block size (allocation unit) of 4KB. The 4KB block size was established when 2GB disks were considered large. Today, Microsoft recommends using a 16KB or larger block size for NTFS volumes. Commvault recommends that you use 64KB, which matches the Microsoft default value for the ReFS block size.

You can increase the Linux ext3 block size only on an Itanium system. For other file systems, consult your OS vendor documentation for your file system's available block sizes.




To change the data path block size

1 - Expand the storage policy | Right-click the policy copy | Properties.

2 - Select the path to the disk library and choose Properties.

3 - Define the data path block size.


Unbuffered I/O for Windows® MediaAgent

If the source copy is on disk and is managed by a Windows MediaAgent, then enable the Use Unbuffered I/O option for each mount path. Using unbuffered I/O can significantly improve performance.

To increase the speed of jobs that access the mount path, you can configure the MediaAgent to bypass the Microsoft Windows file system buffering.

You can make this configuration for Windows MediaAgents and for disks that are mounted directly (not for UNC paths).




To use unbuffered I/O on a mount path

1 - Right-click the mount path | Properties.

2 - Check to enable the use of unbuffered I/O.


Unbuffered I/O for Unix/Linux MediaAgent

A similar option is available for UNIX/Linux based MediaAgents; however, it must be enforced at the operating system level rather than through the Commvault® software GUI. It can be achieved using two methods:

  • Method one – For GFS file systems, use the gfs_tool utility. This tool sets a direct I/O flag on a directory and all of its current subdirectories and files. Once enabled, any new directories or files created will also inherit the direct I/O attribute. The flag can be turned on (using the setflag parameter) or off (clearflag) as desired:
    • gfs_tool setflag inherit_directio MyDirectory
  • Method two – Mount the NFS file system using the force direct I/O flag (forcedirectio). For as long as the file system is mounted, it bypasses the operating system buffer.

For more information on the GFS tool or the mount direct I/O option, refer to your operating system vendor's documentation.




Tape Storage

Chunk Size

A chunk is the unit of data that the MediaAgent software uses to store data on media. For sequential access media, a chunk is defined as data between two file markers. By default, the chunk size is configured for optimal throughput to the storage media.

Job type                      Default chunk size    Recommended chunk size
Granular (index based) job    8 GB                  8 – 32 GB
Database (non-indexed) job    16 GB                 8 – 32 GB


Chunk Size for tape libraries can be modified on the data path for a specific tape library, or globally, using the Media Management applet. Global chunk size settings are configured per agent type.




To change the data path chunk size

1 - Expand the storage policy | Right-click the policy copy | Properties.

2 - Select the path to the drives and choose Properties.

3 - Define the data path chunk size.



To change the tape media chunk size

1 - From the Storage menu | Choose Media Management.

2 - From the Chunk Size tab, change the chunk size for any agent type.


Block Size

Before changing tape block size, ensure that the following criteria are satisfied:

  • Block size is supported by the MediaAgent OS, Host Bus Adapter (HBA), and the tape device.
  • All the MediaAgents that are associated with a storage policy support the block size that is configured on that storage policy. Consider the support and the compatibility of MediaAgent platforms at any disaster recovery site.
  • If you use different MediaAgents for backup operations and restore operations, and if the backup MediaAgent has a higher block size, then ensure that the restore MediaAgent can read data that is written with a higher block size.

Many streaming tape drives perform a read-after-write check. If the drive detects a bad block, then the drive puts a discard token after the block, and repeats the entire buffer write. If the drive detects a discard token, then the read cycle has corresponding logic to replace the bad block with the replacement block.

All tapes have media defects. If you write 1,024KB blocks instead of 256KB blocks, then the chance of any block spanning a media defect is increased by a factor of 4. Because of the larger block size, the rewrite time is also 4 times as long.
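A quick ratio check of that statement (illustrative arithmetic only):

    small_kb, large_kb = 256, 1024
    factor = large_kb / small_kb
    print(f"{large_kb} KB blocks vs {small_kb} KB blocks: a block is ~{factor:.0f}x more "
          f"likely to span a given media defect, and each rewrite moves {factor:.0f}x the data.")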

Increasing block size can improve the performance of writing to tape by minimizing the overhead associated with accessing and recording each block. If you select the data path's Use Media Type Setting option, then the data path's default block size for tape is 64KB. Refer to the Commvault Online Documentation: Use Media Type Setting section for more information.

Important notes on configuring tape block size:

  • Use caution when you select large block sizes. Large block sizes can vastly increase error rates and retries.
  • Block size applies only to tape media in direct-attached libraries.
  • Changes to the block size settings take effect when the next spare tape media is used.
  • Ensure that hardware at the data center and other locations, including DR sites, supports higher block sizes.



To change the data path block size

1 - Expand the storage policy | Right-click the policy copy | Properties.

2 - Select the path to the drive and choose Properties.

3 - Define the data path block size.




Cloud

Deduplication Block Size and Async IO

Commvault recommends that you always use deduplication for efficiency and performance when writing to cloud libraries. The default deduplication block size is 128KB, but for the fastest performance to cloud libraries, Commvault recommends using a deduplication block size of 512KB.

Commvault recommends using the 'SILookAheadAsyncIOBlockSizeKB' additional setting to set the block size that is used by Async IO in the look-ahead reader. The recommended value is 2 times the deduplication block size.

The default value of 'SILookAheadAsyncIOBlockSizeKB' is 128KB for cloud libraries. For the recommended deduplication block size of 512KB, set the AsyncIO block size to 1,024KB.

Configure the 'SILookAheadAsyncIOBlockSizeKB' additional setting on all source data mover MediaAgents that are associated with the source storage policy copy. For instructions about adding additional settings from the CommCell® console, refer to the Commvault Online Documentation: Add or Modify an Additional Setting section for more information.
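A minimal sketch of the sizing rule above (set the Async IO block size to twice the deduplication block size):

    def recommended_async_io_block_kb(dedup_block_kb):
        """SILookAheadAsyncIOBlockSizeKB recommendation: 2x the deduplication block size."""
        return 2 * dedup_block_kb

    dedup_block_kb = 512  # recommended deduplication block size for cloud libraries
    print(recommended_async_io_block_kb(dedup_block_kb))  # 1024, as recommended above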



To set the look ahead reader block size

1 - Expand MediaAgents | Right-click the desired  MediaAgent | Properties.

2 - Click to add a new setting.

3 - Look up the SILookAheadAsyncIOBlockSizeKB key.

4 - Set the desired block size.



Auxiliary Copy Performance

If the network utilization during a DASH copy job or a backup job with deduplication is low, then apply all the following settings on all source data mover MediaAgents that are associated with the source storage policy copy. For auxiliary copy operations, these settings are enabled by default.

For instructions about adding additional settings from the CommCell® console, refer to the Commvault Online Documentation: Add or Modify an Additional Setting section for more information.

Additional Settings for Auxiliary Copy performance

DataMoverUseLookAheadLinkReader

  Description: Enables the reading of multiple data signatures from the deduplication database.

  • Default value: 1 (enabled)

SignaturePerBatch

  Description: Ensures that DDB signature look-ups are batched, with multiple signatures per look-up. Modify this value only if there is a large latency between the client and the deduplication MediaAgent.

  • Default value: 1
  • Maximum value: 32
  • Recommended values: 16 and 32. If the latency is between 100 and 200 ms, use 16. If the latency is more than 200 ms, use 32.

DataMoverLookAheadLinkReaderSlots

  Description: Performs more look-ups on the destination deduplication database for deduplication block signatures.

  • Default value: 16
  • Valid values: 16, 32, 64, 128, and 256
  • Recommendation: Set the look-ahead slot value as high as the MediaAgent memory and concurrent operations allow.
    Each look-ahead slot uses memory equal to one block of deduplicated data. So, with the default deduplication block size of 128KB and 16 look-ahead slots, each reader stream in an auxiliary copy operation uses 2MB of memory. Setting the look-ahead slots to 128 uses 16MB of memory for each reader stream. So, if 200 auxiliary copy streams run on the MediaAgent, then the total memory overhead for the look-ahead slots alone is 3.2GB.
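The arithmetic in that recommendation can be reproduced directly (illustrative calculation only):

    def lookahead_memory_mb(slots, dedup_block_kb=128):
        """Per-stream memory: look-ahead slots x deduplication block size."""
        return slots * dedup_block_kb / 1024

    print(f"{lookahead_memory_mb(16):.0f} MB per reader stream with the defaults")
    print(f"{lookahead_memory_mb(128):.0f} MB per reader stream with 128 slots")
    print(f"{lookahead_memory_mb(128) * 200:.0f} MB (~3.2 GB) total for 200 streams")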


Copyright © 2021 Commvault | All Rights Reserved.