


Credits:

Many thanks to Satish Kilaru and William Katcher for their technical expertise and excellent explanations!

Commvault® ObjectStore Overview

The Commvault® ObjectStore feature is used to create shares directly on Commvault® storage that are accessible by users or applications. The following use cases are examples of implementing the Commvault® ObjectStore:

  • User shares
  • Application repository
  • Third-party backups

Commvault® ObjectStore for User Shares

Most organizations offer a centralized location to store data. Whether it is a simple file server, a NAS unit publishing shares, or a complex replicated file system (such as Microsoft DFS Replication), the data is centralized and subsequently protected by a backup system. However, this kind of architecture can be expensive, especially when three or more copies of the data must be stored. For instance, consider the acquisition and maintenance cost of the storage containing the user shares (first copy of the data). Added to this expense is the cost of the storage holding the backup software's local disk copy (second copy of the data). The backup data must then also be sent off-site, either to another storage unit at a remote site, to the cloud, or to tapes (third copy of the data).

The Commvault® ObjectStore feature eliminates much of this acquisition and maintenance cost. With Commvault® software, it is possible to create shares that are hosted directly on a Commvault® storage target. This technology includes the versioning options commonly expected of a centralized file system. These shares are also managed by a Commvault® storage policy, which protects the local copy of the data by sending a secondary copy off-site. Shares are accessible from any Network File System (NFS) client. Not only does this solution significantly reduce a company's storage costs, but it also avoids having to schedule backups, since data is created or copied directly into storage managed by Commvault software.
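
From a user's point of view, a share published by the ObjectStore is mounted like any other NFS export. The following is a minimal sketch only, assuming a hypothetical share named /UserShare published by an ObjectStore server called linuxma:

[root@client ~]# mkdir -p /mnt/usershare
[root@client ~]# mount linuxma:/UserShare /mnt/usershare
[root@client ~]# cp ~/documents/report.docx /mnt/usershare/

As soon as the file is written to the share, it is protected by the storage policy; no backup window is involved.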

Typical implementation for user shares


Implementation for user shares using the Commvault® ObjectStore technology

Commvault® ObjectStore for Application Repository

The Commvault® ObjectStore can be used as a repository for a third-party application. For example, you can store the Salesforce files that are associated with records directly in a Commvault® ObjectStore. This means you do not have to provide storage to the Salesforce app and then back up that same data to Commvault® storage. The Commvault® ObjectStore can be used by any application that supports writing to a Network File System (NFS) share.

Plugins are available for the following applications:

  • Jive
  • NetSuite
  • Salesforce

Implementation of Commvault® ObjectStore as a repository for Salesforce


Commvault® ObjectStore for Third-Party Backups

Although an organization usually procures the main backup software for data protection, the administrator of this software often contends with third-party backups. Take, for example, database administrators who prefer to rely on exports rather than using an agent to protect their databases. Frequently, they keep multiple versions in a directory that is then protected by the backup software. In the long run, this requires a lot of storage space. With the Commvault® ObjectStore, the Database Administrator (DBA) creates the exports directly on Commvault® storage, which saves considerable storage costs. Another advantage is that the export's backup does not have to be scheduled; as soon as the export is created, it is immediately protected.
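
As a sketch of this workflow, assuming a hypothetical share named /DBExports published by the ObjectStore server linuxma and a MySQL database named salesdb (both names are illustrative), the DBA could write exports straight to Commvault® storage:

[root@dbserver ~]# mount linuxma:/DBExports /mnt/dbexports
[root@dbserver ~]# mysqldump --single-transaction salesdb > /mnt/dbexports/salesdb-$(date +%F).sql

The export is backed up as soon as it lands in the ObjectStore NFS cache, so no backup schedule is required for it.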

Another example is applications that are not supported by Commvault® software but have built-in backup options. Such an application performs its consistent backup directly in the Commvault® ObjectStore.

Finally, the last example is common for an organization that has just acquired Commvault® software. Questions often emerge, like "What will happen to the backups performed by the old backup software?" Well, those backups can be restored directly into the Commvault® ObjectStore. The advantage is that these backups will be indexed, making it possible to search the data if needed. It is also possible to automate the restoration of this data using a workflow available on the Commvault® Store. This method prevents you from having to maintain the old backup software and hardware just to phase out the old system. For more information about workflows, see the Commvault Online Documentation.

Implementation of Commvault® ObjectStore to use for third-party backups



Commvault® ObjectStore Installation and Configuration

This solution is based on two crucial components: the Index Server and the ObjectStore server. The Index Server is used to index the data that is created in the ObjectStore and can be installed on any Windows MediaAgent. The ObjectStore component must be installed on a Linux MediaAgent.

Setting up the Commvault® ObjectStore solution requires several important steps that must be executed accurately to ensure the proper operation of the solution. The steps are as follows:

  1. Install and configure the Index Server.
  2. If not done already, install the MediaAgent on the ObjectStore server (the 3DFS package used for the Commvault® ObjectStore shares is part of the MediaAgent installation).
  3. Install the NFS-Ganesha RPM and the Samba RPM on the Commvault® ObjectStore server.
  4. Configure the ObjectStore server NFS cache.
  5. Create a storage device.
  6. Create a storage policy.

In the examples provided, a Windows MediaAgent (CVMA-1) is used as an Index Server and a Linux MediaAgent (LinuxMA) is used as an ObjectStore server. 

Install and Configure the Index Server

The index server contains a Solr database of all the metadata stored in the ObjectStore. The index database should be hosted on a dedicated, high-performance volume. For more information on sizing guidelines, refer to the Commvault Online Documentation.

The index server component can be clustered across multiple nodes to provide resilience. A minimum of three nodes is required for full fault-tolerance. However, in this example, you will use only a single node.

There are a number of steps to create an index server:

  1. Create an empty directory for the index data.
  2. Install the Index Store, Web Server, and High Availability Computing (HAC) components.
  3. Add a HAC cluster.
  4. Add an Index Store Pool.
  5. Add an Index Cloud Server.

Installing the components

On the MediaAgent that is used as an Index Server, install the Index Store, Web Server, and HAC components. They can be installed all at once. You must use the interactive installation method, as the Web Server component is not available through a push installation. A reboot may be required at some point; simply reboot the server and launch the installation again, and the process resumes where it left off. Once the Index Server is installed, you can move its index directory to the dedicated location. To do so, download and execute the IndexServerDirectoryMove workflow from the Commvault® Store.

NOTE: Repeat these steps for all nodes that are part of the Index cluster.




To install the index server components

1 - Right-click Setup.exe | Run as administrator.

2 - Select the language to use during installation.

3 - Accept the license agreement.



4 - Select Install packages on this computer.

5 - Since the MediaAgent package is already installed, select Add Packages.

6 - Select the Web Server…

7 - …and Index Store components from the Server section.

8 - From the Tools section…

9 - …select the High Availability Computing component.



10 - Provide a path for the SQL database instance required by the Web Server component.

11 - Provide a path for the Web Server database.

12 - Provide a path for the Web Server Cache.



13 - Reboot the server if needed.

14 - Re-launch the install and provide credentials with install privileges.

15 - Click Finish to complete the installation.


Add a High Availability Cluster

The next step is to create the index server cluster. To provide full fault tolerance, a minimum of three nodes is required. If you plan to use only one node, you still have to configure the cluster but it will not provide any resilience.




To create the index server cluster

1 - Right-click Client Computers | New Client.

2 - Select to create a HAC client.



3 - Provide a name for the cluster.

4 - Add the node or nodes to the cluster.

5 - Click to create the cluster.


Create the Index Store Pool

After the cluster is created, an index store pool must be configured. This index store pool is distributed across the cluster nodes, which are selected from the Nodes tab of the index store pool creation wizard. From here you can select all of them or only a subset.




To create the index store pool

1 - Right-click Index Store Pools | Add Index Store Pool.

2 - Provide a name for the index store pool.

3 - Select the cluster to use from the list.



4 - Add the nodes to the Nodes window.

5 - Click OK to create the index store pool.



6 - The newly created index store pool is listed in the window.


Create the Index Server Cloud Client

The last step in setting up the index server is to configure the index server cloud client, which uses the previously created index store pool. This routes all indexing through the cluster of index server nodes. This step also defines the directory used to store the indexes: the dedicated volume folder created previously.

Next, define the replication factor, which is the number of nodes across which the indexes are replicated. For example, with a replication factor of three, up to two nodes may be unavailable while the indexes remain accessible from the third node.

The indexing role must also be selected, in this case, the NFS Index role. Once the wizard has completed, everything is in place to index the content of files created by users in NFS ObjectStore shares.




To create the Index Server Cloud client

1 - Right-click Client Computers | New Client.

2 - Click Index Server Cloud.



3 - Provide a descriptive name for the index server cloud client.

4 - Define the dedicated volume folder to store the indexes.

5 - Check to use an Index Store Pool.

6 - Select the index store pool.

7 - If using more than one node, define the replication factor.



8 - Add the NFS Index role to the window.

9 - Add the nodes to use in the window.

10 - Click to complete the client configuration.


Install and Configure the ObjectStore Server

Once the index server is deployed, it is time to configure the ObjectStore server. On the system that is used as the ObjectStore server, simply install the MediaAgent component, which also includes the 3DFS server used to publish the NFS shares. An existing MediaAgent can be used, as long as it runs on a Linux platform, which is a prerequisite for the ObjectStore component.

On that same server, install NFS-Ganesha.

Then, it is important to define the folder that serves as the NFS cache. This cache must be on a dedicated volume of at least 1 TB. It is recommended to have enough available space to accommodate the changes made during a whole day, which gives the MediaAgent server more than enough time to back up the data. This volume must be performant, such as a disk group configured in RAID.
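
As an illustration only, assuming a dedicated disk visible as /dev/sdb and a cache folder of /opt/nfscache (both hypothetical names), the cache volume could be prepared as follows:

[root@linuxma ~]# mkfs.xfs /dev/sdb
[root@linuxma ~]# mkdir -p /opt/nfscache
[root@linuxma ~]# mount /dev/sdb /opt/nfscache
[root@linuxma ~]# echo '/dev/sdb /opt/nfscache xfs defaults 0 0' >> /etc/fstab

The actual cache folder is then selected in the MediaAgent properties, as shown in the "To relocate the NFS cache" steps later in this section.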

Finally, if not done already, create a disk library and a storage policy. As with traditional backups, the storage policy dictates the storage target for the primary copy of the NFS Shares, additional copies, as well as the retention for each copy.


Install the NFS-Ganesha RPM on the ObjectStore Server

The NFS-Ganesha RPM can be downloaded from the Download Center of the cloud.commvault.com website. Once downloaded, extract it:

[root@linuxma Downloads]# ls
Nfs-Ganesha_2.6.3_for_RHEL_7.tgz
[root@linuxma Downloads]# tar xf Nfs-Ganesha_2.6.3_for_RHEL_7.tgz

[root@linuxma Downloads]# ls
Nfs-Ganesha_2.6.3_for_RHEL_7 Nfs-Ganesha_2.6.3_for_RHEL_7.tgz


Then, follow the instructions in the install.txt file to install the RPMs and their dependencies. First, validate that no other installation of NFS-Ganesha or its dependencies already exists on the server:

[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -qa | grep ganesha
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]#

[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -qa | grep libntirpc
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]#

[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# yum install policycoreutils-python.x86_64
Complete!
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# yum install pyparsing
Complete!
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# yum install jemalloc
No package jemalloc available.
Error: Nothing to do


If you encounter this error, the system cannot find the required RPM in its configured repositories. In that case, install it manually from the RPM file:

[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -i jemalloc-3.6.0-1.el7.x86_64.rpm


Then install the NFS-Ganesha RPMs:

[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -i libntirpc-1.6.3-cv.el7.centos.x86_64.rpm
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -i nfs-ganesha-2.6.3-cv.el7.centos.x86_64.rpm
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -i nfs-ganesha-utils-2.6.3-cv.el7.centos.x86_64.rpm
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -qa | grep libntirpc
libntirpc-1.6.3-cv.el7.centos.x86_64
[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# rpm -qa | grep nfs-ganesha
nfs-ganesha-2.6.3-cv.el7.centos.x86_64
nfs-ganesha-utils-2.6.3-cv.el7.centos.x86_64


Installing Samba on the ObjectStore Server

[root@linuxma Nfs-Ganesha_2.6.3_for_RHEL_7]# yum install samba-client-libs samba-winbind samba samba-winbind-modules samba-common-libs samba-client samba-common samba-common-tools samba-libs tdb-tools ntp
Complete!


Size and configure the ObjectStore NFS Cache

The NFS cache is used to store changes made to files contained in NFS shares, as well as files newly created by users. Cached files are backed up by the MediaAgent as soon as they are created or modified. This cache should be hosted on a dedicated high-performance volume.

The size of the volume depends on the change rate of the files. Although files are backed up as soon as they are created or modified, they are only purged from the NFS cache once a day. The system looks at the space usage and, if it is higher than 80%, deletes the oldest files first until the space usage is below the threshold. This means that recent files are left in the cache, so if you need to modify one of them, the system does not have to recall it from storage.

NOTE: It is recommended to have an NFS cache that can handle a day and a half worth of changes, and it should be at least 1 TB in size.
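
To keep an eye on the purge threshold, a quick check of the cache volume's space usage can help. Here is a minimal sketch, assuming the cache is mounted at the hypothetical path /opt/nfscache:

#!/bin/bash
# Warn when the NFS cache volume reaches the 80% purge threshold.
CACHE=/opt/nfscache                                           # hypothetical cache mount point
USED=$(df --output=pcent "$CACHE" | tail -1 | tr -dc '0-9')   # space usage as a bare number
if [ "$USED" -ge 80 ]; then
    echo "NFS cache at ${USED}% - the oldest files will be purged"
fi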




To relocate the NFS cache

1 - Right-click the ObjectStore MediaAgent | Properties.

2 - Browse to the new cache folder.



Configure an NFS ObjectStore

The NFS ObjectStore configuration is done in two steps. First, create the user to provide secure access to the share. Then, create an NFS ObjectStore client.

Create the NFS ObjectStore User

Each NFS ObjectStore must have a user associated with it. There must be one user per share, and a user cannot be used for more than one share. The reason is simple: the share name is the same as the username and must therefore be unique.




To create the NFS ObjectStore user

1 - Right-click CommCell Users | New User.

2 - Provide a descriptive username that will also be the share name.

3 - Type and confirm the password.

4 - Add any additional information if needed.

5 - Click OK to create the user.



6 - The user for the share is displayed in the CommCell Users window.


Create an NFS ObjectStore Client

The last step is to create the NFS ObjectStore client. One client per share must be configured, allowing logical isolation of the data. The storage policy used to manage the data must also be selected during the configuration. The storage policy dictates the storage targets used to store the NFS ObjectStore data copies, as well as the retention of each copy.




To create the NFS ObjectStore client

1 - Right-click Client Computers | New Client.

2 - Select the NFS ObjectStore client type.



3 - Add the user created for the share.

4 - Select the storage policy that will manage the ObjectStore data.

5 - Select the index server cluster that will index the ObjectStore data.

6 - Click OK to create the client.



7 - Check to enable versioning.

8 - Check to secure access to data using Access Control Lists.

9 - Select the MediaAgent acting as the NFS ObjectStore server.

10 - Provide access from any client computer (default) or…

11 - … provide IP addresses or host name of specific clients.


Testing the NFS ObjectStore

After setting up an NFS ObjectStore, make sure that the server published the share. To do this, open a bash session on the ObjectStore server and validate using the 'ganesha_mgr show_exports' command. If the share appears in the output, it is published and available to users. Next, make sure the share can be mounted. The ObjectStore server itself can be used to test whether the share mounts on a mount point. In the example below, the share is mounted and a text file is created to validate write access to the share.

Validating the published share

[root@linuxma /]# ganesha_mgr show_exports
Show exports
Timestamp: Mon Jul 15 22:41:21 2019 726841056 nsecs
Exports:
  Id, path, nfsv3, mnt, nlm4, rquota, nfsv40, nfsv41, nfsv42, 9p, last
  1, /cvshare, 0, 0, 0, 0, 0, 0, 0, 0, Fri Jul  5 17:53:38 2019, 167508167 nsecs
  0, /, 0, 0, 0, 0, 0, 0, 0, 0, Fri Jul  5 17:53:38 2019, 167508167 nsecs
  9032, /BackupShare, 0, 0, 0, 0, 0, 0, 0, 0, Fri Jul  5 17:53:38 2019, 167508167 nsecs
  27518, /MyShare, 0, 0, 0, 0, 0, 0, 0, 0, Fri Jul  5 17:53:38 2019, 167508167 nsecs


Mounting the share on a test server

[root@linuxma mnt]# mount 127.0.0.1:/MyShare /mnt/nfs
[root@linuxma mnt]# cd nfs
[root@linuxma nfs]# ls
[root@linuxma nfs]# date > /mnt/nfs/date.txt
[root@linuxma nfs]# cat /mnt/nfs/date.txt
Mon Jul 15 23:25:18 EDT 2019


