
EMC NAS Operations & Management

Components

  • Control Station (CS)
    • max. 2 stations per environment
    • runs RHEL (Red Hat Enterprise Linux)
    • CLI commands over ssh
    • Web GUI for Celerra management (Celerra Manager)
    • manages fail-over of Data Movers etc.
    • no local config storage, all configuration is stored on the backend storage (Clariion)
  • Data Mover (DM)
    • max. 8 DM per environment
    • at least 1 standby DM for fail-over (e.g. 4 primary & 4 standby, or 7 primary & 1 standby)
    • fail-over takes at least ~30 s (you always have to fail back to the primary manually after the incident; see the sketch after this list)
    • configuration of DM is always done over CS
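
A minimal sketch of checking status and manually failing a DM over from the Control Station. The command names exist on the Celerra CLI, but exact behavior and options can differ per DART version, and server_2 is just an example Data Mover:

  # check the boot status of the Control Station and the Data Movers
  getreason
  # fail server_2 over to its configured standby DM
  server_standby server_2 -activate mover
  # after the incident, fail back to the primary DM
  server_standby server_2 -restore mover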

Basic Setup

  1. Establish Network Connection for DM
  2. Attach Storage/File System (typically Clariion Storage)
  3. Export the FS over NFS and/or CIFS (see the command sketch below)
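
A hedged sketch of these three steps on the Control Station. Interface, pool and file system names (cge0, clar_r5_performance, fs01) and all IP addresses are placeholders; option details depend on the DART version:

  # 1. give the Data Mover a network interface
  server_ifconfig server_2 -create -Device cge0 -name cge0_1 -protocol IP 192.168.10.20 255.255.255.0 192.168.10.255
  # 2. create a file system from a storage pool and mount it on the DM
  nas_fs -name fs01 -create size=500G pool=clar_r5_performance
  server_mountpoint server_2 -create /fs01
  server_mount server_2 fs01 /fs01
  # 3. export it over NFS and/or CIFS (CIFS additionally needs a CIFS server configured on the DM)
  server_export server_2 -Protocol nfs -option rw=client1 /fs01
  server_export server_2 -Protocol cifs -name fs01 /fs01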

Protocols

  • NFS
  • CIFS
  • FTP (enabled by default)
  • DFTP

File System

From Spindles to File Systems


Size

  • max. 16TB per file system
  • best practice ⇒ file systems not bigger than 2TB
    • only 1 backup process per file system
    • duration of FS-check can get too long otherwise
    • avoids backup and replication performance problems
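
To check or grow an existing file system from the CLI (a sketch; fs01 and the pool name are placeholders, check man nas_fs for the exact options on your DART version):

  # show the current size and free space of a file system
  nas_fs -size fs01
  # extend it, keeping the 2TB best practice in mind
  nas_fs -xtend fs01 size=100G pool=clar_r5_performance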

Mapping of credentials

Because the file systems on the storage are always Unix file systems under the hood, Unix UID and GID have to be mapped to Windows credentials (SIDs) when CIFS is used.

Unix UID/GID ⇔ Usermapper ⇔ Windows SID

  • Normally there is one Usermapper per Celerra cabinet
  • in case of replication there is a second (or more) read-only Usermapper
  • it's recommended to run the primary Usermapper at the colocation site and the secondary at the main site
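
A sketch of checking and enabling Usermapper on a Data Mover; the primary=<ip> form for a secondary Usermapper is written from memory, so verify it against the documentation (the IP is a placeholder):

  # show the Usermapper status of server_2
  server_usermapper server_2
  # enable it as primary Usermapper
  server_usermapper server_2 -enable
  # enable it as secondary, pointing at the primary Usermapper's IP
  server_usermapper server_2 -enable primary=192.168.10.5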

Quotas

Types of Quotas

  • User Quota
  • Group Quota (Unix only)
  • Tree Quota

Quota Policies

  • Volume Based
    • per 8 KB block (default)
    • by file size (recommended)
  • inode limit (number of files)

If you activate quotas on an existing file system, performance can decrease significantly while the quotas are being created. To avoid this, it's recommended to activate quotas by default on new file systems with no limit set (limit = 0).

Terminology Description
Hard Limit absolute limit of file system usage; if it is exceeded, writes fail with an error
Soft Limit if this limit is exceeded, the user is warned and has time to reduce usage ⇒ grace period (called “warning level” on Windows)
Grace Period the time the user has to get back below the soft limit before the soft limit is enforced like a hard limit (7 days by default)

Changed default quotas only affect new users. For existing users you have to adapt the per-user policies. Old default policies are converted to user policies when new defaults are introduced.
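
A sketch of turning on and reporting user quotas with nas_quotas; the flag order is from memory and may differ, so check man nas_quotas (fs01 is a placeholder):

  # turn on user quotas for a file system (with no limits set, as recommended above)
  nas_quotas -on -user -fs fs01
  # report current per-user usage and limits
  nas_quotas -report -user -fs fs01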

File Mover

Celerra ⇔ DiskXtender (policy engine) ⇔ Centera

  • Hierarchical Storage
  • Moves files to cheaper storage based on policies (e.g. to Centera, or from FC to ATA disks)

CLI Commands

  • CLI commands for Celerra are run on the Control Station
  • EMC recommends using the GUI for one-time configuration and the CLI for scripting
  • you find all Celerra specific commands in
    • /nas/bin
    • /nas/sbin
  • always use the nasadmin account so that the required environment variables are set (or log in as nasadmin and use su ⇒ NOT su -, which would reset the environment)
  • the Control Station runs RHEL 5
  • everything stored locally on the CS (e.g. your own scripts) is lost if the CS must be replaced (local storage only)

Commands executed from the GUI are logged in the file /nas/log/cmd_log. This is very useful for learning more complex commands: execute the action in the GUI ⇒ look it up in the log ⇒ script it.
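
For example, watch the log while clicking through the GUI:

  tail -f /nas/log/cmd_log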

Important Commands

Check out the man pages or Powerlink documentation for more information.

Command Description
nas_server -list show all physical and virtual data movers
nas_disk -list list all disk volumes
getreason get boot status of the primary CS and the DMs
server_sysstat server_2 show system stats of a DM (CPU & memory usage etc.)
nas_fs create, extend, delete and list file systems
server_mount mount a file system
server_export export a file system (e.g. over NFS or CIFS)
server_dns configure and check the DNS server config
server_date set the date/time and configure/list the NTP settings
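
Two usage sketches for the last two commands (domain name and IPs are placeholders; check the man pages for the exact syntax on your version):

  # set the DNS domain and name server for server_2
  server_dns server_2 example.com 192.168.10.5
  # start the time service on server_2 against an NTP server
  server_date server_2 timesvc start ntp 192.168.10.5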

Networking

(diagram: Celerra network)

Trunking

combine multiple interfaces in order to have better throughput and failover in case one interface fails
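
A sketch of creating an LACP trunk out of two ports on a DM; device and trunk names are placeholders and the option string may differ per DART version:

  server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 protocol=lacp"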

Fail-Safe Network (FSN)

One layer higher: combine e.g. a trunk and a physical interface (preferably on a different switch) in order to have redundancy.
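
And a sketch of building an FSN on top of that trunk plus a physical port (names are placeholders, options written from memory):

  server_sysconfig server_2 -virtual -name fsn0 -create fsn -option "primary=trk0 device=trk0,cge2"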

Replication

Celerra Replication always runs over a TCP/IP connection. For FC-based replication use the tools of your storage array directly (e.g. on the Clariion).

In case of a failover the secondary site changes from R/O to R/W. This can lead to inconsistencies, because with asynchronous replication the secondary site is never fully up to date.

Process of Replication

  • First Replication
    • create the FS on the destination
    • create 2 checkpoints each on source and destination
    • create a full copy of the FS from checkpoint 1
  • Second Replication
    • update checkpoint 2
    • copy the diff between checkpoint 1 and checkpoint 2 to the secondary site
  • Third Replication
    • update checkpoint 1
    • copy the diff between checkpoint 2 and the updated checkpoint 1 to the secondary site (and so on, alternating between the two checkpoints)
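
With Replicator V2, a remote session can be created roughly like this. This is only a sketch: the interconnect name and flags depend on the DART/Replicator version, fs01/fs01_dst are placeholders, and older versions use fs_replicate instead:

  # create an asynchronous replication session with a 10 minute RPO
  nas_replicate -create fs01_rep -source -fs fs01 -destination -fs fs01_dst -interconnect cel1_cel2 -max_time_out_of_sync 10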

(diagram: Celerra replication)

Terminology

Type Description
Loopback Replication Replication on the same DM over the Loopback interface
Local Replication Replication to another primary DM in the same cabinet
Remote Replication Replication between two Celerra cabinets

Performance

  • use dedicated storage network
  • use jumbo frames
  • combine multiple interfaces to trunks
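
Jumbo frames, for example, are set per DM interface (a sketch; the interface name is a placeholder):

  server_ifconfig server_2 cge0_1 mtu=9000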

Tips & Tricks

Manually delete checkpoint schedules via the CLI

Schedules are located under /nbsnas/tasks/schedule on the Control Station and can be deleted manually over the CLI if needed.

rm /nbsnas/tasks/schedule/my_schedule
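
The available schedule names can be listed with a plain ls before deleting:

  ls /nbsnas/tasks/schedule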