Wednesday, September 13, 2006

Setting VERITAS NetBackup with a non-root NDMP user

VERITAS NetBackup NDMP setup with filers - How do I change the NDMP authentication from using root to a non-root user?
Add a user called ndmpuser for NDMP usage on the filer, and generate its NDMP password:
useradmin user add ndmpuser -g Users
ndmpd password ndmpuser
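To sanity-check the filer side first (assuming a Data ONTAP 7.x filer; both are standard ONTAP administration commands), confirm that the NDMP daemon is running and that the new user exists:

ndmpd status
useradmin user list ndmpuser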

Use the challenge password generated above in the following command on the NBU server:
set_ndmp_attr -insert -auth <filer_hostname> ndmpuser <password>
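If I recall correctly, set_ndmp_attr also offers a verify option to test the credentials right away - treat the exact flag as an assumption and check the set_ndmp_attr usage output on your NBU version:

set_ndmp_attr -verify <filer_hostname>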

In case things go bad, delete the filer's entries and recreate them as above, starting with the robot entries:
set_ndmp_attr -delete -robot
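The authentication entry can be removed the same way; based on the insert syntax above, the delete form should look like this (an assumption - verify against set_ndmp_attr's usage output on your NBU version):

set_ndmp_attr -delete -auth <filer_hostname>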

Saturday, September 09, 2006

Sharing of Oracle Environments through NFS

Question: What can be shared in Oracle environments?
Dr. Toaster's Answer:
There are three sharing models to consider with Oracle and NFS:
1. Shared Oracle Binaries - sharing a single Oracle DB installation by having multiple database hosts mount and use that single directory via NFS (see the example mount entry after this list).
2. Shared Oracle_HOME - letting several database instances share the same binaries, similar to RAC. Oracle 9i originally did not support a Shared Oracle_HOME, but using one over NFS mounts is supported at this point. Even so, it is best suited to testing and development environments; the Network Appliance™ Best Practice Guidelines for Oracle® recommend against using it for production and HA environments.
3. Shared APPL_TOP - sharing the Oracle E-Business Suite binaries. See Reducing Administration Costs for Oracle® E-Business Suite Using NetApp Filers for more information.
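As an illustration of model 1, an /etc/fstab entry on a database host might look like the line below. The filer name, volume path, and mount point are hypothetical, and the mount options follow common NFS-for-Oracle practice rather than any single authoritative source - check the NetApp guidelines for the options recommended for your platform:

filer1:/vol/orabin /u01/app/oracle nfs rw,bg,hard,intr,vers=3,proto=tcp,rsize=32768,wsize=32768 0 0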

Thursday, September 07, 2006

What is space reservation?

Question: I have a 100GB LUN inside a 200GB volume - I am trying to expand it using SnapDrive, but I can only increase it by a few GB. Why can't I expand it by more?
Dr. Toaster's Answer: It is important to first understand the concept of disk space reservation. The simplest way to explain it is to think of a regular magnetic disk drive: it exposes a fixed set of block addresses that hosts read and write, and hosts can always send I/O write commands to the same addresses again and again - that is the whole idea of an address space.
The WAFL filesystem has a different write allocation policy: when you keep snapshots - point-in-time views of the same filesystem - WAFL leaves the old blocks untouched, so that one can recover from the snapshots that reference them.
To connect the story: LUNs are implemented on top of WAFL, so when snapshots are taken while a host (totally unaware of this virtualization) keeps writing data into a LUN, more and more data blocks are held "captive" by the snapshots. This is where space reservation kicks in, to keep this behaviour from causing SCSI write errors to LUNs: by default, every LUN consumes its original size plus another 100% of its size, as protection against the rare case of frequent writes into a LUN on a volume with many snapshots.

The simple solution that enables expanding the LUN is to enlarge the underlying volume - with FlexVols that is an easy change, as shown in the example further down. If there is not enough disk space in the aggregate, one can instead reduce the space reservation per volume using a command such as:
vol options vol_name fractional_reserve 80

where 80 is a number below 100. Note that if snapshots are taken and the rate of change in the volume climbs above 80%, SCSI writes to the LUNs may fail - in which case the filer will take the LUNs in the volume offline, and manual action will be needed to clean up space (most likely by deleting some snapshots) and bring the LUNs online again (lun online lun_pathname).
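Growing the FlexVol itself - the first solution above - is a single command. The volume name and size delta here are hypothetical, but the +size form is the standard Data ONTAP syntax for growing a flexible volume:

vol size vol_name +100g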

A few other important notes:
1. Use the following commands to review the status of space reservation:

df
df -r
snap delta
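Roughly speaking, df shows overall volume usage, df -r adds a column for reserved space, and snap delta reports the rate of change between snapshots - together they show how much space the reservations and snapshots are actually consuming.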

2. Data ONTAP 7.2 adds another solution: allowing volumes to grow automatically and/or clean up old snapshots:

vol options vol_name try_first volume_grow
vol autosize vol_name -m 1000g -i 1g on

where 1000g is the maximum size that the autosize feature will allow the volume to grow to, and 1g is the increment.
With these settings, the autosize feature will try to grow the volume in 1GB increments, and if the aggregate is full it will fall back to deleting snapshots (provided snapshot autodelete is enabled). It is also possible to start with deleting snapshots instead, by using the snap_delete policy rather than the volume_grow policy I suggest above - see the sketch below.
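A hedged sketch of that alternative, again with a hypothetical volume name - try_first picks which mechanism is attempted first, and snapshot autodelete must be switched on for snapshot deletion to actually happen:

vol options vol_name try_first snap_delete
snap autodelete vol_name on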