February 1, 2008

CGD/IS began tracking major data stores in January, 2006.
At that time there was about 25 TB of storage available. That
number has more than tripled to 80 TB as of January, 2008, and
will continue to grow, due primarily to increasing model output
and analysis data sets.

http://quotamon.cgd.ucar.edu/quotamon/growth.html

(Historical data indicates the division started with 1 TB of
data in January, 2003.)

This growth has begun to strain our current infrastructure.
Over the next few months, CGD/IS will be overhauling significant
portions of the infrastructure to support continued growth while
maintaining stability and reliability. Current storage
allocations will also be increased, some significantly.

The planned increases in space will also require new backup
software and hardware, which will add cost and time. However,
the backup policies will remain in effect and may be expanded.

http://www.cgd.ucar.edu/systems/documentation/faqs/backups_quotas.html
Note: Backups are held for 3 months only.

For the most part, users will only notice (and enjoy) the increased
space. However, there are a few areas where division practices will
be changed significantly. Please read below.

This plan remains flexible, but is now under way. Please feel
free to comment or ask questions. Progress and details of the
changes will be announced in the CGD/IS SysWrk notifications.

/home (H:)
----------
    Backup: daily to tape, snapshots at 0:00, 6:00, 12:00, 18:00
    Quota:  3 GB per person
    Size:   1 TB

    This is the most critical space within the division, and
    is served by a highly reliable system. It is also the
    single most expensive piece of equipment, and the most
    difficult to upgrade. The unit is at its maximum capacity and
    will have to remain in service for at least another year,
    possibly longer.

    The eventual goal is to increase the standard quota to 10 GB
    per user once new hardware is acquired. Until then, CGD/IS will
    adjust individual quotas to address users' immediate needs.
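
    For users who want an idea of how close they are to the current
    limit before asking for an adjustment, a rough check can be run
    from any machine where the home directory is mounted. The Python
    sketch below simply sums file sizes under the home directory;
    the 3 GB figure is the current standard quota, and the result is
    only approximate, since the file server's quota accounting
    counts disk blocks rather than file sizes.

        import os

        QUOTA_BYTES = 3 * 1024 ** 3         # current 3 GB /home quota
        HOME = os.path.expanduser("~")      # assumes the home directory is mounted

        used = 0
        for root, dirs, files in os.walk(HOME):
            for name in files:
                try:
                    used += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass                    # skip files that disappear mid-scan

        print("using %.2f GB of the %.2f GB quota (%.0f%%)"
              % (used / 1024.0 ** 3, QUOTA_BYTES / 1024.0 ** 3,
                 100.0 * used / QUOTA_BYTES))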

/web/web-data/<section>
-----------------------
    Backup: daily to tape
    Quota:  None. Allocation managed by section.
    Size:   Varies by section:

                acacia    1.1G
                cam       7.0G
                cas       3.2G
                ccr        50G
                cdp       8.4G
                cms        63G
                cseg      2.2G
                csm        32G
                oce        12G
                pcm       1.1G
                systems   1.1G
                tss        58G
                vemap      23G

    Historical note: Circa January, 2002, /web/web-data was 17 GB of
    common space shared by the entire division. This resulted in the
    ubiquitous e-mails pleading for users to clean up their space.
    The space was increased several times, and filled several times,
    over the succeeding years. Finally, in 2006, each section was
    given its own partition to make usage management a local issue
    rather than a division-wide one. The space allocated was based
    on existing usage plus 10%. This resulted in lop-sided
    allocations based on each section's use of the web, but it has
    served rather well until now.

    Note: A few soft links exist from the current /web/web-data
    directories into /project/<section>. These links will be broken
    and the data (as of today) migrated to /web/web-data. External
    mounts (i.e., /project/<section>) will no longer be available
    from the web server. This is for stability reasons as well as
    security.

/project/<section>
------------------
    Backup: None.
    Quota:  None. Allocation managed by section.
    Size:   1 TB per section.

    The /project/<section> partitions will be increased initially
    by 500 GB to 1.5 TB each, with an eventual goal of 2 TB per
    section by the beginning of summer, 2008. The /project/<section>
    partitions seem to be the most successful allocation and use of
    data storage in the division, and will continue to be grown as
    long as they are useful.

/fs/cgd
-------
    Backup: Daily.
    Quota:  None. Allocation unmanaged.
    Size:   /csm    126 GB
            /data0  369 GB
            /home0   29 GB

    The space for each of these will be increased after consultation
    with CSEG and Dennis Shea (the traditional keeper of the space).
    A group quota scheme based on sections will be implemented on
    /home0 and /data0 in an attempt to keep the partitions from
    being filled (and to avoid the resulting e-mails asking the
    division to please clean up). If group quotas fail, individual
    quotas will be implemented. Ownership of static data files in
    /csm will be changed to a common owner to ease management issues
    for CSEG.

/var/mail
---------
    Backup: daily to tape, snapshots at 6:00, 12:00, 18:00
    Quota:  1 GB per person
    Size:   100 GB

    Unlike the other partitions listed here, /var/mail is adequately
    sized and the quotas are reasonable. However, access times via
    Thunderbird and other mail clients have become intolerable when
    performing any mail operation (reading, sending, or deleting
    messages). This is due to the increased e-mail traffic load,
    which increases the disk I/O. To address this problem, /var/mail
    will be moved to new hardware and placed into a new highly
    available cluster configuration.

    NOTE: Additionally, /var/mail will no longer be mounted on all
    systems within the division. This will affect users who still
    access e-mail with command-line interfaces such as mutt or pine.
    These e-mail clients can be re-configured to use the IMAP
    protocol. (Details later.)
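
    As a rough illustration of what IMAP access looks like, the
    Python sketch below logs in and reports the number of messages
    in the INBOX. The host name shown is only a placeholder; the
    actual server name, and the equivalent mutt/pine configuration
    settings, will be announced in the SysWrk notifications.

        import getpass
        import imaplib

        IMAP_HOST = "imap.cgd.ucar.edu"     # placeholder; the real host is TBD

        conn = imaplib.IMAP4_SSL(IMAP_HOST)                   # IMAP over SSL
        conn.login(getpass.getuser(), getpass.getpass("IMAP password: "))
        status, counts = conn.select("INBOX", readonly=True)  # read-only peek
        print("INBOX contains %s message(s)" % counts[0].decode("ascii"))
        conn.logout()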
partitions seem to be the most successful allocation/use of data storage in the division and will continue to be grown as long they are useful. /fs/cgd ------- Backup: Daily. Quota: None. Allocation unmanaged. Size: /csm 126 GB /data0 369 GB /home0 29 GB The space for each of these will be increased after consultation with CSEG and Dennis Shea (the traditional keeper of the space). A group quota scheme based on sections will be implmented on /home0 and /data0 in an attempt to avoid the partitions from being filled (and the resulting e-mails to the division to please clean up). If group quotas fail, individual quotas will be implemented. Ownership on static data files in /csm will be changed to a common owner to ease management issues for CSEG. /var/mail --------- Backup: daily to tape, snapshot at 6:00, 12:00, 18:00 Quota: 1 GB per person Size: 100 GB Unlike the other partitions listed here, /var/mail is adequately sized and the quotas are reasonable. However, access time via Thunderbird and other mail clients has become intolerable when performing any mail operations (reading, sending, deleting messages). This is due to the increase e-mail traffic load, which increases the disk I/O. To hopefully fix this problem, /var/mail will be moved to new hardware, placed into a new highly available cluster configuration. NOTE: Additionally, /var/mail will no longer be mounted on all systems within the division. This will affect users who still access e-mail with command-line interfaces such as mutt or pine. These e-mail clients can be re-configured to use the IMAP protocols. (Details later.)