*** !!! Warning !!! *** This page has been obsoleted by the general [[InstallationGuide| Installation Guide]].

= Introduction =

QCG-Computing (the successor of the OpenDSP project) is an open source service acting as a computing provider: it exposes on-demand access to computing resources and jobs over an HPC Basic Profile compliant Web Services interface. In addition, QCG-Computing offers a remote interface for Advance Reservations management. This document describes the installation of the QCG-Computing service in the PL-Grid environment.

The service should be deployed on a machine (or virtual machine) that:
 * has at least 1 GB of memory (recommended: 2 GB),
 * has 10 GB of free disk space (most of the space will be used by log files),
 * has any modern CPU (if you plan to use a virtual machine, dedicate one or two cores of the host machine to it),
 * runs Scientific Linux 5.5 (in most cases the provided RPMs should work with any operating system based on Red Hat Enterprise Linux 5.x, e.g. CentOS 5).

= Prerequisites =

We assume that the local resource manager/scheduler is already installed. The target would typically be a frontend machine (i.e. the machine where the `pbs_server` and `maui` daemons are running). If you want to install the QCG-Computing service on a separate submit host, read these [[InstallationOnSeparateMachine| notes]].

Since version 2.4 the QCG-Computing service discovers installed applications using the [http://modules.sourceforge.net/ Environment Modules] package. For this reason you should install modules on the QCG host, mount the directories that contain all module files used at your cluster, and make sure that the user `qcg-comp` can see all modules.

The !QosCosGrid services do not require you to install any QCG component on the worker nodes; however, the application wrapper scripts need the following software to be available there:
 * bash,
 * rsync,
 * zip/unzip,
 * dos2unix,
 * nc,
 * python.
These are usually available out of the box on most HPC systems.

= Firewall configuration =

In order to expose the !QosCosGrid services externally you need to open the following incoming ports in the firewall:
 * 19000 (TCP) - QCG-Computing
 * 19001 (TCP) - QCG-Notification
 * 2811 (TCP) - GridFTP server
 * 20000-25000 (TCP) - GridFTP port range (if you want to use a different port range, adjust the `GLOBUS_TCP_PORT_RANGE` variable in the `/etc/xinetd.d/gsiftp` file)
You may also want to allow SSH access from white-listed machines (for administration purposes only).

The following outgoing traffic should be allowed in general:
 * NTP, DNS, HTTP, HTTPS services
 * GridFTP (TCP port 2811 and the port range 20000-25000)
Also, the PL-Grid QCG-Accounting publisher plugin (BAT) needs access to the following host and port:
 * acct.plgrid.pl 61616 (TCP)

= Related software =

 * Install the database backend (PostgreSQL):
{{{
#!div style="font-size: 90%"
{{{#!sh
yum install postgresql postgresql-server
}}}
}}}
 * unixODBC and the PostgreSQL ODBC driver:
{{{
#!div style="font-size: 90%"
{{{#!sh
yum install unixODBC postgresql-odbc
}}}
}}}

The X.509 host certificate (signed by the Polish Grid CA) and key are already installed in the following locations:
 * `/etc/grid-security/hostcert.pem`
 * `/etc/grid-security/hostkey.pem`

Most grid services and security infrastructures are sensitive to time skews. Thus we recommend installing a Network Time Protocol daemon, or using any other solution that provides accurate clock synchronization.
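For example, on Scientific Linux the stock `ntp` package can be used (a minimal sketch; adjust the server list in `/etc/ntp.conf` to your site policy):
{{{
#!div style="font-size: 90%"
{{{#!sh
# install and enable the NTP daemon (stock SL/RHEL 5 package)
yum install ntp
chkconfig ntpd on
service ntpd start
}}}
}}}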
= Installation using provided RPMS =

 * Create the following users:
   * `qcg-comp` - needed by the QCG-Computing service
   * `qcg-broker` - the user that the [http://apps.man.poznan.pl/trac/qcg-broker QCG-Broker] service will be mapped to
   * These users must also be created (with the same uids) on the batch server machine (but not necessarily on the worker nodes).
{{{
#!div style="font-size: 90%"
{{{#!sh
useradd -r -d /var/log/qcg/qcg-comp/ qcg-comp
useradd -r -d /var/log/qcg/qcg-broker/ qcg-broker
}}}
}}}
 * and the following group:
   * `qcg-dev` - this group is allowed to read the configuration and log files. Please add the QCG services' developers to this group.
{{{
#!div style="font-size: 90%"
{{{#!sh
groupadd -r qcg-dev
}}}
}}}
 * Install the !QosCosGrid repository (latest version, including new features and latest bug fixes, but may be unstable):
{{{
#!div style="font-size: 90%"
{{{#!sh
cat > /etc/yum.repos.d/qcg.repo << EOF
[qcg]
name=QosCosGrid YUM repository
baseurl=http://www.qoscosgrid.org/qcg-packages/sl5/x86_64/
#repo for SL6
#baseurl=http://www.qoscosgrid.org/qcg-packages/sl6/x86_64/
enabled=1
gpgcheck=0
EOF
}}}
}}}
 * Install QCG-Computing using the YUM package manager:
{{{
#!div style="font-size: 90%"
{{{#!sh
yum install qcg-comp qcg-comp-client qcg-comp-logrotate
}}}
}}}
 * Install the GridFTP server using this [[GridFTPInstallation|instruction]].
 * Set up the QCG-Computing database using the provided script:
{{{
#!div style="font-size: 90%"
{{{#!sh
/usr/share/qcg-comp/tools/qcg-comp-install.sh
Welcome to qcg-comp installation script!

This script will guide you through process of configuring proper
environment for running the QCG-Computing service.
You have to answer few questions regarding parameters of your database.
If you are not sure just press Enter and use the default values.

Use local PostgreSQL server? (y/n) [y]: y
Database [qcg-comp]:
User [qcg-comp]:
Password [RAND-PASSWD]: MojeTajneHaslo
Create database? (y/n) [y]: y
Create user? (y/n) [y]: y
Checking for system user qcg-comp...OK
Checking whether PostgreSQL server is installed...OK
Checking whether PostgreSQL server is running...OK
Performing installation
* Creating user qcg-comp...OK
* Creating database qcg-comp...OK
* Creating database schema...OK
* Checking for ODBC data source qcg-comp...
* Installing ODBC data source...OK

Remember to add appropriate entry to /var/lib/pgsql/data/pg_hba.conf
(as the first rule!) to allow user qcg-comp to access database qcg-comp.
For instance:

host    qcg-comp    qcg-comp    127.0.0.1/32    md5

and reload Postgres server.
}}}
}}}

Add a new rule to the `pg_hba.conf` as requested and reload the database server:
{{{
#!div style="font-size: 90%"
{{{#!sh
vim /var/lib/pgsql/data/pg_hba.conf
/etc/init.d/postgresql reload
}}}
}}}
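You can verify that the ODBC data source works by connecting with the `isql` tool shipped with unixODBC (a quick check only; the DSN and credentials are those chosen during `qcg-comp-install.sh`):
{{{
#!div style="font-size: 90%"
{{{#!sh
# connect through the qcg-comp ODBC data source; type "quit" to exit
isql qcg-comp qcg-comp MojeTajneHaslo
}}}
}}}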
Install the EGI accepted CA certificates (this also installs the Polish Grid CA):
{{{
#!div style="font-size: 90%"
{{{
cd /etc/yum.repos.d/
wget http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo
yum clean all
yum install ca-policy-egi-core
}}}
}}}
The above instructions are based on this [https://wiki.egi.eu/wiki/EGI_IGTF_Release manual].

Install the PL-Grid Simple CA certificate (not part of IGTF):
{{{
#!div style="font-size: 90%"
{{{#!sh
wget http://software.plgrid.pl/packages/general/ca_PLGRID-SimpleCA-1.0-2.noarch.rpm
rpm -i ca_PLGRID-SimpleCA-1.0-2.noarch.rpm

#install certificate revocation list fetching utility
wget https://dist.eugridpma.info/distribution/util/fetch-crl/fetch-crl-2.8.5-1.noarch.rpm
rpm -i fetch-crl-2.8.5-1.noarch.rpm
#get fresh CRLs now
/usr/sbin/fetch-crl
#install cron job for it
cat > /etc/cron.daily/fetch-crl.cron << EOF
#!/bin/sh
/usr/sbin/fetch-crl
EOF
chmod a+x /etc/cron.daily/fetch-crl.cron
}}}
}}}

= The Grid Mapfile =

This tutorial assumes that the QCG-Computing service is configured in such a way that every authenticated user must be authorized against the `grid-mapfile`. This file can be created manually by an administrator (if the service is run in "test mode") or generated automatically based on the LDAP directory service.

=== Manually created grid mapfile (for testing purposes only) ===
{{{
#!div style="font-size: 90%"
{{{#!default
#for test purposes only, add a mapping for your account
echo '"MyCertDN" myaccount' >> /etc/grid-security/grid-mapfile
}}}
}}}

=== LDAP generated grid mapfile (PL-Grid only) ===
{{{
#!div style="font-size: 90%"
{{{#!default
# 0. install PL-Grid repository
rpm -Uvh http://software.plgrid.pl/packages/repos/plgrid-repos-2010-2.noarch.rpm
#
# 1. install qcg grid-mapfile generator
#
yum install qcg-gridmapfilegenerator
#
# 2. configure gridmapfilegenerator - remember to change
#  * url property to your local ldap replica
#  * search base
#  * filter expression
#  * security context
vim /etc/qcg/qcg-gridmapfile/plggridmapfilegenerator.conf
#
# 3. run the gridmapfile generator in order to generate gridmapfile now
#
/usr/sbin/qcg-gridmapfilegenerator.sh
}}}
}}}

After installing and running this tool one can find three files:
 * `/etc/grid-security/grid-mapfile.local` - here you can put a list of DNs and local unix account names that will be merged with the data acquired from the local LDAP server
 * `/etc/grid-security/grid-mapfile.deny` - here you can put a list of DNs (only DNs!) that you want to deny access to the QCG-Computing service
 * `/etc/grid-security/grid-mapfile` - the final gridmap file generated from the above two files and the information available in the local LDAP server. Do not edit this file as it is generated automatically!

The gridmapfile generator script is run every 10 minutes. Moreover, it issues `su - $USERNAME -c 'true' > /dev/null` for every new user that does not yet have a home directory (thus triggering pam_mkhomedir, if installed).
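For instance, a `grid-mapfile.local` entry maps a certificate DN to a local account (the DN and account below are hypothetical):
{{{
#!div style="font-size: 90%"
{{{#!default
"/C=PL/O=GRID/O=PSNC/CN=John Doe" johndoe
}}}
}}}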
{{{ "/C=PL/O=GRID/O=PSNC/CN=qcg-broker/qcg-broker.man.poznan.pl" qcg-broker }}} = Scheduler configuration = == !Maui/Moab == Add appropriate rights for the `qcg-comp` and `qcg-broker` users in the Maui scheduler configuaration file: {{{ #!div style="font-size: 90%" {{{#!default vim /var/spool/maui/maui.cfg # primary admin must be first in list ADMIN1 root ADMIN2 qcg-broker ADMIN3 qcg-comp }}} }}} == SLURM == The QCG-Broker certificate should be mapped on the SLURM user that is authorized to create advance reservation. = Service certificates = Copy the service certificate and key into the `/opt/plgrid/qcg/etc/qcg-comp/certs/`. Remember to set appropriate rights to the key file. {{{ #!div style="font-size: 90%" {{{#!default cp /etc/grid-security/hostcert.pem /opt/plgrid/qcg/etc/qcg-comp/certs/qcgcert.pem cp /etc/grid-security/hostkey.pem /opt/plgrid/qcg/etc/qcg-comp/certs/qcgkey.pem chown qcg-comp /opt/plgrid/qcg/etc/qcg-comp/certs/qcgcert.pem chown qcg-comp /opt/plgrid/qcg/etc/qcg-comp/certs/qcgkey.pem chmod 0600 /opt/plgrid/qcg/etc/qcg-comp/certs/qcgkey.pem }}} }}} = DRMAA library = == Torque/PBS Professional == Install via YUM repository: {{{ #!div style="font-size: 90%" {{{#!default yum install pbs-drmaa #Torque yum install pbspro-drmaa #PBS Proffesional }}} }}} Alternatively compile DRMAA using source package downloaded from [http://sourceforge.net/projects/pbspro-drmaa/ SourceForge]. After installation you need '''either''': * configure the DRMAA library to use Torque logs ('''RECOMMENDED'''). Sample configuration file of the DRMAA library (`/opt/plgrid/qcg/etc/pbs_drmaa.conf`): {{{ #!div style="font-size: 90%" {{{#!default # pbs_drmaa.conf - Sample pbs_drmaa configuration file. wait_thread: 1, pbs_home: "/var/spool/pbs", cache_job_state: 600, }}} }}} '''Note:''' Remember to mount server log directory as described in the eariler [[InstallationOnSeparateMachine|note]]. '''or''' * configure Torque to keep information about completed jobs (e.g.: by setting: `qmgr -c 'set server keep_completed = 300'`). If running in such configuration try to provide more resources (e.g. two cores instead of one) for the VM that hosts the service. 
Moreover, tune the DRMAA configuration in order to throttle the polling rate:
{{{
#!div style="font-size: 90%"
{{{#!default
wait_thread: 1,

cache_job_state: 60,

pool_delay: 60,
}}}
}}}
It is possible to set the default queue by setting a default job category (in the `/opt/plgrid/qcg/etc/pbs_drmaa.conf` file):
{{{
#!div style="font-size: 90%"
{{{#!default
job_categories: {
	default: "-q plgrid",
},
}}}
}}}

== SLURM ==

Install DRMAA for SLURM using the source package available at the [http://apps.man.poznan.pl/trac/slurm-drmaa SLURM DRMAA home page].

= Service configuration =

Edit the preinstalled service configuration file (`/opt/plgrid/qcg/etc/qcg-comp/qcg-comp`**d**`.xml`). The fragment below is indicative only: the element names follow the paths described in the next subsections, and the preinstalled file contains the authoritative markup.
{{{
#!div style="font-size: 90%"
{{{#!xml
<ModuleDirectory>/opt/plgrid/qcg/lib/qcg-core/modules/</ModuleDirectory>
<ModuleDirectory>/opt/plgrid/qcg/lib/qcg-comp/modules/</ModuleDirectory>
<!-- logging: file /opt/plgrid/var/log/qcg-comp/qcg-compd.log, level INFO -->
<Transport>
  <Module>
    <Host>frontend.example.com</Host>
    <Port>19000</Port>
    <Authentication>
      <Module>
        <X509CertFile>/opt/plgrid/qcg/etc/qcg-comp/certs/qcgcert.pem</X509CertFile>
        <X509KeyFile>/opt/plgrid/qcg/etc/qcg-comp/certs/qcgkey.pem</X509KeyFile>
      </Module>
    </Authentication>
    <Authorization>
      <Module>/etc/grid-security/grid-mapfile</Module>
    </Authorization>
  </Module>
</Transport>
<Module type="smc:notification_wsn">
  <PublishedBrokerURL>https://frontend.example.com:19011/</PublishedBrokerURL>
  <Module>
    <ServiceURL>http://localhost:19001/</ServiceURL>
  </Module>
</Module>
<Module type="submission_drmaa" path="..."/>
<Module type="reservation_python" path="..."/>
<ApplicationMapfile>/opt/plgrid/qcg/etc/qcg-comp/application_mapfile</ApplicationMapfile>
<UseScratch>false</UseScratch>
<Database>
  <DSN>qcg-comp</DSN>
  <Name>qcg-comp</Name>
  <User>qcg-comp</User>
  <Password>qcg-comp</Password>
</Database>
<FactoryAttributes>
  <CommonName>klaster.plgrid.pl</CommonName>
  <LongDescription>PL Grid cluster</LongDescription>
</FactoryAttributes>
}}}
}}}

== Common ==

In most cases it should be enough to change only the following elements:

 `Transport/Module/Host` :: the hostname of the machine where the service is deployed
 `Transport/Module/Authentication/Module/X509CertFile` and `Transport/Module/Authentication/Module/X509KeyFile` :: the service private key and X.509 certificate. Make sure that the key and certificate are owned by the `qcg-comp` user. If you installed the certificate and key files in the recommended location, you do not need to edit these fields.
 `Module[type="smc:notification_wsn"]/PublishedBrokerURL` :: the external URL of the QCG-Notification service (you can set it later, i.e. after [http://www.qoscosgrid.org/trac/qcg-notification/wiki/installation_in_PL-Grid installing the QCG-Notification service])
 `Module[type="smc:notification_wsn"]/Module/ServiceURL` :: the localhost URL of the QCG-Notification service (you can set it later, i.e. after [http://www.qoscosgrid.org/trac/qcg-notification/wiki/installation_in_PL-Grid installing the QCG-Notification service])
 `Module[type="submission_drmaa"]/@path` :: the path to the DRMAA library (the `libdrmaa.so`). If you installed the DRMAA library using the provided SRC RPM, you do not need to change this path.
 `Module[type="reservation_python"]/@path` :: the path to the reservation module. Change this if you are using a different scheduler than Maui (e.g. use `reservation_moab.py` for Moab, `reservation_pbs.py` for PBS Pro)
 `Database/Password` :: the `qcg-comp` database password
 `UseScratch` :: set this to `true` if you set `QCG_SCRATCH_DIR_ROOT` in `sysconfig`, so every job will be started from a scratch directory (instead of the default home directory)
 `FactoryAttributes/CommonName` :: a common name of the cluster (e.g. reef.man.poznan.pl). You can use any name that is unique among all systems (e.g. cluster name + domain name of your institution)
 `FactoryAttributes/LongDescription` :: a human-readable description of the cluster

== Torque ==

 `Module[type="reservation_python"]/@path` :: the path to the reservation module. Change this if you are using a different scheduler than Maui (e.g. use `reservation_moab.py` for Moab)

== PBS Professional ==

 `Module[type="reservation_python"]/@path` :: the path to the reservation module. Change this to `reservation_pbs.py`.

== SLURM ==

 `Module[type="reservation_python"]/@path` :: the path to the reservation module. Change this to `reservation_slurm.py`.
Additionally, replace:
{{{
}}}
with:
{{{
}}}

= Restricting advance reservation =

By default the QCG-Computing service can reserve any number of hosts. One can limit this by configuring the !Maui/Moab scheduler and the QCG-Computing service appropriately:
 * In !Maui/Moab, mark some subset of nodes, using the partition mechanism, as reservable for QCG-Computing:
{{{
#!div style="font-size: 90%"
{{{#!default
# all users can use both the DEFAULT and RENABLED partition
SYSCFG PLIST=DEFAULT,RENABLED
#in Moab you should use 0 instead of DEFAULT
#SYSCFG PLIST=0,RENABLED

# mark some set of the machines (e.g. 64 nodes) as reservable
NODECFG[node01] PARTITION=RENABLED
NODECFG[node02] PARTITION=RENABLED
NODECFG[node03] PARTITION=RENABLED
...
NODECFG[node64] PARTITION=RENABLED
}}}
}}}
 * Tell QCG-Computing to limit reservations to the aforementioned partition by editing the `/opt/plgrid/qcg/etc/qcg-comp/sysconfig/qcg-compd` configuration file:
{{{
#!div style="font-size: 90%"
{{{#!default
export QCG_AR_MAUI_PARTITION="RENABLED"
}}}
}}}
 * Moreover, QCG-Computing (since version 2.4) can enforce limits on the maximal reservation duration (default: one week) and size (measured in the number of slots reserved):
{{{
#!div style="font-size: 90%"
{{{#!default
...
  24
  100
...
}}}
}}}

= Restricted node access (Torque/PBS-Professional only) =

Read this section only if the system is configured in such a way that not all nodes are accessible using every queue/user. In such a case you should provide a node filter expression in the sysconfig file (`/opt/plgrid/qcg/etc/qcg-comp/sysconfig/qcg-compd`). Examples:
 * Provide information only about nodes that were tagged with the `qcg` property:
{{{
export QCG_NODE_FILTER=properties:qcg
}}}
 * Provide information about all nodes except those tagged as `gpgpu`:
{{{
export QCG_NODE_FILTER=properties:~gpgpu
}}}
 * Provide information only about resources that have `hp` as the `epoch` value:
{{{
export QCG_NODE_FILTER=resources_available.epoch:hp
}}}
In general the `QCG_NODE_FILTER` must adhere to the following syntax:
{{{
pbsnodes-attr:regular-expression
}}}
or, if you want the reversed semantics (i.e. all nodes except those matching the expression):
{{{
pbsnodes-attr:~regular-expression
}}}

= Configuring QCG-Accounting =

Please use the [http://www.qoscosgrid.org/trac/qcg-computing/wiki/QCG-Accounting QCG-Accounting] agent. You must enable `bat` as one of the publisher plugins.

= Creating applications' script space =

A common use case for the QCG-Computing service is that an application is accessed using an abstract application name rather than an absolute executable path. The application name/version to executable path mappings are stored in the file `/opt/plgrid/qcg/etc/qcg-comp/application_mapfile`:
{{{
#!div style="font-size: 90%"
{{{#!default
cat /opt/plgrid/qcg/etc/qcg-comp/application_mapfile
# ApplicationName ApplicationVersion Executable
date * /bin/date
LPSolve 5.5 /usr/local/bin/lp_solve
QCG-OMPI /opt/QCG/qcg/share/qcg-comp/tools/cross-mpi.starter
}}}
}}}
It is also common to provide wrapper scripts here rather than the target executables. A wrapper script can handle such aspects of the application lifecycle as: environment initialization, copying files from/to scratch storage, and application monitoring.

It is recommended to create a separate directory for those wrapper scripts (e.g. on the application partition) and to give the `qcg-dev` group write permission to it. This directory must be readable by all users and from every worker node (the application partition usually fulfills those requirements). For example:
{{{
#!div style="font-size: 90%"
{{{#!default
mkdir /opt/exp_soft/plgrid/qcg-app-scripts
chown :qcg-dev /opt/exp_soft/plgrid/qcg-app-scripts
chmod g+rwx /opt/exp_soft/plgrid/qcg-app-scripts
}}}
}}}
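A minimal wrapper script placed in this directory could look as follows (a sketch only; the application, the module name, and the binary are hypothetical - see the Application Scripts page linked below for the real conventions):
{{{
#!div style="font-size: 90%"
{{{#!sh
#!/bin/bash
# hypothetical wrapper for the LPSolve entry in application_mapfile:
# initialize the environment via Environment Modules, then launch the
# real binary with the arguments passed by the user
. /etc/profile.d/modules.sh
module load lpsolve/5.5
exec lp_solve "$@"
}}}
}}}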
More on [ApplicationScripts Application Scripts].

= Note on the security model =

The QCG-Computing service can be configured with various authentication and authorization modules. However, in a typical deployment we assume that QCG-Computing is configured as in the above example, i.e.:
 * authentication is provided on the basis of the ''httpg'' protocol,
 * authorization is based on the local `grid-mapfile`.

= Starting the service =

As root, type:
{{{
#!div style="font-size: 90%"
{{{#!sh
/etc/init.d/qcg-compd start
}}}
}}}
The service logs can be found in:
{{{
#!div style="font-size: 90%"
{{{#!sh
/opt/plgrid/var/log/qcg-comp/qcg-comp.log
}}}
}}}
The service assumes that the following commands are in the standard search path:
 * `pbsnodes`
 * `showres`
 * `setres`
 * `releaseres`
 * `checknode`
If any of the above commands is not installed in a standard location (e.g. `/usr/bin`), you may need to edit the `/opt/plgrid/qcg/etc/qcg-comp/sysconfig/qcg-compd` file and set the `PATH` variable appropriately, e.g.:
{{{
#!div style="font-size: 90%"
{{{#!sh
# INIT_WAIT=5
#
# DRM specific options
export PATH=$PATH:/opt/maui/bin
}}}
}}}
If you compiled DRMAA with logging switched on, you can also set the DRMAA logging level there:
{{{
#!div style="font-size: 90%"
{{{#!sh
# INIT_WAIT=5
#
# DRM specific options
export DRMAA_LOG_LEVEL=INFO
}}}
}}}
Also provide the location of the root scratch directory, if it is accessible from the QCG-Computing machine:
{{{
#!div style="font-size: 90%"
{{{#!sh
# INIT_WAIT=5
#
export QCG_SCRATCH_DIR_ROOT="/mnt/lustre/scratch/people/"
}}}
}}}
**Note:** In the current version, whenever you restart the PostgreSQL server you also need to restart the QCG-Computing and QCG-Notification services:
{{{
#!div style="font-size: 90%"
{{{#!sh
/etc/init.d/qcg-compd restart
/etc/init.d/qcg-ntfd restart
}}}
}}}

= Stopping the service =

The service can be stopped using the following command:
{{{
#!div style="font-size: 90%"
{{{#!sh
/etc/init.d/qcg-compd stop
}}}
}}}

= Verifying the installation =

 * For convenience you can install the qcg environment module:
{{{
#!div style="font-size: 90%"
{{{#!sh
cp /opt/plgrid/qcg/share/qcg-core/misc/qcg.module /usr/share/Modules/modulefiles/qcg
module load qcg
}}}
}}}
 * Edit the QCG-Computing client configuration file (`/opt/plgrid/qcg/etc/qcg-comp/qcg-comp.xml`):
   * set the `Host` and `Port` to reflect the changes in the service configuration file (`qcg-compd.xml`).
{{{
#!div style="font-size: 90%"
{{{#!sh
/opt/qcg/lib/qcg-core/modules/
/opt/qcg/lib/qcg-comp/modules/
httpg://frontend.example.com:19000/
}}}
}}}
 * Initialize your credentials:
{{{
#!div style="font-size: 90%"
{{{#!sh
grid-proxy-init -rfc
Your identity: /O=Grid/OU=QosCosGrid/OU=PSNC/CN=Mariusz Mamonski
Enter GRID pass phrase for this identity:
Creating proxy ..................................................................
Done
Your proxy is valid until: Wed Apr  6 05:01:02 2012
}}}
}}}
 * Query the QCG-Computing service:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -G | xmllint --format -  # the xmllint is used only to present the result in a more pleasant way
true
IT cluster
IT department cluster for public use
0
1
worker.example.com
x86_32
41073741824
http://schemas.ggf.org/bes/2006/08/bes/naming/BasicWSAddressing
http://schemas.ogf.org/hpcp/2007/01/bp/BasicFilter
http://schemas.qoscosgrid.org/comp/2011/04
http://example.com/SunGridEngine
http://localhost:2211/
}}}
}}}
 * Submit a sample job:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -c -J /opt/plgrid/qcg/share/qcg-comp/doc/examples/jsdl/date.xml
Activity Id: ccb6b04a-887b-4027-633f-412375559d73
}}}
}}}
 * Query its status:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
status = Executing
qcg-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
status = Executing
qcg-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
status = Finished
exit status = 0
}}}
}}}
 * Create an advance reservation:
   * copy the provided sample reservation description file (expressed in ARDL - Advance Reservation Description Language):
{{{
#!div style="font-size: 90%"
{{{#!sh
cp /opt/plgrid/qcg/share/qcg-comp/doc/examples/ardl/oneslot.xml oneslot.xml
}}}
}}}
   * edit the `oneslot.xml` and modify the `StartTime` and `EndTime` to dates in the near future,
   * create a new reservation:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -c -D oneslot.xml
Reservation Id: aab6b04a-887b-4027-633f-412375559d7d
}}}
}}}
 * List all reservations:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -l
Reservation Id: aab6b04a-887b-4027-633f-412375559d7d
Total number of reservations: 1
}}}
}}}
 * Check which hosts were reserved:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -s -r aab6b04a-887b-4027-633f-412375559d7d
Reserved hosts:
worker.example.com[used=0,reserved=1,total=4]
}}}
}}}
 * Delete the reservation:
{{{
#!div style="font-size: 90%"
{{{#!sh
qcg-comp -t -r aab6b04a-887b-4027-633f-412375559d7d
Reservation terminated.
}}}
}}}
 * Check if the GridFTP server is working correctly:
{{{
#!div style="font-size: 90%"
{{{#!sh
globus-url-copy gsiftp://your.local.host.name/etc/profile profile
diff /etc/profile profile
}}}
}}}

= Maintenance =

The historic usage information is stored in two relations of the QCG-Computing database: `jobs_acc` and `reservations_acc`. You can always archive old usage data to a file and delete it from the database using the psql client:
{{{
#!div style="font-size: 90%"
{{{#!sh
psql -h localhost qcg-comp qcg-comp
Password for user qcg-comp:
Welcome to psql 8.1.23, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

qcg-comp=> \o jobs.acc
qcg-comp=> SELECT * FROM jobs_acc where end_time < date '2010-01-10';
qcg-comp=> \o reservations.acc
qcg-comp=> SELECT * FROM reservations_acc where end_time < date '2010-01-10';
qcg-comp=> \o
qcg-comp=> DELETE FROM jobs_acc where end_time < date '2010-01-10';
qcg-comp=> DELETE FROM reservations_acc where end_time < date '2010-01-10';
}}}
}}}
You should also install the logrotate configuration for QCG-Computing:
{{{
#!div style="font-size: 90%"
{{{#!sh
yum install qcg-comp-logrotate
}}}
}}}
**Important**: On any update/restart of the PostgreSQL database you must also restart the qcg-compd and qcg-ntfd services:
{{{
/etc/init.d/qcg-compd restart
/etc/init.d/qcg-ntfd restart
}}}
On scheduled downtimes we recommend disabling submission in the service configuration file:
{{{
...
false
...
}}}

= PL-Grid Grants Support =

Since version 2.2.7, QCG-Computing is integrated with the PL-Grid grants system. The integration with the grant system has three main interaction points:
 * QCG-Computing can accept jobs which have the grant id set explicitly. One must use the `` element, e.g.:
{{{
#!div style="font-size: 90%"
{{{#!sh
Manhattan
...
}}}
}}}
 * QCG-Computing can provide information about the local grants to the upper layers (e.g. QCG-Broker), so they can use it for scheduling purposes. One can enable this by adding the following line to the QCG-Computing configuration file (qcg-compd.xml):
{{{
#!div style="font-size: 90%"
{{{#!sh
...
}}}
}}}
 Please note that this module requires the [#LDAPgeneratedgridmapfile qcg-gridmapfilegenerator] to be installed.
 * The grant id is provided in the resource usage record sent to the BAT accounting service.

== Configuring the PBS DRMAA submit filter ==

In order to enforce the PL-Grid grant policy you must configure the PBS DRMAA submit filter by editing `/opt/plgrid/qcg/etc/qcg-comp/sysconfig/qcg-compd` and adding a variable pointing to the DRMAA submit filter, e.g.:
{{{
export PBSDRMAA_SUBMIT_FILTER="/software/grid/plgrid/qcg-app-scripts/app-scripts/tools/plgrid-grants/pbsdrmaa_submit_filter.py"
}}}
An example submit filter can be found in the !QosCosGrid svn:
{{{
svn co https://apps.man.poznan.pl/svn/qcg-computing/trunk/app-scripts/tools/plgrid-grants
}}}
More about PBS DRMAA submit filters can be found [[http://apps.man.poznan.pl/trac/pbs-drmaa/wiki/WikiStart#Submitfilter|here]].

= GOCDB =

Please remember to register the QCG-Computing and QCG-Notification services in the GOCDB using the QCG.Computing and QCG.Notification service types, respectively.