[[PageOutline]]

= QCG BES/AR Installation in PL-Grid =
The QCG BES/AR service (the successor of the OpenDSP project) is an open-source service that acts as a computing provider, exposing on-demand access to computing resources and jobs over an HPC Basic Profile compliant Web Services interface. In addition, QCG BES/AR offers a remote interface for advance reservation management.

This document describes the installation of the QCG BES/AR service in the PL-Grid environment. The service should be deployed on a machine (or virtual machine) that:
* has at least 1 GB of memory (recommended: 2 GB)
* has 10 GB of free disk space (most of the space will be used by log files)
* has any modern CPU (if you plan to use a virtual machine, dedicate one or two cores of the host machine to it)
* is running Scientific Linux 5.5 (in most cases the provided RPMs should work with any operating system based on Red Hat Enterprise Linux 5.x, e.g. CentOS 5)

 IMPORTANT::
   The implementation name of the QCG BES/AR service is '''Smoa Computing''' and this name is used throughout this guide.

== Prerequisites ==
We assume that you have the Torque local resource manager and the Maui scheduler already installed. Typically this is the frontend machine (i.e. the machine where the pbs_server and maui daemons are running). If you want to install the Smoa Computing service on a separate submit host, read these [[Smoa_Computing_on_separate_machine|notes]]. Moreover, the following packages must be installed before you proceed with the Smoa Computing installation.

* Install the database backend (PostgreSQL):
{{{
#!div style="font-size: 90%"
{{{#!sh
  # yum install postgresql postgresql-server
}}}
}}}
* Install unixODBC and the PostgreSQL ODBC driver:
{{{
#!div style="font-size: 90%"
{{{#!sh
  # yum install unixODBC postgresql-odbc
}}}
}}}
* Install Expat (needed by the BAT updater - a PL-Grid accounting module):
{{{
#!div style="font-size: 90%"
{{{#!sh
  # yum install expat-devel
}}}
}}}
* Install the Torque devel package and rpm-build (needed to build DRMAA):
{{{
#!div style="font-size: 90%"
{{{#!sh
  # rpm -i torque-devel-your-version.rpm
  # yum install rpm-build
}}}
}}}
We assume that the X.509 host certificate (signed by the Polish Grid CA) and key are already installed in the following locations:
* /etc/grid-security/hostcert.pem
* /etc/grid-security/hostkey.pem

Most grid services and security infrastructures are sensitive to time skews. We therefore recommend installing a Network Time Protocol daemon, or using any other solution that provides accurate clock synchronization.
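
For example, a minimal sketch on Scientific Linux 5, assuming the stock ntp package:
{{{
#!div style="font-size: 90%"
{{{#!sh
  # yum install ntp
  # chkconfig ntpd on
  # service ntpd start
}}}
}}}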

== Configuring the WP4 queue ==
A sample Maui configuration that dedicates 8 machines to the exclusive use of Work Package 4:
{{{
#!div style="font-size: 90%"
{{{#!default
  # WP4
  # by default all users can use only the DEFAULT partition (i.e. ALL minus wp4)
  SYSCFG           PLIST=DEFAULT


  # increase the priority of the plgrid-wp4-produkcja queue
  CLASSCFG[plgrid-wp4-produkcja] PRIORITY=90000
  # jobs submitted to the plgrid-wp4 queue CAN use and CAN ONLY (note the &) use the wp4 partition
  CLASSCFG[plgrid-wp4] PLIST=wp4&

  # devote some machines to Work Package 4
  NODECFG[r512] PARTITION=wp4
  NODECFG[r513] PARTITION=wp4
  NODECFG[r514] PARTITION=wp4
  NODECFG[r515] PARTITION=wp4
  NODECFG[r516] PARTITION=wp4
  NODECFG[r517] PARTITION=wp4
  NODECFG[r518] PARTITION=wp4
  NODECFG[r519] PARTITION=wp4
}}}
}}}

Now you also need to add the two queues to the Torque resource manager:

  #
  # Create and define queue plgrid-wp4
  #
  create queue plgrid-wp4
  set queue plgrid-wp4 queue_type = Execution
  set queue plgrid-wp4 resources_max.walltime = 72:00:00
  set queue plgrid-wp4 resources_default.ncpus = 1
  set queue plgrid-wp4 resources_default.walltime = 72:00:00
  set queue plgrid-wp4 acl_group_enable = True
  set queue plgrid-wp4 acl_groups = plgrid-wp4
  set queue plgrid-wp4 acl_group_sloppy = True
  set queue plgrid-wp4 enabled = True
  set queue plgrid-wp4 started = True

  #
  # Create and define queue plgrid-wp4-produkcja
  #
  create queue plgrid-wp4-produkcja
  set queue plgrid-wp4-produkcja queue_type = Execution
  set queue plgrid-wp4-produkcja resources_max.walltime = 72:00:00
  set queue plgrid-wp4-produkcja resources_max.ncpus = 256
  set queue plgrid-wp4-produkcja resources_default.ncpus = 1
  set queue plgrid-wp4-produkcja resources_default.walltime = 72:00:00
  set queue plgrid-wp4-produkcja acl_group_enable = True
  set queue plgrid-wp4-produkcja acl_groups = plgrid-wp4
  set queue plgrid-wp4-produkcja acl_group_sloppy = True
  set queue plgrid-wp4-produkcja enabled = True
  set queue plgrid-wp4-produkcja started = True
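
These directives are qmgr input; a minimal sketch of loading and then verifying them (the file name <code>wp4-queues.qmgr</code> is only an example):
  # qmgr < wp4-queues.qmgr
  # qmgr -c 'print queue plgrid-wp4'
  # qmgr -c 'print queue plgrid-wp4-produkcja'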

== Installation using provided RPMs ==
* Create the following users:
# smoa_comp - needed by the Smoa Computing service
# grms - the user that the GRMS (i.e. the QosCosGrid brokering service) will be mapped to
  useradd -d /opt/plgrid/var/log/smoa-comp/ -m smoa_comp
  useradd -d /opt/plgrid/var/log/grms/ -m grms
* and the following group:
# smoa_dev - this group is allowed to read the configuration and log files. Please add the Smoa services' developers to this group (see the sketch below).
  groupadd smoa_dev
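For example, to add a developer account to the group (the user name jdoe is hypothetical):
  usermod -a -G smoa_dev jdoe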
* Install the PL-Grid (official) and QCG (testing) repositories:
 # QosCosGrid testing repository
 cat > /etc/yum.repos.d/qcg.repo << EOF
 [qcg]
 name=QosCosGrid YUM repository
 baseurl=http://fury.man.poznan.pl/qcg-packages/sl/x86_64/
 enabled=1
 gpgcheck=0
 EOF
 # Official PL-Grid repository
 rpm -Uvh http://software.plgrid.pl/packages/repos/plgrid-repos-2010-2.noarch.rpm

* Install Smoa Computing using the YUM package manager:
  yum install smoa-comp

* Set up the Smoa Computing database using the provided script:
  # /opt/plgrid/qcg/smoa/share/smoa-comp/tools/smoa-comp-install.sh
  Welcome to smoa-comp installation script!

  This script will guide you through process of configuring proper environment
  for running the Smoa Computing service. You have to answer few questions regarding
  parameters of your database. If you are not sure just press Enter and use the
  default values.

  Use local PostgreSQL server? (y/n) [y]: y
  Database [smoa_comp]:
  User [smoa_comp]:
  Password [smoa_comp]: MojeTajneHaslo
  Create database? (y/n) [y]: y
  Create user? (y/n) [y]: y

  Checking for system user smoa_comp...OK
  Checking whether PostgreSQL server is installed...OK
  Checking whether PostgreSQL server is running...OK

  Performing installation
  * Creating user smoa_comp...OK
  * Creating database smoa_comp...OK
  * Creating database schema...OK
  * Checking for ODBC data source smoa_comp...
  * Installing ODBC data source...OK

  Remember to add appropriate entry to /var/lib/pgsql/data/pg_hba.conf (as the first rule!) to allow user smoa_comp to
  access database smoa_comp. For instance:

  host    smoa_comp       smoa_comp       127.0.0.1/32    md5

  and reload Postgres server.


Add a new rule to pg_hba.conf as requested:
  vim /var/lib/pgsql/data/pg_hba.conf
  /etc/init.d/postgresql reload
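
To check that the new rule is in effect, a quick connection test (it should prompt for the password chosen above and print one row):
  psql -h 127.0.0.1 -U smoa_comp smoa_comp -c 'SELECT 1;'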

Install the Polish Grid and PL-Grid SimpleCA certificates:
 wget https://dist.eugridpma.info/distribution/igtf/current/accredited/RPMS/ca_PolishGrid-1.38-1.noarch.rpm
 rpm -i ca_PolishGrid-1.38-1.noarch.rpm
 wget http://software.plgrid.pl/packages/general/ca_PLGRID-SimpleCA-1.0-2.noarch.rpm
 rpm -i ca_PLGRID-SimpleCA-1.0-2.noarch.rpm
 # install the certificate revocation list fetching utility
 wget https://dist.eugridpma.info/distribution/util/fetch-crl/fetch-crl-2.8.5-1.noarch.rpm
 rpm -i fetch-crl-2.8.5-1.noarch.rpm
 # fetch fresh CRLs now
 /usr/sbin/fetch-crl
 # install a cron job for it
 cat > /etc/cron.daily/fetch-crl.cron << EOF
 #!/bin/sh

 /usr/sbin/fetch-crl
 EOF
 chmod a+x /etc/cron.daily/fetch-crl.cron
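
To confirm that CRLs were actually fetched, you can list them in the standard IGTF certificates directory (CRLs are stored as <code>*.r0</code> files):
 ls /etc/grid-security/certificates/*.r0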


=== The Grid Mapfile ===
==== Manually created grid mapfile (for testing purposes only) ====
  # for testing purposes only: add a mapping for your account
  echo '"MyCertDN" myaccount' >> /etc/grid-security/grid-mapfile
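If you are unsure of your certificate DN and the Globus tools are installed, <code>grid-cert-info</code> prints it (the path below assumes a user certificate in the default location):
  $ grid-cert-info -subject -file ~/.globus/usercert.pem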
==== LDAP-based grid mapfile ====
 # install the grid-mapfile generator from the PL-Grid repository
 yum install plggridmapfilegenerator
 # configure the grid-mapfile generator - remember to change the url property to your local LDAP replica
 cat > /opt/plgrid/plggridmapfilegenerator/etc/plggridmapfilegenerator.conf << EOF
 [ldap]
 url=ldaps://10.4.1.39
 # search base
 #base=dc=osrodek,dc=plgrid,dc=pl
 base=ou=People,dc=cyfronet,dc=plgrid,dc=pl
 # filter that specifies which users should be processed
 filter=plgridX509CertificateDN=*
 # timeout for the execution of LDAP queries
 timeout=10

 [output]
 format=^plgridX509CertificateDN, uid
 EOF

 # add the grid-mapfile generator as a cron job
 cat > /etc/cron.hourly/gridmapfile.cron << EOF
 #!/bin/sh
 /opt/plgrid/plggridmapfilegenerator/bin/plggridmapfilegenerator.py -o /etc/grid-security/grid-mapfile
 EOF
 # set the executable bit
 chmod a+x /etc/cron.hourly/gridmapfile.cron
 # try it!
 /etc/cron.hourly/gridmapfile.cron


Add appropriate rights for the smoa_comp and grms users in the Maui scheduler configuration file:
  vim /var/spool/maui/maui.cfg
  # primary admin must be first in list
  ADMIN1                root
  ADMIN2                grms
  ADMIN3                smoa_comp
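
After restarting Maui, you can check that the admin entries were picked up using Maui's <code>showconfig</code> command:
  # showconfig | grep ADMIN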

Copy the service certificate and key into <code>/opt/plgrid/qcg/smoa/etc/certs/</code>. Remember to set appropriate rights on the key file:
  cp /etc/grid-security/hostcert.pem /opt/plgrid/qcg/smoa/etc/certs/smoacert.pem
  cp /etc/grid-security/hostkey.pem /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem
  chown smoa_comp /opt/plgrid/qcg/smoa/etc/certs/smoacert.pem
  chown smoa_comp /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem
  chmod 0600 /opt/plgrid/qcg/smoa/etc/certs/smoakey.pem

== DRMAA library ==
* The DRMAA library must be compiled from the source RPM:
  wget http://fury.man.poznan.pl/qcg-packages/sl/SRPMS/pbs-drmaa-1.0.6-2.src.rpm
  rpmbuild --rebuild pbs-drmaa-1.0.6-2.src.rpm
  cd /usr/src/redhat/RPMS/x86_64/
  rpm -i pbs-drmaa-1.0.6-2.x86_64.rpm

* However, if you are using it for the first time, you should compile it with logging enabled:
  wget http://fury.man.poznan.pl/qcg-packages/sl/SRPMS/pbs-drmaa-1.0.6-2.src.rpm
  rpmbuild --define 'configure_options --enable-debug' --rebuild pbs-drmaa-1.0.6-2.src.rpm
  cd /usr/src/redhat/RPMS/x86_64/
  rpm -i pbs-drmaa-1.0.6-2.x86_64.rpm

After installation you must '''either''':
* configure the DRMAA library to use the Torque logs ('''RECOMMENDED'''). A sample configuration file of the DRMAA library (<code>/opt/plgrid/qcg/smoa/etc/pbs_drmaa.conf</code>):
  # pbs_drmaa.conf - Sample pbs_drmaa configuration file.

  wait_thread: 1,

  pbs_home: "/var/spool/pbs",

  cache_job_state: 600,
{{Note}} Remember to mount the server log directory as described in the earlier [[Smoa_Computing_on_separate_machine|note]].

'''or'''
* configure Torque to keep information about completed jobs (e.g. by setting <code>keep_completed</code>; see the sketch below).
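A minimal sketch of this second option, including a check that the setting took effect:
  # qmgr -c 'set server keep_completed = 300'
  # qmgr -c 'print server' | grep keep_completed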

It is possible to restrict users to submitting jobs to a predefined queue by setting a default job category (in the <code>/opt/plgrid/qcg/smoa/etc/pbs_drmaa.conf</code> file):

  job_categories: {
        default: "-q plgrid",
  },

== Restricting advance reservation ==
In some deployments enabling advance reservation for the whole cluster is not desirable. In such cases one can limit advance reservations to a particular partition by editing the <code>/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/python/reservation_maui.py</code> file and changing the following line:
  cmd = "setres -x BYNAME -r PROCS=1"
to
  cmd = "setres -x BYNAME -r PROCS=1 -p wp4"

= Service configuration =
Edit the preinstalled service configuration file (<code>/opt/plgrid/qcg/smoa/etc/smoa-compd.xml</code>):

  <?xml version="1.0" encoding="UTF-8"?>
  <sm:SMOACore
        xmlns:sm="http://schemas.smoa-project.com/core/2009/01/config"
        xmlns="http://schemas.smoa-project.com/comp/2009/01/config"
        xmlns:smc="http://schemas.smoa-project.com/comp/2009/01/config"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

        <Configuration>
                <sm:ModuleManager>
                        <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-core/modules/</sm:Directory>
                        <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/</sm:Directory>
                </sm:ModuleManager>

                <sm:Service xsi:type="smoa-compd" description="SMOA Computing">
                        <sm:Logger>
                                <sm:Filename>/opt/plgrid/var/log/smoa-comp/smoa-comp.log</sm:Filename>
                                <sm:Level>INFO</sm:Level>
                        </sm:Logger>

                        <sm:Transport>
                        <sm:Module xsi:type="sm:ecm_gsoap.service">
                           <sm:Host>frontend.example.com</sm:Host>
                           <sm:Port>19000</sm:Port>
                           <sm:KeepAlive>false</sm:KeepAlive>
                           <sm:Authentication>
                                   <sm:Module xsi:type="sm:atc_transport_gsi.service">
                                           <sm:X509CertFile>/opt/plgrid/qcg/smoa/etc/certs/smoacert.pem</sm:X509CertFile>
                                           <sm:X509KeyFile>/opt/plgrid/qcg/smoa/etc/certs/smoakey.pem</sm:X509KeyFile>
                                   </sm:Module>
                           </sm:Authentication>
                           <sm:Authorization>
                                   <sm:Module xsi:type="sm:atz_mapfile">
                                           <sm:Mapfile>/etc/grid-security/grid-mapfile</sm:Mapfile>
                                   </sm:Module>
                           </sm:Authorization>
                        </sm:Module>
                        <sm:Module xsi:type="smc:smoa-comp-service"/>
                        </sm:Transport>

                        <sm:Module xsi:type="pbs_jsdl_filter"/>
                        <sm:Module xsi:type="atz_ardl_filter"/>
                        <sm:Module xsi:type="sm:general_python" path="/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/python/monitoring.py"/>

                        <sm:Module xsi:type="submission_drmaa" path="/opt/plgrid/qcg/smoa/lib/libdrmaa.so"/>
                        <sm:Module xsi:type="reservation_python" path="/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/python/reservation_maui.py"/>

                        <sm:Module xsi:type="notification_wsn">
                                <sm:Module xsi:type="sm:ecm_gsoap.client">
                                        <sm:ServiceURL>http://localhost:19001/</sm:ServiceURL>
                                        <sm:Authentication>
                                                <sm:Module xsi:type="sm:atc_transport_http.client"/>
                                        </sm:Authentication>
                                        <sm:Module xsi:type="sm:ntf_client"/>
                                </sm:Module>
                        </sm:Module>

                        <sm:Module xsi:type="application_mapper">
                                <ApplicationMapFile>/opt/plgrid/qcg/smoa/etc/application_mapfile</ApplicationMapFile>
                        </sm:Module>

                        <Database>
                                <DSN>smoa_comp</DSN>
                                <User>smoa_comp</User>
                                <Password>smoa_comp</Password>
                        </Database>

                        <UnprivilegedUser>smoa_comp</UnprivilegedUser>

                        <FactoryAttributes>
                                <CommonName>klaster.plgrid.pl</CommonName>
                                <LongDescription>PL Grid cluster</LongDescription>
                        </FactoryAttributes>
                </sm:Service>

        </Configuration>
  </sm:SMOACore>
  <!-- vim: set ts=2 sw=2: -->

In most cases it should be enough to change only the following elements:
; ''Transport/Module/Host'' : the hostname of the machine where the service is deployed
; ''Transport/Module/Authentication/Module/X509CertFile'' and ''Transport/Module/Authentication/Module/X509KeyFile'' : the service X.509 certificate and private key (consult the [http://www.globus.org/toolkit/docs/4.0/security/prewsaa/rn01re02.html Globus User Guide] on how to generate a service certificate request, or use the host certificate/key pair). Make sure that the key and certificate are owned by the <code>smoa_comp</code> user and that the private key is not password protected (generating the certificate with the <code>-service</code> option implies this). If you installed the certificate and key in the recommended location, you do not need to edit these fields.
; ''Module[type="smc:notification_wsn"]/Module/ServiceURL'' : the URL of the [[SMOA_Notification_in_PL-Grid|Smoa Notification Service]] (you can set it later, i.e. after installing the Smoa Notification service)
; ''Module[type="submission_drmaa"]/@path'' : the path to the DRMAA library (<code>libdrmaa.so</code>). If you installed the DRMAA library using the provided SRC RPM, you do not need to change this path.
; ''Database/Password'' : the <code>smoa_comp</code> database password
; ''FactoryAttributes/CommonName'' : a common name of the cluster (e.g. reef.man.poznan.pl). You can use any name that is unique among all systems (e.g. cluster name + domain name of your institution)
; ''FactoryAttributes/LongDescription'' : a human-readable description of the cluster

== Configuring the BAT accounting module ==
In order to report resource usage to the central PL-Grid accounting service you must enable the <code>bat_updater</code> module. You can do this by including the following snippet in the aforementioned configuration file (<code>/opt/plgrid/qcg/smoa/etc/smoa-compd.xml</code>). Please put the snippet just before the <code>Database</code> section:
  <sm:Module xsi:type="bat_updater">
        <BATServiceURL>tcp://acct.grid.cyf-kr.edu.pl:61616</BATServiceURL>
        <SiteName>psnc-smoa-plgrid</SiteName>
        <QueueName>test-jobs</QueueName>
  </sm:Module>

where:
; BATServiceURL : the URL of the BAT accounting service
; SiteName : the local site name as reported to the BAT service
; QueueName : the name of the queue to which usage data is reported

= Note on the security model =
Smoa Computing can be configured with various authentication and authorization modules. However, in a typical deployment we assume that Smoa Computing is configured as in the above example, i.e.:
* authentication is provided on the basis of the ''httpg'' protocol
* authorization is based on the local <code>grid-mapfile</code> (see [[GridFTP#Users_configuration|Users configuration]]).

= Starting the service =
As root type:

 # /etc/init.d/smoa-compd start

The service logs can be found in:
  /opt/plgrid/var/log/smoa-comp/smoa-comp.log

The service assumes that the following commands are in the standard search path:
* pbsnodes
* showres
* setres
* releaseres
* checknode
If any of the above commands is not installed in a standard location (e.g. <code>/usr/bin</code>) you may need to edit the <code>/opt/plgrid/qcg/smoa/etc/sysconfig/smoa-compd</code> file and set the PATH variable appropriately, e.g.:
  # INIT_WAIT=5
  #
  # DRM specific options

  export PATH=$PATH:/opt/maui/bin

If you compiled DRMAA with logging switched on, you can also set the DRMAA logging level there:
  # INIT_WAIT=5
  #
  # DRM specific options

  export DRMAA_LOG_LEVEL=INFO

= Stopping the service =
The service can be stopped using the following command:
  # /etc/init.d/smoa-compd stop

= Verifying the installation =

* For convenience you can add <code>/opt/plgrid/qcg/smoa/bin</code> and <code>/opt/plgrid/qcg/smoa-dep/globus/bin/</code> to your <code>PATH</code> variable.
* Edit the Smoa Computing client configuration file (<code>/opt/plgrid/qcg/smoa/etc/smoa-comp.xml</code>):
** set the ''Host'' and ''Port'' to reflect the changes in the service configuration file (<code>smoa-compd.xml</code>).

 <?xml version="1.0" encoding="UTF-8"?>
 <sm:SMOACore
        xmlns:sm="http://schemas.smoa-project.com/core/2009/01/config"
        xmlns="http://schemas.smoa-project.com/comp/2009/01/config"
        xmlns:smc="http://schemas.smoa-project.com/comp/2009/01/config"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

        <Configuration>
                <sm:ModuleManager>
                        <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-core/modules/</sm:Directory>
                        <sm:Directory>/opt/plgrid/qcg/smoa/lib/smoa-comp/modules/</sm:Directory>
                </sm:ModuleManager>

                <sm:Client xsi:type="smoa-comp" description="SMOA Computing client">
                        <sm:Transport>
                                <sm:Module xsi:type="sm:ecm_gsoap.client">
                                        <sm:ServiceURL>httpg://frontend.example.com:19000/</sm:ServiceURL>
                                        <sm:Authentication>
                                                <sm:Module xsi:type="sm:atc_transport_gsi.client"/>
                                        </sm:Authentication>
                                        <sm:Module xsi:type="smc:smoa-comp-client"/>
                                </sm:Module>
                        </sm:Transport>
                </sm:Client>
        </Configuration>
 </sm:SMOACore>

* Initialize your credentials:

 $ grid-proxy-init
 Your identity: /O=Grid/OU=QosCosGrid/OU=PSNC/CN=Mariusz Mamonski
 Enter GRID pass phrase for this identity:
 Creating proxy .................................................................. Done
 Your proxy is valid until: Wed Sep 16 05:01:02 2009

* Query the SMOA Computing service:
  $ smoa-comp -G | xmllint --format - # xmllint is used only to present the result in a more pleasant way

  <bes-factory:FactoryResourceAttributesDocument xmlns:bes-factory="http://schemas.ggf.org/bes/2006/08/bes-factory">
    <bes-factory:IsAcceptingNewActivities>true</bes-factory:IsAcceptingNewActivities>
    <bes-factory:CommonName>IT cluster</bes-factory:CommonName>
    <bes-factory:LongDescription>IT department cluster for public use</bes-factory:LongDescription>
    <bes-factory:TotalNumberOfActivities>0</bes-factory:TotalNumberOfActivities>
    <bes-factory:TotalNumberOfContainedResources>1</bes-factory:TotalNumberOfContainedResources>
    <bes-factory:ContainedResource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="bes-factory:BasicResourceAttributesDocumentType">
        <bes-factory:ResourceName>worker.example.com</bes-factory:ResourceName>
        <bes-factory:CPUArchitecture>
            <jsdl:CPUArchitectureName xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl">x86_32</jsdl:CPUArchitectureName>
        </bes-factory:CPUArchitecture>
        <bes-factory:CPUCount>4</bes-factory:CPUCount><bes-factory:PhysicalMemory>1073741824</bes-factory:PhysicalMemory>
    </bes-factory:ContainedResource>
    <bes-factory:NamingProfile>http://schemas.ggf.org/bes/2006/08/bes/naming/BasicWSAddressing</bes-factory:NamingProfile>
    <bes-factory:BESExtension>http://schemas.ogf.org/hpcp/2007/01/bp/BasicFilter</bes-factory:BESExtension>
    <bes-factory:BESExtension>http://schemas.smoa-project.com/comp/2009/01</bes-factory:BESExtension>
    <bes-factory:LocalResourceManagerType>http://example.com/SunGridEngine</bes-factory:LocalResourceManagerType>
    <smcf:NotificationProviderURL xmlns:smcf="http://schemas.smoa-project.com/comp/2009/01/factory">http://localhost:2211/</smcf:NotificationProviderURL>
 </bes-factory:FactoryResourceAttributesDocument>

* Submit a sample job:
  $ smoa-comp -c -J /opt/plgrid/qcg/smoa/share/smoa-comp/doc/examples/jsdl/sleep.xml
  Activity Id: ccb6b04a-887b-4027-633f-412375559d73

* Query its status:
  $ smoa-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
  status = Executing
  $ smoa-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
  status = Executing
  $ smoa-comp -s -a ccb6b04a-887b-4027-633f-412375559d73
  status = Finished
  exit status = 0

* Create an advance reservation:
:* copy the provided sample reservation description file (expressed in ARDL - the Advance Reservation Description Language):
 $ cp /opt/plgrid/qcg/smoa/share/smoa-comp/doc/examples/ardl/oneslot.xml oneslot.xml

:* edit <code>oneslot.xml</code> and modify the ''StartTime'' and ''EndTime'' to dates in the near future,
:* create a new reservation:
 $ smoa-comp -c -D oneslot.xml
 Reservation Id: aab6b04a-887b-4027-633f-412375559d7d
:* list all reservations:
 $ smoa-comp -l
 Reservation Id: aab6b04a-887b-4027-633f-412375559d7d
 Total number of reservations: 1
:* check which hosts were reserved:
 $ smoa-comp -s -r aab6b04a-887b-4027-633f-412375559d7d
 Reserved hosts:
 worker.example.com[used=0,reserved=1,total=4]
:* delete the reservation:
 $ smoa-comp -t -r aab6b04a-887b-4027-633f-412375559d7d
 Reservation terminated.
:* check that GridFTP is working correctly:
 $ globus-url-copy gsiftp://your.local.host.name/etc/profile profile
 $ diff /etc/profile profile

= Configuring the firewall =
In order to expose the QosCosGrid services externally you need to open the following ports in the firewall (a sketch follows the list):
* 19000 (TCP) - Smoa Computing
* 19001 (TCP) - Smoa Notification
* 2811 (TCP) - GridFTP server
* 9000-9500 (TCP) - GridFTP port range (if you want to use a different port range, adjust the <code>GLOBUS_TCP_PORT_RANGE</code> variable in the <code>/etc/xinetd.d/gsiftp</code> file)
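
A minimal iptables sketch for the rules above (it assumes a default-deny INPUT policy and the stock iptables init script; adapt it to your local firewall setup):
  iptables -A INPUT -p tcp --dport 19000 -j ACCEPT
  iptables -A INPUT -p tcp --dport 19001 -j ACCEPT
  iptables -A INPUT -p tcp --dport 2811 -j ACCEPT
  iptables -A INPUT -p tcp --dport 9000:9500 -j ACCEPT
  service iptables save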

= Maintenance =
Historic usage information is stored in two relations of the Smoa Computing database: <code>jobs_acc</code> and <code>reservations_acc</code>. You can always archive old usage data to a file and delete it from the database using the psql client:
  $ psql -h localhost smoa_comp smoa_comp
  Password for user smoa_comp:
  Welcome to psql 8.1.23, the PostgreSQL interactive terminal.

  Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

  smoa_comp=> \o jobs.acc
  smoa_comp=> SELECT * FROM jobs_acc where end_time < date '2010-01-10';
  smoa_comp=> \o reservations.acc
  smoa_comp=> SELECT * FROM reservations_acc where end_time < date '2010-01-10';
  smoa_comp=> \o
  smoa_comp=> DELETE FROM jobs_acc where end_time < date '2010-01-10';
  smoa_comp=> DELETE FROM reservations_acc where end_time < date '2010-01-10';
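
After deleting many rows you may want to reclaim disk space; this is standard PostgreSQL maintenance, not specific to Smoa Computing:
  smoa_comp=> VACUUM ANALYZE jobs_acc;
  smoa_comp=> VACUUM ANALYZE reservations_acc;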