Version 91 (modified by piontek, 6 years ago)



Agreement between PSNC and [1 November 2012]
Poznan Supercomputing and Networking Centre has signed an agreement to integrate the QosCosGrid middleware stack into the European Grid Infrastructure (EGI). QosCosGrid will extend the current capabilities of the EGI e-infrastructure by providing mechanisms for advance reservation and co-allocation of heterogeneous resources.

QCG-Broker 2.8 [5 September 2012]
Six months after the previous release of QCG-Broker, the next version of the service has been deployed in the PL-Grid infrastructure. The main enhancements include improved stability and performance, more accurate brokering algorithms, and support for interactive jobs.

QCG-Icon 1.0 Beta [5 September 2012]
The beta version of the lightweight desktop application enabling remote access to QosCosGrid has been published. The 1.0 release of QCG-Icon provides many enhancements but, in its current version, may still contain minor bugs. All comments are welcome, as they help to improve the tool.

Cluster Inula in QCG [20 August 2012]
QosCosGrid has been extended with the Inula cluster from PSNC. The QosCosGrid infrastructure thus currently offers access to 6 PL-Grid clusters.

QCG services update [3 August 2012]
The 2.6 release of all QCG services is now available for download. The new version was developed in close cooperation between the QCG team and various user communities, especially from the PL-Grid and MAPPER projects. Many of the offered enhancements therefore respond directly to reported feedback and significantly improve the practical usability of grid and cloud computing.

QCG-Broker 2.6 [3 April 2012]
Version 2.6 of the QCG-Broker service has been released and deployed in the PL-Grid infrastructure. The main changes include an additional form of job description (a simple script with “#QCG” directives) and a set of “qcg-” commands for interacting with the infrastructure. The motivation for both extensions was to simplify defining and controlling experiments, and to make the process similar to the approach well known from queuing systems.
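For illustration, a directive-based job script could look roughly like the sketch below. The directive names (note, walltime, output) are assumptions chosen for the example, modelled on typical batch-directive conventions, not confirmed QCG syntax:

```shell
#!/bin/bash
# Sketch of a "simple script" job description: queuing-system-style
# directives sit in comments, and the rest runs as an ordinary shell script.
# The directive names below are illustrative assumptions.
#QCG note=example-job
#QCG walltime=PT10M
#QCG output=result.out

echo "computation starts here"
```

Such a script would then presumably be submitted and controlled with the “qcg-” command set, much as queuing-system users submit and monitor batch jobs.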

QosCosGrid 2.4 [20 December 2011]
Just before the New Year, a new version of QosCosGrid has been published. We hope the extensions and improvements included in this release will contribute significantly to the further development of new and existing Grid infrastructures.

The first MAPPER review [24 November 2011]
During the first review of the MAPPER project, the QosCosGrid stack was successfully presented in two demanding multiscale scenarios. In the loosely-coupled scenario, QCG was used to create advance reservations for efficient workflow execution, while in the tightly-coupled one it created advance reservations and ran co-allocated jobs simultaneously on several clusters.

See all news


The QosCosGrid (QCG) middleware is an integrated system offering advanced job and resource management capabilities that deliver supercomputer-like performance and structure to end users. By connecting many distributed computing resources together, QCG offers highly efficient mapping, execution and monitoring capabilities for a variety of applications, such as parameter sweeps, workflows, MPI or hybrid MPI-OpenMP programs. Thanks to QosCosGrid, large-scale applications and multi-scale or complex computing models written in Fortran, C, C++ or Java can be automatically distributed over a network of computing resources with guaranteed QoS. The middleware also provides a set of unique features, such as advance reservation and co-allocation of distributed computing resources.

QCG Middleware

QosCosGrid provides:

  • efficient remote access to computational resources in a single cluster or across many clusters in Poland and Europe,
  • automatic steering of various types of complex computing experiments ranging from multi-parameter sweep studies to cross-cluster executions of parallel applications,
  • fully transparent integration with parallel programming and execution environments, such as OpenMPI and ProActive, deployed on many computing clusters,
  • support for Quality of Service (e.g. start time) based on advance reservation mechanisms,
  • shorter waiting times and improved resource utilization by hierarchical grid- and local-level job scheduling,
  • management of input and output files in distributed computing clusters,
  • efficient integration between services and queuing systems ensuring high performance and reliability of the overall system,
  • an extensible, open and standards-based architecture supporting OGF DRMAA, JSDL, BES and HPC Profile with pluggable modules,
  • secure communication channels using transport-level (SSL/TLS, X.509) and message-level (SAML 2.0) mechanisms,
  • command-line, graphical, web-based and even mobile-phone tools for end users and administrators,
  • fast and reliable installation procedures.

QosCosGrid consists of the following components:

Component         Main function                                                  Home page
QCG-Computing     Basic Execution Service (BES) supporting advance reservation   QCG-Computing Home Page
QCG-Coordinator   Supports QCG-Computing in cross-cluster execution of jobs      QCG-Coordinator Home Page
QCG-Notification  Notification capabilities based on WS-Notification             QCG-Notification Home Page
QCG-Broker        Resource management and brokering service                      QCG-Broker Home Page
QCG-Client        Text-based client for QosCosGrid                               QCG-Client Home Page
QCG-Icon          Lightweight desktop client for QosCosGrid                      QCG-Icon Home Page
QCG-Tools         Various elements extending the QosCosGrid stack                QCG-Tools Home Page
QCG-Nagios        Nagios probes for the QosCosGrid stack                         QCG-Nagios Home Page

Interoperability & standards supported

The QosCosGrid implementation is based on open, widely accepted standards. In general, QosCosGrid supports OGF DRMAA, JSDL, BES, the HPC Basic Profile and OASIS WS-Notification.
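As a concrete illustration of one of these standards, the sketch below assembles a minimal JSDL job description using only the Python standard library. The element structure and namespace URIs follow the JSDL 1.0 specification; this is a generic JSDL sketch, not a QCG-specific API:

```python
# Minimal sketch: building a JSDL (Job Submission Description Language)
# document with the standard library. Element names and namespaces follow
# the OGF JSDL 1.0 specification.
import xml.etree.ElementTree as ET

JSDL = "http://schemas.ggf.org/jsdl/2005/11/jsdl"
POSIX = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"


def make_jsdl(job_name, executable, args):
    """Return a JSDL JobDefinition document as an XML string."""
    job_def = ET.Element(f"{{{JSDL}}}JobDefinition")
    desc = ET.SubElement(job_def, f"{{{JSDL}}}JobDescription")
    ident = ET.SubElement(desc, f"{{{JSDL}}}JobIdentification")
    ET.SubElement(ident, f"{{{JSDL}}}JobName").text = job_name
    app = ET.SubElement(desc, f"{{{JSDL}}}Application")
    posix = ET.SubElement(app, f"{{{POSIX}}}POSIXApplication")
    ET.SubElement(posix, f"{{{POSIX}}}Executable").text = executable
    for a in args:
        ET.SubElement(posix, f"{{{POSIX}}}Argument").text = a
    return ET.tostring(job_def, encoding="unicode")


doc = make_jsdl("demo-job", "/bin/echo", ["hello"])
```

A document of this shape is what a BES-style execution service consumes when a job is submitted.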

Further reading


The QosCosGrid middleware is developed by the team of the Applications Department of Poznan Supercomputing and Networking Center.

Please send general questions to:
Please send technical questions to: