Welcome to Supercomputing at Swinburne

Supercomputing Overview

Since its inception in 1998 the Centre for Astrophysics and Supercomputing (CAS) has run a supercomputing facility on behalf of Swinburne University of Technology. Originally a Linux Beowulf cluster, the supercomputer evolved in 2007 into a fully integrated rack-mounted system with a theoretical peak speed in excess of 10 Tflop/s (10 trillion floating point operations per second). Further evolution throughout 2011/12 sees the installation of a new supercomputer that incorporates graphics processing unit (GPU) hardware to push performance well beyond 100 Tflop/s. The Swinburne supercomputers have proven to be excellent research tools in areas of astronomy ranging from simulations of structure formation in the Universe to the processing of data collected from radio telescopes. They are also used by CAS staff to render content for 3-D animations and movies. More generally, the supercomputers are available for use by Swinburne University researchers and their collaborators, and also serve as a national facility for astronomy.

Green II - gSTAR and swinSTAR

A new supercomputer is being built at Swinburne in 2011 and 2012 under the banner of Green II. It will incorporate two compute facilities. The first is the GPU Supercomputer for Theoretical Astrophysics Research (gSTAR), purchased through a $1.04M Education Investment Fund grant obtained via Astronomy Australia Limited (AAL). This part of the new supercomputer will be primarily available for national astronomy access, but with time available for Swinburne staff and students of all research fields. The second compute facility will be the Swinburne Supercomputer for Theoretical Academic Research (swinSTAR), funded by Swinburne and available to all Swinburne staff and students. Both compute facilities will be networked together with a petascale data store.

More information on this project can be found at http://supercomputing.swin.edu.au.


The Green Machine

This incarnation of the supercomputer was installed at Swinburne in May 2007. It comprises 145 Dell PowerEdge 1950 nodes, each with:
  • 2 quad-core Clovertown processors at 2.33 GHz
    (each processor is a 64-bit low-voltage Intel Xeon 5138)
  • 16 GB RAM
  • 2 x 500 GB drives
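
As a rough consistency check: 145 nodes × 8 cores × 2.33 GHz, assuming the 4 double-precision floating point operations per core per clock cycle that this processor generation can sustain, gives a theoretical peak of about 10.8 Tflop/s, in line with the 10 Tflop/s figure quoted above.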

The Clovertown processors offer increased performance per watt over the previous generation of processors, hence the cluster's name: the Green Machine.

To complement the data storage capabilities of the supercomputer, the Centre also has over 100 TB of RAID5 disk and 77 TB of magnetic tape (in the form of three S4 DLT tape robots) available for long-term data storage.

The nodes are controlled by a head node, which distributes jobs to the cluster via a queue system managed by the Moab cluster management software. The operating system is CentOS 5 and the machine is maintained by the ITS team^ at Swinburne.

Current users can submit and monitor jobs via the Moab Access Portal.
Usage can be monitored through the Ganglia interface.
Documentation on installed software can be found on the Green Machine wiki.
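
Most work on the cluster runs as batch jobs: parallel programs, commonly written against MPI, that the queue system launches across whichever nodes it allocates. As a minimal sketch of such a program (illustrative only; the compiler wrappers, MPI library and queue directives actually in use are documented on the wiki above):

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI job: each process reports its rank. On a queued
       cluster this binary is launched by the queue system across the
       nodes it allocates, not run by hand on the head node. */
    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* processes in the job   */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down cleanly      */
        return 0;
    }

A program like this would be compiled with an MPI wrapper such as mpicc and submitted with a job script requesting a core count and wall time.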

^Thanks to Daniel Buttigieg, Gin Tan, Con Tassios, Wei Hong, Damien Milhuisen and James Skliros.


Information and Access

For more information about the Swinburne supercomputer, or to request an account, contact:
  A/Prof Jarrod Hurley
Centre for Astrophysics & Supercomputing
Swinburne University of Technology
PO Box 218
Hawthorn VIC 3122
Australia

Phone: +61-3 9214 5787
Fax: +61-3 9214 8797

Student Research Focus

Adam Deller completed his PhD at Swinburne in 2009, using the supercomputer to push the frontiers of radio astronomy.

"Very Long Baseline Interferometry (VLBI) makes use of widely separated radio telescopes to make the highest resolution images in astronomy. It utilises the fact that the correlation of signals between different pairs of telescopes contains information about the components of the spatial frequencies of the radio brightness in the patch of sky being observed. Since the telescopes each record data streams of up to 1 Gbps (enough to fill nearly 2 DVDs per minute!), the correlation operation is computationally intensive, and has traditionally been implemented in dedicated hardware. Using a portion of the Swinburne supercomputer, I correlate data from the Australian Long Baseline Array with an aggregate data rate of up to 4.5 Gbps in real time - a process requiring hundreds of Gflop/s of computational power"

More Cluster History

Prior to May 2007 the CAS supercomputer was a Linux Beowulf cluster. In 2002 it became only the second machine in Australia to exceed 1 Tflop/s, and by 2004 further expansion had raised the theoretical peak speed to 2 Tflop/s. The cluster comprised the following hardware:

  • 200 Pentium 4 3.2 GHz nodes
  • 32 Pentium 4 3.0 GHz nodes
  • 90 dual-processor Pentium 4 2.2 GHz server-class nodes.
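
A similar back-of-the-envelope check, assuming 2 double-precision floating point operations per clock cycle for the Pentium 4, gives (200 × 3.2 + 32 × 3.0 + 90 × 2 × 2.2) GHz × 2 ≈ 2.3 Tflop/s across the fully expanded cluster, of the same order as the quoted 2 Tflop/s peak.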

The operating system on this cluster was SuSE Linux and the nodes were networked with gigabit Ethernet. The cluster routinely operated at 100% capacity and, by 2007, more than four years after the last major supercomputer upgrade and with CAS having grown in both people and projects, it was ripe for replacement. However, some components of this cluster remain in use at Swinburne.